
Task priorities #8

chescock opened this issue Jan 10, 2025 · 2 comments

@chescock
Contributor

What problem does this solve or what need does it fill?

Bevy currently has three isolated thread pools, "compute", "async compute", and "IO", and allocates 50%, 25%, and 25% of the CPU cores to them, respectively. In addition, it runs one main thread and one rendering thread.

Most of the time, the async compute and IO pools are idle, so Bevy is only using half of the cores (plus one or two, if the main or render threads are working). The simplest solution would be to consolidate them into a single pool, but then background work like asset loading can wind up using all the threads, leaving none for foreground work!

What solution would you like?

Support task priorities, so that we can run a single task pool but not have background work block progress on foreground work.

I don't know enough about realistic use cases to know how much complexity is necessary here! It may be enough to have two hard-coded priorities for "foreground" and "background" tasks, but we may need more.

It may be enough to let any worker take any task and just prefer foreground ones. But we may need to ensure that some workers cannot run background tasks, so that they are available immediately when foreground tasks are spawned. And we may need to have some workers prefer background tasks, so that background work can still make progress under heavy load.
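
For illustration, here's a minimal sketch of the hard-coded two-priority idea. The `Priority`, `Task`, and `PriorityQueue` names are hypothetical, not existing Bevy APIs; the point is just that workers drain foreground tasks before touching background ones.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

// Hypothetical two-level queue: foreground work always runs before
// background work.
enum Priority {
    Foreground,
    Background,
}

type Task = Box<dyn FnOnce() + Send>;

#[derive(Default)]
struct PriorityQueue {
    foreground: VecDeque<Task>,
    background: VecDeque<Task>,
}

impl PriorityQueue {
    fn push(&mut self, priority: Priority, task: Task) {
        match priority {
            Priority::Foreground => self.foreground.push_back(task),
            Priority::Background => self.background.push_back(task),
        }
    }

    // Foreground work always wins; background only runs when the
    // foreground queue is empty.
    fn pop(&mut self) -> Option<Task> {
        self.foreground
            .pop_front()
            .or_else(|| self.background.pop_front())
    }
}

fn main() {
    let queue = Arc::new(Mutex::new(PriorityQueue::default()));
    {
        let mut q = queue.lock().unwrap();
        q.push(Priority::Background, Box::new(|| println!("load asset")));
        q.push(Priority::Foreground, Box::new(|| println!("run system")));
    }
    // A single worker draining the queue: the foreground task runs
    // first even though it was pushed last.
    loop {
        let task = queue.lock().unwrap().pop();
        match task {
            Some(task) => task(),
            None => break,
        }
    }
}
```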

Additional context

bevyengine/bevy#12090
bevyengine/bevy#4740

@NthTensor
Owner

NthTensor commented Jan 11, 2025

There's a talk I've been meaning to investigate that might have some interesting prioritization stuff: https://www.youtube.com/watch?v=oj-_vpZNMVw.

I am drawn to the idea of allowing different workers to employ different prioritization strategies. You might want to have two worker threads always select the highest-priority work while the remainder of the pool focuses on fairness. That ensures a good mix of prioritization and throughput.
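
As a rough sketch of that mix (all names here are made up, not anything in the pool today): a strict-priority worker always drains foreground work first, while a "fair" worker alternates so background work keeps moving.

```rust
use std::collections::VecDeque;

// Hypothetical per-worker prioritization strategy.
enum Strategy {
    StrictPriority,
    Fair,
}

struct Worker {
    foreground: VecDeque<&'static str>,
    background: VecDeque<&'static str>,
    strategy: Strategy,
    // Used by `Fair` to alternate between the two queues.
    tick: usize,
}

impl Worker {
    fn next_task(&mut self) -> Option<&'static str> {
        self.tick += 1;
        // A fair worker tries the background queue on every other pull;
        // a strict-priority worker always tries the foreground queue first.
        let prefer_background =
            matches!(self.strategy, Strategy::Fair) && self.tick % 2 == 0;
        if prefer_background {
            self.background
                .pop_front()
                .or_else(|| self.foreground.pop_front())
        } else {
            self.foreground
                .pop_front()
                .or_else(|| self.background.pop_front())
        }
    }
}

fn main() {
    for strategy in [Strategy::StrictPriority, Strategy::Fair] {
        let mut worker = Worker {
            foreground: VecDeque::from(["simulation step", "render prep"]),
            background: VecDeque::from(["decode texture", "bake navmesh"]),
            strategy,
            tick: 0,
        };
        while let Some(task) = worker.next_task() {
            print!("{task}; ");
        }
        println!();
    }
}
```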

The big decision here is: do we keep queues or not? If we want to use priority queues, that's a pretty standard option, and there's prior work here in the switchyard crate we can steal. But we could look at more exotic job distribution structures, which might be better suited to the (inherently nonlinear) prioritization problem.

@NthTensor
Owner

I thought about this a bit more last night. I think we should probably just use two queues: a priority work queue and a backlog work queue. When pulling jobs to work on or share, we will pull from the priority queue first, then the backlog.

Blocking work (specifically join and stuff spawned through scope) will always go on the priority queue. Non-blocking work (everything else, including most async tasks/futures) will by default go on the backlog queue, but can be elevated to the priority queue at the user's discretion. Bevy will want to elevate stuff that will block the frame.

These queues can be local only, part of the thread local worker logic. When sending work between threads we already have a strict precedence, and this won't change that (local jobs are more important than shared jobs, which are more important than global injected jobs). So it should not incur a huge cost in the same way that multiple async queues would.
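
A simplified, single-threaded sketch of that pull order (names like `WorkerQueues` and `next_job` are invented, and a real pool would use concurrent deques and an injector rather than plain `VecDeque`s):

```rust
use std::collections::VecDeque;

// Simplified stand-in for one worker's view of the pool.
struct WorkerQueues<T> {
    local_priority: VecDeque<T>,
    local_backlog: VecDeque<T>,
    shared: VecDeque<T>,
    global: VecDeque<T>,
}

impl<T> WorkerQueues<T> {
    // Pull order: local priority, then local backlog, then work shared
    // by other workers, then the globally injected queue.
    fn next_job(&mut self) -> Option<T> {
        self.local_priority
            .pop_front()
            .or_else(|| self.local_backlog.pop_front())
            .or_else(|| self.shared.pop_front())
            .or_else(|| self.global.pop_front())
    }
}

fn main() {
    let mut queues = WorkerQueues {
        local_priority: VecDeque::from(["join() continuation"]),
        local_backlog: VecDeque::from(["async asset decode"]),
        shared: VecDeque::from(["job shared by another worker"]),
        global: VecDeque::from(["job injected from outside the pool"]),
    };
    while let Some(job) = queues.next_job() {
        println!("{job}");
    }
}
```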
