Most work-stealing schedulers work in a LIFO manner: a worker works on the task it just enqueued. The main reason is that this maximizes locality (the data needed for the just-enqueued task is probably hot in cache).
As such, those schedulers optimize throughput (doing all the work as fast as possible) but are fundamentally unfair: the first task enqueued might not be done as soon as possible, but only when there is nothing else to do. This is also how Weave works.
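To make the trade-off concrete, here is a minimal single-threaded sketch of the LIFO-owner / FIFO-thief discipline (the proc names are illustrative, and real work-stealing deques such as Chase-Lev are concurrent data structures; Weave itself is message-passing based):

```nim
# Single-threaded illustration of the LIFO-owner / FIFO-thief discipline.
import std/deques

type Task = object
  id: int

var dq = initDeque[Task]()

proc ownerPush(t: Task) = dq.addFirst(t)  # the owner enqueues at the front
proc ownerPop(): Task = dq.popFirst()     # ...and pops the same end: LIFO, cache-hot
proc thiefSteal(): Task = dq.popLast()    # a thief takes the opposite end: the oldest task

for i in 1 .. 3:
  ownerPush(Task(id: i))

echo ownerPop().id    # 3: the most recently enqueued task runs first
echo thiefSteal().id  # 1: a thief would get the oldest, coldest task
```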
In many cases, for example:
- (soft) realtime audio processing or video processing
- game engines
- services where FIFO is expected, for example a service that processes a stream of images, or a service where users post tasks and expect the first one posted to be the first one scheduled,
we want to optimize latency:
- Assume that, for optimizing latency, the tasks scheduled earliest are those that are logically needed first, i.e. FIFO scheduling.
- We might want to support job priorities.
There are several papers on soft real-time schedulers (e.g. "Earliest Deadline First" scheduling).
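As a rough illustration of the EDF idea (a sketch, not code from any of those papers): keep runnable tasks in a heap ordered by deadline and always pop the soonest one. `DeadlineTask` and its fields are illustrative names.

```nim
# Toy "Earliest Deadline First": a binary heap ordered by deadline,
# always popping the task whose deadline is soonest.
import std/heapqueue

type DeadlineTask = object
  deadline: float  # illustrative: e.g. seconds until the result is needed
  id: int

proc `<`(a, b: DeadlineTask): bool = a.deadline < b.deadline

var ready = initHeapQueue[DeadlineTask]()
ready.push DeadlineTask(deadline: 3.0, id: 1)
ready.push DeadlineTask(deadline: 1.0, id: 2)
ready.push DeadlineTask(deadline: 2.0, id: 3)

while ready.len > 0:
  echo ready.pop().id  # prints 2, 3, 1: earliest deadline first
```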
However, it seems relatively straightforward to give Weave a latency-optimized switch.
FIFO scheduling
Instead of popping the last task enqueued from the deque, we can just pop the first task enqueued.
By default Weave adds from the front (weave/scheduler.nim, line 275 at bf2ec2f) and pops from the front (weave/scheduler.nim, lines 137 to 144 at bf2ec2f). We can just pop from the back instead.
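A toy model of that switch, with a hypothetical `optimizeLatency` flag (not an existing Weave option) selecting which end of the deque the worker pops from:

```nim
# Toy model: tasks are always added at the front; a compile-time flag
# chooses which end the worker pops from.
import std/deques

const optimizeLatency = true  # hypothetical flag, not an actual Weave option

var dq = initDeque[string]()
for t in ["first", "second", "third"]:
  dq.addFirst(t)

proc nextTask(): string =
  when optimizeLatency:
    dq.popLast()   # FIFO: run the oldest task, better fairness/latency
  else:
    dq.popFirst()  # LIFO: run the newest task, better locality/throughput

echo nextTask()  # "first" when optimizeLatency is true
```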
Job priorities
Job priorities are important for certain workloads, for example game engines.
Supporting priorities in Weave should just require adding a per-thread priority queue for priority tasks (and keeping the deque for "best-effort" tasks). There is no need to solve the complex lock-free concurrent priority queue problem (and the associated thread-safe memory reclamation) when using a message-passing based runtime ✌️.
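A minimal sketch of that idea, under assumed names (`WorkerQueues`, `schedule`, `runNext` are illustrative, not Weave's API): each worker drains its thread-local priority queue before falling back to the best-effort deque, so no concurrent priority queue is needed.

```nim
# Sketch of a per-thread two-queue scheduler: a priority queue for
# prioritized tasks plus the usual deque for best-effort tasks.
import std/[deques, heapqueue]

type
  Work = proc () {.closure.}
  PrioTask = object
    priority: int  # convention: lower value = more urgent
    work: Work

proc `<`(a, b: PrioTask): bool = a.priority < b.priority

type WorkerQueues = object
  prioritized: HeapQueue[PrioTask]  # thread-local, so no lock-free structure needed
  bestEffort: Deque[Work]

proc schedule(q: var WorkerQueues; t: PrioTask) =
  q.prioritized.push t

proc schedule(q: var WorkerQueues; work: Work) =
  q.bestEffort.addFirst work

proc runNext(q: var WorkerQueues) =
  # Serve prioritized tasks first, then fall back to best-effort work.
  if q.prioritized.len > 0:
    q.prioritized.pop().work()
  elif q.bestEffort.len > 0:
    q.bestEffort.popFirst()()

var q = WorkerQueues(prioritized: initHeapQueue[PrioTask](),
                     bestEffort: initDeque[Work]())
q.schedule(PrioTask(priority: 1, work: proc () = echo "urgent"))
q.schedule(proc () = echo "best effort")
q.runNext()  # prints "urgent"
```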
Superseded by #123, which hopefully proposes an elegant path forward for working with Weave as a threadpool that plays well with FIFO needs. And as it processes jobs in the order they were submitted, users can maintain their own priority queue before enqueueing jobs into the job system, and think about whether priority 1 or priority 10 is the highest ;).