feat(threads): multicore support #397
Conversation
Brief update: I was able to also add multicore support for the dual-core esp32s3, based on #399! It's in a separate branch for now, until #399 is merged.
I still need to do some more testing; the benchmarks that I am using for rp2040 only support
I have now rebased my branch and cleaned up my commit history quite a lot. Additionally, with #399 merged, the last commit now also adds SMP support for the dual-core xtensa ESP32S3 🎉
Rebased again to include the recently merged dynamic priority feature & adapt it for multicore.
Talked offline, no reason.
I'm trying not to jump between versions (anymore ...). We already have a fix: if we unify the use of intr for xtensa and riscv, we're good. Or we update the fix. (I'd prefer unifying.)
Initial symmetric multiprocessing support for the RP2040 dual-core chip.
Add `Multicore::schedule_on_core`, which uses the FIFO queues between the cores to trigger the scheduler on the other core.
When a new thread is ready, schedule it only if it has a higher prio than one of the currently running threads. Select the core with the lower-prio thread and schedule it there. This avoids unnecessary invocations of the scheduler and reduces priority inversions.
Feature-gate all multicore related code and logic so that there is no overhead when the feature is not enabled.
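The core-selection rule above could be sketched roughly as follows. This is an illustrative host-side model, not the actual riot-rs-threads code: `core_to_schedule_on`, the `u8` priority type, and the fixed two-core array are all assumptions made for the example.

```rust
const NUM_CORES: usize = 2;

/// Returns `Some(core)` if a newly ready thread with priority `new_prio`
/// should preempt that core, or `None` if no scheduler invocation is needed.
/// `running_prios[i]` is the priority of the thread currently running on
/// core `i` (higher value = higher priority).
fn core_to_schedule_on(running_prios: &[u8; NUM_CORES], new_prio: u8) -> Option<usize> {
    // Find the core whose current thread has the lowest priority.
    let (core, &lowest) = running_prios
        .iter()
        .enumerate()
        .min_by_key(|&(_, &p)| p)?;
    // Only trigger the scheduler there if the new thread outranks it.
    (new_prio > lowest).then_some(core)
}

fn main() {
    // Core 0 runs a prio-3 thread, core 1 a prio-1 thread: a new prio-2
    // thread should preempt core 1 only.
    assert_eq!(core_to_schedule_on(&[3, 1], 2), Some(1));
    // A prio-1 thread outranks nobody, so no scheduler is triggered.
    assert_eq!(core_to_schedule_on(&[3, 1], 1), None);
    println!("core-selection checks passed");
}
```

Picking only the lowest-prio core keeps scheduler invocations to the minimum needed for the global "highest n threads on n cores" invariant.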
When a thread becomes ready, the scheduler on the core with the lowest prio running thread will be triggered. If now another thread becomes ready as well and the scheduler didn't have a chance to run yet (e.g. because interrupts are still disabled), the same scheduler will be triggered again, but only one thread is then selected and can run. The other thread is "skipped". To solve this, the scheduler on the other core should be triggered as well in this scenario so that both schedulers get the most recent state and the two highest prio threads are run.
Solved now by re-enabling the correct CPU interrupt in
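The double-trigger fix described above can be modeled on the host roughly like this; the `pending` flags and `trigger_scheduler` helper are hypothetical names for illustration, not the actual implementation:

```rust
const NUM_CORES: usize = 2;

/// Illustrative model: `pending[i]` means the scheduler on core `i` has
/// been triggered but hasn't had a chance to run yet (e.g. interrupts
/// are still disabled).
fn trigger_scheduler(pending: &mut [bool; NUM_CORES], target_core: usize) {
    if pending[target_core] {
        // A single scheduler pass picks only one thread. If the target
        // core's scheduler is already pending, a second newly ready
        // thread would be skipped, so also trigger the other core's
        // scheduler to make both pick up the most recent state.
        pending[1 - target_core] = true;
    }
    pending[target_core] = true;
}

fn main() {
    let mut pending = [false; NUM_CORES];
    // First wakeup: only core 0's scheduler is triggered.
    trigger_scheduler(&mut pending, 0);
    assert_eq!(pending, [true, false]);
    // Second wakeup before core 0's scheduler ran: core 1 is triggered
    // too, so the two highest-prio threads both get to run.
    trigger_scheduler(&mut pending, 0);
    assert_eq!(pending, [true, true]);
    println!("double-trigger checks passed");
}
```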
LGTM. Good job!
Congratulations @elenaf9!
Description
This PR adds support for symmetric multiprocessing (SMP) for `riot-rs-threads`. It's a refactor and continuation of #241. Right now, only the RP2040 is supported. I am planning to also add support for the esp32s3 in a follow-up PR.
The implementation follows a global scheduling approach where the highest n ready threads are scheduled on the n available cores.
The main change in the scheduler logic is that it now removes the current thread from the runqueue, which is necessary so that a thread doesn't get picked twice. This also means that the thread has to be added to the runqueue again each time the scheduler is triggered (if it's still running).
This introduces some overhead if the scheduler is triggered unnecessarily. The PR therefore also includes optimizations that make sure that the scheduler is only triggered when a context switch is actually needed.
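The pick/re-add behavior described above could be modeled on the host roughly as follows. This is a simplified sketch, not the real riot-rs-threads runqueue: threads are modeled as `(priority, id)` pairs in a `BinaryHeap`, and `schedule` is an illustrative name.

```rust
use std::collections::BinaryHeap;

/// One scheduler pass for a single core. `current` is the thread this
/// core is running (if it is still runnable).
fn schedule(
    runqueue: &mut BinaryHeap<(u8, u8)>,
    current: Option<(u8, u8)>,
) -> Option<(u8, u8)> {
    // The current thread was removed from the runqueue when it was
    // picked, so it must be re-added here if it is still runnable.
    if let Some(thread) = current {
        runqueue.push(thread);
    }
    // Pop (not peek) the highest-priority thread: removing it from the
    // shared runqueue is what prevents another core from picking it too.
    runqueue.pop()
}

fn main() {
    let mut rq = BinaryHeap::from(vec![(1u8, 10u8), (3, 11)]);
    // Core 0 picks the prio-3 thread and removes it from the queue.
    let t0 = schedule(&mut rq, None);
    assert_eq!(t0, Some((3, 11)));
    // Core 1 cannot pick the same thread; it gets the prio-1 thread.
    let t1 = schedule(&mut rq, None);
    assert_eq!(t1, Some((1, 10)));
    // On the next trigger, core 1 re-adds its still-running thread and
    // picks it again, since it is the best remaining candidate.
    assert_eq!(schedule(&mut rq, t1), Some((1, 10)));
    println!("runqueue checks passed");
}
```

The re-add step is exactly the per-trigger overhead mentioned above, which is why the PR avoids triggering the scheduler unless a context switch is actually needed.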
The implementation furthermore also supports affinity masks, which allow restricting a thread to certain cores.
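An affinity mask typically boils down to a per-thread bitmask checked during core selection; a minimal sketch, assuming a `u8` mask and the hypothetical helper name `may_run_on` (not the actual API):

```rust
/// Bit `i` of `affinity_mask` set means the thread may run on core `i`.
fn may_run_on(affinity_mask: u8, core: usize) -> bool {
    affinity_mask & (1 << core) != 0
}

fn main() {
    // Mask 0b01: the thread is restricted to core 0.
    assert!(may_run_on(0b01, 0));
    assert!(!may_run_on(0b01, 1));
    // Mask 0b11: the thread may run on either core of a dual-core chip.
    assert!(may_run_on(0b11, 1));
    println!("affinity checks passed");
}
```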
All multicore logic is feature-gated, so that the single-core implementation largely remains the same. I still notice some difference in preliminary benchmark data, which I am still investigating.
Issues/PRs references
Tracking issue: #243
Depends on:
- `bench_sched_flags`: #456
- `riot-rs/threading` / `insw/threading` module: #457
- `critical-section` pick: #458

Open Questions
Idle threads
If there are no ready threads in the runqueue, the current (= on main) `sched` implementation `WFI`s in a loop. The context of the previous thread is only saved after a new thread is ready and the context is actually switched. This causes some issues on multicore:
By adding idle threads, both of these issues are fixed and we furthermore avoid footguns related to interrupt priorities (e.g. right now we require that all embassy interrupts have a higher priority than `PendSV` so that they can preempt our scheduler when it's stuck in `WFI`). Wdyt?

Open TODOs
- Benchmark main (`multicore` not enabled) vs this PR (`multicore` enabled): `bench_sched_yield`, `bench_sched_flags` (new)

Change checklist