Add a way to forward accelerators to Docker containers #4492
Conversation
Two minor comments, otherwise LGTM.
LGTM.
@@ -37,6 +38,37 @@ def have_working_nvidia_smi() -> bool:
         return False
     return True
 
+@memoize
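The diff above adds a `@memoize` decorator to an expensive hardware probe. A minimal sketch of the same idea, using the standard library's `functools.lru_cache` in place of Toil's own `memoize` helper (the probe body here is an illustrative assumption, not Toil's exact implementation):

```python
import functools
import subprocess

# Memoize the probe so nvidia-smi is only invoked once per process;
# later calls return the cached boolean without shelling out again.
@functools.lru_cache(maxsize=None)
def have_working_nvidia_smi() -> bool:
    try:
        subprocess.check_call(
            ['nvidia-smi'],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
    except (FileNotFoundError, subprocess.CalledProcessError, OSError):
        # nvidia-smi is missing or broken, so there are no usable NVIDIA GPUs.
        return False
    return True
```

Memoization is worthwhile here because the probe's result cannot change within a process's lifetime, while spawning `nvidia-smi` on every call would be slow.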
Nice. I usually forget to do this.
This fixes #4486 by adding a way for `apiDockerCall` to call Docker containers with GPUs, and to pass through the right GPUs according to the environment variables that Toil sees. We don't have any GPUs on CI to test this yet, sadly.
I looked, and we don't have to do anything else with the Slurm GPU environment variables for WDL. MiniWDL's Docker executor can't use GPUs, and the Singularity one runs the container under the current cgroup, so it can't escape Slurm's confinement.
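A hedged sketch of how GPU selection from the environment might look. The function name and fallback behavior are illustrative assumptions, not Toil's actual code; the idea is simply to read the restriction a scheduler like Slurm expresses through `CUDA_VISIBLE_DEVICES`:

```python
import os
from typing import Dict, List, Optional

def visible_gpu_ids(environ: Optional[Dict[str, str]] = None) -> List[str]:
    """Return the GPU device IDs the current environment allows us to see.

    Schedulers commonly restrict GPU access by setting CUDA_VISIBLE_DEVICES
    to a comma-separated list of device IDs. An empty or missing value means
    no restriction information is available, so we return an empty list.
    """
    env = os.environ if environ is None else environ
    value = env.get('CUDA_VISIBLE_DEVICES', '')
    return [part.strip() for part in value.split(',') if part.strip()]
```

The resulting IDs could then be handed to docker-py when starting a container, e.g. via `docker.types.DeviceRequest(device_ids=..., capabilities=[['gpu']])`, which is the mechanism behind `docker run --gpus`.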
Changelog Entry
To be copied to the draft changelog by merger:
- Added accelerator (GPU) forwarding support to `apiDockerCall()`.

Reviewer Checklist
- PR branch is named `issues/XXXX-fix-the-thing` in the Toil repo, or comes from an external repo.
- No new identifiers in `camelCase` that want to be in `snake_case`.
- Checked that `docs/running/{cliOptions,cwl,wdl}.rst` are updated if needed.
Merger Checklist