
Add a way to forward accelerators to Docker containers #4492

Merged

Conversation

adamnovak
Member

This fixes #4486 by adding a way for apiDockerCall to launch Docker containers with GPUs, and to pass through the right GPUs according to the environment variables that Toil sees.

We don't have any GPUs on CI to test this yet, sadly.

I checked, and we don't need to do anything else with the Slurm GPU environment variables for WDL: MiniWDL's Docker executor can't use GPUs, and its Singularity executor runs the container under the current cgroup, so it can't escape Slurm's confinement.
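
For context, GPU passthrough with the Docker Python SDK works by attaching device requests to the container. The following is a rough, hypothetical sketch of the idea (the names and details in the actual apiDockerCall change may differ), using docker-py's DeviceRequest and the environment variables that Slurm and CUDA typically set:

    import os
    import docker
    from docker.types import DeviceRequest

    def gpu_device_requests():
        # Hypothetical helper: build device requests for the GPUs this
        # process is allowed to use, based on common environment variables.
        visible = os.environ.get('CUDA_VISIBLE_DEVICES') or os.environ.get('SLURM_STEP_GPUS')
        if visible:
            device_ids = [d.strip() for d in visible.split(',') if d.strip()]
            return [DeviceRequest(driver='nvidia', device_ids=device_ids, capabilities=[['gpu']])]
        # No restriction detected: ask for all GPUs.
        return [DeviceRequest(driver='nvidia', count=-1, capabilities=[['gpu']])]

    client = docker.from_env()
    client.containers.run('nvidia/cuda:12.2.0-base-ubuntu22.04', 'nvidia-smi',
                          device_requests=gpu_device_requests())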

Changelog Entry

To be copied to the draft changelog by merger:

  • Toil can now send the right accelerators to containers launched with apiDockerCall().

Reviewer Checklist

  • Make sure it is coming from issues/XXXX-fix-the-thing in the Toil repo, or from an external repo.
    • If it is coming from an external repo, make sure to pull it in for CI with:
      contrib/admin/test-pr otheruser theirbranchname issues/XXXX-fix-the-thing
      
    • If there is no associated issue, create one.
  • Read through the code changes. Make sure that it doesn't have:
    • Addition of trailing whitespace.
    • New variable or member names in camelCase that want to be in snake_case.
    • New functions without type hints.
    • New functions or classes without informative docstrings.
    • Changes to semantics not reflected in the relevant docstrings.
    • New or changed command line options for Toil workflows that are not reflected in docs/running/{cliOptions,cwl,wdl}.rst
    • New features without tests.
  • Comment on the lines of code where problems exist with a review comment. You can shift-click the line numbers in the diff to select multiple lines.
  • Finish the review with an overall description of your opinion.

Merger Checklist

  • Make sure the PR passes tests.
  • Make sure the PR has been reviewed since its last modification. If not, review it.
  • Merge with the GitHub "Squash and merge" feature.
    • If there are multiple authors' commits, add Co-authored-by to give credit to all contributing authors.
  • Copy its recommended changelog entry to the Draft Changelog.
  • Append the issue number in parentheses to the changelog entry.

@DailyDreaming (Member) left a comment:

Two minor comments, otherwise LGTM.

src/toil/lib/docker.py (review thread, outdated, resolved)
src/toil/lib/docker.py (review thread, outdated, resolved)
@DailyDreaming (Member) left a comment:

LGTM.

@@ -37,6 +38,37 @@ def have_working_nvidia_smi() -> bool:
        return False
    return True

@memoize
Member comment:

Nice. I usually forget to do this.
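
As an aside, memoizing an environment-dependent probe like this just means the expensive check runs once per process. A minimal sketch of the pattern, using functools.lru_cache in place of Toil's own memoize decorator (not the PR's actual code):

    import subprocess
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def have_working_nvidia_smi() -> bool:
        # Probe once and cache the result for the rest of the process.
        try:
            subprocess.check_call(['nvidia-smi'], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        except (OSError, subprocess.CalledProcessError):
            return False
        return True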

@adamnovak merged commit b301e91 into DataBiosphere:master on Jul 6, 2023
Successfully merging this pull request may close these issues.

Support SLURM_STEP_GPUS and --gpus argument to Docker