
Allow CWL workflows to have jobs use all of a Slurm node's memory #5052

Open

adamnovak wants to merge 24 commits into master

Conversation

adamnovak
Member

This should fix #4971. Instead of passing --defaultMemory 0, you would now pass --no-cwl-default-ram --slurmDefaultAllMem=True to run CWL jobs that lack their own ramMin with a full Slurm node's memory.
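
For example, a hypothetical invocation (the workflow and input file names are placeholders; --batchSystem is Toil's existing option for selecting Slurm, and the two memory flags are the ones added in this PR):

    toil-cwl-runner --batchSystem slurm \
        --no-cwl-default-ram --slurmDefaultAllMem=True \
        workflow.cwl inputs.yml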

This might cause some new problems:

  • Other internal CWL runner jobs that expected to use the default memory would now use all of the memory on their node if they are submitted to the cluster.
  • Unless the user passes --no-cwl-default-ram, we now use the CWL spec's required default memory for jobs that don't specify a limit, instead of --defaultMemory. Previously, I think we were ignoring the spec and always using Toil's --defaultMemory. This might break some workflow runs that only used to work because we gave them more memory than the spec called for.

Also, #4971 says we're supposed to implement a real framework for doing this kind of memory expansion across all batch systems that support it, but I didn't want to add a new bool flag to Requirer for such a specific purpose. If we do need that, we should probably combine it with preemptible somehow into a tag/flag system. Alternatively, we could implement memory range requirements and allow the top of the range to be unbounded, or have the Slurm batch system treat some threshold upper limit as "all the node's memory".

Changelog Entry

To be copied to the draft changelog by merger:

  • Toil now has a --slurmDefaultAllMem option to run jobs lacking their own memory requirements with Slurm's --mem=0, so they get a whole node's memory.
  • toil-cwl-runner now has --no-cwl-default-ram (and --cwl-default-ram) to control whether the CWL spec's default ramMin is applied, or Toil's own default memory logic is used.
  • The --dont_allocate_mem and --allocate_mem options have been deprecated and replaced with --slurmAllocateMem, which can be True or False (see the example below).
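
For example, the option migration looks like this (a sketch; the invocation and file names are illustrative, while the option names come from this PR):

    # Deprecated spelling:
    toil-cwl-runner --batchSystem slurm --allocate_mem workflow.cwl inputs.yml

    # Replacement spelling:
    toil-cwl-runner --batchSystem slurm --slurmAllocateMem=True workflow.cwl inputs.yml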

Reviewer Checklist

  • Make sure it is coming from issues/XXXX-fix-the-thing in the Toil repo, or from an external repo.
    • If it is coming from an external repo, make sure to pull it in for CI with:
      contrib/admin/test-pr otheruser theirbranchname issues/XXXX-fix-the-thing
      
    • If there is no associated issue, create one.
  • Read through the code changes. Make sure that it doesn't have:
    • Addition of trailing whitespace.
    • New variable or member names in camelCase that want to be in snake_case.
    • New functions without type hints.
    • New functions or classes without informative docstrings.
    • Changes to semantics not reflected in the relevant docstrings.
    • New or changed command line options for Toil workflows that are not reflected in docs/running/{cliOptions,cwl,wdl}.rst
    • New features without tests.
  • Comment on the lines of code where problems exist with a review comment. You can shift-click the line numbers in the diff to select multiple lines.
  • Finish the review with an overall description of your opinion.

Merger Checklist

  • Make sure the PR passes tests.
  • Make sure the PR has been reviewed since its last modification. If not, review it.
  • Merge with the GitHub "Squash and merge" feature.
    • If there are multiple authors' commits, add Co-authored-by to give credit to all contributing authors.
  • Copy its recommended changelog entry to the Draft Changelog.
  • Append the issue number in parentheses to the changelog entry.

@adamnovak adamnovak changed the title Issues/4971 slurm node memory Allow CWL workflows to have jobs use all of a Slurm node's memory Aug 8, 2024
@adamnovak
Member Author

I still need to manually test this to make sure it actually does what it is meant to do.

@adamnovak
Member Author

I wrote a test for this, and Toil does indeed seem to issue jobs that ask for whole Slurm nodes when the two new options are used together.

I also fixed Slurm job cleanup when a workflow is killed; cleanup wasn't happening before because shutdown() in AbstractGridEngineBatchSystem wasn't doing any killing. I needed this so the test doesn't leave behind pending jobs when no entire cluster node is free.
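
On the Slurm side, that cleanup is roughly equivalent to cancelling each still-outstanding job with scancel (a conceptual sketch, not the actual Toil implementation; the real batch system only cancels the job IDs it issued, whereas this loop would cancel everything the user has queued):

    # List this user's queued and running Slurm job IDs, then cancel each one.
    for job_id in $(squeue -h -u "$USER" -o %i); do
        scancel "$job_id"
    done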

@adamnovak adamnovak marked this pull request as ready for review August 8, 2024 20:46
@adamnovak
Member Author

@DailyDreaming Can you review this?

Successfully merging this pull request may close these issues.

toil-cwl-runner used to allow --defaultMemory 0, which has special meaning to Slurm, but now no longer does