
Enable dynamic GPU scheduling #79

Open
@ksatzke

Description

Currently, the resource limits for KNIX components deployed via the Helm charts are fixed at deployment time, like so:

resources:
  limits:
    cpu: 1
    memory: 2Gi
  requests:
    cpu: 1
    memory: 1Gi

GPU support should also be configurable per workflow at deployment time, so that a workflow can declare that it needs to run on GPUs instead of CPUs, and so that KNIX can schedule the workflow on a node that still has sufficient GPU cores available, like so:

resources:
  limits:
    cpu: 1
    memory: 2Gi
    nvidia.com/gpu: 1 # requesting 1 GPU
This requires the following changes:
  • add the option to define GPU requirements per workflow to the GUI
  • store workflow requirement limits together with the workflow data
  • extend the management service to evaluate and handle workflow GPU requirement limits and perform GPU scheduling
  • add node labelling capabilities to KNIX
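The scheduling step in the list above could be sketched as follows. This is a minimal illustration only: the `pick_gpu_node` function and the node-capacity dictionary are hypothetical stand-ins for whatever node data the extended management service would obtain from the Kubernetes API (e.g. the allocatable `nvidia.com/gpu` count per node), not existing KNIX code.

```python
def pick_gpu_node(nodes, required_gpus):
    """Return the name of a node with enough free GPUs, or None.

    `nodes` maps node name -> {"allocatable": int, "used": int},
    mirroring the `nvidia.com/gpu` resource tracked by Kubernetes.
    """
    candidates = [
        (name, caps["allocatable"] - caps["used"])
        for name, caps in nodes.items()
        if caps["allocatable"] - caps["used"] >= required_gpus
    ]
    if not candidates:
        return None  # no node can satisfy the workflow's GPU limit
    # Best-fit: prefer the node with the fewest free GPUs that still
    # fits the request, keeping larger GPU blocks available.
    return min(candidates, key=lambda c: c[1])[0]

# Illustrative node data (would come from the Kubernetes API):
nodes = {
    "node-a": {"allocatable": 4, "used": 3},  # 1 GPU free
    "node-b": {"allocatable": 8, "used": 2},  # 6 GPUs free
}
print(pick_gpu_node(nodes, 2))  # -> node-b
```

In a real implementation the selected node would then be targeted via a node label and a corresponding nodeSelector on the workflow's deployment.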

Metadata

Labels

  • design: the issue is related to the high-level architecture
  • env/kubernetes: to indicate something specific to the Kubernetes setup of KNIX
  • feature_request: new feature request
  • help wanted: extra attention is needed
  • in progress: this issue is already being fixed
