sangyx/gpu_task_scheduler

GPU Task Scheduler

A simple Python lib to schedule and execute deep learning tasks on available GPUs. It supports:

  • Custom selection of GPUs.
  • Maximum concurrency per GPU.
  • Dynamic task dispatch using threading.

📦 Installation

```shell
pip install gpu_task_scheduler
```

🚀 Usage Example

```python
from gpu_task_scheduler import GPUTaskScheduler

# Each task is a shell command to run once a GPU becomes available.
tasks = [
    "python train_model.py --epochs 10",
    "bash run_simulation.sh",
    "python evaluate.py --model checkpoint.pth"
]

scheduler = GPUTaskScheduler(
    wait_interval=10,        # re-check availability every 10 s
    allowed_gpu_ids=[0, 1],  # only use GPUs 0 and 1
    max_tasks_per_gpu=2,     # up to two concurrent tasks per GPU
    min_memory=0.8           # require 80% free memory
)
scheduler.run_tasks(tasks)
```
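Under the hood, a scheduler like this typically polls `nvidia-smi` and dispatches queued commands to GPUs that meet the `min_memory` threshold. Below is a minimal sketch of that selection step, assuming a `min_memory` of at most 1 is read as a free/total ratio and larger values as MiB; the function name and parsing logic here are illustrative, not the library's actual implementation:

```python
import csv
import io

def eligible_gpus(nvidia_smi_csv, allowed_gpu_ids=None, min_memory=0.8):
    """Return indices of GPUs whose free memory meets the threshold.

    `nvidia_smi_csv` is the output of:
        nvidia-smi --query-gpu=index,memory.free,memory.total \
                   --format=csv,noheader,nounits
    A min_memory <= 1 is treated as a free/total ratio; larger values as MiB.
    (This interpretation is an assumption, not documented library behavior.)
    """
    eligible = []
    for row in csv.reader(io.StringIO(nvidia_smi_csv)):
        index, free, total = (int(x) for x in row)
        if allowed_gpu_ids is not None and index not in allowed_gpu_ids:
            continue
        threshold = min_memory * total if min_memory <= 1 else min_memory
        if free >= threshold:
            eligible.append(index)
    return eligible

sample = "0, 9000, 11264\n1, 2000, 11264\n"
print(eligible_gpus(sample, allowed_gpu_ids=[0, 1], min_memory=0.5))  # → [0]
```

With `min_memory=0.5`, only GPU 0 qualifies (9000 MiB free out of 11264); GPU 1 falls below the 50% threshold.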

⚙️ Parameters

```python
GPUTaskScheduler(wait_interval=30, allowed_gpu_ids=None, max_tasks_per_gpu=1, min_memory=0.8)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `wait_interval` | int | 30 | Seconds to wait before checking again when no GPU is available. |
| `allowed_gpu_ids` | list[int] | None | GPU indices (e.g., `[0, 1]`) the scheduler is allowed to use. If `None`, all GPUs are considered. |
| `max_tasks_per_gpu` | int | 1 | Maximum number of concurrent tasks allowed on a single GPU. Helps share a GPU among multiple tasks. |
| `min_memory` | int or float | 0.8 | Minimum free memory (in MiB, or as a ratio of total) required for a GPU to be considered available. |
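Once a GPU is selected, a common way for schedulers like this to pin a task to it is to set `CUDA_VISIBLE_DEVICES` in the child process's environment. Whether gpu_task_scheduler uses exactly this mechanism is an assumption; the pattern looks like:

```python
import os
import subprocess

def launch_on_gpu(command, gpu_id):
    """Start `command` pinned to one GPU by setting CUDA_VISIBLE_DEVICES.

    A generic sketch of the dispatch pattern, not the library's own code.
    """
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    return subprocess.Popen(command, shell=True, env=env)

proc = launch_on_gpu("echo training on GPU 1", gpu_id=1)
proc.wait()
```

Inside the child process, CUDA then sees only the chosen device, so frameworks like PyTorch or TensorFlow treat it as device 0.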

🛠️ Requirements

  • Python 3.6+
  • nvidia-smi available in PATH (NVIDIA drivers installed)
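A quick way to check the `nvidia-smi` requirement from Python (a generic PATH lookup, not part of the library):

```python
import shutil

def nvidia_smi_available():
    """Return True if `nvidia-smi` is on PATH, i.e. NVIDIA drivers are installed."""
    return shutil.which("nvidia-smi") is not None

if not nvidia_smi_available():
    print("nvidia-smi not found; GPU scheduling will not work on this machine")
```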
