NVIDIA NeMo Framework

NeMo Framework is NVIDIA's GPU-accelerated, end-to-end training framework for large language models (LLMs), multimodal models, and speech models. It enables seamless scaling of training workloads (both pretraining and post-training) from a single GPU to thousand-node clusters, for both 🤗Hugging Face/PyTorch and Megatron models. This GitHub organization hosts a suite of libraries and recipe collections to help users train models from end to end.

NeMo Framework is also part of the NVIDIA NeMo software suite for managing the AI agent lifecycle.


Figure 1. NeMo Framework Repo Overview

Visit the individual repos to find out more 🔍, raise 🐛, contribute ✍️ and participate in discussion forums 🗣️!

| Repo | Summary | Training Loop | Training Backends | Inference Backends | Model Coverage |
|---|---|---|---|---|---|
| NeMo Megatron-Bridge | Pretraining, LoRA, SFT | PyT native loop | Megatron-core | NA | LLM & VLM |
| NeMo AutoModel | Pretraining, LoRA, SFT | PyT native loop | PyTorch DTensor | NA | LLM, VLM, Omni, VFM |
| NeMo 1.x & 2.x (with Lightning) -> will repurpose to focus on Speech | Pretraining, SFT | PyTorch Lightning loop | PyTorch | RIVA | Speech |
| NeMo RL | SFT, RL | PyT native loop | Megatron-core, PyT DTensor | vLLM | LLM, VLM |
| NeMo Aligner (deprecated) | SFT, RL | PyT Lightning loop | Megatron-core | TRTLLM | LLM |
| NeMo Curator | Data curation | NA | NA | NA | Agnostic |
| NeMo Eval | Model evaluation | NA | NA | NA | Agnostic |
| NeMo Export-Deploy | Export to production | NA | NA | vLLM, TRT, TRTLLM, ONNX | Agnostic |
| NeMo Run | Experiment launcher | NA | NA | NA | Agnostic |
| NeMo Guardrails (to be added to the GitHub org) | Guardrail model responses | NA | NA | NA | NA |
| NeMo Skills (to be added to the GitHub org) | Reference pipelines for SDG & Eval | NA | NA | NA | Agnostic |
| NeMo VFM | Video foundation model training | PyT native loop | Megatron-core and PyTorch | PyTorch | VFM, Diffusion |

Table 1. NeMo Framework Repos

📢 Also take a look at our blogs for the latest exciting things that we are working on!

Background and motivation

The NeMo GitHub org and its repo collection were created to address the following problems:

  • Need for composability: The previous NeMo repo was monolithic and encompassed too many components, making it hard for users to find what they needed; container size was also an issue. Breaking the monolithic repo into a series of function-focused repos makes code easier to discover.
  • Need for customizability: The previous NeMo used PyTorch Lightning as the default trainer loop, which provides some out-of-the-box functionality but makes customization difficult. NeMo Megatron-Bridge, NeMo AutoModel, and NeMo RL have adopted PyTorch-native custom loops to improve flexibility and ease of use for developers.

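To make the customizability point concrete: a "native loop" is simply an explicit Python `for` loop that the developer owns end to end, rather than logic hidden behind a Trainer's hooks. The sketch below is a framework-free toy (plain-Python SGD on a one-parameter linear model), not NeMo code; it only illustrates the loop structure that such designs expose for customization.

```python
import random

def train_native_loop(data, lr=0.05, epochs=200):
    """A toy 'native' training loop: every step is explicit and editable.

    Fits y = w * x by manual gradient descent on squared error. In a
    hook-driven trainer these steps live behind callbacks; in a
    PyTorch-native loop the developer writes, and can freely modify,
    each line below.
    """
    w = 0.0
    for epoch in range(epochs):
        random.shuffle(data)                 # custom data ordering
        for x, y in data:
            pred = w * x                     # forward pass
            grad = 2.0 * (pred - y) * x      # gradient of (pred - y)**2 w.r.t. w
            w -= lr * grad                   # optimizer step
        # custom logging, evaluation, or checkpointing slots in here
    return w

# Fit y = 3x from a few noiseless samples.
data = [(x, 3.0 * x) for x in (1.0, 2.0, 3.0)]
print(round(train_native_loop(data), 2))  # → 3.0
```

Because the loop body is ordinary code, injecting a new loss term, a different data schedule, or an extra evaluation pass is a local edit rather than a subclass-and-override exercise.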
Documentation

To learn more about NVIDIA NeMo Framework and all of its component libraries, please refer to the NeMo Framework User Guide, which includes a quick-start guide, tutorials, model-specific recipes, best-practice guides, and performance benchmarks.

License

Apache 2.0 licensed with third-party attributions documented in each repository.
