Beyond The Demo: Production-Grade AI Systems

ZenML brings battle-tested MLOps practices to your AI applications, handling evaluation, monitoring, and deployment at scale


Need help with documentation? Visit our docs site for comprehensive guides and tutorials, or browse the SDK reference to find specific functions and classes.

⭐️ Show Your Support

If you find ZenML helpful or interesting, please consider giving us a star on GitHub. Your support helps promote the project and lets others know that it's worth checking out.

Thank you for your support! 🌟

Star this project

🀸 Quickstart

Open In Colab

Install ZenML via PyPI. Python 3.9 - 3.12 is required:

pip install "zenml[server]" notebook

Take a tour with the guided quickstart by running:

zenml go

πŸͺ„ From Prototype to Production: AI Made Simple

Create AI pipelines with minimal code changes

ZenML is an open-source framework that handles MLOps and LLMOps for engineers scaling AI beyond prototypes. Automate evaluation loops, track performance, and deploy updates across 100s of pipelinesβ€”all while your RAG apps run like clockwork.

from zenml import pipeline, step

@step
def load_rag_documents() -> dict:
    # Load and chunk documents for RAG pipeline
    documents = extract_web_content(url="https://www.zenml.io/")
    return {"chunks": chunk_documents(documents)}

@step
def generate_embeddings(data: dict) -> dict:
    # Generate embeddings for RAG pipeline
    embeddings = embed_documents(data['chunks'])
    return {"embeddings": embeddings}

@step
def index_generator(
    embeddings: dict,
) -> str:
    # Generate index for RAG pipeline
    index = create_index(embeddings)
    return index.id

@pipeline
def rag_pipeline() -> str:
    documents = load_rag_documents()
    embeddings = generate_embeddings(documents)
    index = index_generator(embeddings)
    return index
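
Invoking the decorated function executes the pipeline on your active stack. A minimal run sketch (the helpers above, like extract_web_content and create_index, are placeholders for your own ingestion and indexing logic):

if __name__ == "__main__":
    # Runs the pipeline on whichever ZenML stack is currently active
    # (the local default stack unless you have set another one).
    rag_pipeline()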

Running a ZenML pipeline

Easily provision an MLOps stack or reuse your existing infrastructure

The framework is a gentle entry point for practitioners to build complex ML pipelines with little knowledge of the underlying infrastructure required. ZenML pipelines can run on AWS, GCP, Azure, Airflow, Kubeflow, and even plain Kubernetes without any code changes or knowledge of the underlying internals.

ZenML also provides features to help you get started quickly in a remote setting. If you want to deploy a remote stack from scratch on your cloud provider of choice, you can use the 1-click deployment feature, either through the dashboard:


Or, through our CLI command:

zenml stack deploy --provider aws

Alternatively, if the necessary pieces of infrastructure are already deployed, you can register a cloud stack seamlessly through the stack wizard:

zenml stack register <STACK_NAME> --provider aws

Read more about ZenML stacks.

Run workloads easily on your production infrastructure

Once you have your MLOps stack configured, you can easily run workloads on it:

zenml stack set <STACK_NAME>
python run.py

from zenml import step
from zenml.config import ResourceSettings, DockerSettings

@step(
    settings={
        "resources": ResourceSettings(memory="16GB", gpu_count=1, cpu_count=8),
        "docker": DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime"),
    }
)
def training():
    ...
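
The same settings can also be applied at the pipeline level, where every step inherits them unless it overrides them. A minimal sketch, assuming a hypothetical training_pipeline that wraps the step above:

from zenml import pipeline
from zenml.config import ResourceSettings

# A minimal sketch: settings passed to @pipeline apply to all steps,
# unless a step supplies its own settings (as training() does above).
@pipeline(settings={"resources": ResourceSettings(cpu_count=4, memory="8GB")})
def training_pipeline():
    training()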

Workloads with ZenML

Track models, pipelines, and artifacts

Create a complete lineage of who produced which data and models, where, and when.

You'll be able to find out who produced which model, at what time, with which data, and on which version of the code. This guarantees full reproducibility and auditability.

from zenml import Model

@step(model=Model(name="rag_llm", tags=["staging"]))
def deploy_rag(index_id: str) -> str:
    deployment_id = deploy_to_endpoint(index_id)
    return deployment_id
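
Anything tracked this way can be queried back later. A minimal sketch using the Model class, assuming a "staging" version of rag_llm already exists on your server:

from zenml import Model

# A minimal sketch: reference the staged model version and inspect it.
model = Model(name="rag_llm", version="staging")
print(model.id, model.name, model.version)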

Exploring ZenML Models

πŸš€ Key LLMOps Capabilities

Continual RAG Improvement

Build production-ready retrieval systems

RAG Pipeline

ZenML tracks document ingestion, embedding versions, and query patterns. Implement feedback loops (a sketch follows the list below) and:

  • Fix your RAG logic based on production logs
  • Automatically re-ingest updated documents
  • A/B test different embedding models
  • Monitor retrieval quality metrics
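
A minimal sketch of such a feedback loop; evaluate_retrieval and reingest_documents are hypothetical helpers you would implement against your own logs and document sources:

from zenml import pipeline, step

@step
def check_retrieval_quality(index_id: str) -> float:
    # evaluate_retrieval is a hypothetical helper that scores the index
    # against logged production queries.
    return evaluate_retrieval(index_id)

@step
def reingest_if_degraded(score: float) -> None:
    # The 0.8 quality threshold is an assumption; tune it to your metrics.
    if score < 0.8:
        reingest_documents()  # hypothetical helper

@pipeline
def rag_feedback_pipeline(index_id: str):
    score = check_retrieval_quality(index_id)
    reingest_if_degraded(score)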

Reproducible Model Fine-Tuning

Confidence in model updates

Finetuning Pipeline

Maintain full lineage of SLM/LLM training runs (a promotion sketch follows the list):

  • Version training data and hyperparameters
  • Track performance across iterations
  • Automatically promote validated models
  • Roll back to previous versions if needed
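
A minimal sketch of the promotion step, assuming a model named finetuned_llm; the model name and the accuracy gate are assumptions:

from zenml import Model, step

@step
def promote_if_validated(accuracy: float) -> None:
    latest = Model(name="finetuned_llm", version="latest")
    if accuracy > 0.9:  # promotion gate is an assumption
        # Move this version into the production stage; the previously
        # promoted version remains available for rollback.
        latest.set_stage("production", force=True)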

Purpose built for machine learning with integrations to your favorite tools

While ZenML brings a lot of value out of the box, it also integrates into your existing tooling and infrastructure without you having to be locked in.

import mlflow
import pandas as pd

from bentoml._internal.bento import bento
from zenml import step

# alert_slack is an on-failure hook defined elsewhere in your code.
@step(on_failure=alert_slack, experiment_tracker="mlflow")
def train_and_deploy(training_df: pd.DataFrame) -> bento.Bento:
    mlflow.autolog()
    ...
    return bento

Exploring ZenML Integrations

πŸ”„ Your LLM Framework Isn't Enough for Production

While tools like LangChain and LlamaIndex help you build LLM workflows, ZenML helps you productionize them by adding the following (see the sketch after this list):

βœ… Artifact Tracking - Every vector store index, fine-tuned model, and evaluation result versioned automatically
βœ… Pipeline History - See exactly what code/data produced each version of your RAG system
βœ… Stage Promotion - Move validated pipelines from staging β†’ production with one click

πŸ–ΌοΈ Learning

The best way to learn about ZenML is the docs. We recommend beginning with the Starter Guide to get up and running quickly.

If you are a visual learner, this 11-minute video tutorial is also a great start:

Introductory YouTube video

And finally, here are some other examples and use cases for inspiration:

  1. E2E Batch Inference: Feature engineering, training, and inference pipelines for tabular machine learning.
  2. Basic NLP with BERT: Feature engineering, training, and inference focused on NLP.
  3. LLM RAG Pipeline with Langchain and OpenAI: Using Langchain to create a simple RAG pipeline.
  4. Huggingface Model to Sagemaker Endpoint: Automated MLOps on Amazon SageMaker and Hugging Face.
  5. LLMOps: A complete guide to LLMOps with ZenML.

πŸ“š Learn from Books

LLM Engineer's Handbook · Machine Learning Engineering with Python

ZenML is featured in these comprehensive guides to modern MLOps and LLM engineering. Learn how to build production-ready machine learning systems with real-world examples and best practices.

πŸ”‹ Deploy ZenML

For full functionality, ZenML should be deployed in the cloud so it can serve as the central MLOps interface for teams and enable collaborative features.

Read more about various deployment options here.

Or, sign up for ZenML Pro to get a fully managed server on a free trial.

Use ZenML with VS Code

ZenML has a VS Code extension that allows you to inspect your stacks and pipeline runs directly from your editor. The extension also allows you to switch your stacks without needing to type any CLI commands.

πŸ–₯️ VS Code Extension in Action!
ZenML Extension

πŸ—Ί Roadmap

ZenML is being built in public. The roadmap is a regularly updated source of truth for the ZenML community to understand where the product is going in the short, medium, and long term.

ZenML is managed by a core team of developers that are responsible for making key decisions and incorporating feedback from the community. The team oversees feedback via various channels, and you can directly influence the roadmap through those channels.

πŸ™Œ Contributing and Community

We would love to develop ZenML together with our community! The best way to get started is to select any issue with the good first issue label (https://github.com/issues?q=is%3Aopen+is%3Aissue+archived%3Afalse+user%3Azenml-io+label%3A%22good+first+issue%22) and open up a Pull Request!

If you would like to contribute, please review our Contributing Guide for all relevant details.

πŸ†˜ Getting Help

The first port of call should be our Slack group. Ask your questions about bugs or specific use cases, and someone from the core team will respond. Or, if you prefer, open an issue on our GitHub repo.

πŸ“š LLM-focused Learning Resources

  1. LLM Complete Guide - Full RAG Pipeline - Document ingestion, embedding management, and query serving
  2. LLM Fine-Tuning Pipeline - From data prep to deployed model
  3. LLM Agents Example - Track conversation quality and tool usage

πŸ€– AI-Friendly Documentation with llms.txt

ZenML implements the llms.txt standard to make our documentation more accessible to AI assistants and LLMs. This structured approach helps AI tools better understand and utilize ZenML's documentation, enabling more accurate code suggestions and improved documentation search.

πŸ“œ License

ZenML is distributed under the terms of the Apache License Version 2.0. A complete version of the license is available in the LICENSE file in this repository. Any contribution made to this project will be licensed under the Apache License Version 2.0.

Join our Slack community and be part of the ZenML family.

Features · Roadmap · Report Bug · Sign up for ZenML Pro · Read Blog · Contribute to Open Source · Projects Showcase

πŸŽ‰ Version 0.75.0 is out. Check out the release notes here.
πŸ–₯️ Download our VS Code Extension here.