

RaptorML - Production-ready feature engineering

From notebook to production

Transform your data science into production-ready artifacts


Raptor frees data scientists and ML engineers to build and deploy operational models and ML-driven functionality, without learning backend engineering.

It compiles your Python research code into production artifacts and takes care of engineering concerns such as scalability and reliability, using best practices on Kubernetes.

Explore the docs »

Getting started in 5 minutes » · Report a Bug · Request a Feature

RaptorML Screen Shot

🧐 What is Raptor?

Raptor frees data scientists and ML engineers to focus on data science and research, and to build operational models and ML-driven functionality without learning backend engineering. Focus on what you're good at, increase your end-to-end velocity, and close the gap between research and production.

With Raptor, you can export your Python research code as standard production artifacts and deploy them to Kubernetes. Once they are deployed, Raptor optimizes data processing and feature calculation for production, deploys models to Sagemaker or Docker containers, connects to your production data sources, and handles scaling, high availability, caching, monitoring, and all other backend concerns.

Colab

😍 Why do people love Raptor, and how does it change their lives?

Raptor is made by and for data scientists and ML engineers. We know how hard it is to build models and deploy them as an integral part of your products, and we want to make it easier.

Before Raptor, data scientists had to work closely with backend engineers to build a "production version" of their work: connect to data sources, transform the data with Flink/Spark or even Java, create APIs, dockerize the model, handle scaling and high availability, and more.

High-level view of Raptor

With Raptor, data scientists can focus solely on their research and model development, then export their work to production. Raptor takes care of the rest: connecting to data sources, transforming the data, deploying and connecting the model, and more. Data scientists do what they do best, and Raptor handles the engineering.

⭐️ Key Features

  • Focus on your work: Raptor frees data scientists and ML engineers to focus on the model, without learning backend engineering. Stop worrying about engineering concerns, and focus on what you're good at.
  • Eliminate training/serving skew: Use the same code for training and for production, so the two can never drift apart (see the sketch after this list).
  • Real-time/on-demand: Raptor optimizes feature calculations and predictions to be performed at the time of the request.
  • Seamless caching and storage: Raptor uses an integrated caching system and stores your historical data for training purposes, so you won't need an additional data storage system such as a "feature store".
  • Turns data science work into production artifacts: Raptor implements Kubernetes best practices such as scaling, health checks, auto-recovery, monitoring, logging, and more.
  • Integrates with your R&D team: Raptor extends existing DevOps tools and infrastructure, connecting your ML research to the rest of your organization's R&D ecosystem via tools such as CI/CD and monitoring.
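A minimal sketch of what this looks like in practice, using only the API shown in the quick example below:

# One definition, with no separate hand-written "serving" version that can drift:
@feature(keys='customer_id', data_source=BankTransaction)
@aggregation(function=AggregationFunction.Sum, over='10h', granularity='1h')
def total_spend(this_row: BankTransaction, ctx: Context) -> float:
    return this_row['amount']

# Training: the model's TrainingContext materializes this same definition over
# historical data (df = ctx.features_and_labels() in the quick example).
# Production: the exported artifact evaluates the very same function at request time.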

(back to top)

πŸš€ Getting Started

To start, install the Raptor LabSDK. The LabSDK is a Python package that helps you develop models and features in notebooks or IDEs.

pip install raptor-labsdk

⚑ Quick Example

import pandas as pd
from raptor import *
from typing_extensions import TypedDict


# Declare the data source: historical CSV data for training, a streaming source in production
@data_source(
    training_data=pd.read_csv(
        'https://gist.githubusercontent.com/AlmogBaku/8be77c2236836177b8e54fa8217411f2/raw/hello_world_transactions.csv'),
    production_config=StreamingConfig()
)
class BankTransaction(TypedDict):
    customer_id: str
    amount: float
    timestamp: str


# Define features πŸ§ͺ
@feature(keys='customer_id', data_source=BankTransaction)
@aggregation(function=AggregationFunction.Sum, over='10h', granularity='1h')
def total_spend(this_row: BankTransaction, ctx: Context) -> float:
    """total spend by a customer in the last hour"""
    return this_row['amount']


@feature(keys='customer_id', data_source=BankTransaction)
@freshness(max_age='5h', max_stale='1d')
def amount(this_row: BankTransaction, ctx: Context) -> float:
    """total spend by a customer in the last hour"""
    return this_row['amount']


# Train the model πŸ€“
@model(
    keys='customer_id',
    input_features=['total_spend+sum'],
    input_labels=[amount],
    model_framework='sklearn',
    model_server='sagemaker-ack',
)
@freshness(max_age='1h', max_stale='100h')
def amount_prediction(ctx: TrainingContext):
    from sklearn.linear_model import LinearRegression
    df = ctx.features_and_labels()
    trainer = LinearRegression()
    trainer.fit(df[ctx.input_features], df[ctx.input_labels])
    return trainer


amount_prediction.export()  # Export to production πŸŽ‰

This generates a set of deployment artifacts in the out directory. The out directory also includes a Makefile that can be integrated into any CI/CD pipeline, or invoked manually.

Colab

(back to top)

πŸ₯Š How is Raptor different from ___?

MLOps platforms (MLFlow, Kubeflow, Metaflow, Sagemaker, VertexAI, etc.)

Traditional MLOps platforms focus on managing the lifecycle of ML resources; they are not designed for building operational models and features. Raptor is designed precisely for that, and it can be integrated with MLOps platforms.

Feature Stores (Hopsworks, Feast, etc.)

A feature store is a data storage system that stores pre-computed features for training and online serving. That means you need to orchestrate the pre-computation of the features, store them, connect them to your model, and write ad-hoc backend code.

Raptor takes a radically different approach. You focus on the model, and Raptor takes care of the rest. Raptor has a built-in caching system that allows you to achieve similar results to a feature store but without the need to orchestrate the data pipeline and the model deployment directly.
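The caching policy is part of the feature definition itself, rather than a separate system to operate. A minimal sketch, reusing the amount feature from the quick example above (see the docs for the exact semantics of max_age and max_stale):

@feature(keys='customer_id', data_source=BankTransaction)
@freshness(max_age='5h', max_stale='1d')  # declarative caching policy for served values
def amount(this_row: BankTransaction, ctx: Context) -> float:
    return this_row['amount']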

Model Servers (Sagemaker, BentoML, KServe, etc.)

Model servers are designed for serving models in production; they are not designed for building the models and features behind them. In fact, Raptor integrates seamlessly with model servers (such as Sagemaker, BentoML, etc.) to serve your models.

πŸ’‘ How does it work?

Working with Raptor starts in the research phase, in your notebook or IDE. Raptor lets you write your ML work in a way that can be translated for production purposes.

Models and features in Raptor are composed of a declarative part (via Python decorators) and function code. This split lets Raptor take over the heavy-lifting engineering concerns (such as aggregations or caching) by implementing the declarative part and optimizing that implementation for production.

Features are composed of a declarative part and function code
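For example, in the model definition from the quick example above, the decorator arguments form the declarative part and the function body is ordinary Python (a sketch, assuming the imports from the quick example; the inline notes are our reading of the arguments):

# Declarative part: everything Raptor needs to wire the model up in production
@model(
    keys='customer_id',                  # entity key
    input_features=['total_spend+sum'],  # features fed to the model
    input_labels=[amount],               # feature used as the label
    model_framework='sklearn',           # how the trained model is packaged
    model_server='sagemaker-ack',        # where the model is deployed
)
@freshness(max_age='1h', max_stale='100h')
def amount_prediction(ctx: TrainingContext):
    # Function code: ordinary Python training logic, unchanged from research
    ...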

After you are satisfied with your research results, "export" these definitions and deploy them to Kubernetes using standard tools. Once deployed, Raptor Core (the server-side part) extends Kubernetes with the ability to implement them: it takes care of the engineering concerns by managing and controlling Kubernetes-native resources such as deployments, connecting your production data sources, and running your business logic at scale.

You can read more about Raptor's architecture in the docs.

(back to top)

⎈ Production Installation

Raptor installation is not required for training purposes. You only need to install Raptor when deploying to production (or staging).

Learn more about production installation at the docs.

πŸ—οΈ Prerequisites

  1. Kubernetes cluster (including EKS, GKE, etc.)
  2. Redis server (> 2.8.9)
  3. Optional: Snowflake or S3 bucket (to record historical data for retraining purposes)

(back to top)

πŸ” Roadmap

  • S3 historical storage plugins
    • S3 storing
    • S3 fetching data - Spark
  • Deploy models to model servers
    • Sagemaker ACK
    • VertexAI
    • Seldon
    • Kubeflow
    • KFServing
    • Standalone
  • Large-scale training
  • Support more data sources
    • Kafka
    • GCP Pub/Sub
    • REST
    • Snowflake
    • BigQuery
    • gRPC
    • Redis
    • Postgres
    • GraphQL

See the open issues for a full list of proposed features (and known issues).

(back to top)

πŸ‘·β€ Contributing

Contributions make the open-source community a fantastic place to learn, inspire, and create. Any contributions you make are greatly appreciated (not only code, but also documentation, blog posts, and feedback) 😍.

Please fork the repo and create a pull request if you have a suggestion. You can also simply open an issue and choose "Feature Request" to give us feedback.

Don't forget to give the project a star! ⭐️

For more information about contributing code to the project, read the CONTRIBUTING.md file.

(back to top)

πŸ“ƒ License

Distributed under the Apache 2.0 License. Read the LICENSE file for more information.

(back to top)

πŸ‘« Joining the community

You can join the Raptor community on Slack, follow us on Twitter, and participate in the issues and pull requests.

Don't forget to give the project a star! ⭐️

(back to top)
