Mavs · [email protected] · Documentation · Slack
During the exploration phase of a machine learning project, a data scientist tries to find the optimal pipeline for their specific use case. This usually involves applying standard data cleaning steps, creating or selecting useful features, trying out different models, etc. Testing multiple pipelines requires many lines of code, and writing it all in the same notebook often makes it long and cluttered. On the other hand, using multiple notebooks makes it harder to compare the results and to keep an overview. On top of that, refactoring the code for every test can be quite time-consuming. How many times have you performed the same actions to pre-process a raw dataset? How many times have you copy-pasted code from an old repository to reuse it in a new use case?
ATOM is here to help solve these common issues. The package acts as a wrapper of the whole machine learning pipeline, helping the data scientist to rapidly find a good model for their problem. Avoid endless imports and documentation lookups. Avoid rewriting the same code over and over again. With just a few lines of code, it's now possible to perform basic data cleaning steps, select relevant features and compare the performance of multiple models on a given dataset, providing quick insights on which pipeline performs best for the task at hand.
Example steps taken by ATOM's pipeline:
- Data cleaning
  - Handle missing values
  - Encode categorical features
  - Detect and remove outliers
  - Balance the training set
- Feature engineering
  - Create new non-linear features
  - Select the most promising features
- Train and validate multiple models
  - Apply hyperparameter tuning
  - Fit the models on the training set
  - Evaluate the results on the test set
- Analyze the results
  - Get the scores on various metrics
  - Make plots to compare the model performances
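For comparison, here is a minimal sketch of what a few of these steps look like when wired together by hand in plain scikit-learn. The dataset, column names, and model choice below are synthetic stand-ins, not part of ATOM:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Synthetic toy data: one numeric and one categorical feature, with missing values
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "temp": rng.normal(20, 5, 200),
    "wind": rng.choice(["N", "S", "E", "W"], 200),
})
X.loc[::10, "temp"] = np.nan  # inject missing values
y = (X["temp"].fillna(20) > 20).astype(int)

# Every pipeline step becomes an explicit transformer you maintain yourself
pre = ColumnTransformer([
    ("num", SimpleImputer(strategy="median"), ["temp"]),
    ("cat", Pipeline([
        ("impute", SimpleImputer(strategy="most_frequent")),
        ("encode", OneHotEncoder(handle_unknown="ignore")),
    ]), ["wind"]),
])
model = Pipeline([("pre", pre), ("clf", LogisticRegression())])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```

Multiply this boilerplate by every model and preprocessing variant you want to try, and the appeal of a single wrapper becomes clear.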
- Multiple data cleaning and feature engineering classes
- 55+ classification, regression and forecast models to choose from
- Possibility to train multiple models with one line of code
- Fast implementation of hyperparameter tuning
- Easy way to compare the results from different models
- 50+ plots to analyze the data and model performance
- Avoid refactoring to test new pipelines
- Native support for GPU training
- Integration with polars, pyspark and pyarrow
- 30+ example notebooks to get you started
- Full integration with multilabel and multioutput datasets
- Native support for sparse datasets
- Built-in transformers for NLP pipelines
- Avoid endless imports and documentation lookups
Install ATOM's newest release easily via pip:

```bash
pip install -U atom-ml
```

or via conda:

```bash
conda install -c conda-forge atom-ml
```
ATOM contains a variety of classes and functions to perform data cleaning, feature engineering, model training, plotting and much more. The easiest way to use everything ATOM has to offer is through one of the main classes:
- ATOMClassifier for binary or multiclass classification tasks.
- ATOMForecaster for forecasting tasks.
- ATOMRegressor for regression tasks.
Let's walk through an example. Click on the SageMaker Studio Lab badge on top of this section to run this example yourself.
Make the necessary imports and load the data.
```python
import pandas as pd
from atom import ATOMClassifier

# Load the Australian Weather dataset
X = pd.read_csv("https://raw.githubusercontent.com/tvdboom/ATOM/master/examples/datasets/weatherAUS.csv")
X.head()
```
Initialize the ATOMClassifier or ATOMRegressor class. These classes are convenient wrappers for the whole machine learning pipeline. Unlike sklearn's API, they are initialized with the data you want to manipulate.
```python
atom = ATOMClassifier(X, y="RainTomorrow", n_rows=1000, verbose=2)
```
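Under the hood, ATOM samples the requested number of rows and creates a train/test split for you. A rough plain-pandas/sklearn equivalent of that bookkeeping is sketched below; the data is synthetic and the 80/20 split ratio is an assumption for illustration, not ATOM's documented default:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Stand-in data; the real example loads the Australian Weather CSV
X, y = make_classification(n_samples=5000, random_state=1)
df = pd.DataFrame(X).assign(target=y)

# Roughly what n_rows=1000 plus the internal train/test split amounts to
subset = df.sample(n=1000, random_state=1)
train, test = train_test_split(subset, test_size=0.2, random_state=1)
print(len(train), len(test))  # 800 200
```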
Data transformations are applied through atom's methods. For example, calling the impute method will initialize an Imputer instance, fit it on the training set and transform the whole dataset. The transformations are applied immediately after calling the method (no fit and transform commands necessary).
```python
atom.impute(strat_num="median", strat_cat="most_frequent")
atom.encode(strategy="target", max_onehot=8)
```
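The imputation call is roughly equivalent to fitting these scikit-learn transformers yourself. This is only a sketch on toy data: ATOM's actual Imputer and Encoder classes wrap more logic, and the one-hot step below only illustrates the low-cardinality branch that `max_onehot` controls:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({
    "humidity": [80.0, np.nan, 65.0, 70.0],
    "wind_dir": ["N", "S", np.nan, "N"],
})

# strat_num="median": fill numeric gaps with the column median
num = SimpleImputer(strategy="median")
df[["humidity"]] = num.fit_transform(df[["humidity"]])

# strat_cat="most_frequent": fill categorical gaps with the mode
cat = SimpleImputer(strategy="most_frequent")
df[["wind_dir"]] = cat.fit_transform(df[["wind_dir"]])

# max_onehot=8: columns with few classes ("N"/"S" has 2) get one-hot encoded;
# higher-cardinality columns would get target encoding instead
encoded = OneHotEncoder().fit_transform(df[["wind_dir"]]).toarray()
print(df["humidity"].tolist())  # [80.0, 70.0, 65.0, 70.0]
```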
Similarly, models are trained and evaluated using the run method. Here, we fit both a LinearDiscriminantAnalysis and an AdaBoost model, and apply hyperparameter tuning.
```python
atom.run(models=["LDA", "AdaB"], metric="auc", n_trials=10)
```
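What run does for each model can be sketched by hand with scikit-learn: tune on the training set, then score on the held-out test set. Here a simple grid search on synthetic data stands in for ATOM's trial-based tuning; the parameter grids are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=500, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

# Tune each candidate on the training set, then evaluate on the test set
candidates = {
    "LDA": GridSearchCV(LinearDiscriminantAnalysis(),
                        {"solver": ["svd", "lsqr"]}, scoring="roc_auc"),
    "AdaB": GridSearchCV(AdaBoostClassifier(random_state=2),
                         {"n_estimators": [25, 50]}, scoring="roc_auc"),
}
scores = {}
for name, search in candidates.items():
    search.fit(X_train, y_train)
    proba = search.predict_proba(X_test)[:, 1]
    scores[name] = roc_auc_score(y_test, proba)
print(scores)
```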
And lastly, analyze the results.
```python
atom.results
atom.plot_roc()
```
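The ROC plot draws one curve per trained model. The underlying numbers can be reproduced with sklearn's roc_curve; the labels and probabilities below are synthetic, not output from ATOM:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic ground truth and predicted probabilities
y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])

# False/true positive rates at each probability threshold; the curve
# through (fpr, tpr) is what plot_roc draws, and the AUC summarizes it
fpr, tpr, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)
print(round(auc, 3))  # 0.889
```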
| Relevant links | |
|---|---|
| About | Learn more about the package. |
| Getting started | New to ATOM? Here's how to get started! |
| User guide | How to use ATOM and its features. |
| API Reference | The detailed reference for ATOM's API. |
| Examples | Example notebooks show you what can be done and how. |
| Changelog | What are the new features in the latest release? |
| FAQ | Get answers to frequently asked questions. |
| Contributing | Do you want to contribute to the project? Read this before creating a PR. |
| Dependencies | Which other packages does ATOM depend on? |
| License | Copyright and permissions under the MIT license. |