This project demonstrates the capabilities of MLflow using a Keras classification model as an example.
Use Python >= 3.8.
Install dependencies via poetry:
poetry install
Poetry will create a new virtual environment that can be activated
- either by prefixing all shell commands with
poetry run
- or by spawning a shell via
poetry shell
Alternatively, you can use pip by manually installing the dependencies listed in pyproject.toml under the [tool.poetry.dependencies] section.
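For example (the package names below are assumptions; the authoritative list and version constraints are in pyproject.toml):
pip install mlflow tensorflow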
When train.py is executed, the training progress is logged as an experiment run via automatic logging:
python train.py
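For orientation, here is a minimal sketch of what such an autolog-based training script can look like (the actual train.py, model architecture, and dataset differ; the essential parts are the mlflow.autolog() call and the Keras fit() call):

import mlflow
import numpy as np
from tensorflow import keras

# Enable MLflow's automatic logging for Keras/TensorFlow:
# parameters, per-epoch metrics, and the trained model are recorded.
mlflow.autolog()

# Toy data as a stand-in for the real dataset.
x_train = np.random.rand(100, 4)
y_train = np.random.randint(0, 2, size=(100,))

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# The fit() call is captured as an MLflow experiment run.
model.fit(x_train, y_train, epochs=5, batch_size=16)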
The training progress and logs can be inspected via a local web UI:
mlflow ui
By default, all data (backend data and artifacts) are stored on your local file system (see docs). However, if you want to use the MLflow Model Registry, all backend data must be persisted in a database-backed store. A simple option is to configure a SQLite database as the backend via the tracking URI, for example by setting the MLFLOW_TRACKING_URI
environment variable:
MLFLOW_TRACKING_URI="sqlite:///mlflow.db" python train.py
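Equivalently, the tracking URI can be set programmatically before any logging happens, for example at the top of the training script (a sketch, not part of the existing train.py):

import mlflow

# Same effect as setting the MLFLOW_TRACKING_URI environment variable.
mlflow.set_tracking_uri("sqlite:///mlflow.db")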
In this case, the UI must be started with the SQLite URI as the --backend-store-uri:
mlflow ui --backend-store-uri "sqlite:///mlflow.db"
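With a database-backed store in place, a logged model can then be promoted to the Model Registry. A hedged sketch (the run ID and registry name are placeholders; autologged Keras models are usually stored under the run's "model" artifact path):

import mlflow

mlflow.set_tracking_uri("sqlite:///mlflow.db")

# "<run_id>" is a placeholder for a finished run's ID (visible in the UI);
# "keras-classifier" is an arbitrary name for the registered model.
mlflow.register_model("runs:/<run_id>/model", "keras-classifier")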