This project provides examples of different ways to train, export, and serve TensorFlow models using Docker, Kubernetes, and TensorFlow Serving. This repository contains companion code for the tutorial LINK TBD.
- Docker
- Kubernetes. Consider using Minikube for local experiments and kubeadm for a simple cluster setup.
- For model training and export: Python 3, `pip install -r ./py3-train-requirements.txt`
- For running the client: Python 2, `pip install -r ./py2-client-requirements.txt`
You may use the provided Makefile to:

- Build the image with `make tfserve_image`. Note that no compilation is performed in the process; the image uses a precompiled TensorFlow Serving Ubuntu package.
- Train the classifier and export it with `make train_classifier`.
- Run TensorFlow Serving via Docker with `make run_server`.
TensorFlow Serving examples are provided both for the TensorFlow Estimator API and for plain TensorFlow code, since the approaches for model export differ significantly between the two (a sketch of both export paths follows the commands below).
You can use the `API_TO_USE` variable to select the API of interest (the default is `estimator_api`):

```
make API_TO_USE=tf_api clean train_classifier
make API_TO_USE=estimator_api clean train_classifier
```
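To illustrate why the export code differs between the two APIs, here is a minimal sketch of both paths for a toy one-variable model (TF 1.x assumed). The graph, the tensor names `x` and `y`, and the export directories are hypothetical placeholders, not the actual code of this repository:

```python
import tensorflow as tf

# Plain TensorFlow: build the graph by hand, then describe and save the
# serving signature yourself with SavedModelBuilder.
graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=[None, 1], name='x')  # hypothetical input
    w = tf.Variable([[2.0]], name='w')
    y = tf.matmul(x, w, name='y')                              # hypothetical output
    init = tf.global_variables_initializer()

with tf.Session(graph=graph) as sess:
    sess.run(init)
    builder = tf.saved_model.builder.SavedModelBuilder('./export_tf_api/1')
    signature = tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={'x': x}, outputs={'y': y})
    builder.add_meta_graph_and_variables(
        sess,
        tags=[tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                signature,
        })
    builder.save()

# Estimator API: the Estimator owns the graph and session, so you only
# declare how serving-time input is received; export_savedmodel does the rest.
def serving_input_fn():
    x = tf.placeholder(tf.float32, shape=[None, 1], name='x')
    return tf.estimator.export.ServingInputReceiver({'x': x}, {'x': x})

# estimator.export_savedmodel('./export_estimator_api', serving_input_fn)
```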
You must use Python 2 on the client side, as the `tensorflow-serving-api` package is currently only provided for Python 2. You may call the example client with this command:

```
python2 client.py --model-name tf_model localhost:8500 0.5
```

Execute `python2 client.py -h` if you want to know more about the parameters.
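For reference, a gRPC client for TensorFlow Serving can be as small as the sketch below. This is not the repository's `client.py`, just an assumption-laden illustration: the input tensor name `x`, its shape, and the `serving_default` signature name must match whatever the exported model actually declares, and a tensorflow-serving-api version that ships the `prediction_service_pb2_grpc` module is assumed:

```python
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

# Connect to the gRPC endpoint exposed by `make run_server`.
channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Build the request; the tensor name 'x' and the signature name are
# assumptions that must match the exported model's SignatureDef.
request = predict_pb2.PredictRequest()
request.model_spec.name = 'tf_model'
request.model_spec.signature_name = 'serving_default'
request.inputs['x'].CopyFrom(
    tf.contrib.util.make_tensor_proto([[0.5]], dtype=tf.float32))

# Issue the RPC with a 10-second deadline and print the raw response.
response = stub.Predict(request, 10.0)
print(response)
```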