TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. TensorFlow Serving makes it easy to deploy new algorithms and experiments, while keeping the same server architecture and APIs. TensorFlow Serving provides out-of-the-box integration with TensorFlow models, but can be easily extended to serve other types of models and data.
With the Bitnami Docker TensorFlow Serving image it is easy to serve models such as Inception or MNIST. For a functional example, check the TensorFlow Inception repository.
NOTE: This image needs access to trained model data to actually work. Please check the bitnami-docker-tensorflow-inception repository or follow the steps provided here.
$ docker run --name tensorflow-serving bitnami/tensorflow-serving:latest
$ curl -sSL https://raw.githubusercontent.com/bitnami/bitnami-docker-tensorflow-serving/master/docker-compose.yml > docker-compose.yml
$ docker-compose up -d
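You can then verify that the container is up and the port is mapped with a quick status check:

$ docker-compose ps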
WARNING: This is a beta configuration, currently unsupported.
Get the raw URL pointing to the kubernetes.yml manifest and use kubectl to create the resources on your Kubernetes cluster like so:
$ kubectl create -f https://raw.githubusercontent.com/bitnami/bitnami-docker-tensorflow-serving/master/kubernetes.yml
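You can then watch the resources come up (exact pod names depend on the manifest):

$ kubectl get pods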
- Bitnami closely tracks upstream source changes and promptly publishes new versions of this image using our automated systems.
- With Bitnami images the latest bug fixes and features are available as soon as possible.
- Bitnami containers, virtual machines and cloud images use the same components and configuration approach - making it easy to switch between formats based on your project needs.
- Bitnami images are built on CircleCI and automatically pushed to the Docker Hub.
- All our images are based on minideb, a minimalist Debian-based container image which gives you a small base container image and the familiarity of a leading Linux distribution.
The recommended way to get the Bitnami TensorFlow Serving Docker Image is to pull the prebuilt image from the Docker Hub Registry.
$ docker pull bitnami/tensorflow-serving:latest
To use a specific version, you can pull a versioned tag. You can view the list of available versions in the Docker Hub Registry.
$ docker pull bitnami/tensorflow-serving:[TAG]
If you wish, you can also build the image yourself.
$ docker build -t bitnami/tensorflow-serving:latest https://github.com/bitnami/bitnami-docker-tensorflow-serving.git
If you remove the container all your data and configurations will be lost, and the next time you run the image the data and configurations will be reinitialized. To avoid this loss of data, you should mount a volume that will persist even after the container is removed.
For persistence you should mount a volume at the /bitnami path for the TensorFlow Serving data and configurations. If the mounted directory is empty, it will be initialized on the first run.
$ docker run -v /path/to/tensorflow-serving-persistence:/bitnami bitnami/tensorflow-serving:latest
or using Docker Compose:
version: '2'

services:
  tensorflow-serving:
    image: 'bitnami/tensorflow-serving:latest'
    ports:
      - '9000:9000'
    volumes:
      - /path/to/tensorflow-serving-persistence:/bitnami
Using Docker container networking, a TensorFlow Serving server running inside a container can easily be accessed by your application containers.
Containers attached to the same network can communicate with each other using the container name as the hostname.
In this example, we will create a TensorFlow Inception client instance that will connect to the server instance running on the same Docker network as the client. The Inception client will export already trained model data so the server can read it, and you will be able to query the server with an image to get it categorized.
$ mkdir /tmp/model-data
$ curl -o '/tmp/model-data/inception-v3-2016-03-01.tar.gz' 'http://download.tensorflow.org/models/image/imagenet/inception-v3-2016-03-01.tar.gz'
$ cd /tmp/model-data
$ tar xzf inception-v3-2016-03-01.tar.gz
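At this point the extracted model data is available under /tmp/model-data; a quick listing lets you confirm the extraction before wiring up the containers:

$ ls /tmp/model-data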
$ docker network create app-tier --driver bridge
Use the --network app-tier argument to the docker run command to attach the TensorFlow Serving container to the app-tier network.
$ docker run -d --name tensorflow-serving \
--volume /tmp/model-data:/bitnami/model-data \
--network app-tier \
bitnami/tensorflow-serving:latest
Finally we create a new container instance to launch the TensorFlow Serving client and connect to the server created in the previous step:
$ docker run -it --rm \
--volume /tmp/model-data:/bitnami/model-data \
--network app-tier \
bitnami/tensorflow-inception:latest inception_client --server=tensorflow-serving:9000 --image=path/to/image.jpg
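The prebuilt client above does the querying for you, but you can also talk to the server directly over gRPC. Below is a minimal Python sketch of such a client, closely modeled on the upstream inception_client example. It assumes the tensorflow and tensorflow-serving-api Python packages are installed, and that the model is exported under the name inception with the predict_images signature; both names depend on how the model was exported and may differ in your setup.

# Minimal gRPC Predict client sketch for TensorFlow Serving.
# Assumes: pip install tensorflow tensorflow-serving-api
from grpc.beta import implementations
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2

# The hostname resolves via Docker networking; port 9000 is the one used above.
channel = implementations.insecure_channel('tensorflow-serving', 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

# Read the image to classify as raw bytes.
with open('path/to/image.jpg', 'rb') as f:
    data = f.read()

request = predict_pb2.PredictRequest()
request.model_spec.name = 'inception'                  # assumed export name
request.model_spec.signature_name = 'predict_images'   # assumed signature
request.inputs['images'].CopyFrom(
    tf.contrib.util.make_tensor_proto(data, shape=[1]))

result = stub.Predict(request, 10.0)  # 10-second timeout
print(result)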
When not specified, Docker Compose automatically sets up a new network and attaches all deployed services to that network. However, we will explicitly define a new bridge network named app-tier. In this example we assume that you want to connect to the TensorFlow Serving server from your own custom application image, which is identified in the following snippet by the service name myapp.
version: '2'

networks:
  app-tier:
    driver: bridge

services:
  tensorflow-serving:
    image: 'bitnami/tensorflow-serving:latest'
    networks:
      - app-tier
  myapp:
    image: 'YOUR_APPLICATION_IMAGE'
    networks:
      - app-tier
IMPORTANT:
- Please update the YOUR_APPLICATION_IMAGE placeholder in the above snippet with your application image
- In your application container, use the hostname tensorflow-serving to connect to the TensorFlow Serving server
Launch the containers using:
$ docker-compose up -d
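To verify that your application container can reach the server by hostname, you can run a quick check from inside it; for example, assuming your image ships with ping:

$ docker-compose exec myapp ping -c 3 tensorflow-serving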
The image looks for configurations in /bitnami/tensorflow-serving/conf/. As mentioned in Persisting your configuration, you can mount a volume at /bitnami and copy/edit the configurations in the /path/to/tensorflow-serving-persistence/tensorflow-serving/conf/ directory. The default configurations will be populated to the conf/ directory if it's empty.
Run the TensorFlow Serving image, mounting a directory from your host.
$ docker run --name tensorflow-serving -v /path/to/tensorflow-serving-persistence:/bitnami bitnami/tensorflow-serving:latest
or using Docker Compose:
version: '2'

services:
  tensorflow-serving:
    image: 'bitnami/tensorflow-serving:latest'
    ports:
      - '9000:9000'
    volumes:
      - /path/to/tensorflow-serving-persistence:/bitnami
Edit the configuration on your host using your favorite editor.
$ vi /path/to/tensorflow-serving-persistence/tensorflow-serving/conf/tensorflow-serving.conf
After changing the configuration, restart your TensorFlow Serving container for changes to take effect.
$ docker restart tensorflow-serving
or using Docker Compose:
$ docker-compose restart tensorflow-serving
The Bitnami TensorFlow Serving Docker image sends the container logs to stdout. To view the logs:
$ docker logs tensorflow-serving
or using Docker Compose:
$ docker-compose logs tensorflow-serving
The logs are also stored inside the container in the /opt/bitnami/tensorflow-serving/logs/tensorflow-serving.log file.
You can configure the container's logging driver using the --log-driver option if you wish to consume the container logs differently. In the default configuration Docker uses the json-file driver.
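For example, to have Docker forward the container logs to the host's syslog instead, start the container with that driver:

$ docker run --log-driver=syslog --name tensorflow-serving bitnami/tensorflow-serving:latest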
Bitnami provides up-to-date versions of TensorFlow Serving, including security patches, soon after they are made available upstream. We recommend that you follow these steps to upgrade your container.
$ docker pull bitnami/tensorflow-serving:latest
or if you're using Docker Compose, update the value of the image property to bitnami/tensorflow-serving:latest.
Stop the currently running container using the command
$ docker stop tensorflow-serving
or using Docker Compose:
$ docker-compose stop tensorflow-serving
Next, take a snapshot of the persistent volume /path/to/tensorflow-serving-persistence using:
$ rsync -a /path/to/tensorflow-serving-persistence /path/to/tensorflow-serving-persistence.bkp.$(date +%Y%m%d-%H.%M.%S)
You can use this snapshot to restore the application state should the upgrade fail.
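Should you need to roll back, restoring is the reverse copy. The backup directory contains a copy of the persistence directory, so syncing its contents back into the parent path restores it (replace TIMESTAMP with the suffix your snapshot command produced):

$ rsync -a /path/to/tensorflow-serving-persistence.bkp.TIMESTAMP/ /path/to/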
Remove the stopped container:
$ docker rm -v tensorflow-serving
or using Docker Compose:
$ docker-compose rm -v tensorflow-serving
Re-create your container from the new image, restoring your backup if necessary.
$ docker run --name tensorflow-serving bitnami/tensorflow-serving:latest
or using Docker Compose:
$ docker-compose up -d tensorflow-serving
We'd love for you to contribute to this container. You can request new features by creating an issue, or submit a pull request with your contribution.
If you encountered a problem running this container, you can file an issue. For us to provide better support, be sure to include the following information in your issue:
- Host OS and version
- Docker version (docker version)
- Output of docker info
- Version of this container (echo $BITNAMI_IMAGE_VERSION inside the container)
- The command you used to run the container, and any relevant output you saw (masking any sensitive information)
Most real-time communication happens in the #containers channel at bitnami-oss.slack.com; you can sign up at slack.oss.bitnami.com.
Discussions are archived at bitnami-oss.slackarchive.io.
Copyright 2017 Bitnami
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.