(Deprecated) End-to-End Kubeflow tutorial using a Sequence-to-Sequence model

Note: This example does not currently work correctly, and has been deprecated. It will be updated or replaced soon.

This example demonstrates how you can use Kubeflow end-to-end to train and serve a Sequence-to-Sequence model on an existing Kubernetes cluster. The tutorial is based on @hamelsmu's article "How To Create Data Products That Are Magical Using Sequence-to-Sequence Models".

Goals

There are two primary goals for this tutorial:

  • Demonstrate an end-to-end Kubeflow example
  • Present an end-to-end Sequence-to-Sequence model

By the end of this tutorial, you will know how to:

  • Set up a Kubeflow cluster on an existing Kubernetes deployment
  • Spawn a Jupyter Notebook on the cluster
  • Provision shared persistent storage across the cluster for large datasets
  • Train a Sequence-to-Sequence model using TensorFlow and GPUs on the cluster
  • Serve the model using Seldon Core
  • Query the model from a simple front-end application (a minimal query sketch follows this list)
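
As a preview of the serving and querying steps, the sketch below shows one way a front-end or test script might call the model once it is exposed through Seldon Core. This is a minimal sketch only: the ingress host, namespace, deployment name, and payload shape are assumptions, and the real values depend on how the SeldonDeployment is configured later in the tutorial.

```python
import requests

# Assumed endpoint: Seldon Core's REST protocol is typically exposed at
# /seldon/<namespace>/<deployment-name>/api/v1.0/predictions behind the
# cluster ingress. The host and the "issue-summarizer" name are placeholders.
SELDON_URL = (
    "http://<ingress-host>/seldon/kubeflow/issue-summarizer/api/v1.0/predictions"
)

# The model is assumed to take a single text input (e.g. a GitHub issue body)
# per request, wrapped in Seldon's "data"/"ndarray" envelope.
payload = {"data": {"ndarray": [["add support for GPU scheduling in the trainer"]]}}

response = requests.post(SELDON_URL, json=payload, timeout=30)
response.raise_for_status()

# The generated summary comes back inside the same "data" envelope.
print(response.json()["data"]["ndarray"])
```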

Steps:

  1. Set up a Kubeflow cluster
  2. Train the model, either interactively in a Jupyter Notebook or as a TFJob (a TFJob sketch follows this list)
  3. Serve the model
  4. Query the model
  5. Teardown
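
For the TFJob path in step 2, the sketch below shows roughly how a training job could be submitted from Python using the official kubernetes client. This is a sketch under assumptions, not the tutorial's actual manifest: the TFJob API version, namespace, training image, PVC name, and GPU request are placeholders that the later steps define concretely.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes kubectl access to the cluster).
config.load_kube_config()

# Hypothetical TFJob manifest; image, PVC name, and resource requests are placeholders.
tfjob = {
    "apiVersion": "kubeflow.org/v1",  # may differ on older Kubeflow releases
    "kind": "TFJob",
    "metadata": {"name": "seq2seq-training", "namespace": "kubeflow"},
    "spec": {
        "tfReplicaSpecs": {
            "Worker": {
                "replicas": 1,
                "restartPolicy": "OnFailure",
                "template": {
                    "spec": {
                        "containers": [{
                            "name": "tensorflow",  # TFJob expects this container name
                            "image": "<your-training-image>",
                            "resources": {"limits": {"nvidia.com/gpu": 1}},
                            "volumeMounts": [{"name": "shared-data", "mountPath": "/data"}],
                        }],
                        "volumes": [{
                            "name": "shared-data",
                            "persistentVolumeClaim": {"claimName": "<shared-pvc-name>"},
                        }],
                    }
                },
            }
        }
    },
}

# TFJob is a custom resource, so it is created through the CustomObjectsApi.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubeflow.org",
    version="v1",
    namespace="kubeflow",
    plural="tfjobs",
    body=tfjob,
)
```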