diff --git a/workshop/docs/modules/ROOT/assets/images/projects/ds-project-my-storage-form.png b/workshop/docs/modules/ROOT/assets/images/projects/ds-project-my-storage-form.png
index af0a7f8..3afa7c4 100644
Binary files a/workshop/docs/modules/ROOT/assets/images/projects/ds-project-my-storage-form.png and b/workshop/docs/modules/ROOT/assets/images/projects/ds-project-my-storage-form.png differ
diff --git a/workshop/docs/modules/ROOT/assets/images/projects/ds-project-pipeline-artifacts-form.png b/workshop/docs/modules/ROOT/assets/images/projects/ds-project-pipeline-artifacts-form.png
index 3e16427..318b943 100644
Binary files a/workshop/docs/modules/ROOT/assets/images/projects/ds-project-pipeline-artifacts-form.png and b/workshop/docs/modules/ROOT/assets/images/projects/ds-project-pipeline-artifacts-form.png differ
diff --git a/workshop/docs/modules/ROOT/assets/images/projects/launch-jupyter-link.png b/workshop/docs/modules/ROOT/assets/images/projects/launch-jupyter-link.png
index 958cb8f..42d5e78 100644
Binary files a/workshop/docs/modules/ROOT/assets/images/projects/launch-jupyter-link.png and b/workshop/docs/modules/ROOT/assets/images/projects/launch-jupyter-link.png differ
diff --git a/workshop/docs/modules/ROOT/assets/images/workbenches/create-workbench-form-env-storage.png b/workshop/docs/modules/ROOT/assets/images/workbenches/create-workbench-form-env-storage.png
index 7bc76bc..dfe81b6 100644
Binary files a/workshop/docs/modules/ROOT/assets/images/workbenches/create-workbench-form-env-storage.png and b/workshop/docs/modules/ROOT/assets/images/workbenches/create-workbench-form-env-storage.png differ
diff --git a/workshop/docs/modules/ROOT/pages/automating-workflows-with-pipelines.adoc b/workshop/docs/modules/ROOT/pages/automating-workflows-with-pipelines.adoc
index 3e15ae7..2a0c592 100644
--- a/workshop/docs/modules/ROOT/pages/automating-workflows-with-pipelines.adoc
+++ b/workshop/docs/modules/ROOT/pages/automating-workflows-with-pipelines.adoc
@@ -19,7 +19,7 @@ image::pipelines/wb-pipeline-launcher.png[Pipeline buttons]
 +
 image::pipelines/wb-pipeline-editor-button.png[Pipeline Editor button, 100]
 +
-You've created a blank pipeline!
+You've created a blank pipeline.
 
 . Set the default runtime image for when you run your notebook or Python code.
 
@@ -35,7 +35,7 @@ image::pipelines/wb-pipeline-properties-tab.png[Pipeline Properties Tab]
 +
 image::pipelines/wb-pipeline-runtime-image.png[Pipeline Runtime Image0, 400]
 
-. Save the pipeline.
+. Select *File* -> *Save Python File*.
 
 == Add nodes to your pipeline
 
@@ -55,7 +55,7 @@ image::pipelines/wb-pipeline-connect-nodes.png[Connect Nodes, 400]
 
 Set node properties to specify the training file as a dependency.
 
-Note: If you don't set this file dependency, the file is not included in the node when it runs and the training job fails.
+NOTE: If you don't set this file dependency, the file is not included in the node when it runs and the training job fails.
 
 . Click the `1_experiment_train.ipynb` node.
 +
@@ -103,7 +103,7 @@ The secret is named `aws-connection-my-storage`.
 
 [NOTE]
 ====
-If you named your data connection something other than `My Storage`, you can obtain the secret name in the {productname-short} dashboard by hovering over the resource information icon *?* in the *Data Connections* tab.
+If you named your data connection something other than `My Storage`, you can obtain the secret name in the {productname-short} dashboard by hovering over the help (?) icon in the *Data Connections* tab.
 
 image::pipelines/dsp-dc-secret-name.png[My Storage Secret Name, 400]
 ====
@@ -136,16 +136,17 @@ image::pipelines/wb-pipeline-node-remove-env-var.png[Remove Env Var]
 .. Under *Kubernetes Secrets*, click *Add*.
 +
-image::pipelines/wb-pipeline-add-kube-secret.png[Add Kube Secret]
+image::pipelines/wb-pipeline-add-kube-secret.png[Add Kubernetes Secret]
 
 .. Enter the following values and then click *Add*.
-** *Environment Variable*: `AWS_ACCESS_KEY_ID`
++
+* *Environment Variable*: `AWS_ACCESS_KEY_ID`
 ** *Secret Name*: `aws-connection-my-storage`
 ** *Secret Key*: `AWS_ACCESS_KEY_ID`
 +
 image::pipelines/wb-pipeline-kube-secret-form.png[Secret Form, 400]
 
-.. Repeat Steps 2a and 2b for each set of these Kubernetes secrets:
+. Repeat Step 2 for each of the following Kubernetes secrets:
 
 * *Environment Variable*: `AWS_SECRET_ACCESS_KEY`
 ** *Secret Name*: `aws-connection-my-storage`
 ** *Secret Key*: `AWS_SECRET_ACCESS_KEY`
@@ -163,7 +164,7 @@
 ** *Secret Name*: `aws-connection-my-storage`
 ** *Secret Key*: `AWS_S3_BUCKET`
 
-. *Save* and *Rename* the `.pipeline` file.
+. Select *File* -> *Save Python File As* to save and rename the pipeline. For example, rename it to `My Train Save.pipeline`.
 
 == Run the Pipeline
 
diff --git a/workshop/docs/modules/ROOT/pages/conclusion.adoc b/workshop/docs/modules/ROOT/pages/conclusion.adoc
index 7e2d661..40efedf 100644
--- a/workshop/docs/modules/ROOT/pages/conclusion.adoc
+++ b/workshop/docs/modules/ROOT/pages/conclusion.adoc
@@ -5,9 +5,7 @@
 [.text-center.strong]
 == Conclusion
 
-Congratulations!
-
-In this {deliverable}, you learned how to incorporate data science and artificial intelligence (AI) and machine learning (ML) into an OpenShift development workflow.
+Congratulations. In this {deliverable}, you learned how to incorporate data science, artificial intelligence, and machine learning into an OpenShift development workflow.
 
 You used an example fraud detection model and completed the following tasks:
 
diff --git a/workshop/docs/modules/ROOT/pages/creating-a-workbench.adoc b/workshop/docs/modules/ROOT/pages/creating-a-workbench.adoc
index e93492c..2dc362a 100644
--- a/workshop/docs/modules/ROOT/pages/creating-a-workbench.adoc
+++ b/workshop/docs/modules/ROOT/pages/creating-a-workbench.adoc
@@ -16,7 +16,7 @@ A workbench is an instance of your development and experimentation environment.
 
 . Click the *Workbenches* tab, and then click the *Create workbench* button.
 +
-image::workbenches/ds-project-create-workbench.png[Create workbench button]
+image::workbenches/ds-project-create-workbench.png[Create workbench button, 300]
 
 . Fill out the name and description.
 +
diff --git a/workshop/docs/modules/ROOT/pages/creating-data-connections-to-storage.adoc b/workshop/docs/modules/ROOT/pages/creating-data-connections-to-storage.adoc
index 9131529..6471132 100644
--- a/workshop/docs/modules/ROOT/pages/creating-data-connections-to-storage.adoc
+++ b/workshop/docs/modules/ROOT/pages/creating-data-connections-to-storage.adoc
@@ -1,7 +1,9 @@
 [id='creating-data-connections-to-storage']
 = Creating data connections to your own S3-compatible object storage
 
-NOTE: If you do not have your own s3-compatible storage, or if you want to use a disposable local Minio instance instead, skip this section and follow the steps in xref:running-a-script-to-install-storage.adoc[Running a script to install local object storage buckets and create data connections].
+If you have existing S3-compatible storage buckets that you want to use for this {deliverable}, you must create a data connection to one storage bucket for saving your data and models. If you also want to complete the pipelines section of this {deliverable}, you must create another data connection to a different storage bucket for saving pipeline artifacts.
+
+NOTE: If you do not have your own S3-compatible storage, or if you want to use a disposable local Minio instance instead, skip this section and follow the steps in xref:running-a-script-to-install-storage.adoc[Running a script to install local object storage buckets and create data connections]. The provided script creates a Minio instance in your project, creates two storage buckets in that Minio instance, creates two data connections in your project (one for each bucket, both using the same credentials), and installs the network policies that are required for service mesh functionality.
 
 .Prerequisite
 
@@ -15,38 +17,41 @@ To create data connections to your existing S3-compatible storage buckets, you n
 
 If you don't have this information, contact your storage administrator.
 
-.Procedures
-
-Create data connections to your two storage buckets.
+.Procedure
 
-*Create a data connection for saving your data and models*
+. Create a data connection for saving your data and models:
 
-. In the {productname-short} dashboard, navigate to the page for your data science project.
+.. In the {productname-short} dashboard, navigate to the page for your data science project.
 
-. Click the *Data connections* tab, and then click *Add data connection*.
+.. Click the *Data connections* tab, and then click *Add data connection*.
 +
 image::projects/ds-project-add-dc.png[Add data connection]
 
-. Fill out the *Add data connection* form and name your connection *My Storage*. This connection is for saving your personal work, including data and models.
+.. Fill out the *Add data connection* form and name your connection *My Storage*. This connection is for saving your personal work, including data and models.
++
+NOTE: Skip the *Connected workbench* item. You add data connections to a workbench in a later section.
 +
 image::projects/ds-project-my-storage-form.png[Add my storage form]
 
-. Click *Add data connection*.
-
-*Create a data connection for saving pipeline artifacts*
+.. Click *Add data connection*.
 
+. Create a data connection for saving pipeline artifacts:
++
 NOTE: If you do not intend to complete the pipelines section of the {deliverable}, you can skip this step.
 
-. Click *Add data connection*.
+.. Click *Add data connection*.
 
-. Fill out the form and name your connection *Pipeline Artifacts*.
+.. Fill out the form and name your connection *Pipeline Artifacts*.
++
+NOTE: Skip the *Connected workbench* item. You add data connections to a workbench in a later section.
 +
 image::projects/ds-project-pipeline-artifacts-form.png[Add pipeline artifacts form]
 
-. Click *Add data connection*.
+.. Click *Add data connection*.
 
 .Verification
+
 In the *Data connections* tab for the project, check to see that your data connections are listed.
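For readers who want to confirm a data connection from a workbench notebook, the following sketch shows how the connection fields (endpoint, access key, secret key, region, and bucket) are typically consumed through the `AWS_*` environment variables that an attached data connection provides. It is illustrative only and is not one of the workshop files; the variable names follow the `aws-connection-*` secret keys used elsewhere in this {deliverable}.

[source,python]
----
import os

import boto3

# These variables are available in a workbench that has the data connection
# attached (the same keys that appear in the aws-connection-* secret).
endpoint = os.environ["AWS_S3_ENDPOINT"]
bucket = os.environ["AWS_S3_BUCKET"]

s3 = boto3.client(
    "s3",
    endpoint_url=endpoint,
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
    region_name=os.environ.get("AWS_DEFAULT_REGION") or None,
)

# List a few objects to confirm that the credentials and bucket name are valid.
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", [])[:5]:
    print(obj["Key"])
----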
 
 image::projects/ds-project-dc-list.png[List of project data connections]
@@ -54,6 +59,6 @@ image::projects/ds-project-dc-list.png[List of project data connections]
 
 .Next steps
 
-* Configure a pipeline server as described in xref:enabling-data-science-pipelines.adoc[Enabling data science pipelines]
+If you want to complete the pipelines section of this {deliverable}, go to xref:enabling-data-science-pipelines.adoc[Enabling data science pipelines].
 
-* Create a workbench and select a notebook image as described in xref:creating-a-workbench.adoc[Creating a workbench]
+Otherwise, skip to xref:creating-a-workbench.adoc[Creating a workbench].
diff --git a/workshop/docs/modules/ROOT/pages/deploying-a-model-multi-model-server.adoc b/workshop/docs/modules/ROOT/pages/deploying-a-model-multi-model-server.adoc
index 3c952bb..173e5b1 100644
--- a/workshop/docs/modules/ROOT/pages/deploying-a-model-multi-model-server.adoc
+++ b/workshop/docs/modules/ROOT/pages/deploying-a-model-multi-model-server.adoc
@@ -3,7 +3,7 @@
 
 {productname-short} multi-model servers can host several models at once. You create a new model server and deploy your model to it.
 
-.Prerequiste
+.Prerequisite
 
 * A user with `admin` privileges has enabled the multi-model serving platform on your OpenShift cluster.
 
@@ -13,7 +13,7 @@
 +
 image::model-serving/ds-project-model-list-add.png[Models]
 +
-*Note:* Depending on how model serving has been configured on your cluster, you might see only one model serving platform option.
+NOTE: Depending on how model serving has been configured on your cluster, you might see only one model serving platform option.
 
 . In the *Multi-model serving platform* tile, click *Add model server*.
 
@@ -43,7 +43,7 @@ image::model-serving/deploy-model-form-mm.png[Deploy model from for multi-model
 
 .Verification
 
-Wait for the model to deploy and for the *Status* to show a green checkmark.
+Notice the loading symbol under the *Status* section. It changes to a green checkmark when the deployment completes successfully.
 
 image::model-serving/ds-project-model-list-status-mm.png[Deployed model status]
 
diff --git a/workshop/docs/modules/ROOT/pages/deploying-a-model-single-model-server.adoc b/workshop/docs/modules/ROOT/pages/deploying-a-model-single-model-server.adoc
index 8945321..00845ef 100644
--- a/workshop/docs/modules/ROOT/pages/deploying-a-model-single-model-server.adoc
+++ b/workshop/docs/modules/ROOT/pages/deploying-a-model-single-model-server.adoc
@@ -3,10 +3,10 @@
 
 {productname-short} single-model servers host only one model. You create a new model server and deploy your model to it.
 
-*Note:* Depending on how model serving has been configured on your cluster, you might see only one model serving platform option.
+NOTE: Depending on how model serving has been configured on your cluster, you might see only one model serving platform option.
 
 
-.Prerequiste
+.Prerequisite
 
 * A user with `admin` privileges has enabled the single-model serving platform on your OpenShift cluster.
 
@@ -16,7 +16,7 @@
 +
 image::model-serving/ds-project-model-list-add.png[Models]
 +
-*Note:* Depending on how model serving has been configured on your cluster, you might see only one model serving platform option.
+NOTE: Depending on how model serving has been configured on your cluster, you might see only one model serving platform option.
 
 . In the *Single-model serving platform* tile, click *Deploy model*.
 
 . In the form, provide the following values:
@@ -33,7 +33,7 @@ image::model-serving/deploy-model-form-sm.png[Deploy model from for single-model
 
 .Verification
 
-Wait for the model to deploy and for the *Status* to show a green checkmark.
+Notice the loading symbol under the *Status* section. It changes to a green checkmark when the deployment completes successfully.
 
 image::model-serving/ds-project-model-list-status-sm.png[Deployed model status]
 
diff --git a/workshop/docs/modules/ROOT/pages/deploying-a-model.adoc b/workshop/docs/modules/ROOT/pages/deploying-a-model.adoc
index b626dbb..f55b9ca 100644
--- a/workshop/docs/modules/ROOT/pages/deploying-a-model.adoc
+++ b/workshop/docs/modules/ROOT/pages/deploying-a-model.adoc
@@ -8,7 +8,7 @@ Now that the model is accessible in storage and saved in the portable ONNX forma
 * *Single-model serving* - Each model in the project is deployed on its own model server. This platform works well for large models or models that need dedicated resources.
 * *Multi-model serving* - All models in the project are deployed on the same model server. This platform is suitable for sharing resources amongst deployed models. Multi-model serving is the only option offered in the {org-name} Developer Sandbox environment.
 
-*Note:* For each project, you can specify only one model serving platform. If you want to change to the other model serving platform, you must create a new project.
+NOTE: For each project, you can specify only one model serving platform. If you want to change to the other model serving platform, you must create a new project.
 
 For this {deliverable}, since you are only deploying only one model, you can select either serving type. The steps for deploying the fraud detection model depend on the type of model serving platform you select:
 
diff --git a/workshop/docs/modules/ROOT/pages/enabling-data-science-pipelines.adoc b/workshop/docs/modules/ROOT/pages/enabling-data-science-pipelines.adoc
index 17229d1..9da8ce7 100644
--- a/workshop/docs/modules/ROOT/pages/enabling-data-science-pipelines.adoc
+++ b/workshop/docs/modules/ROOT/pages/enabling-data-science-pipelines.adoc
@@ -29,7 +29,7 @@ image::projects/ds-project-create-pipeline-server-form.png[Selecting the Pipelin
 
 . Click *Configure pipeline server*.
 
-. Wait until the spinner disappears and *No pipelines yet* is displayed.
+. Wait until the spinner disappears and *Start by importing a pipeline* is displayed.
 +
 [IMPORTANT]
 ====
diff --git a/workshop/docs/modules/ROOT/pages/index.adoc b/workshop/docs/modules/ROOT/pages/index.adoc
index 5ae4820..b3aca6a 100644
--- a/workshop/docs/modules/ROOT/pages/index.adoc
+++ b/workshop/docs/modules/ROOT/pages/index.adoc
@@ -6,9 +6,7 @@
 [.text-center.strong]
 == Introduction
 
-Welcome!
-
-In this {deliverable}, you learn how to incorporate data science and artificial intelligence and machine learning (AI/ML) into an OpenShift development workflow.
+Welcome. In this {deliverable}, you learn how to incorporate data science and artificial intelligence and machine learning (AI/ML) into an OpenShift development workflow.
 
 You will use an example fraud detection model to complete the following tasks:
 
@@ -16,7 +14,7 @@
 * Deploy the model by using {productname-short} model serving.
 * Refine and train the model by using automated pipelines.
 
-And you do not have to install anything on your own computer, thanks to https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-data-science[{productname-long}].
+And you do not have to install anything on your own computer, thanks to https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-ai[{productname-long}].
 
 == About the example fraud detection model
 
@@ -30,11 +28,13 @@ Based on this data, the model outputs the likelihood of the transaction being fr
 
 == Before you begin
 
-If you don't already have an instance of {productname-long}, see the https://developers.redhat.com/products/red-hat-openshift-ai/download[{productname-long} page on the {org-name} Developer website]. There, you can create an account and access the *free {org-name} Developer Sandbox* or you can learn how to install {productname-short} on *your own OpenShift cluster*.
+You should have access to an OpenShift cluster where {productname-long} is installed.
+
+If you don't have access to a cluster that includes an instance of {productname-short}, see the https://developers.redhat.com/products/red-hat-openshift-ai/download[{productname-long} page on the {org-name} Developer website]. There, you can create an account and access the https://console.redhat.com/openshift/sandbox[*free {org-name} Developer Sandbox*] or you can learn how to install {productname-short} on *your own OpenShift cluster*.
 
 [IMPORTANT]
 ====
 If your cluster uses self-signed certificates, before you begin the {deliverable}, your {productname-short} administrator must add self-signed certificates for {productname-short} as described in link:https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2-latest/html/installing_and_uninstalling_openshift_ai_self-managed/working-with-certificates_certs[Working with certificates].
 ====
 
-If you're ready, xref:navigating-to-the-dashboard.adoc[start the {deliverable}!]
+If you're ready, xref:navigating-to-the-dashboard.adoc[start the {deliverable}].
diff --git a/workshop/docs/modules/ROOT/pages/navigating-to-the-dashboard.adoc b/workshop/docs/modules/ROOT/pages/navigating-to-the-dashboard.adoc
index 39ff694..0ebe17f 100644
--- a/workshop/docs/modules/ROOT/pages/navigating-to-the-dashboard.adoc
+++ b/workshop/docs/modules/ROOT/pages/navigating-to-the-dashboard.adoc
@@ -23,7 +23,7 @@ image::projects/login-with-openshift.png[OpenShift login, 300]
 
 The {productname-short} dashboard shows the *Home* page.
 
-*Note:* You can navigate back to the OpenShift console by clicking the application launcher to access the OpenShift console.
+NOTE: You can navigate back to the OpenShift console by clicking the application launcher.
 
 image::projects/ds-console-ocp-tile.png[OCP console link]
 
diff --git a/workshop/docs/modules/ROOT/pages/preparing-a-model-for-deployment.adoc b/workshop/docs/modules/ROOT/pages/preparing-a-model-for-deployment.adoc
index d1fe233..b3f7444 100644
--- a/workshop/docs/modules/ROOT/pages/preparing-a-model-for-deployment.adoc
+++ b/workshop/docs/modules/ROOT/pages/preparing-a-model-for-deployment.adoc
@@ -3,7 +3,11 @@
 
 After you train a model, you can deploy it by using the {productname-short} model serving capabilities.
 
-To prepare a model for deployment, you must move the model from your workbench to your S3-compatible object storage. You use the data connection that you created in the xref:storing-data-with-data-connections.adoc[Storing data with data connections] section and upload the model from a notebook. You also convert the model to the portable ONNX format. ONNX allows you to transfer models between frameworks with minimal preparation and without the need for rewriting the models.
+To prepare a model for deployment, you must complete the following tasks:
+
+* Move the model from your workbench to your S3-compatible object storage. You use the data connection that you created in the xref:storing-data-with-data-connections.adoc[Storing data with data connections] section and upload the model from a notebook.
+
+* Convert the model to the portable ONNX format. ONNX allows you to transfer models between frameworks with minimal preparation and without the need for rewriting the models.
 
 .Prerequisites
 
diff --git a/workshop/docs/modules/ROOT/pages/running-a-pipeline-generated-from-python-code.adoc b/workshop/docs/modules/ROOT/pages/running-a-pipeline-generated-from-python-code.adoc
index 6bb670b..465598c 100644
--- a/workshop/docs/modules/ROOT/pages/running-a-pipeline-generated-from-python-code.adoc
+++ b/workshop/docs/modules/ROOT/pages/running-a-pipeline-generated-from-python-code.adoc
@@ -1,7 +1,7 @@
 [id='running-a-pipeline-generated-from-python-code']
 = Running a data science pipeline generated from Python code
 
-In the previous section, you created a simple pipeline by using the GUI pipeline editor. It's often desirable to create pipelines by using code that can be version-controlled and shared with others. The https://github.com/kubeflow/pipelines[kfp] SDK provides a Python API for creating pipelines. The SDK is available as a Python package that you can install by using the `pip install kfp` command. With this package, you can use Python code to create a pipeline and then compile it to YAML format. Then you can import the YAML code into {productname-short}.
+In the previous section, you created a simple pipeline by using the GUI pipeline editor. It's often desirable to create pipelines by using code that can be version-controlled and shared with others. The https://github.com/kubeflow/pipelines[Kubeflow pipelines (kfp)] SDK provides a Python API for creating pipelines. The SDK is available as a Python package that you can install by using the `pip install kfp` command. With this package, you can use Python code to create a pipeline and then compile it to YAML format. Then you can import the YAML code into {productname-short}.
 
 This {deliverable} does not delve into the details of how to use the SDK. Instead, it provides the files for you to view and upload.
 
diff --git a/workshop/docs/modules/ROOT/pages/running-a-script-to-install-storage.adoc b/workshop/docs/modules/ROOT/pages/running-a-script-to-install-storage.adoc
index 33081d2..7c19433 100644
--- a/workshop/docs/modules/ROOT/pages/running-a-script-to-install-storage.adoc
+++ b/workshop/docs/modules/ROOT/pages/running-a-script-to-install-storage.adoc
@@ -53,7 +53,7 @@ image::projects/ocp-console-project-selected.png[Selected project]
 
 . Copy the following code and paste it into the *Import YAML* editor.
 +
-*Note:* This code gets and applies the `setup-s3-no-sa.yaml` file.
+NOTE: This code gets and applies the `setup-s3-no-sa.yaml` file.
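To make the kfp workflow described in the running-a-pipeline-generated-from-python-code.adoc change above more concrete, here is a minimal sketch of a pipeline defined with the kfp SDK and compiled to YAML for import into {productname-short}. The component logic, names, and base image are illustrative assumptions rather than files provided by the {deliverable}.

[source,python]
----
from kfp import compiler, dsl


@dsl.component(base_image="python:3.11")  # assumed base image
def train_model(epochs: int) -> str:
    # Placeholder training step; the real logic lives in the workshop notebooks.
    print(f"Training for {epochs} epochs")
    return "models/fraud/model.onnx"


@dsl.pipeline(name="fraud-detection-training")
def training_pipeline(epochs: int = 10):
    train_model(epochs=epochs)


# Compile to YAML, then import the file through the dashboard.
compiler.Compiler().compile(training_pipeline, "fraud_detection_training.yaml")
----
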
 +
 [.lines_space]
 [.console-input]
@@ -113,6 +113,6 @@ You should see a "Resources successfully created" message and the following reso
 
 .Next steps
 
-* Configure a pipeline server as described in xref:enabling-data-science-pipelines.adoc[Enabling data science pipelines]
+If you want to complete the pipelines section of this {deliverable}, go to xref:enabling-data-science-pipelines.adoc[Enabling data science pipelines].
 
-* Create a workbench and select a notebook image as described in xref:creating-a-workbench.adoc[Creating a workbench]
\ No newline at end of file
+Otherwise, skip to xref:creating-a-workbench.adoc[Creating a workbench].
\ No newline at end of file
diff --git a/workshop/docs/modules/ROOT/pages/running-code-in-a-notebook.adoc b/workshop/docs/modules/ROOT/pages/running-code-in-a-notebook.adoc
index 123cf3a..a965195 100644
--- a/workshop/docs/modules/ROOT/pages/running-code-in-a-notebook.adoc
+++ b/workshop/docs/modules/ROOT/pages/running-code-in-a-notebook.adoc
@@ -33,7 +33,7 @@ Notebooks are so named because they are like a physical _notebook_: you can take
 
 == Try it
 
-Now that you know the basics, give it a try!
+Now that you know the basics, give it a try.
 
 .Prerequisite
 
diff --git a/workshop/docs/modules/ROOT/pages/setting-up-your-data-science-project.adoc b/workshop/docs/modules/ROOT/pages/setting-up-your-data-science-project.adoc
index 976b5a4..34bf38e 100644
--- a/workshop/docs/modules/ROOT/pages/setting-up-your-data-science-project.adoc
+++ b/workshop/docs/modules/ROOT/pages/setting-up-your-data-science-project.adoc
@@ -11,9 +11,9 @@ image::projects/launch-jupyter-link.png[Launch Jupyter link]
 +
 Note that it is possible to start a Jupyter notebook by clicking the *Launch Jupyter* link. However, it would be a one-off Jupyter notebook run in isolation. To implement a data science workflow, you must create a data science project (as described in the following procedure). Projects allow you and your team to organize and collaborate on resources within separated namespaces. From a project you can create multiple workbenches, each with their own IDE environment (for example, JupyterLab), and each with their own data connections and cluster storage. In addition, the workbenches can share models and data with pipelines and model servers.
 
-. If you are using the {org-name} Developer Sandbox, you are provided with a default data science project (for example, `myname-dev`). Select it and skip over the next step to the *Verification* section.
+. If you are using your own OpenShift cluster, click *Create data science project*.
 +
-If you are using your own OpenShift cluster, click *Create data science project*.
+NOTE: If you are using the {org-name} Developer Sandbox, you are provided with a default data science project (for example, `myname-dev`). Select it and skip over the next step to the *Verification* section.
 
 . Enter a display name and description. Based on the display name, a resource name is automatically generated, but you can change if you prefer.
 +
diff --git a/workshop/docs/modules/ROOT/pages/storing-data-with-data-connections.adoc b/workshop/docs/modules/ROOT/pages/storing-data-with-data-connections.adoc
index 6667761..c4783b9 100644
--- a/workshop/docs/modules/ROOT/pages/storing-data-with-data-connections.adoc
+++ b/workshop/docs/modules/ROOT/pages/storing-data-with-data-connections.adoc
@@ -1,17 +1,15 @@
 [id='storing-data-with-data-connections']
 = Storing data with data connections
 
-For this {deliverable}, you need two S3-compatible object storage buckets, such as Ceph, Minio, or AWS S3:
+Add data connections to workbenches to connect your project to data inputs and object storage buckets. A data connection is a resource that contains the configuration parameters needed to connect to an object storage bucket.
+
+For this {deliverable}, you need two S3-compatible object storage buckets, such as Ceph, Minio, or AWS S3. You can use your own storage buckets or run a provided script that creates the following local Minio storage buckets for you:
 
 * *My Storage* - Use this bucket for storing your models and data. You can reuse this bucket and its connection for your notebooks and model servers.
 * *Pipelines Artifacts* - Use this bucket as storage for your pipeline artifacts. A pipeline artifacts bucket is required when you create a pipeline server. For this {deliverable}, create this bucket to separate it from the first storage bucket for clarity.
 
-You can use your own storage buckets or run a provided script that creates local Minio storage buckets for you.
-
-Also, you must create a data connection to each storage bucket. A data connection is a resource that contains the configuration parameters needed to connect to an object storage bucket.
-
-You have two options for this {deliverable}, depending on whether you want to use your own storage buckets or use a script to create local Minio storage buckets:
+Also, you must create a data connection to each storage bucket. You have two options for this {deliverable}, depending on whether you want to use your own storage buckets or use a script to create local Minio storage buckets:
 
 * If you want to use your own S3-compatible object storage buckets, create data connections to them as described in xref:creating-data-connections-to-storage.adoc[Creating data connections to your own S3-compatible object storage].
 
-* If you want to run a script that installs local Minio storage buckets and creates data connections to them, for the purposes of this {deliverable}, follow the steps in xref:running-a-script-to-install-storage.adoc[Running a script to install local object storage buckets and create data connections].
\ No newline at end of file
+* If you want to run a script that installs local Minio storage buckets and creates data connections to them, follow the steps in xref:running-a-script-to-install-storage.adoc[Running a script to install local object storage buckets and create data connections].
\ No newline at end of file
diff --git a/workshop/docs/modules/ROOT/pages/testing-the-model-api.adoc b/workshop/docs/modules/ROOT/pages/testing-the-model-api.adoc
index ef9b194..96d216e 100644
--- a/workshop/docs/modules/ROOT/pages/testing-the-model-api.adoc
+++ b/workshop/docs/modules/ROOT/pages/testing-the-model-api.adoc
@@ -6,11 +6,13 @@ Now that you've deployed the model, you can test its API endpoints.
 
 .Procedure
 
-. In the {productname-short} dashboard, navigate to the project details page and click the *Models* tab.
+. In the {productname-short} dashboard, navigate to the project details page and click the *Models* tab.
 
-. Take note of the model's Inference endpoint. You need this information when you test the model API.
+. Take note of the model's Inference endpoint URL. You need this information when you test the model API.
 +
 image::model-serving/ds-project-model-inference-endpoint.png[Model inference endpoint]
++
+If the *Inference endpoint* field contains an *Internal Service* link, click the link to open a text box that shows the URL.
 
 . Return to the Jupyter environment and try out your new endpoint.
 +
diff --git a/workshop/docs/modules/ROOT/pages/training-a-model.adoc b/workshop/docs/modules/ROOT/pages/training-a-model.adoc
index c743052..17c4ff7 100644
--- a/workshop/docs/modules/ROOT/pages/training-a-model.adoc
+++ b/workshop/docs/modules/ROOT/pages/training-a-model.adoc
@@ -1,7 +1,7 @@
 [id='training-a-model']
 = Training a model
 
-Now that you know how the Jupyter notebook environment works, the real work can begin!
+Now that you know how the Jupyter notebook environment works, the real work can begin.
 
 In your notebook environment, open the `1_experiment_train.ipynb` file and follow the instructions directly in the notebook. The instructions guide you through some simple data exploration, experimentation, and model training tasks.
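
As a companion to the testing-the-model-api.adoc change above, the following sketch shows what a REST call against the deployed model's inference endpoint can look like from the Jupyter environment. The endpoint URL, model name, input tensor name, shape, and values are placeholder assumptions; use the inference endpoint shown in the *Models* tab and the request format from the workshop notebooks.

[source,python]
----
import requests

# Placeholder endpoint; copy the real inference endpoint URL from the Models tab.
infer_url = "https://fraud-model-example.apps.example.com/v2/models/fraud/infer"

# One transaction encoded as the features the example model expects
# (tensor name, shape, and values are assumptions for illustration).
payload = {
    "inputs": [
        {
            "name": "dense_input",
            "shape": [1, 5],
            "datatype": "FP32",
            "data": [0.31, 1.95, 1.0, 0.0, 0.0],
        }
    ]
}

# Add an Authorization header here if token authentication is enabled
# for your model server.
response = requests.post(infer_url, json=payload, timeout=30)
response.raise_for_status()

print(response.json()["outputs"][0]["data"])
----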