diff --git a/notebooks/vertex_genai/solutions/grounding_vertex_agent_builder.ipynb b/notebooks/vertex_genai/solutions/grounding_vertex_agent_builder.ipynb new file mode 100644 index 00000000..df0e9460 --- /dev/null +++ b/notebooks/vertex_genai/solutions/grounding_vertex_agent_builder.ipynb @@ -0,0 +1,414 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "JAPoU8Sm5E6e", + "tags": [] + }, + "source": [ + "# Grounding PaLM with Vertex AI Agent Builder" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "tvgnzT1CKxrO" + }, + "source": [ + "## Overview\n", + "\n", + "[Grounding in Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/grounding/ground-language-models) lets you use generative text models to generate content grounded in your own documents and data. This capability gives the model access, at runtime, to information beyond its training data. By grounding model responses in data stores within [Vertex AI Agent Builder](https://cloud.google.com/generative-ai-app-builder/docs/enterprise-search-introduction), an LLM can produce more accurate, up-to-date, and relevant responses.\n", + "\n", + "Grounding provides the following benefits:\n", + "\n", + "- Reduces model hallucinations (instances where the model generates content that isn't factual)\n", + "- Anchors model responses to specific information, documents, and data sources\n", + "- Enhances the trustworthiness, accuracy, and applicability of the generated content" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "d975e698c9a4" + }, + "source": [ + "### Learning Objectives\n", + "\n", + "In this notebook, you learn how to:\n", + "\n", + "- Compare the results of ungrounded LLM responses with grounded LLM responses\n", + "- Create and use a data store in Vertex AI Search to ground responses in custom documents and data\n", + "- Generate LLM text and chat model responses grounded in Vertex AI Search results\n", + "\n", + 
"This tutorial uses the following Google Cloud AI services and resources:\n", + "\n", + "- Vertex AI\n", + "- Vertex AI Agent Builder (Vertex AI Search)\n", + "\n", + "The steps performed include:\n", + "\n", + "- Configuring the LLM and prompt for various examples\n", + "- Sending example prompts to generative text and chat models in Vertex AI\n", + "- Setting up a data store in Vertex AI Search with your own data\n", + "- Sending example prompts with various levels of grounding (no grounding, data store grounding)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Acknowledgement** \n", + "This notebook is based on a notebook by [Holt Skinner](https://github.com/holtskinner) and [Kristopher Overholt](https://github.com/koverholt)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "oM1iC_MfAts1", + "tags": [] + }, + "outputs": [], + "source": [ + "PROJECT_ID = !gcloud config list --format 'value(core.project)'\n", + "PROJECT_ID = PROJECT_ID[0]\n", + "REGION = \"us-central1\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "init_aip:mbsdk,all", + "tags": [] + }, + "outputs": [], + "source": [ + "import vertexai\n", + "\n", + "vertexai.init(project=PROJECT_ID, location=REGION)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "960505627ddf" + }, + "source": [ + "### Import libraries" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "PyQmSRbKA8r-", + "tags": [] + }, + "outputs": [], + "source": [ + "from IPython.display import Markdown, display\n", + "from vertexai.language_models import (\n", + "    ChatModel,\n", + "    GroundingSource,\n", + "    TextGenerationModel,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Initialize the PaLM text and chat models from Vertex AI:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + 
"source": [ + "parameters = {\n", + "    \"temperature\": 0.2,  # Temperature controls the degree of randomness in token selection.\n", + "    \"top_p\": 0.8,  # Tokens are selected from most probable to least until the sum of their probabilities equals the top_p value.\n", + "    \"top_k\": 40,  # A top_k of 1 means the selected token is the most probable among all tokens.\n", + "}\n", + "\n", + "text_model = TextGenerationModel.from_pretrained(\"text-bison@002\")\n", + "chat_model = ChatModel.from_pretrained(\"chat-bison@002\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Grounding with custom documents and data\n", + "\n", + "In this example, you'll compare ungrounded LLM responses with responses grounded in the [results of a data store in Vertex AI Search](https://cloud.google.com/generative-ai-app-builder/docs/create-datastore-ingest). You'll ask a question about a GoogleSQL query to create an [object table in BigQuery](https://cloud.google.com/bigquery/docs/object-table-introduction)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Creating a data store in Vertex AI Search\n", + "\n", + "Follow the steps below to create a data store in Vertex AI Search with sample data. In this example, you'll use a website-based data store that contains content from the Google Cloud website, including documentation.\n", + "\n", + "1. In the Google Cloud console, go to the [Agent Builder](https://console.cloud.google.com/gen-app-builder/engines) page.\n", + "2. Click **New app**.\n", + "3. In the Select app type pane, select **Search**.\n", + "4. If you are offered a choice of generic or media content, click **Generic**.\n", + "5. To create a website search app, make sure **Enterprise features** is turned on.\n", + "6. If you don't plan to use **Advanced LLM features** for this app, turn that option off.\n", + "7. 
In the **Your app name** field, enter a name for your app. Your app ID appears under the app name.\n", + "8. In the **External name** of your company or organization field, enter the company or organization name.\n", + "9. Select **global (Global)** as the location for your app, and then click Continue.\n", + "10. In the **Data stores** pane, click **Create new data store**.\n", + "11. In the Select a data source pane, select **Website URLs**.\n", + "12. Make sure that **Advanced website indexing** is turned off.\n", + "13. In the **Specify the websites** for your data store pane, in the Sites to include field, enter `cloud.google.com/*` and then click Continue.\n", + "14. In the **Configure your data store** pane, enter a name for your data store, and then click **Create**.\n", + "15. On the **Data stores** page, select your new data store, and then click **Create**.\n", + "\n", + "Once you've created a data store, obtain the Data Store ID and input it below." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "DATA_STORE_PROJECT_ID = PROJECT_ID # @param {type:\"string\"}\n", + "DATA_STORE_REGION = \"global\" # @param {type:\"string\"}\n", + "# Replace this with your data store ID from Vertex AI Search\n", + "DATA_STORE_ID = \"\" # TODO: ENTER YOUR DATASTORE ID" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now you can ask a question about object tables in BigQuery and when to use them:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "PROMPT = \"When to use an object table in BigQuery?\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Text generation without grounding\n", + "\n", + "Make a prediction request to the LLM with no grounding:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + 
"outputs": [], + "source": [ + "response = text_model.predict(\n", + "    PROMPT,\n", + "    **parameters,\n", + ")\n", + "print(f\"Response from model without grounding:\\n {response.text}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Text generation grounded in Vertex AI Search results\n", + "\n", + "Now you can add the `grounding_source` keyword arg with a grounding source of `GroundingSource.VertexAISearch()` to instruct the LLM to first perform a search within your custom data store and then construct an answer based on the relevant documents:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "if DATA_STORE_ID and DATA_STORE_REGION:\n", + "    # Use Vertex AI Search data store\n", + "    grounding_source = GroundingSource.VertexAISearch(\n", + "        data_store_id=DATA_STORE_ID, location=DATA_STORE_REGION\n", + "    )\n", + "else:\n", + "    raise ValueError(\"Please provide DATA_STORE_ID and DATA_STORE_REGION\")\n", + "\n", + "response = text_model.predict(\n", + "    PROMPT,\n", + "    grounding_source=grounding_source,\n", + "    **parameters,\n", + ")\n", + "print(f\"Response from Model:\\n{response.text}\")\n", + "print(f\"\\n\\nGrounding Metadata:\\n{response.grounding_metadata}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Note that the response without grounding contains only limited information from the LLM about object tables in BigQuery, which might not be accurate, whereas the response grounded in Vertex AI Search results contains up-to-date information from the Google Cloud documentation about BigQuery, along with citations for that information." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Grounded Chat Responses" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can also use grounding when working with chat models in Vertex AI. 
In this example, you'll compare ungrounded LLM responses with responses grounded in the results of a data store in Vertex AI Search.\n", + "\n", + "You'll ask a question about Vertex AI and a follow-up question about managed datasets in Vertex AI:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "PROMPT = \"How can I ground LLM responses in Vertex AI?\"\n", + "PROMPT_FOLLOWUP = \"Is grounding available in PaLM models?\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Chat session without grounding\n", + "\n", + "Start a chat session and send messages to the LLM with no grounding:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "chat = chat_model.start_chat()\n", + "\n", + "print(f\"PROMPT: {PROMPT}\")\n", + "response = chat.send_message(PROMPT)\n", + "print(response.text)\n", + "\n", + "print(f\"PROMPT: {PROMPT_FOLLOWUP}\")\n", + "response = chat.send_message(PROMPT_FOLLOWUP)\n", + "print(response.text)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Chat session grounded in Vertex AI Search results\n", + "\n", + "Now you can add the `grounding_source` keyword arg with a grounding source of `GroundingSource.VertexAISearch()` to instruct the chat model to first perform a search within your custom data store, then construct an answer based on the relevant documents:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "chat = chat_model.start_chat()\n", + "grounding_source = GroundingSource.VertexAISearch(\n", + "    data_store_id=DATA_STORE_ID, location=DATA_STORE_REGION\n", + ")\n", + "\n", + "print(f\"PROMPT: {PROMPT}\")\n", + "response = chat.send_message(\n", + "    PROMPT,\n", + "    
grounding_source=grounding_source,\n", + ")\n", + "print(response.text)\n", + "print(response.grounding_metadata)\n", + "\n", + "print(f\"PROMPT: {PROMPT_FOLLOWUP}\")\n", + "response = chat.send_message(\n", + " PROMPT_FOLLOWUP,\n", + " grounding_source=grounding_source,\n", + ")\n", + "print(response.text)\n", + "print(response.grounding_metadata)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Copyright 2024 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" + ] + } + ], + "metadata": { + "colab": { + "collapsed_sections": [], + "name": "notebook_template.ipynb", + "toc_visible": true + }, + "environment": { + "kernel": "conda-base-py", + "name": "workbench-notebooks.m120", + "type": "gcloud", + "uri": "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-notebooks:m120" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel) (Local)", + "language": "python", + "name": "conda-base-py" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.14" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +}
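Reviewer note on the sampling parameters configured in the notebook above: `temperature`, `top_p`, and `top_k` jointly control how the next token is chosen. The standalone sketch below (not the actual Vertex AI decoder, and using hypothetical token probabilities) illustrates the order of operations: `top_k` first caps the number of candidate tokens, then `top_p` trims them to the smallest set whose cumulative probability reaches the threshold; a real decoder renormalizes the survivors and samples one according to `temperature`.

```python
def filter_candidates(probs, top_k=40, top_p=0.8):
    """Keep the top_k most probable tokens, then cut to the smallest
    prefix whose cumulative probability reaches top_p (nucleus cutoff)."""
    # Rank tokens by probability and apply the top_k cap first.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:  # stop once the top_p mass is covered
            break
    return kept

# Hypothetical next-token distribution for illustration only.
candidates = {"the": 0.5, "a": 0.25, "an": 0.15, "of": 0.1}
print(filter_candidates(candidates, top_k=3, top_p=0.8))  # ['the', 'a', 'an']
```

With `top_p=0.8`, "of" is dropped because "the", "a", and "an" already cover 0.9 of the probability mass; lowering `top_p` to 0.5 would keep only "the".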