diff --git a/docs/docs/integrations/providers/nimble.ipynb b/docs/docs/integrations/providers/nimble.ipynb new file mode 100644 index 0000000000000..b2d8f98dbf2b0 --- /dev/null +++ b/docs/docs/integrations/providers/nimble.ipynb @@ -0,0 +1,107 @@ +{ + "cells": [ + { + "cell_type": "raw", + "id": "afaf8039", + "metadata": { + "id": "afaf8039" + }, + "source": [ + "---\n", + "sidebar_label: Nimble\n", + "---" + ] + }, + { + "cell_type": "markdown", + "id": "72ee0c4b-9764-423a-9dbf-95129e185210", + "metadata": { + "id": "72ee0c4b-9764-423a-9dbf-95129e185210" + }, + "source": [ + "# Nimble\n", + "\n", + "[Nimble](https://www.linkedin.com/company/nimbledata) is the first business external data platform. Its award-winning AI-powered data structuring technology connects business users with public web knowledge, making data-driven decision-making easier than ever.\n", + "Nimble delivers mission-critical, real-time external data for advanced business intelligence, price comparison, and other sales and marketing use cases, turning public data into immediate business value.\n", + "\n", + "If you'd like to learn more about Nimble, visit [nimbleway.com](https://www.nimbleway.com/).\n", + "\n", + "\n", + "## Components\n", + "\n", + "Nimble currently exposes the following component:\n", + "\n", + "* **Retriever** - Queries the web and returns parsed textual results, using one of several supported search engines.\n", + "\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "source": [ + "## Usage" + ], + "metadata": { + "id": "AuMFgVFrKbNH" + }, + "id": "AuMFgVFrKbNH" + }, + { + "cell_type": "markdown", + "source": [ + "To use the Nimble provider, you must supply an API key, as shown below:" + ], + "metadata": { + "id": "sFlPjZX9KdK6" + }, + "id": "sFlPjZX9KdK6" + }, + { + "cell_type": "code", + "source": [ + "import getpass\n", + "import os\n", + "\n", + "os.environ[\"NIMBLE_API_KEY\"] = getpass.getpass()" + ], + "metadata": { + "id": "eAqSHZ-Z8R3F" + }, + "id": "eAqSHZ-Z8R3F", + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "For more information about the authentication process, see the [Nimble APIs Authentication Documentation](https://docs.nimbleway.com/nimble-sdk/web-api/nimble-web-api-quick-start-guide/nimble-apis-authentication).\n",
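+ "\n", + "As a minimal sketch, assuming the `langchain-nimble` package (covered in the `NimbleSearchRetriever` retriever guide) is installed, the retriever can be used like this:\n", + "\n", + "```python\n", + "# Assumes `pip install langchain-nimble` and the NIMBLE_API_KEY environment variable set above\n", + "from langchain_nimble import NimbleSearchRetriever\n", + "\n", + "retriever = NimbleSearchRetriever(k=3)  # k = number of results to return\n", + "retriever.invoke(\"Latest trends in artificial intelligence\")\n", + "```"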
+ ], + "metadata": { + "id": "WfwnI_RS8PO5" + }, + "id": "WfwnI_RS8PO5" + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.4" + }, + "colab": { + "provenance": [] + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/docs/docs/integrations/retrievers/nimble.ipynb b/docs/docs/integrations/retrievers/nimble.ipynb new file mode 100644 index 0000000000000..dd890ad44b92a --- /dev/null +++ b/docs/docs/integrations/retrievers/nimble.ipynb @@ -0,0 +1,526 @@ +{ + "cells": [ + { + "cell_type": "raw", + "id": "afaf8039", + "metadata": { + "id": "afaf8039" + }, + "source": [ + "---\n", + "sidebar_label: Nimble\n", + "---" + ] + }, + { + "cell_type": "markdown", + "id": "72ee0c4b-9764-423a-9dbf-95129e185210", + "metadata": { + "id": "72ee0c4b-9764-423a-9dbf-95129e185210" + }, + "source": [ + "# NimbleSearchRetriever\n", + "\n", + " `NimbleSearchRetriever` enables developers to build RAG applications and AI Agents that can search, access, and retrieve online information from anywhere on the web.\n", + "\n", + " `NimbleSearchRetriever` harnesses Nimble's Data APIs to execute search queries and retrieve web data in an efficient, scalable, and effective fashion.\n", + " It has two modes:\n", + "\n", + "* **Search & Retrieve**: Execute a search query, get the top result URLs, and retrieve the text from those URLs.\n", + "* **Retrieve**: Provide a list of URLs, and retrieve the text/data from those URLs\n", + "\n", + "\n", + "If you'd like to learn more about the underlying Nimble APIs, visit the [documentation here](https://docs.nimbleway.com/nimble-sdk/web-api/web-api-overview).\n", + "\n", + "\n", + "## Setup\n", + "\n", + "To begin using `NimbleSearchRetriever`, you'll first need to open an account with Nimble and subscribe to a plan. 
Nimble offers free trials, [which you can register for here](https://app.nimbleway.com/signup?returnTo=/pipelines/nimbleapi).\n", + "\n", + "For more information about available plans, see the [Pricing page](https://www.nimbleway.com/pricing).\n", + "\n", + "Once you have registered, you'll receive your API credentials, which you can use to generate the authentication credential string by Base64-encoding them in the following fashion:\n", + "\n", + "```\n", + "base64(username:password)\n", + "```\n", + "\n", + "You can set your credential string as an environment variable so `NimbleSearchRetriever` will pick it up automatically, without having to pass it inline each time.\n", + "\n", + "\n" + ] + }, + { + "cell_type": "code", + "source": [ + "import getpass\n", + "import os\n", + "\n", + "os.environ[\"NIMBLE_API_KEY\"] = getpass.getpass()" + ], + "metadata": { + "id": "eAqSHZ-Z8R3F" + }, + "id": "eAqSHZ-Z8R3F", + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "For more information about the authentication process, see the [Nimble APIs Authentication Documentation](https://docs.nimbleway.com/nimble-sdk/web-api/nimble-web-api-quick-start-guide/nimble-apis-authentication).\n", + "\n", + "If you want to get automated tracing for individual queries, you can set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n" + ], + "metadata": { + "id": "WfwnI_RS8PO5" + }, + "id": "WfwnI_RS8PO5" + }, + { + "cell_type": "code", + "id": "a15d341e-3e26-4ca3-830b-5aab30ed66de", + "metadata": { + "id": "a15d341e-3e26-4ca3-830b-5aab30ed66de" + }, + "source": [ + "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n", + "# os.environ[\"LANGSMITH_TRACING\"] = \"true\"" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "id": "0730d6a1-c893-4840-9817-5e5251676d5d", + "metadata": { + "id": "0730d6a1-c893-4840-9817-5e5251676d5d" + }, + "source": [ + "### Installation\n", + "\n", + "This retriever lives in the `langchain-nimble` package."
+ ] + }, + { + "cell_type": "code", + "id": "652d6238-1f87-422a-b135-f5abbb8652fc", + "metadata": { + "id": "652d6238-1f87-422a-b135-f5abbb8652fc", + "colab": { + "base_uri": "https://localhost:8080/" + }, + "outputId": "d4d68bd0-6b7d-450b-9861-8d92172ca727" + }, + "source": [ + "%pip install -U langchain-nimble\n", + "%pip install -U langchain_openai" + ], + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "Requirement already satisfied: langchain-nimble in /usr/local/lib/python3.11/dist-packages (0.1.1)\n", + "Requirement already satisfied: langchain-core<0.4.0,>=0.3.15 in /usr/local/lib/python3.11/dist-packages (from langchain-nimble) (0.3.32)\n", + "Requirement already satisfied: PyYAML>=5.3 in /usr/local/lib/python3.11/dist-packages (from langchain-core<0.4.0,>=0.3.15->langchain-nimble) (6.0.2)\n", + "Requirement already satisfied: jsonpatch<2.0,>=1.33 in /usr/local/lib/python3.11/dist-packages (from langchain-core<0.4.0,>=0.3.15->langchain-nimble) (1.33)\n", + "Requirement already satisfied: langsmith<0.4,>=0.1.125 in /usr/local/lib/python3.11/dist-packages (from langchain-core<0.4.0,>=0.3.15->langchain-nimble) (0.3.2)\n", + "Requirement already satisfied: packaging<25,>=23.2 in /usr/local/lib/python3.11/dist-packages (from langchain-core<0.4.0,>=0.3.15->langchain-nimble) (24.2)\n", + "Requirement already satisfied: pydantic<3.0.0,>=2.5.2 in /usr/local/lib/python3.11/dist-packages (from langchain-core<0.4.0,>=0.3.15->langchain-nimble) (2.10.6)\n", + "Requirement already satisfied: tenacity!=8.4.0,<10.0.0,>=8.1.0 in /usr/local/lib/python3.11/dist-packages (from langchain-core<0.4.0,>=0.3.15->langchain-nimble) (9.0.0)\n", + "Requirement already satisfied: typing-extensions>=4.7 in /usr/local/lib/python3.11/dist-packages (from langchain-core<0.4.0,>=0.3.15->langchain-nimble) (4.12.2)\n", + "Requirement already satisfied: jsonpointer>=1.9 in /usr/local/lib/python3.11/dist-packages (from jsonpatch<2.0,>=1.33->langchain-core<0.4.0,>=0.3.15->langchain-nimble) (3.0.0)\n", + "Requirement already satisfied: httpx<1,>=0.23.0 in /usr/local/lib/python3.11/dist-packages (from langsmith<0.4,>=0.1.125->langchain-core<0.4.0,>=0.3.15->langchain-nimble) (0.28.1)\n", + "Requirement already satisfied: orjson<4.0.0,>=3.9.14 in /usr/local/lib/python3.11/dist-packages (from langsmith<0.4,>=0.1.125->langchain-core<0.4.0,>=0.3.15->langchain-nimble) (3.10.15)\n", + "Requirement already satisfied: requests<3,>=2 in /usr/local/lib/python3.11/dist-packages (from langsmith<0.4,>=0.1.125->langchain-core<0.4.0,>=0.3.15->langchain-nimble) (2.32.3)\n", + "Requirement already satisfied: requests-toolbelt<2.0.0,>=1.0.0 in /usr/local/lib/python3.11/dist-packages (from langsmith<0.4,>=0.1.125->langchain-core<0.4.0,>=0.3.15->langchain-nimble) (1.0.0)\n", + "Requirement already satisfied: zstandard<0.24.0,>=0.23.0 in /usr/local/lib/python3.11/dist-packages (from langsmith<0.4,>=0.1.125->langchain-core<0.4.0,>=0.3.15->langchain-nimble) (0.23.0)\n", + "Requirement already satisfied: annotated-types>=0.6.0 in /usr/local/lib/python3.11/dist-packages (from pydantic<3.0.0,>=2.5.2->langchain-core<0.4.0,>=0.3.15->langchain-nimble) (0.7.0)\n", + "Requirement already satisfied: pydantic-core==2.27.2 in /usr/local/lib/python3.11/dist-packages (from pydantic<3.0.0,>=2.5.2->langchain-core<0.4.0,>=0.3.15->langchain-nimble) (2.27.2)\n", + "Requirement already satisfied: anyio in /usr/local/lib/python3.11/dist-packages (from 
+ { + "cell_type": "markdown", + "id": "a38cde65-254d-4219-a441-068766c0d4b5", + "metadata": { + "id": "a38cde65-254d-4219-a441-068766c0d4b5" + }, + "source": [ + "## Instantiation\n", + "\n", + "Now we can instantiate our retriever:\n" + ] + }, + { + "cell_type": "code", + "id": "70cc8e65-2a02-408a-bbc6-8ef649057d82", + "metadata": { + "ExecuteTime": { + "end_time": "2025-01-20T08:59:49.717836Z", + "start_time": "2025-01-20T08:59:49.712341Z" + }, + "id": "70cc8e65-2a02-408a-bbc6-8ef649057d82" + }, + "source": [ + "from langchain_nimble import NimbleSearchRetriever\n", + "\n", + "retriever = NimbleSearchRetriever(k=3)" + ], + "outputs": [], + "execution_count": 4 + }, + { + "cell_type": "markdown", + "id": "5c5f2839-4020-424e-9fc9-07777eede442", + "metadata": { + "id": "5c5f2839-4020-424e-9fc9-07777eede442" + }, + "source": [ + "## Usage" + ] + }, + { + "cell_type": "markdown", + "source": [ + "`NimbleSearchRetriever` has the following arguments:\n", + "\n", + "\n", + "* `k` (optional) integer - Number of results to return (up to 20)\n", + "* `api_key` (optional) string - Nimble's API key; it can be passed directly when instantiating the retriever or set via the `NIMBLE_API_KEY` environment variable\n", + "* `search_engine` (optional) string - The search engine your query will be executed through. You can choose from:\n", + "  * `google_search` (default value) - Google's search engine\n", + "  * `google_sge` - Google's search generative experience (read more [here](https://www.nimbleway.com/blog/google-sge))\n", + "  * `bing_search` - Bing's search engine\n", + "  * `yandex_search` - Yandex's search engine\n", + "* `render` (optional) boolean - Enables or disables JavaScript rendering on the target page (if enabled, results may return more slowly)\n", + "* `locale` (optional) string - LCID standard locale used for the URL request. Alternatively, use `auto` for an automatic locale based on country targeting.\n", + "* `country` (optional) string - Country used to access the target URL; use ISO Alpha-2 country codes, e.g. 
US, DE, GB\n", + "* `parsing_type` (optional) string - The text structure of the returned `page_content`\n", + "  * `markdown` - Markdown format\n", + "  * `simplified_html` (default value) - Compressed version of the original HTML document (~8% of the original HTML size)\n", + "  * `plain_text` - Extracts just the text from the HTML\n", + "* `links` (optional) array of strings - Links to the requested websites to scrape; if provided, the retriever will return the raw HTML content from these URLs **(this will activate the second mode, Retrieve)**\n", + "\n", + "You can read more about each argument in [Nimble's docs](https://docs.nimbleway.com/nimble-sdk/web-api/vertical-endpoints/serp-api/real-time-search-request#request-options).\n", + "\n" + ], + "metadata": { + "id": "bwyne9zpZrad" + }, + "id": "bwyne9zpZrad" + }, + { + "cell_type": "markdown", + "source": [ + "### Example of Search & Retrieve Mode with a search query string\n" + ], + "metadata": { + "id": "e7FHmVL-rprU" + }, + "id": "e7FHmVL-rprU" + }, + { + "cell_type": "code", + "id": "51a60dbe-9f2e-4e04-bb62-23968f17164a", + "metadata": { + "ExecuteTime": { + "end_time": "2025-01-20T09:02:10.025272Z", + "start_time": "2025-01-20T09:01:43.352667Z" + }, + "id": "51a60dbe-9f2e-4e04-bb62-23968f17164a", + "outputId": "22ad2629-0e2e-40de-a826-580be9709475", + "colab": { + "base_uri": "https://localhost:8080/" + }, + "collapsed": true + }, + "source": [ + "query = \"Latest trends in artificial intelligence\"\n", + "\n", + "retriever.invoke(query)" + ], + "outputs": [ + { + "output_type": "execute_result", + "data": { + "text/plain": [ + "[Document(metadata={'title': '8 AI and machine learning trends to watch in 2025', 'snippet': 'Jan 3, 2025 — 1. Hype gives way to more pragmatic approaches · 2. Generative AI moves beyond chatbots · 3. AI agents are the next frontier · 4. Generative AI\\xa0...', 'url': 'https://www.techtarget.com/searchenterpriseai/tip/9-top-AI-and-machine-learning-trends', 'position': 1, 'entity_type': 'OrganicResult'}, page_content='8 AI and machine learning trends to watch in 2025 | TechTarget\\nSearch Enterprise AI\\nSearch the TechTarget Network\\nLogin\\nRegister\\nExplore the Network\\nTechTarget Network\\nBusiness Analytics\\nCIO\\nData Management\\nERP\\nSearch Enterprise AI\\nAI Business Strategies\\nAI Careers\\nAI Infrastructure\\nAI Platforms\\nAI Technologies\\nMore Topics\\nApplications of AI\\nML Platforms\\nOther Content\\nNews\\nFeatures\\nTips\\nWebinars\\n2024 IT Salary Survey Results\\nSponsored Sites\\nMore\\nAnswers\\nConference Guides\\nDefinitions\\nOpinions\\nPodcasts\\nQuizzes\\nTech Accelerators\\nTutorials\\nVideos\\nFollow:\\nHome\\nAI business strategies\\nTech Accelerator\\nWhat is enterprise AI? A complete guide for businesses\\nPrev\\nNext\\n8 jobs that AI can\\'t replace and why\\n10 top artificial intelligence certifications and courses for 2025\\nDownload this guide1\\nX\\nFree Download\\nA guide to artificial intelligence in the enterprise\\nThis wide-ranging guide to artificial intelligence in the enterprise provides the building blocks for becoming successful business consumers of AI technologies. It starts with introductory explanations of AI\\'s history, how AI works and the main types of AI. The importance and impact of AI is covered next, followed by information on AI\\'s key benefits and risks, current and potential AI use cases, building a successful AI strategy, steps for implementing AI tools in the enterprise and technological breakthroughs that are driving the field forward. 
Throughout the guide, we include hyperlinks to TechTarget articles that provide more detail and insights on the topics discussed.\\nFeature\\n8 AI and machine learning trends to watch in 2025\\nAI agents, multimodal models, an emphasis on real-world results -- learn about the top AI and machine learning trends and what they mean for businesses in 2025.\\nShare this item with your network:\\nBy\\nLev Craig,\\nSite Editor\\nPublished: 03 Jan 2025\\nGenerative AI is at a crossroads. It\\'s now more than two years since ChatGPT\\'s launch, and the initial optimism about AI\\'s potential is decidedly tempered by an awareness of its limitations and costs.\\nThe 2025 AI landscape reflects that complexity. While excitement still abounds -- particularly for emerging areas, like agentic AI and multimodal models -- it\\'s also poised to be a year of growing pains.\\nCompanies are increasingly looking for proven results from generative AI, rather than early-stage prototypes. That\\'s no easy feat for a technology that\\'s often expensive, error-prone and vulnerable to misuse. And regulators will need to balance innovation and safety, while keeping up with a fast-moving tech environment.\\nHere are eight of the top AI trends to prepare for in 2025.\\n1. Hype gives way to more pragmatic approaches\\nSince 2022, there\\'s been an explosion of interest and innovation in generative AI, but actual adoption remains inconsistent. Companies often struggle to move generative AI projects, whether internal productivity tools or customer-facing applications, from pilot to production.\\nThis article is part of\\nWhat is enterprise AI? A complete guide for businesses\\nWhich also includes:\\nHow can AI drive revenue? Here are 10 approaches\\n8 jobs that AI can\\'t replace and why\\n8 AI and machine learning trends to watch in 2025\\nAlthough many businesses have explored generative AI through proofs of concept, fewer have fully integrated it into their operations. In a September 2024 research report, Informa TechTarget\\'s Enterprise Strategy Group found that, although over 90% of organizations had increased their generative AI use over the previous year, only 8% considered their initiatives mature.\\n\"The most surprising thing for me [in 2024] is actually the lack of adoption that we\\'re seeing,\" said Jen Stave, launch director for the Digital Data Design Institute at Harvard University. \"When you look across businesses, companies are investing in AI. They\\'re building their own custom tools. They\\'re buying off-the-shelf enterprise versions of the large language models (LLMs). But there really hasn\\'t been this groundswell of adoption within companies.\"\\nOne reason for this is AI\\'s uneven impact across roles and job functions. Organizations are discovering what Stave termed the \"jagged technological frontier,\" where AI enhances productivity for some tasks or employees, while diminishing it for others. A junior analyst, for example, might significantly increase their output by using a tool that only bogs down a more experienced counterpart.\\n\"Managers don\\'t know where that line is, and employees don\\'t know where that line is,\" Stave said. \"So, there\\'s a lot of uncertainty and experimentation.\"\\nDespite the sky-high levels of generative AI hype, the reality of slow adoption is hardly a surprise to anyone with experience in enterprise tech. In 2025, expect businesses to push harder for measurable outcomes from generative AI: reduced costs, demonstrable ROI and efficiency gains.\\n2. 
Generative AI moves beyond chatbots\\nWhen most laypeople hear the term generative AI, they think of tools like ChatGPT and Claude powered by LLMs. Early explorations from businesses, too, have tended to involve incorporating LLMs into products and services via chat interfaces. But, as the technology matures, AI developers, end users and business customers alike are looking beyond chatbots.\\n\"People need to think more creatively about how to use these base tools and not just try to plop a chat window into everything,\" said Eric Sydell, founder and CEO of Vero AI, an AI and analytics platform.\\nThis transition aligns with a broader trend: building software atop LLMs rather than deploying chatbots as standalone tools. Moving from chatbot interfaces to applications that use LLMs on the back end to summarize or parse unstructured data can help mitigate some of the issues that make generative AI difficult to scale.\\n\"[A chatbot] can help an individual be more effective ... but it\\'s very one on one,\" Sydell said. \"So, how do you scale that in an enterprise-grade way?\"\\nHeading into 2025, some areas of AI development are starting to move away from text-based interfaces entirely. Increasingly, the future of AI looks to center around multimodal models, like OpenAI\\'s text-to-video Sora and ElevenLabs\\' AI voice generator, which can handle nontext data types, such as audio, video and images.\\n\"AI has become synonymous with large language models, but that\\'s just one type of AI,\" Stave said. \"It\\'s this multimodal approach to AI [where] we\\'re going to start seeing some major technological advancements.\"\\nRobotics is another avenue for developing AI that goes beyond textual conversations -- in this case, to interact with the physical world. Stave anticipates that foundation models for robotics could be even more transformative than the arrival of generative AI.\\n\"Think about all of the different ways we interact with the physical world,\" she said. \"I mean, the applications are just infinite.\"\\n3. AI agents are the next frontier\\nThe second half of 2024 has seen growing interest in agentic AI models capable of independent action. Tools like Salesforce\\'s Agentforce are designed to autonomously handle tasks for business users, managing workflows and taking care of routine actions, like scheduling and data analysis.\\nAgentic AI is in its early stages. Human direction and oversight remain critical, and the scope of actions that can be taken is usually narrowly defined. But, even with those limitations, AI agents are attractive for a wide range of sectors.\\nAutonomous functionality isn\\'t totally new, of course; by now, it\\'s a well-established cornerstone of enterprise software. The difference with AI agents lies in their adaptability: Unlike simple automation software, agents can adapt to new information in real time, respond to unexpected obstacles and make independent decisions.\\nYet, that same independence also entails new risks. Grace Yee, senior director of ethical innovation at Adobe, warned of \"the harm that can come ... as agents can start, in some cases, acting upon your behalf to help with scheduling or do other tasks.\" Generative AI tools are notoriously prone to hallucinations, or generating false information -- what happens if an autonomous agent makes similar mistakes with immediate, real-world consequences?\\nSydell cited similar concerns, noting that some use cases will raise more ethical issues than others. 
\"When you start to get into high-risk applications -- things that have the potential to harm or help individuals -- the standards have to be way higher,\" he said.\\nCompared with generative AI, agentic AI offers greater autonomy and adaptability.\\n4. Generative AI models become commodities\\nThe generative AI landscape is evolving rapidly, with foundation models seemingly now a dime a dozen. As 2025 begins, the competitive edge is moving away from which company has the best model to which businesses excel at fine-tuning pretrained models or developing specialized tools to layer on top of them.\\nIn a recent newsletter, analyst Benedict Evans compared the boom in generative AI models to the PC industry of the late 1980s and 1990s. In that era, performance comparisons focused on incremental improvements in specs like CPU speed or memory, similar to how today\\'s generative AI models are evaluated on niche technical benchmarks.\\nOver time, however, these distinctions faded as the market reached a good-enough baseline, with differentiation shifting to factors such as cost, UX and ease of integration. Foundation models seem to be on a similar trajectory: As performance converges, advanced models are becoming more or less interchangeable for many use cases.\\nIn a commoditized model landscape, the focus is no longer number of parameters or slightly better performance on a certain benchmark, but instead usability, trust and interoperability with legacy systems. In that environment, AI companies with established ecosystems, user-friendly tools and competitive pricing are likely to take the lead.\\n5. AI applications and data sets become more domain-specific\\nLeading AI labs, like OpenAI and Anthropic, claim to be pursuing the ambitious goal of creating artificial general intelligence (AGI), commonly defined as AI that can perform any task a human can. But AGI -- or even the comparatively limited capabilities of today\\'s foundation models -- is far from necessary for most business applications.\\nFor enterprises, interest in narrow, highly customized models started almost as soon as the generative AI hype cycle began. A narrowly tailored business application simply doesn\\'t require the degree of versatility necessary for a consumer-facing chatbot.\\n\"There\\'s a lot of focus on the general-purpose AI models,\" Yee said. \"But I think what is more important is really thinking through: How are we using that technology ... and is that use case a high-risk use case?\"\\nIn short, businesses should consider more than what technology is being deployed and instead think more deeply about who will ultimately be using it and how. \"Who\\'s the audience?\" Yee said. \"What\\'s the intended use case? What\\'s the domain it\\'s being used in?\"\\nAlthough, historically, larger data sets have driven model performance improvements, researchers and practitioners are debating whether this trend can hold. Some have suggested that, for certain tasks and populations, model performance plateaus -- or even worsens -- as algorithms are fed more data.\\n\"The motivation for scraping ever-larger data sets may be based on fundamentally flawed assumptions about model performance,\" authors Fernando Diaz and Michael Madaio wrote in their paper \"Scaling Laws Do Not Scale.\" \"That is, models may not, in fact, continue to improve as the data sets get larger -- at least not for all people or communities impacted by those models.\"\\n6. 
AI literacy becomes essential\\nGenerative AI\\'s ubiquity has made AI literacy an in-demand skill for everyone from executives to developers to everyday employees. That means knowing how to use these tools, assess their outputs and -- perhaps most importantly -- navigate their limitations.\\nNotably, although AI and machine learning talent remains in demand, developing AI literacy doesn\\'t need to mean learning to code or train models. \"You don\\'t necessarily have to be an AI engineer to understand these tools and how to use them and whether to use them,\" Sydell said. \"Experimenting, exploring, using the tools is massively helpful.\"\\nAmid the persistent generative AI hype, it can be easy to forget that the technology is still relatively new. Many people either haven\\'t used it at all or don\\'t use it regularly: A recent research paper found that, as of August 2024, less than half of Americans aged 18 to 64 use generative AI, and just over a quarter use it at work.\\nThat\\'s a faster pace of adoption compared with the PC or the internet, as the paper\\'s authors pointed out, but it\\'s still not a majority. There\\'s also a gap between businesses\\' official stances on generative AI and how real workers are using it in their day-to-day tasks.\\n\"If you look at how many companies say they\\'re using it, it\\'s actually a pretty low share who are formally incorporating it into their operations,\" David Deming, professor at Harvard University and one of the paper\\'s authors, told The Harvard Gazette. \"People are using it informally for a lot of different purposes, to help write emails, using it to look up things, using it to obtain documentation on how to do something.\"\\nStave sees a role for both companies and educational institutions in closing the AI skills gap. \"When you look at companies, they understand the on-the-job training that workers need,\" she said. \"They always have because that\\'s where the work takes place.\"\\nUniversities, in contrast, are increasingly offering skill-based, rather than role-based, education that\\'s available on an ongoing basis and applicable across multiple jobs. \"The business landscape is changing so fast. You can\\'t just quit and go back and get a master\\'s and learn everything new,\" Stave said. \"We have to figure out how to modularize the learning and get it out to people in real time.\"\\n7. Businesses adjust to an evolving regulatory environment\\nAs 2024 progressed, companies were faced with a fragmented and rapidly changing regulatory landscape. Whereas the EU set new compliance standards with the passage of the AI Act in 2024, the U.S. remains comparatively unregulated -- a trend likely to continue in 2025 under the Trump administration.\\n\"One thing that I think is pretty inadequate right now is legislation [and] regulation around these tools,\" Sydell said. \"It seems like that\\'s not going to happen anytime soon at this point.\" Stave likewise said she\\'s \"not expecting significant regulation from the new administration.\"\\nThat light-touch approach could promote AI development and innovation, but the lack of accountability also raises concerns about safety and fairness. 
Yee sees a need for regulation that protects the integrity of online speech, such as giving users access to provenance information about internet content, as well as anti-impersonation laws to protect creators.\\nTo minimize harm without stifling innovation, Yee said she\\'d like to see regulation that can be responsive to the risk level of a specific AI application. Under a tiered risk framework, she said, \"low-risk AI applications can go to market faster, [while] high-risk AI applications go through a more diligent process.\"\\nStave also pointed out that minimal oversight in the U.S. doesn\\'t necessarily mean that companies will operate in a fully unregulated environment. In the absence of a cohesive global standard, large incumbents operating in multiple regions typically end up adhering to the most stringent regulations by default. In this way, the EU\\'s AI Act could end up functioning similarly to GDPR, setting de facto standards for companies building or deploying AI worldwide.\\n8. AI-related security concerns escalate\\nThe widespread availability of generative AI, often at low or no cost, gives threat actors unprecedented access to tools for facilitating cyberattacks. That risk is poised to increase in 2025 as multimodal models become more sophisticated and readily accessible.\\nIn a recent public warning, the FBI described several ways cybercriminals are using generative AI for phishing scams and financial fraud. For example, an attacker targeting victims via a deceptive social media profile might write convincing bio text and direct messages with an LLM, while using AI-generated fake photos to lend credibility to the false identity.\\nAI video and audio pose a growing threat, too. Historically, models have been limited by telltale signs of inauthenticity, like robotic-sounding voices or lagging, glitchy video. While today\\'s versions aren\\'t perfect, they\\'re significantly better, especially if an anxious or time-pressured victim isn\\'t looking or listening too closely.\\nAudio generators can enable hackers to impersonate a victim\\'s trusted contacts, such as a spouse or colleague. Video generation has so far been less common, as it\\'s more expensive and offers more opportunities for error. But, in a highly publicized incident earlier this year, scammers successfully impersonated a company\\'s CFO and other staff members on a video call using deepfakes, leading a finance worker to send $25 million to fraudulent accounts.\\nOther security risks are tied to vulnerabilities within models themselves, rather than social engineering. Adversarial machine learning and data poisoning, where inputs and training data are intentionally designed to mislead or corrupt models, can damage AI systems themselves. To account for these risks, businesses will need to treat AI security as a core part of their overall cybersecurity strategies.\\nLev Craig covers AI and machine learning as site editor for TechTarget\\'s Enterprise AI site. 
Craig graduated from Harvard University with a bachelor\\'s degree in English and has previously written about enterprise IT, software development and cybersecurity.\\nNext Steps\\nThe year in AI: Catch up on the top AI news of 2024\\nWays enterprise AI will transform IT infrastructure this year\\nRelated Resources\\nAI business strategies for successful transformation\\n–Video\\nRedesigning Productivity in the Age of Cognitive Acceleration\\n–Replay\\nDig Deeper on AI business strategies\\nGoogle Gemini 2.0 explained: Everything you need to know\\nBy: Sean\\xa0Kerner\\nServiceNow intros AI agent studio and orchestrator\\nBy: Esther\\xa0Shittu\\nNvidia\\'s new model aims to move GenAI to physical world\\nBy: Esther\\xa0Shittu\\nNot-so-obvious AI predictions for 2025\\nSponsored News\\nPower Your Generative AI Initiatives With High-Performance, Reliable, ...\\n–Dell Technologies and Intel\\nPrivate AI Demystified\\n–Equinix\\nSustainability, AI and Dell PowerEdge Servers\\n–Dell Technologies and Intel\\nSee More\\nRelated Content\\nNvidia\\'s new model aims to move GenAI to physical ...\\n– Search Enterprise AI\\nOracle boosts generative AI service and intros new ...\\n– Search Enterprise AI\\nNew Google Gemini AI tie-ins dig into local codebases\\n– Search Software Quality\\nLatest TechTarget resources\\nBusiness Analytics\\nCIO\\nData Management\\nERP\\nSearch Business Analytics\\nDomo platform a difference-maker for check guarantee vendor\\nIngo Money succeeded with the analytics specialist\\'s suite after years of struggling to gain insights from spreadsheets and a ...\\nAgentic AI, data as a product among growing analytics trends\\nCollibra\\'s founder and chief data citizen reveals his predictions for 2025, and underpinning all is the need for strong ...\\nTrusted data at the core of successful GenAI adoption\\nA new study finds that only a third of organizations are successfully developing GenAI tools. The problem preventing success is ...\\nSearch CIO\\nBusinesses need to prepare as EU AI Act enforcement begins\\nThe EU AI Act\\'s Sunday enforcement deadline will be a test for EU enforcers as they begin assessing companies for compliance.\\nU.S. freeze on foreign aid may give China a leg up\\nAs the U.S. steps back on foreign aid, experts worry China may step in to fill the void.\\nOMB memo creates confusion for federal IT contractors\\nMass confusion caused by an Office of Management and Budget memo to freeze federal agency spending has federal IT contractors on ...\\nSearch Data Management\\nTop trends in big data for 2025 and beyond\\nBig data initiatives are being affected by various trends. Here are seven notable ones and what they mean for organizations ...\\nPinecone provides Assistant for generative AI development\\nThe vector database specialist is expanding beyond managing data with a suite of APIs and other tools that enable users to tap ...\\n18 top data catalog software tools to consider using in 2025\\nNumerous tools can be used to build and manage data catalogs. Here\\'s a look at the key features, capabilities and components of ...\\nSearch ERP\\nDemand vs. supply planning: Learn about the differences\\nDemand planning helps companies predict future demand for goods, while supply planning enables companies to order enough goods. 
...\\nTop 10 essential skills for ERP professionals in 2025\\nBoth hard and soft skills are essential for ERP professionals, including project management and being up to date with technology.\\nAcumatica cloud ERP aims for industry-focused AI value\\nNew AI functionality in the company\\'s cloud ERP platform could help customers evolve back-office transactional systems into ...\\nAbout Us\\nEditorial Ethics Policy\\nMeet The Editors\\nContact Us\\nAdvertisers\\nPartner with Us\\nMedia Kit\\nCorporate Site\\nContributors\\nReprints\\nAnswers\\nDefinitions\\nE-Products\\nEvents\\nFeatures\\nGuides\\nOpinions\\nPhoto Stories\\nQuizzes\\nTips\\nTutorials\\nVideos\\nAll Rights Reserved,\\nCopyright 2018 - 2025, TechTarget\\nPrivacy Policy\\nCookie Preferences\\nCookie Preferences\\nDo Not Sell or Share My Personal Information\\nClose'),\n", + " Document(metadata={'title': '5 AI Trends to Watch in 2025', 'snippet': 'Jan 6, 2025 — 5 trends in artificial intelligence · 1. Generative AI and democratization · 2. AI for workplace productivity · 3. Multimodal AI · 4. AI in science\\xa0...', 'url': 'https://www.coursera.org/articles/ai-trends', 'position': 2, 'entity_type': 'OrganicResult'}, page_content=\"5 AI Trends to Watch in 2025 | Coursera\\nFor IndividualsFor BusinessesFor UniversitiesFor Governments ExploreOnline DegreesCareersLog InJoin for Free 0DataAI and Machine Learning5 AI Trends to Watch in 20255 AI Trends to Watch in 2025Written by Coursera Staff • Updated on Jan 6, 2025Get to know some of the top trends in artificial intelligence in 2024. Artificial intelligence (AI) has taken the technology industry by storm, and it’s only growing from here. According to PricewaterhouseCoopers, 73 percent of US companies use AI in some capacity in their business [1].One of the most recent trends is generative AI, which has the potential to generate trillions of dollars in value across industries [2]. People worldwide have begun to incorporate generative AI into their workflows, adding to the popularity and penetration of AI.Learn about some of the top trends in AI.5 trends in artificial intelligenceArtificial intelligence is quickly transforming how we live and the business landscape in which we work. Wondering what some of the potential impacts of this exciting technology might be?Here are five of the top AI trends you can expect to see in 2024.1. Generative AI and democratizationGenerative AI is arguably the biggest trend in AI this year. When ChatGPT and other text and image generators became accessible to the general public, it was widely used and adopted by business teams worldwide. Along with this is the democratization of AI, enabling it to be available to everyone—even those without technical knowledge.\\xa0Generative AI is just one example of democratization. Hundreds of AI tools today allow us to create content faster, translate between languages, and populate search engines. It is changing how we communicate with each other, whether it’s between friends or between the media and the general public.Read more: How To Write ChatGPT Prompts: Your 2024 Guide2. AI for workplace productivityAnother trend we see in AI is its place in workplace productivity. Artificial intelligence can speed up and enhance how we work—in particular, how it automates time-consuming or repetitive tasks. 
Whether inputting data in a spreadsheet, writing an outline for a business plan, or controlling quality at a manufacturing plant, AI has massive potential to increase our productivity at work.For those who may be concerned about AI replacing jobs, this technology is often simply acting as a tool for automating repetition, leaving room for humans to make space for creativity, emotional intelligence, and moral judgment.3. Multimodal AIMany large language models (LLMs) process only text data. Multimodel models in AI can grasp information from different data types, like audio, video, and images, in addition to text. This technology is enabling search and content creation tools to become more seamless and intuitive and integrate more easily into other applications we already use.\\xa0For example, iPhones can now figure out who and what objects are in your photographs because they can process images, metadata text, and search data. Similar to how a human can look at a photo and identify what’s in it, multimodals enable that same characteristic.4. AI in science and health careBesides AI’s influence in the business workplace, AI tools have great potential in science and health care. Researchers, such as those at Microsoft, are now using AI to build tools to predict weather, estimate carbon emissions, and enable sustainable farming practices [3]. This trend aims to address and mitigate the effects of climate change.\\xa0Chatbots are being deployed in agriculture and health care, to help farmers identify a type of weed and to help medical professionals diagnose patients. While the accuracy of this AI is in progress, these steps can accelerate scientific discoveries and medical breakthroughs.AI in computer science: AI has become an in-demand skill for computer science professionals. You can get ahead of the curve by learning to leverage an AI coding partner for efficiency with Microsoft's Copilot for Software Development Specialization:5. Regulation and ethicsWith the proliferation of AI worldwide, the trend of mitigating any risks associated with AI is paramount. Government agencies and organizations like OpenAI are ensuring AI is used and deployed responsibly and ethically. In March 2024, the European Union debated a landmark comprehensive AI bill designed to regulate AI and address concerns for consumers. It is expected to become law this year.If AI is not regulated, data manipulation, misinformation, bias, and privacy risks can arise and pose greater societal risks. For example, tools can be susceptible to discrimination or legal risk if AI doesn’t collect data representative of a population. Generators like ChatGPT pull information from internet searches worldwide, but companies and publications have sued OpenAI for copyright infringement.Read more: AI Ethics: What It Is and Why It MattersTrends in AI security\\nCybersecurity is a major concern for AI, particularly as more and more business processes rely on computing resources with access to vast amounts of sensitive information. Some of the top cybersecurity artificial intelligence trends include:\\nPrivacy concerns: Generative models can help organizations be more productive, but business owners should ensure platforms are secure before sharing private information or trade secrets with them. Data breaches: While AI-driven systems can be used to better detect data breaches, they can also facilitate them. 
Improved analytics: AI can help organizations improve their security with improved analytics, capable of spotting trends in vast amounts of incident report data.\\nRead more: AI in Cybersecurity: How Businesses are Adapting Start learning AI todayAI is quickly changing how we work today. Learn the skills you need to thrive alongside this transformative technology by building your AI skills on Coursera today.Enroll in one of Coursera’s most popular courses AI for Everyone, from DeepLearning.AI. You’ll learn what artificial intelligence is, its impact on our lives, and how it can be applied to your job function. In Vanderbilt University's ChatGPT: Master Free AI Tools to Supercharge Productivity Specialization, you'll learn how to leverage ChatGPT's free AI to excel at project management, writing, data analytics, marketing, social media, and more for work and life.\\nArticle sources1.\\xa0PricewaterhouseCoopers. “2024 AI Business Predictions, https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html.” Accessed March 10, 2024.2.\\xa0McKinsey. “The economic potential of generative AI: The next productivity frontier, https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#introduction.” Accessed March 10, 2024.3.\\xa0Microsoft. “3 big AI trends to watch in 2024, https://news.microsoft.com/three-big-ai-trends-to-watch-in-2024/.” Accessed March 10, 2024.View all sources\\nKeep readingUpdated on Jan 6, 2025Written by:CCoursera StaffEditorial TeamCoursera’s editorial team is comprised of highly experienced professional editors, writers, and fact...This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.\\nCoursera FooterLearn Key TechnologiesPythonSQLMicrosoft ExcelPower BITableauR ProgrammingGitDockerAWSTensorFlowEssential SkillsData AnalyticsArtificial IntelligenceCybersecurityDigital MarketingMachine LearningStatistical AnalysisDatabase ManagementWeb DevelopmentFinancial ModelingBusiness AnalysisIndustry SolutionsHealthcare AnalyticsSalesDigital TransformationSupply ChainMarketing AnalyticsHR AnalyticsSocial Media MarketingRisk ManagementSustainabilityE-commerceCareer PathsData ScientistData AnalystMachine Learning EngineerFull Stack DeveloperProject ManagerProduct ManagerData EngineerDigital Marketing SpecialistCybersecurity AnalystCareer Aptitude TestCourseraAboutWhat We OfferLeadershipCareersCatalogCoursera PlusProfessional CertificatesMasterTrack® CertificatesDegreesFor EnterpriseFor GovernmentFor CampusBecome a PartnerSocial ImpactFree CoursesECTS Credit RecommendationsCommunityLearnersPartnersBeta TestersBlogThe Coursera PodcastTech BlogTeaching CenterMorePressInvestorsTermsPrivacyHelpAccessibilityContactArticlesDirectoryAffiliatesModern Slavery StatementDo Not Sell/ShareLearn Anywhere © 2025 Coursera Inc. 
All rights reserved.\"),\n", + " Document(metadata={'title': 'AI Index Report 2024 - Stanford University', 'snippet': 'This year, we have broadened our scope to more extensively cover essential trends such as technical advancements in AI, public perceptions of the technology,\\xa0...', 'url': 'https://aiindex.stanford.edu/report/', 'position': 3, 'entity_type': 'OrganicResult'}, page_content='AI Index Report 2024 – Artificial Intelligence Index\\nHome\\nAbout\\nAI Index Report\\nResearch\\nPeople\\nGlobal AI Vibrancy Tool\\nHAI\\nTHE AI INDEX REPORTMeasuring trends in AI\\nai iNDEX anNUAL rEPORTWelcome to the 2024 AI Index Report\\nDOWNLOAD THE FULL REPORT\\nDOWNLOAD INDIVIDUAL CHAPTERS\\nACCESS THE PUBLIC DATA\\nWelcome to the seventh edition of the AI Index report. The 2024 Index is our most comprehensive to date and arrives at an important moment when AI’s influence on society has never been more pronounced. This year, we have broadened our scope to more extensively cover essential trends such as technical advancements in AI, public perceptions of the technology, and the geopolitical dynamics surrounding its development. Featuring more original data than ever before, this edition introduces new estimates on AI training costs, detailed analyses of the responsible AI landscape, and an entirely new chapter dedicated to AI’s impact on science and medicine.The AI Index report tracks, collates, distills, and visualizes data related to artificial intelligence (AI). Our mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI.\\nTOP TAKEAWAYS\\n1. AI beats humans on some tasks, but not on all.AI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning and planning.\\n2. Industry continues to dominate frontier AI research.\\xa0In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high.\\n3. Frontier models get way more expensive.According to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI’s GPT-4 used an estimated $78 million worth of compute to train, while Google’s Gemini Ultra cost $191 million for compute.\\n4. The United States leads China, the EU, and the U.K. as the leading source of top AI models.\\xa0In 2023, 61 notable AI models originated from U.S.-based institutions, far outpacing the European Union’s 21 and China’s 15.\\n5. Robust and standardized evaluations for LLM responsibility are seriously lacking.New research from the AI Index reveals a significant lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models.\\n6. Generative AI investment skyrockets.Despite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion. 
Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds.\\n7. The data is in: AI makes workers more productive and leads to higher quality work.In 2023, several studies assessed AI’s impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. These studies also demonstrated AI’s potential to bridge the skill gap between low- and high-skilled workers. Still other studies caution that using AI without proper oversight can lead to diminished performance.\\n8. Scientific progress accelerates even further, thanks to AI.In 2022, AI began to advance scientific discovery. 2023, however, saw the launch of even more significant science-related AI applications—from AlphaDev, which makes algorithmic sorting more efficient, to GNoME, which facilitates the process of materials discovery.\\n9. The number of AI regulations in the United States sharply increases.The number of AI-related regulations in the U.S. has risen significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%.\\n10. People across the globe are more cognizant of AI’s potential impact—and more nervous.A survey from Ipsos shows that, over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, marking a 13 percentage point rise from 2022. In America, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, rising from 38% in 2022.\\nCHAPTERS\\nChapter 1: Research and Development\\nThis chapter studies trends in AI research and development. It begins by examining trends in AI publications and patents, and then examines trends in notable AI systems and foundation models. It concludes by analyzing AI conference attendance and open-source AI software projects.\\nDOWNLOAD CHAPTER 1\\n1. Industry continues to dominate frontier AI research.\\n2. More foundation models and more open foundation models.\\n3. Frontier models get way more expensive.\\n4. The United States leads China, the EU, and the U.K. as the leading source of top AI models.\\n5. The number of AI patents skyrockets.\\n6. China dominates AI patents.\\n7. Open-source AI research explodes.\\n8. The number of AI publications continues to rise.\\nIn 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high.\\nIn 2023, a total of 149 foundation models were released, more than double the amount released in 2022. Of these newly released models, 65.7% were open-source, compared to only 44.4% in 2022 and 33.3% in 2021.\\nAccording to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI’s GPT-4 used an estimated $78 million worth of compute to train, while Google’s Gemini Ultra cost $191 million for compute.\\nIn 2023, 61 notable AI models originated from U.S.-based institutions, far outpacing the European Union’s 21 and China’s 15.\\nFrom 2021 to 2022, AI patent grants worldwide increased sharply by 62.7%. 
Since 2010, the number of granted AI patents has increased more than 31 times.\\nIn 2022, China led global AI patent origins with 61.1%, significantly outpacing the United States, which accounted for 20.9% of AI patent origins. Since 2010, the U.S. share of AI patents has decreased from 54.1%.\\nSince 2011, the number of AI-related projects on GitHub has seen a consistent increase, growing from 845 in 2011 to approximately 1.8 million in 2023. Notably, there was a sharp 59.3% rise in the total number of GitHub AI projects in 2023 alone. The total number of stars for AI-related projects on GitHub also significantly increased in 2023, more than tripling from 4.0 million in 2022 to 12.2 million.\\nBetween 2010 and 2022, the total number of AI publications nearly tripled, rising from approximately 88,000 in 2010 to more than 240,000 in 2022. The increase over the last year was a modest 1.1%.\\nChapter 2: Technical Performance\\nThe technical performance section of this year’s AI Index offers a comprehensive overview of AI advancements in 2023. It starts with a high-level overview of AI technical performance, tracing its broad evolution over time. The chapter then examines the current state of a wide range of AI capabilities, including language processing, coding, computer vision (image and video analysis), reasoning, audio processing, autonomous agents, robotics, and reinforcement learning. It also shines a spotlight on notable AI research breakthroughs from the past year, exploring methods for improving LLMs through prompting, optimization, and fine-tuning, and wraps up with an exploration of AI systems’ environmental footprint.\\nDOWNLOAD CHAPTER 2\\n1. AI beats humans on some tasks, but not on all.\\n2. Here comes multimodal AI.\\n3. Harder benchmarks emerge.\\n4. Better AI means better data which means … even better AI.\\n5. Human evaluation is in.\\n6. Thanks to LLMs, robots have become more flexible.\\n7. More technical research in agentic AI.\\n8. Closed LLMs significantly outperform open ones.\\nAI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning and planning.\\nTraditionally AI systems have been limited in scope, with language models excelling in text comprehension but faltering in image processing, and vice versa. However, recent advancements have led to the development of strong multimodal models, such as Google’s Gemini and OpenAI’s GPT-4. These models demonstrate flexibility and are capable of handling images and text and, in some instances, can even process audio.\\nAI models have reached performance saturation on established benchmarks such as ImageNet, SQuAD, and SuperGLUE, prompting researchers to develop more challenging ones. In 2023, several challenging new benchmarks emerged, including SWE-bench for coding, HEIM for image generation, MMMU for general reasoning, MoCa for moral reasoning, AgentBench for agent-based behavior, and HaluEval for hallucinations.\\nNew AI models such as SegmentAnything and Skoltech are being used to generate specialized data for tasks like image segmentation and 3D reconstruction. Data is vital for AI technical improvements. 
The use of AI to create more data enhances current capabilities and paves the way for future algorithmic improvements, especially on harder tasks.\\nWith generative models producing high-quality text, images, and more, benchmarking has slowly started shifting toward incorporating human evaluations like the Chatbot Arena Leaderboard rather than computerized rankings like ImageNet or SQuAD. Public feeling about AI is becoming an increasingly important consideration in tracking AI progress.\\nThe fusion of language modeling with robotics has given rise to more flexible robotic systems like PaLM-E and RT-2. Beyond their improved robotic capabilities, these models can ask questions, which marks a significant step toward robots that can interact more effectively with the real world.\\nCreating AI agents, systems capable of autonomous operation in specific environments, has long challenged computer scientists. However, emerging research suggests that the performance of autonomous AI agents is improving. Current agents can now master complex games like Minecraft and effectively tackle real-world tasks, such as online shopping and research assistance.\\nOn 10 select AI benchmarks, closed models outperformed open ones, with a median performance advantage of 24.2%. Differences in the performance of closed and open models carry important implications for AI policy debates.\\nChapter 3: Responsible AI\\nAI is increasingly woven into nearly every facet of our lives. This integration is occurring in sectors such as education, finance, and healthcare, where critical decisions are often based on algorithmic insights. This trend promises to bring many advantages; however, it also introduces potential risks. Consequently, in the past year, there has been a significant focus on the responsible development and deployment of AI systems. The AI community has also become more concerned with assessing the impact of AI systems and mitigating risks for those affected.This chapter explores key trends in responsible AI by examining metrics, research, and benchmarks in four key responsible AI areas: privacy and data governance, transparency and explainability, security and safety, and fairness. Given that 4 billion people are expected to vote globally in 2024, this chapter also features a special section on AI and elections and more broadly explores the potential impact of AI on political processes.\\nDOWNLOAD CHAPTER 3\\n1. Robust and standardized evaluations for LLM responsibility are seriously lacking.\\n2. Political deepfakes are easy to generate and difficult to detect.\\n3. Researchers discover more complex vulnerabilities in LLMs.\\n4. Risks from AI are a concern for businesses across the globe.\\n5. LLMs can output copyrighted material.\\n6. AI developers score low on transparency, with consequences for research.\\n7. Extreme AI risks are difficult to analyze.\\n8. The number of AI incidents continues to rise.\\n9. ChatGPT is politically biased.\\nNew research from the AI Index reveals a significant lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models.\\nPolitical deepfakes are already affecting elections across the world, with recent research suggesting that existing AI deepfake detection methods perform with varying levels of accuracy. 
In addition, new projects like CounterCloud demonstrate how easily AI can create and disseminate fake content.\\nPreviously, most efforts to red team AI models focused on testing adversarial prompts that intuitively made sense to humans. This year, researchers found less obvious strategies to get LLMs to exhibit harmful behavior, like asking the models to infinitely repeat random words.\\nA global survey on responsible AI highlights that companies’ top AI-related concerns include privacy, security, and reliability. The survey shows that organizations are beginning to take steps to mitigate these risks. However, globally, most companies have so far only mitigated a portion of these risks.\\nMultiple researchers have shown that the generative outputs of popular LLMs may contain copyrighted material, such as excerpts from The New York Times or scenes from movies. Whether such output constitutes copyright violations is becoming a central legal question.\\nThe newly introduced Foundation Model Transparency Index shows that AI developers lack transparency, especially regarding the disclosure of training data and methodologies. This lack of openness hinders efforts to further understand the robustness and safety of AI systems.\\nOver the past year, a substantial debate has emerged among AI scholars and practitioners regarding the focus on immediate model risks, like algorithmic discrimination, versus potential long-term existential threats. It has become challenging to distinguish which claims are scientifically founded and should inform policymaking. This difficulty is compounded by the tangible nature of already present short-term risks in contrast with the theoretical nature of existential threats.\\nAccording to the AI Incident Database, which tracks incidents related to the misuse of AI, 123 incidents were reported in 2023, a 32.3% increase from 2022. Since 2013, AI incidents have grown by over twentyfold. A notable example includes AI-generated, sexually explicit deepfakes of Taylor Swift that were widely shared online.\\nResearchers find a significant bias in ChatGPT toward Democrats in the United States and the Labour Party in the U.K. This finding raises concerns about the tool’s potential to influence users’ political views, particularly in a year marked by major global elections.\\nChapter 4: Economy\\nThe integration of AI into the economy raises many compelling questions. Some predict that AI will drive productivity improvements, but the extent of its impact remains uncertain. A major concern is the potential for massive labor displacement—to what degree will jobs be automated versus augmented by AI? Companies are already utilizing AI in various ways across industries, but some regions of the world are witnessing greater investment inflows into this transformative technology. Moreover, investor interest appears to be gravitating toward specific AI subfields like natural language processing and data management.This chapter examines AI-related economic trends using data from Lightcast, LinkedIn, Quid, McKinsey, Stack Overflow, and the International Federation of Robotics (IFR). It begins by analyzing AI-related occupations, covering labor demand, hiring trends, skill penetration, and talent availability. The chapter then explores corporate investment in AI, introducing a new section focused specifically on generative AI. It further examines corporate adoption of AI, assessing current usage and how developers adopt these technologies. 
Finally, it assesses AI’s current and projected economic impact and robot installations across various sectors.\\nDOWNLOAD CHAPTER 4\\n1. Generative AI investment skyrockets.\\n2. Already a leader, the United States pulls even further ahead in AI private investment.\\n3. Fewer AI jobs, in the United States and across the globe.\\n4. AI decreases costs and increases revenues.\\n5. Total AI private investment declines again, while the number of newly funded AI companies increases.\\n6. AI organizational adoption ticks up.\\n7. China dominates industrial robotics.\\n8. Greater diversity in robot installations.\\n9. The data is in: AI makes workers more productive and leads to higher quality work.\\n10. Fortune 500 companies start talking a lot about AI, especially generative AI.\\nDespite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion. Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds.\\nIn 2023, the United States saw AI investments reach $67.2 billion, nearly 8.7 times more than China, the next highest investor. While private AI investment in China and the European Union, including the United Kingdom, declined by 44.2% and 14.1%, respectively, since 2022, the United States experienced a notable increase of 22.1% in the same time frame.\\nIn 2022, AI-related positions made up 2.0% of all job postings in America, a figure that decreased to 1.6% in 2023. This decline in AI job listings is attributed to fewer postings from leading AI firms and a reduced proportion of tech roles within these companies.\\nA new McKinsey survey reveals that 42% of surveyed organizations report cost reductions from implementing AI (including generative AI), and 59% report revenue increases. Compared to the previous year, there was a 10 percentage point increase in respondents reporting decreased costs, suggesting AI is driving significant business efficiency gains.\\nGlobal private AI investment has fallen for the second year in a row, though less than the sharp decrease from 2021 to 2022. The count of newly funded AI companies spiked to 1,812, up 40.6% from the previous year.\\nA 2023 McKinsey report reveals that 55% of organizations now use AI (including generative AI) in at least one business unit or function, up from 50% in 2022 and 20% in 2017.\\nSince surpassing Japan in 2013 as the leading installer of industrial robots, China has significantly widened the gap with the nearest competitor nation. In 2013, China’s installations accounted for 20.8% of the global total, a share that rose to 52.4% by 2022.\\nIn 2017, collaborative robots represented a mere 2.8% of all new industrial robot installations, a figure that climbed to 9.9% by 2022. Similarly, 2022 saw a rise in service robot installations across all application categories, except for medical robotics. This trend indicates not just an overall increase in robot installations but also a growing emphasis on deploying robots for human-facing roles.\\nIn 2023, several studies assessed AI’s impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. These studies also demonstrated AI’s potential to bridge the skill gap between low- and high-skilled workers. 
Still other studies caution that using AI without proper oversight can lead to diminished performance.\\nIn 2023, AI was mentioned in 394 earnings calls (nearly 80% of all Fortune 500 companies), a notable increase from 266 mentions in 2022. Since 2018, mentions of AI in Fortune 500 earnings calls have nearly doubled. The most frequently cited theme, appearing in 19.7% of all earnings calls, was generative AI.\\nChapter 5: Science and Medicine\\nThis year’s AI Index introduces a new chapter on AI in science and medicine in recognition of AI’s growing role in scientific and medical discovery. It explores 2023’s standout AI-facilitated scientific achievements, including advanced weather forecasting systems like GraphCast and improved material discovery algorithms like GNoME. The chapter also examines medical AI system performance, important 2023 AI-driven medical innovations like SynthSR and ImmunoSEIRA, and trends in the approval of FDA AI-related medical devices.\\nDOWNLOAD CHAPTER 5\\n1. Scientific progress accelerates even further, thanks to AI.\\n2. AI helps medicine take significant strides forward.\\n3. Highly knowledgeable medical AI has arrived.\\n4. The FDA approves more and more AI-related medical devices.\\nIn 2022, AI began to advance scientific discovery. 2023, however, saw the launch of even more significant science-related AI applications—from AlphaDev, which makes algorithmic sorting more efficient, to GNoME, which facilitates the process of materials discovery.\\nIn 2023, several significant medical systems were launched, including EVEscape, which enhances pandemic prediction, and AlphaMissence, which assists in AI-driven mutation classification. AI is increasingly being utilized to propel medical advancements.\\nOver the past few years, AI systems have shown remarkable improvement on the MedQA benchmark, a key test for assessing AI’s clinical knowledge. The standout model of 2023, GPT-4 Medprompt, reached an accuracy rate of 90.2%, marking a 22.6 percentage point increase from the highest score in 2022. Since the benchmark’s introduction in 2019, AI performance on MedQA has nearly tripled.\\nIn 2022, the FDA approved 139 AI-related medical devices, a 12.1% increase from 2021. Since 2012, the number of FDA-approved AI-related medical devices has increased by more than 45-fold. AI is increasingly being used for real-world medical purposes.\\nChapter 6: Education\\nThis chapter examines trends in AI and computer science (CS) education, focusing on who is learning, where they are learning, and how these trends have evolved over time. Amid growing concerns about AI’s impact on education, it also investigates the use of new AI tools like ChatGPT by teachers and students.The analysis begins with an overview of the state of postsecondary CS and AI education in the United States and Canada, based on the Computing Research Association’s annual Taulbee Survey. It then reviews data from Informatics Europe regarding CS education in Europe. This year introduces a new section with data from Studyportals on the global count of AI-related English-language study programs.\\xa0The chapter wraps up with insights into K–12 CS education in the United States from Code.org and findings from the Walton Foundation survey on ChatGPT’s use in schools.\\nDOWNLOAD CHAPTER 6\\n1. The number of American and Canadian CS bachelor’s graduates continues to rise, new CS master’s graduates stay relatively flat, and PhD graduates modestly grow.\\n2. 
The migration of AI PhDs to industry continues at an accelerating pace.\\n3. Less transition of academic talent from industry to academia.\\n4. CS education in the United States and Canada becomes less international.\\n5. More American high school students take CS courses, but access problems remain.\\n6. AI-related degree programs are on the rise internationally.\\n7. The United Kingdom and Germany lead in European informatics, CS, CE, and IT graduate production.\\nWhile the number of new American and Canadian bachelor’s graduates has consistently risen for more than a decade, the number of students opting for graduate education in CS has flattened. Since 2018, the number of CS master’s and PhD graduates has slightly declined.\\nIn 2011, roughly equal percentages of new AI PhDs took jobs in industry (40.9%) and academia (41.6%). However, by 2022, a significantly larger proportion (70.7%) joined industry after graduation compared to those entering academia (20.0%). Over the past year alone, the share of industry-bound AI PhDs has risen by 5.3 percentage points, indicating an intensifying brain drain from universities into industry.\\nIn 2019, 13% of new AI faculty in the United States and Canada were from industry. By 2021, this figure had declined to 11%, and in 2022, it further dropped to 7%. This trend indicates a progressively lower migration of high-level AI talent from industry into academia.\\nProportionally fewer international CS bachelor’s, master’s, and PhDs graduated in 2022 than in 2021. The drop in international students in the master’s category was especially pronounced.\\nIn 2022, 201,000 AP CS exams were administered. Since 2007, the number of students taking these exams has increased more than tenfold. However, recent evidence indicates that students in larger high schools and those in suburban areas are more likely to have access to CS courses.\\nThe number of English-language, AI-related postsecondary degree programs has tripled since 2017, showing a steady annual increase over the past five years. Universities worldwide are offering more AI-focused degree programs.\\nThe United Kingdom and Germany lead Europe in producing the highest number of new informatics, CS, CE, and information bachelor’s, master’s, and PhD graduates. On a per capita basis, Finland leads in the production of both bachelor’s and PhD graduates, while Ireland leads in the production of master’s graduates.\\nChapter 7: Policy and Governance\\nAI’s increasing capabilities have captured policymakers’ attention. Over the past year, several nations and political bodies, such as the United States and the European Union, have enacted significant AI-related policies. The proliferation of these policies reflect policymakers’ growing awareness of the need to regulate AI and improve their respective countries’ ability to capitalize on its transformative potential.This chapter begins examining global AI governance starting with a timeline of significant AI policymaking events in 2023. It then analyzes global and U.S. AI legislative efforts, studies AI legislative mentions, and explores how lawmakers across the globe perceive and discuss AI. Next, the chapter profiles national AI strategies and regulatory efforts in the United States and the European Union. Finally, it concludes with a study of public investment in AI within the United States.\\nDOWNLOAD CHAPTER 7\\n1. The number of AI regulations in the United States sharply increases.\\n2. 
The United States and the European Union advance landmark AI policy action.\\n3. AI captures U.S. policymaker attention.\\n4. Policymakers across the globe cannot stop talking about AI.\\n5. More regulatory agencies turn their attention toward AI.\\nThe number of AI-related regulations in the U.S. has risen significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%.\\nIn 2023, policymakers on both sides of the Atlantic put forth substantial AI regulatory proposals. The European Union reached a deal on the terms of the AI Act, a landmark piece of legislation enacted in 2024. Meanwhile, President Biden signed an Executive Order on AI, the most notable AI policy initiative in the United States that year.\\nThe year 2023 witnessed a remarkable increase in AI-related legislation at the federal level, with 181 bills proposed, more than double the 88 proposed in 2022.\\nMentions of AI in legislative proceedings across the globe have nearly doubled, rising from 1,247 in 2022 to 2,175 in 2023. AI was mentioned in the legislative proceedings of 49 countries in 2023. Moreover, at least one country from every continent discussed AI in 2023, underscoring the truly global reach of AI policy discourse.\\nThe number of U.S. regulatory agencies issuing AI regulations increased to 21 in 2023 from 17 in 2022, indicating a growing concern over AI regulation among a broader array of American regulatory bodies. Some of the new regulatory agencies that enacted AI-related regulations for the first time in 2023 include the Department of Transportation, the Department of Energy, and the Occupational Safety and Health Administration.\\nChapter 8: Diversity\\nThe demographics of AI developers often differ from those of users. For instance, a considerable number of prominent AI companies and the datasets utilized for model training originate from Western nations, thereby reflecting Western perspectives. The lack of diversity can perpetuate or even exacerbate societal inequalities and biases.This chapter delves into diversity trends in AI. The chapter begins by drawing on data from the Computing Research Association (CRA) to provide insights into the state of diversity in American and Canadian computer science (CS) departments. A notable addition to this year’s analysis is data sourced from Informatics Europe, which sheds light on diversity trends within European CS education. Next, the chapter examines participation rates at the Women in Machine Learning (WiML) workshop held annually at NeurIPS. Finally, the chapter analyzes data from Code.org, offering insights into the current state of diversity in secondary CS education across the United States.\\xa0The AI Index is dedicated to enhancing the coverage of data shared in this chapter. Demographic data regarding AI trends, particularly in areas such as sexual orientation, remains scarce. The AI Index urges other stakeholders in the AI domain to intensify their endeavors to track diversity trends associated with AI and hopes to comprehensively cover such trends in future reports.\\nDOWNLOAD CHAPTER 8\\n1. U.S. and Canadian bachelor’s, master’s, and PhD CS students continue to grow more ethnically diverse.\\n2. Substantial gender gaps persist in European informatics, CS, CE, and IT graduates at all educational levels.\\n3. U.S. 
K–12 CS education is growing more diverse, reflecting changes in both gender and ethnic representation.\\nWhile white students continue to be the most represented ethnicity among new resident graduates at all three levels, the representation from other ethnic groups, such as Asian, Hispanic, and Black or African American students, continues to grow. For instance, since 2011, the proportion of Asian CS bachelor’s degree graduates has increased by 19.8 percentage points, and the proportion of Hispanic CS bachelor’s degree graduates has grown by 5.2 percentage points.\\nEvery surveyed European country reported more male than female graduates in bachelor’s, master’s, and PhD programs for informatics, CS, CE, and IT. While the gender gaps have narrowed in most countries over the last decade, the rate of this narrowing has been slow.\\nThe proportion of AP CS exams taken by female students rose from 16.8% in 2007 to 30.5% in 2022. Similarly, the participation of Asian, Hispanic/Latino/Latina, and Black/African American students in AP CS has consistently increased year over year.\\nChapter 9: Public Opinion\\nAs AI becomes increasingly ubiquitous, it is important to understand how public perceptions regarding the technology evolve. Understanding this public opinion is vital in better anticipating AI’s societal impacts and how the integration of the technology may differ across countries and demographic groups.This chapter examines public opinion on AI through global, national, demographic, and ethnic perspectives. It draws upon several data sources: longitudinal survey data from Ipsos profiling global AI attitudes over time, survey data from the University of Toronto exploring public perception of ChatGPT, and data from Pew examining American attitudes regarding AI. The chapter concludes by analyzing mentions of significant AI models on Twitter, using data from Quid.\\nDOWNLOAD CHAPTER 9\\n1. People across the globe are more cognizant of AI’s potential impact—and more nervous.\\n2. AI sentiment in Western nations continues to be low, but is slowly improving.\\n3. The public is pessimistic about AI’s economic impact.\\n4. Demographic differences emerge regarding AI optimism.\\n5. ChatGPT is widely known and widely used.\\nA survey from Ipsos shows that, over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, marking a 13 percentage point rise from 2022. In America, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, rising from 38% in 2022.\\nIn 2022, several developed Western nations, including Germany, the Netherlands, Australia, Belgium, Canada, and the United States, were among the least positive about AI products and services. Since then, each of these countries has seen a rise in the proportion of respondents acknowledging the benefits of AI, with the Netherlands experiencing the most significant shift.\\nIn an Ipsos survey, only 37% of respondents feel AI will improve their job. Only 34% anticipate AI will boost the economy, and 32% believe it will enhance the job market.\\nSignificant demographic differences exist in perceptions of AI’s potential to enhance livelihoods, with younger generations generally more optimistic. For instance, 59% of Gen Z respondents believe AI will improve entertainment options, versus only 40% of baby boomers. 
Additionally, individuals with higher incomes and education levels are more optimistic about AI’s positive impacts on entertainment, health, and the economy than their lower-income and less-educated counterparts.\\nAn international survey from the University of Toronto suggests that 63% of respondents are aware of ChatGPT. Of those aware, around half report using ChatGPT at least once weekly.\\nPast Reports\\n2023\\n2022\\n2021\\n2019\\n2018\\n2017\\nArtificial Intelligence Index\\nStanford Institute for Human-Centered Artificial Intelligence\\nCordura Hall\\n201 Panama Street\\nStanford University\\nStanford, CA 94305\\nSUBSCRIBE TO THE HAI NEWSLETTER\\nEmail\\nTwitter\\nLinkedIn\\nStanford Home\\nMaps & Directions\\nSearch Stanford\\nEmergency Info\\nTerms of Use\\nPrivacy\\nCopyright\\nTrademarks\\nNon-Discrimination\\nAccessibility\\n© Stanford University. Stanford, California 94305.')]" + ] + }, + "metadata": {}, + "execution_count": 6 + } + ], + "execution_count": 6 + }, + { + "cell_type": "markdown", + "source": [ + "**A single document from within the above results looks like the following:**" + ], + "metadata": { + "id": "XXpJWZM7qDHl" + }, + "id": "XXpJWZM7qDHl" + }, + { + "cell_type": "code", + "source": [ + "import json\n", + "\n", + "example_doc = retriever.invoke(query)[0]\n", + "print(\"Page Content: \\n\", json.dumps(example_doc.page_content, indent=2))\n", + "print(\"Metadata: \\n\", json.dumps(example_doc.metadata, indent=2))" + ], + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "PDrj532XpdnM", + "outputId": "3e13b288-417d-4764-c2b1-34dd67c1e77e" + }, + "id": "PDrj532XpdnM", + "execution_count": 7, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "Page Content: \n", + " \"8 AI and machine learning trends to watch in 2025 | TechTarget\\nSearch Enterprise AI\\nSearch the TechTarget Network\\nLogin\\nRegister\\nExplore the Network\\nTechTarget Network\\nBusiness Analytics\\nCIO\\nData Management\\nERP\\nSearch Enterprise AI\\nAI Business Strategies\\nAI Careers\\nAI Infrastructure\\nAI Platforms\\nAI Technologies\\nMore Topics\\nApplications of AI\\nML Platforms\\nOther Content\\nNews\\nFeatures\\nTips\\nWebinars\\n2024 IT Salary Survey Results\\nSponsored Sites\\nMore\\nAnswers\\nConference Guides\\nDefinitions\\nOpinions\\nPodcasts\\nQuizzes\\nTech Accelerators\\nTutorials\\nVideos\\nFollow:\\nHome\\nAI business strategies\\nTech Accelerator\\nWhat is enterprise AI? A complete guide for businesses\\nPrev\\nNext\\n8 jobs that AI can't replace and why\\n10 top artificial intelligence certifications and courses for 2025\\nDownload this guide1\\nX\\nFree Download\\nA guide to artificial intelligence in the enterprise\\nThis wide-ranging guide to artificial intelligence in the enterprise provides the building blocks for becoming successful business consumers of AI technologies. It starts with introductory explanations of AI's history, how AI works and the main types of AI. The importance and impact of AI is covered next, followed by information on AI's key benefits and risks, current and potential AI use cases, building a successful AI strategy, steps for implementing AI tools in the enterprise and technological breakthroughs that are driving the field forward. 
Throughout the guide, we include hyperlinks to TechTarget articles that provide more detail and insights on the topics discussed.\\nFeature\\n8 AI and machine learning trends to watch in 2025\\nAI agents, multimodal models, an emphasis on real-world results -- learn about the top AI and machine learning trends and what they mean for businesses in 2025.\\nShare this item with your network:\\nBy\\nLev Craig,\\nSite Editor\\nPublished: 03 Jan 2025\\nGenerative AI is at a crossroads. It's now more than two years since ChatGPT's launch, and the initial optimism about AI's potential is decidedly tempered by an awareness of its limitations and costs.\\nThe 2025 AI landscape reflects that complexity. While excitement still abounds -- particularly for emerging areas, like agentic AI and multimodal models -- it's also poised to be a year of growing pains.\\nCompanies are increasingly looking for proven results from generative AI, rather than early-stage prototypes. That's no easy feat for a technology that's often expensive, error-prone and vulnerable to misuse. And regulators will need to balance innovation and safety, while keeping up with a fast-moving tech environment.\\nHere are eight of the top AI trends to prepare for in 2025.\\n1. Hype gives way to more pragmatic approaches\\nSince 2022, there's been an explosion of interest and innovation in generative AI, but actual adoption remains inconsistent. Companies often struggle to move generative AI projects, whether internal productivity tools or customer-facing applications, from pilot to production.\\nThis article is part of\\nWhat is enterprise AI? A complete guide for businesses\\nWhich also includes:\\nHow can AI drive revenue? Here are 10 approaches\\n8 jobs that AI can't replace and why\\n8 AI and machine learning trends to watch in 2025\\nAlthough many businesses have explored generative AI through proofs of concept, fewer have fully integrated it into their operations. In a September 2024 research report, Informa TechTarget's Enterprise Strategy Group found that, although over 90% of organizations had increased their generative AI use over the previous year, only 8% considered their initiatives mature.\\n\\\"The most surprising thing for me [in 2024] is actually the lack of adoption that we're seeing,\\\" said Jen Stave, launch director for the Digital Data Design Institute at Harvard University. \\\"When you look across businesses, companies are investing in AI. They're building their own custom tools. They're buying off-the-shelf enterprise versions of the large language models (LLMs). But there really hasn't been this groundswell of adoption within companies.\\\"\\nOne reason for this is AI's uneven impact across roles and job functions. Organizations are discovering what Stave termed the \\\"jagged technological frontier,\\\" where AI enhances productivity for some tasks or employees, while diminishing it for others. A junior analyst, for example, might significantly increase their output by using a tool that only bogs down a more experienced counterpart.\\n\\\"Managers don't know where that line is, and employees don't know where that line is,\\\" Stave said. \\\"So, there's a lot of uncertainty and experimentation.\\\"\\nDespite the sky-high levels of generative AI hype, the reality of slow adoption is hardly a surprise to anyone with experience in enterprise tech. In 2025, expect businesses to push harder for measurable outcomes from generative AI: reduced costs, demonstrable ROI and efficiency gains.\\n2. 
Generative AI moves beyond chatbots\\nWhen most laypeople hear the term generative AI, they think of tools like ChatGPT and Claude powered by LLMs. Early explorations from businesses, too, have tended to involve incorporating LLMs into products and services via chat interfaces. But, as the technology matures, AI developers, end users and business customers alike are looking beyond chatbots.\\n\\\"People need to think more creatively about how to use these base tools and not just try to plop a chat window into everything,\\\" said Eric Sydell, founder and CEO of Vero AI, an AI and analytics platform.\\nThis transition aligns with a broader trend: building software atop LLMs rather than deploying chatbots as standalone tools. Moving from chatbot interfaces to applications that use LLMs on the back end to summarize or parse unstructured data can help mitigate some of the issues that make generative AI difficult to scale.\\n\\\"[A chatbot] can help an individual be more effective ... but it's very one on one,\\\" Sydell said. \\\"So, how do you scale that in an enterprise-grade way?\\\"\\nHeading into 2025, some areas of AI development are starting to move away from text-based interfaces entirely. Increasingly, the future of AI looks to center around multimodal models, like OpenAI's text-to-video Sora and ElevenLabs' AI voice generator, which can handle nontext data types, such as audio, video and images.\\n\\\"AI has become synonymous with large language models, but that's just one type of AI,\\\" Stave said. \\\"It's this multimodal approach to AI [where] we're going to start seeing some major technological advancements.\\\"\\nRobotics is another avenue for developing AI that goes beyond textual conversations -- in this case, to interact with the physical world. Stave anticipates that foundation models for robotics could be even more transformative than the arrival of generative AI.\\n\\\"Think about all of the different ways we interact with the physical world,\\\" she said. \\\"I mean, the applications are just infinite.\\\"\\n3. AI agents are the next frontier\\nThe second half of 2024 has seen growing interest in agentic AI models capable of independent action. Tools like Salesforce's Agentforce are designed to autonomously handle tasks for business users, managing workflows and taking care of routine actions, like scheduling and data analysis.\\nAgentic AI is in its early stages. Human direction and oversight remain critical, and the scope of actions that can be taken is usually narrowly defined. But, even with those limitations, AI agents are attractive for a wide range of sectors.\\nAutonomous functionality isn't totally new, of course; by now, it's a well-established cornerstone of enterprise software. The difference with AI agents lies in their adaptability: Unlike simple automation software, agents can adapt to new information in real time, respond to unexpected obstacles and make independent decisions.\\nYet, that same independence also entails new risks. Grace Yee, senior director of ethical innovation at Adobe, warned of \\\"the harm that can come ... as agents can start, in some cases, acting upon your behalf to help with scheduling or do other tasks.\\\" Generative AI tools are notoriously prone to hallucinations, or generating false information -- what happens if an autonomous agent makes similar mistakes with immediate, real-world consequences?\\nSydell cited similar concerns, noting that some use cases will raise more ethical issues than others. 
\\\"When you start to get into high-risk applications -- things that have the potential to harm or help individuals -- the standards have to be way higher,\\\" he said.\\nCompared with generative AI, agentic AI offers greater autonomy and adaptability.\\n4. Generative AI models become commodities\\nThe generative AI landscape is evolving rapidly, with foundation models seemingly now a dime a dozen. As 2025 begins, the competitive edge is moving away from which company has the best model to which businesses excel at fine-tuning pretrained models or developing specialized tools to layer on top of them.\\nIn a recent newsletter, analyst Benedict Evans compared the boom in generative AI models to the PC industry of the late 1980s and 1990s. In that era, performance comparisons focused on incremental improvements in specs like CPU speed or memory, similar to how today's generative AI models are evaluated on niche technical benchmarks.\\nOver time, however, these distinctions faded as the market reached a good-enough baseline, with differentiation shifting to factors such as cost, UX and ease of integration. Foundation models seem to be on a similar trajectory: As performance converges, advanced models are becoming more or less interchangeable for many use cases.\\nIn a commoditized model landscape, the focus is no longer number of parameters or slightly better performance on a certain benchmark, but instead usability, trust and interoperability with legacy systems. In that environment, AI companies with established ecosystems, user-friendly tools and competitive pricing are likely to take the lead.\\n5. AI applications and data sets become more domain-specific\\nLeading AI labs, like OpenAI and Anthropic, claim to be pursuing the ambitious goal of creating artificial general intelligence (AGI), commonly defined as AI that can perform any task a human can. But AGI -- or even the comparatively limited capabilities of today's foundation models -- is far from necessary for most business applications.\\nFor enterprises, interest in narrow, highly customized models started almost as soon as the generative AI hype cycle began. A narrowly tailored business application simply doesn't require the degree of versatility necessary for a consumer-facing chatbot.\\n\\\"There's a lot of focus on the general-purpose AI models,\\\" Yee said. \\\"But I think what is more important is really thinking through: How are we using that technology ... and is that use case a high-risk use case?\\\"\\nIn short, businesses should consider more than what technology is being deployed and instead think more deeply about who will ultimately be using it and how. \\\"Who's the audience?\\\" Yee said. \\\"What's the intended use case? What's the domain it's being used in?\\\"\\nAlthough, historically, larger data sets have driven model performance improvements, researchers and practitioners are debating whether this trend can hold. Some have suggested that, for certain tasks and populations, model performance plateaus -- or even worsens -- as algorithms are fed more data.\\n\\\"The motivation for scraping ever-larger data sets may be based on fundamentally flawed assumptions about model performance,\\\" authors Fernando Diaz and Michael Madaio wrote in their paper \\\"Scaling Laws Do Not Scale.\\\" \\\"That is, models may not, in fact, continue to improve as the data sets get larger -- at least not for all people or communities impacted by those models.\\\"\\n6. 
AI literacy becomes essential\\nGenerative AI's ubiquity has made AI literacy an in-demand skill for everyone from executives to developers to everyday employees. That means knowing how to use these tools, assess their outputs and -- perhaps most importantly -- navigate their limitations.\\nNotably, although AI and machine learning talent remains in demand, developing AI literacy doesn't need to mean learning to code or train models. \\\"You don't necessarily have to be an AI engineer to understand these tools and how to use them and whether to use them,\\\" Sydell said. \\\"Experimenting, exploring, using the tools is massively helpful.\\\"\\nAmid the persistent generative AI hype, it can be easy to forget that the technology is still relatively new. Many people either haven't used it at all or don't use it regularly: A recent research paper found that, as of August 2024, less than half of Americans aged 18 to 64 use generative AI, and just over a quarter use it at work.\\nThat's a faster pace of adoption compared with the PC or the internet, as the paper's authors pointed out, but it's still not a majority. There's also a gap between businesses' official stances on generative AI and how real workers are using it in their day-to-day tasks.\\n\\\"If you look at how many companies say they're using it, it's actually a pretty low share who are formally incorporating it into their operations,\\\" David Deming, professor at Harvard University and one of the paper's authors, told The Harvard Gazette. \\\"People are using it informally for a lot of different purposes, to help write emails, using it to look up things, using it to obtain documentation on how to do something.\\\"\\nStave sees a role for both companies and educational institutions in closing the AI skills gap. \\\"When you look at companies, they understand the on-the-job training that workers need,\\\" she said. \\\"They always have because that's where the work takes place.\\\"\\nUniversities, in contrast, are increasingly offering skill-based, rather than role-based, education that's available on an ongoing basis and applicable across multiple jobs. \\\"The business landscape is changing so fast. You can't just quit and go back and get a master's and learn everything new,\\\" Stave said. \\\"We have to figure out how to modularize the learning and get it out to people in real time.\\\"\\n7. Businesses adjust to an evolving regulatory environment\\nAs 2024 progressed, companies were faced with a fragmented and rapidly changing regulatory landscape. Whereas the EU set new compliance standards with the passage of the AI Act in 2024, the U.S. remains comparatively unregulated -- a trend likely to continue in 2025 under the Trump administration.\\n\\\"One thing that I think is pretty inadequate right now is legislation [and] regulation around these tools,\\\" Sydell said. \\\"It seems like that's not going to happen anytime soon at this point.\\\" Stave likewise said she's \\\"not expecting significant regulation from the new administration.\\\"\\nThat light-touch approach could promote AI development and innovation, but the lack of accountability also raises concerns about safety and fairness. 
Yee sees a need for regulation that protects the integrity of online speech, such as giving users access to provenance information about internet content, as well as anti-impersonation laws to protect creators.\\nTo minimize harm without stifling innovation, Yee said she'd like to see regulation that can be responsive to the risk level of a specific AI application. Under a tiered risk framework, she said, \\\"low-risk AI applications can go to market faster, [while] high-risk AI applications go through a more diligent process.\\\"\\nStave also pointed out that minimal oversight in the U.S. doesn't necessarily mean that companies will operate in a fully unregulated environment. In the absence of a cohesive global standard, large incumbents operating in multiple regions typically end up adhering to the most stringent regulations by default. In this way, the EU's AI Act could end up functioning similarly to GDPR, setting de facto standards for companies building or deploying AI worldwide.\\n8. AI-related security concerns escalate\\nThe widespread availability of generative AI, often at low or no cost, gives threat actors unprecedented access to tools for facilitating cyberattacks. That risk is poised to increase in 2025 as multimodal models become more sophisticated and readily accessible.\\nIn a recent public warning, the FBI described several ways cybercriminals are using generative AI for phishing scams and financial fraud. For example, an attacker targeting victims via a deceptive social media profile might write convincing bio text and direct messages with an LLM, while using AI-generated fake photos to lend credibility to the false identity.\\nAI video and audio pose a growing threat, too. Historically, models have been limited by telltale signs of inauthenticity, like robotic-sounding voices or lagging, glitchy video. While today's versions aren't perfect, they're significantly better, especially if an anxious or time-pressured victim isn't looking or listening too closely.\\nAudio generators can enable hackers to impersonate a victim's trusted contacts, such as a spouse or colleague. Video generation has so far been less common, as it's more expensive and offers more opportunities for error. But, in a highly publicized incident earlier this year, scammers successfully impersonated a company's CFO and other staff members on a video call using deepfakes, leading a finance worker to send $25 million to fraudulent accounts.\\nOther security risks are tied to vulnerabilities within models themselves, rather than social engineering. Adversarial machine learning and data poisoning, where inputs and training data are intentionally designed to mislead or corrupt models, can damage AI systems themselves. To account for these risks, businesses will need to treat AI security as a core part of their overall cybersecurity strategies.\\nLev Craig covers AI and machine learning as site editor for TechTarget's Enterprise AI site. 
Craig graduated from Harvard University with a bachelor's degree in English and has previously written about enterprise IT, software development and cybersecurity.\\nNext Steps\\nThe year in AI: Catch up on the top AI news of 2024\\nWays enterprise AI will transform IT infrastructure this year\\nRelated Resources\\nAI business strategies for successful transformation\\n\\u2013Video\\nRedesigning Productivity in the Age of Cognitive Acceleration\\n\\u2013Replay\\nDig Deeper on AI business strategies\\nGoogle Gemini 2.0 explained: Everything you need to know\\nBy: Sean\\u00a0Kerner\\nServiceNow intros AI agent studio and orchestrator\\nBy: Esther\\u00a0Shittu\\nNvidia's new model aims to move GenAI to physical world\\nBy: Esther\\u00a0Shittu\\nNot-so-obvious AI predictions for 2025\\nSponsored News\\nPower Your Generative AI Initiatives With High-Performance, Reliable, ...\\n\\u2013Dell Technologies and Intel\\nPrivate AI Demystified\\n\\u2013Equinix\\nSustainability, AI and Dell PowerEdge Servers\\n\\u2013Dell Technologies and Intel\\nSee More\\nRelated Content\\nNvidia's new model aims to move GenAI to physical ...\\n\\u2013 Search Enterprise AI\\nOracle boosts generative AI service and intros new ...\\n\\u2013 Search Enterprise AI\\nNew Google Gemini AI tie-ins dig into local codebases\\n\\u2013 Search Software Quality\\nLatest TechTarget resources\\nBusiness Analytics\\nCIO\\nData Management\\nERP\\nSearch Business Analytics\\nDomo platform a difference-maker for check guarantee vendor\\nIngo Money succeeded with the analytics specialist's suite after years of struggling to gain insights from spreadsheets and a ...\\nAgentic AI, data as a product among growing analytics trends\\nCollibra's founder and chief data citizen reveals his predictions for 2025, and underpinning all is the need for strong ...\\nTrusted data at the core of successful GenAI adoption\\nA new study finds that only a third of organizations are successfully developing GenAI tools. The problem preventing success is ...\\nSearch CIO\\nBusinesses need to prepare as EU AI Act enforcement begins\\nThe EU AI Act's Sunday enforcement deadline will be a test for EU enforcers as they begin assessing companies for compliance.\\nU.S. freeze on foreign aid may give China a leg up\\nAs the U.S. steps back on foreign aid, experts worry China may step in to fill the void.\\nOMB memo creates confusion for federal IT contractors\\nMass confusion caused by an Office of Management and Budget memo to freeze federal agency spending has federal IT contractors on ...\\nSearch Data Management\\nTop trends in big data for 2025 and beyond\\nBig data initiatives are being affected by various trends. Here are seven notable ones and what they mean for organizations ...\\nPinecone provides Assistant for generative AI development\\nThe vector database specialist is expanding beyond managing data with a suite of APIs and other tools that enable users to tap ...\\n18 top data catalog software tools to consider using in 2025\\nNumerous tools can be used to build and manage data catalogs. 
Here's a look at the key features, capabilities and components of ...\\nSearch ERP\\nTop 10 essential skills for ERP professionals in 2025\\nBoth hard and soft skills are essential for ERP professionals, including project management and being up to date with technology.\\nAcumatica cloud ERP aims for industry-focused AI value\\nNew AI functionality in the company's cloud ERP platform could help customers evolve back-office transactional systems into ...\\n7 benefits of using a 3PL provider for reverse logistics\\nA 3PL with experience working with supply chain partners and expertise in returns can help simplify a company's operations. Learn...\\nAbout Us\\nEditorial Ethics Policy\\nMeet The Editors\\nContact Us\\nAdvertisers\\nPartner with Us\\nMedia Kit\\nCorporate Site\\nContributors\\nReprints\\nAnswers\\nDefinitions\\nE-Products\\nEvents\\nFeatures\\nGuides\\nOpinions\\nPhoto Stories\\nQuizzes\\nTips\\nTutorials\\nVideos\\nAll Rights Reserved,\\nCopyright 2018 - 2025, TechTarget\\nPrivacy Policy\\nCookie Preferences\\nCookie Preferences\\nDo Not Sell or Share My Personal Information\\nClose\"\n", + "Metadata: \n", + " {\n", + " \"title\": \"8 AI and machine learning trends to watch in 2025\",\n", + " \"snippet\": \"Jan 3, 2025 \\u2014 1. Hype gives way to more pragmatic approaches \\u00b7 2. Generative AI moves beyond chatbots \\u00b7 3. AI agents are the next frontier \\u00b7 4. Generative AI\\u00a0...\",\n", + " \"url\": \"https://www.techtarget.com/searchenterpriseai/tip/9-top-AI-and-machine-learning-trends\",\n", + " \"position\": 1,\n", + " \"entity_type\": \"OrganicResult\"\n", + "}\n" + ] + } + ] + }, + { + "cell_type": "markdown", + "source": [ + "### Example of retrieval mode with an array of URLs" + ], + "metadata": { + "id": "6qAIqxHQr85B" + }, + "id": "6qAIqxHQr85B" + }, + { + "cell_type": "code", + "source": [ + "retriever = NimbleSearchRetriever(links=[\"example.com\"])\n", + "retriever.invoke(input=\"\")" + ], + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "y1IG7IvBsX5T", + "outputId": "b26350b1-aede-4afc-da5c-04a376ba5655" + }, + "id": "y1IG7IvBsX5T", + "execution_count": 8, + "outputs": [ + { + "output_type": "execute_result", + "data": { + "text/plain": [ + "[Document(metadata={'title': None, 'snippet': None, 'url': 'https://example.com', 'position': None, 'entity_type': 'HtmlContent'}, page_content='\\n\\n\\n Example Domain\\n\\n \\n \\n \\n \\n\\n\\n\\n
\\n Example Domain\\n This domain is for use in illustrative examples in documents. You may use this\\n domain in literature without prior coordination or asking for permission.\\n More information...
\\n\\n\\n')]" + ] + }, + "metadata": {}, + "execution_count": 8 + } + ] + }, + { + "cell_type": "markdown", + "id": "dfe8aad4-8626-4330-98a9-7ea1ca5d2e0e", + "metadata": { + "id": "dfe8aad4-8626-4330-98a9-7ea1ca5d2e0e" + }, + "source": [ + "## Use within a chain\n", + "\n", + "Like other retrievers, NimbleSearchRetriever can be incorporated into LLM applications via [chains](/docs/how_to/sequence/).\n", + "\n", + "We will need an LLM or chat model:\n" + ] + }, + { + "cell_type": "code", + "id": "25b647a3-f8f2-4541-a289-7a241e43f9df", + "metadata": { + "ExecuteTime": { + "end_time": "2025-01-20T09:17:41.637763Z", + "start_time": "2025-01-20T09:17:41.252298Z" + }, + "id": "25b647a3-f8f2-4541-a289-7a241e43f9df" + }, + "source": [ + "from langchain_openai import ChatOpenAI\n", + "\n", + "llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "code", + "id": "23e11cc9-abd6-4855-a7eb-799f45ca01ae", + "metadata": { + "ExecuteTime": { + "end_time": "2025-01-20T09:17:49.477668Z", + "start_time": "2025-01-20T09:17:49.474380Z" + }, + "id": "23e11cc9-abd6-4855-a7eb-799f45ca01ae" + }, + "source": [ + "from langchain_core.output_parsers import StrOutputParser\n", + "from langchain_core.prompts import ChatPromptTemplate\n", + "from langchain_core.runnables import RunnablePassthrough\n", + "\n", + "prompt = ChatPromptTemplate.from_template(\n", + " \"\"\"Answer the question based only on the context provided.\n", + "\n", + "Context: {context}\n", + "\n", + "Question: {question}\"\"\"\n", + ")\n", + "\n", + "\n", + "def format_docs(docs):\n", + " return \"\\n\\n\".join(doc.page_content for doc in docs)\n", + "\n", + "\n", + "chain = (\n", + " {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n", + " | prompt\n", + " | llm\n", + " | StrOutputParser()\n", + ")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "code", + "id": "d47c37dd-5c11-416c-a3b6-bec413cd70e8", + "metadata": { + "ExecuteTime": { + "end_time": "2025-01-20T09:19:07.231965Z", + "start_time": "2025-01-20T09:18:30.011452Z" + }, + "id": "d47c37dd-5c11-416c-a3b6-bec413cd70e8" + }, + "source": [ + "chain.invoke(\"Who is the CEO of Nimbleway?\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3", + "metadata": { + "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3" + }, + "source": [ + "## API reference\n", + "\n", + "For detailed documentation of all NimbleSearchRetriever features and configurations head to the [API reference](https://python.langchain.com/api_reference/nimble/)." + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.4" + }, + "colab": { + "provenance": [] + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/libs/packages.yml b/libs/packages.yml index 7568581eb060f..4d045b55cb90d 100644 --- a/libs/packages.yml +++ b/libs/packages.yml @@ -380,3 +380,7 @@ packages: repo: keenanpepper/langchain-goodfire downloads: 51 downloads_updated_at: '2025-01-30T00:00:00+00:00' +- name: langchain-nimble + repo: Nimbleway/langchain-nimble + path: . + downloads: 0