v2.3.43 #124

Merged: 8 commits, Feb 26, 2024
76 changes: 76 additions & 0 deletions cookbook/gemini/README.md
@@ -0,0 +1,76 @@
# Gemini Cookbook

> Note: Fork and clone this repository if needed

## Prerequisites

1. [Install](https://cloud.google.com/sdk/docs/install) the Google Cloud SDK
2. [Create a Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects)
3. [Enable the AI Platform API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com)
4. [Authenticate](https://cloud.google.com/sdk/docs/initializing) with Google Cloud

```shell
gcloud auth application-default login
```

## Build Assistants using Gemini

1. Create and activate a virtual environment

```shell
python3 -m venv ~/.venvs/aienv
source ~/.venvs/aienv/bin/activate
```

2. Install libraries

```shell
pip install -U google-cloud-aiplatform phidata
```

3. Export the following environment variables

```shell
export PROJECT_ID=your-project-id
export LOCATION=us-central1
```

4. Run Assistant

```shell
python cookbook/gemini/assistant.py
```

5. Run Assistant with Tool calls

```shell
pip install duckduckgo-search

python cookbook/gemini/tool_call.py
```

## Build RAG AI App using Gemini + PgVector

1. Start pgvector

```shell
phi start cookbook/gemini/resources.py -y
```

2. Install libraries

```shell
pip install -U pgvector pypdf psycopg sqlalchemy google-cloud-aiplatform phidata
```

3. Run RAG App

```shell
python cookbook/gemini/app.py
```

4. Stop pgvector

```shell
phi stop cookbook/gemini/resources.py -y
```
Empty file added cookbook/gemini/__init__.py
Empty file.
14 changes: 14 additions & 0 deletions cookbook/gemini/assistant.py
@@ -0,0 +1,14 @@
from os import getenv

import vertexai
from phi.assistant import Assistant
from phi.llm.gemini import Gemini

# *********** Initialize VertexAI ***********
vertexai.init(project=getenv("PROJECT_ID"), location=getenv("LOCATION"))

assistant = Assistant(
    llm=Gemini(model="gemini-1.0-pro-vision"),
    description="You help people with their health and fitness goals.",
)
assistant.print_response("Share a quick healthy breakfast recipe.", markdown=True)
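The script above reads `PROJECT_ID` and `LOCATION` with `getenv`, which silently returns `None` when a variable is unset, so a misconfigured environment only surfaces later inside `vertexai.init`. A minimal sketch of a fail-fast guard you could add before initializing; the `require_env` helper is hypothetical, not part of the cookbook:

```python
import os


def require_env(name: str) -> str:
    """Return the value of an environment variable, raising if it is unset or empty."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Environment variable {name} is not set; see the README prerequisites.")
    return value
```

With this guard, `vertexai.init(project=require_env("PROJECT_ID"), location=require_env("LOCATION"))` fails with a clear message instead of passing `None` through.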
54 changes: 54 additions & 0 deletions cookbook/gemini/data_analyst.py
@@ -0,0 +1,54 @@
import json
from textwrap import dedent
from os import getenv

import vertexai
from phi.assistant import Assistant
from phi.tools.duckdb import DuckDbTools
from phi.llm.gemini import Gemini

# *********** Initialize VertexAI ***********
vertexai.init(project=getenv("PROJECT_ID"), location=getenv("LOCATION"))

duckdb_assistant = Assistant(
    llm=Gemini(model="gemini-pro"),
    tools=[DuckDbTools()],
    description="You are an expert data engineer that writes DuckDb queries to analyze data.",
    instructions=[
        "Using the `semantic_model` below, find which tables and columns you need to accomplish the task.",
        "If you need to run a query, run `show_tables` to check that the tables you need exist.",
        "If the tables do not exist, RUN `create_table_from_path` to create the table using the path from the `semantic_model`.",
        "Once you have the tables and columns, create one single syntactically correct DuckDB query.",
        "If you need to join tables, check the `semantic_model` for the relationships between the tables.",
        "If the `semantic_model` contains a relationship between tables, use that relationship to join the tables even if the column names are different.",
        "Inspect the query using `inspect_query` to confirm it is correct.",
        "If the query is valid, RUN the query using the `run_query` function.",
        "Analyse the results and return the answer to the user.",
        "Continue till you have accomplished the task.",
        "Show the user the SQL you ran.",
    ],
    add_to_system_prompt=dedent(
        """
        You have access to the following semantic_model:
        <semantic_model>
        {}
        </semantic_model>
        """
    ).format(
        json.dumps(
            {
                "tables": [
                    {
                        "name": "movies",
                        "description": "Contains information about movies from IMDB.",
                        "path": "https://phidata-public.s3.amazonaws.com/demo_data/IMDB-Movie-Data.csv",
                    }
                ]
            }
        )
    ),
    show_tool_calls=True,
    debug_mode=True,
)

duckdb_assistant.print_response("What is the average rating of movies? Show me the SQL.", markdown=True)
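The `add_to_system_prompt` argument above splices the JSON-serialized `semantic_model` into a dedented template. That rendering step can be sketched standalone, using the same table definition, to see exactly what fragment the assistant receives:

```python
import json
from textwrap import dedent

semantic_model = {
    "tables": [
        {
            "name": "movies",
            "description": "Contains information about movies from IMDB.",
            "path": "https://phidata-public.s3.amazonaws.com/demo_data/IMDB-Movie-Data.csv",
        }
    ]
}

# Dedent the template, then substitute the JSON-serialized model
# into the {} placeholder — the same pattern data_analyst.py uses.
prompt_fragment = dedent(
    """
    You have access to the following semantic_model:
    <semantic_model>
    {}
    </semantic_model>
    """
).format(json.dumps(semantic_model))

print(prompt_fragment)
```

The printed fragment is what gets appended to the system prompt, so the model sees the table name, description, and path inside the `<semantic_model>` tags.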
12 changes: 12 additions & 0 deletions cookbook/gemini/resources.py
@@ -0,0 +1,12 @@
from phi.docker.app.postgres import PgVectorDb
from phi.docker.resources import DockerResources

# -*- PgVector running on port 5432:5432
vector_db = PgVectorDb(
    pg_user="ai",
    pg_password="ai",
    pg_database="ai",
)

# -*- DockerResources
dev_docker_resources = DockerResources(apps=[vector_db])
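Given the credentials above (user, password, and database all `ai`) and the "running on port 5432:5432" comment, the container should be reachable at a SQLAlchemy-style URL along these lines. The `localhost` host and `psycopg` driver string are assumptions, not something `resources.py` specifies, so adjust them to your setup:

```python
# Assemble the connection URL implied by the PgVectorDb settings above.
# Host and driver are assumptions: localhost, and the psycopg driver
# that the README's `pip install ... psycopg` suggests.
pg_user = "ai"
pg_password = "ai"
pg_database = "ai"
db_url = f"postgresql+psycopg://{pg_user}:{pg_password}@localhost:5432/{pg_database}"
print(db_url)
```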
43 changes: 43 additions & 0 deletions cookbook/gemini/samples/README.md
@@ -0,0 +1,43 @@
# Gemini Code Samples

This directory contains code samples that query the Gemini API directly.
They don't use phidata, but they can help you test the Gemini API and get started.

## Prerequisites

1. [Install](https://cloud.google.com/sdk/docs/install) the Google Cloud SDK
2. [Create a Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects)
3. [Enable the AI Platform API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com)
4. [Authenticate](https://cloud.google.com/sdk/docs/initializing) with Google Cloud

```shell
gcloud auth application-default login
```

5. Create and activate a virtual environment

```shell
python3 -m venv ~/.venvs/aienv
source ~/.venvs/aienv/bin/activate
```

6. Install `google-cloud-aiplatform` library

```shell
pip install -U google-cloud-aiplatform
```

7. Export the following environment variables

```shell
export PROJECT_ID=your-project-id
export LOCATION=us-central1
```

## Run the code samples

1. Multimodal example

```shell
python cookbook/gemini/samples/multimodal.py
```
Empty file.
37 changes: 37 additions & 0 deletions cookbook/gemini/samples/multimodal.py
@@ -0,0 +1,37 @@
from os import getenv
from typing import Optional

import vertexai
from vertexai.generative_models import GenerativeModel, Part


def multimodal_example(project: Optional[str], location: Optional[str]) -> str:
    # Initialize Vertex AI
    vertexai.init(project=project, location=location)
    # Load the model
    multimodal_model = GenerativeModel("gemini-1.0-pro-vision")
    # Query the model
    response = multimodal_model.generate_content(
        [
            # Add an example image
            Part.from_uri("gs://generativeai-downloads/images/scones.jpg", mime_type="image/jpeg"),
            # Add an example query
            "what is shown in this image?",
        ]
    )
    print("============= RESPONSE =============")
    print(response)
    print("============= RESPONSE =============")
    return response.text


# *********** Get project and location ***********
PROJECT_ID = getenv("PROJECT_ID")
LOCATION = getenv("LOCATION")

# *********** Run the example ***********
if __name__ == "__main__":
    result = multimodal_example(project=PROJECT_ID, location=LOCATION)
    print("============= RESULT =============")
    print(result)
    print("============= RESULT =============")
27 changes: 27 additions & 0 deletions cookbook/gemini/samples/text_stream.py
@@ -0,0 +1,27 @@
from os import getenv
from typing import Iterable, Optional

import vertexai
from vertexai.generative_models import GenerativeModel, GenerationResponse


def generate(project: Optional[str], location: Optional[str]) -> None:
    # Initialize Vertex AI
    vertexai.init(project=project, location=location)
    # Load the model
    model = GenerativeModel("gemini-1.0-pro-vision")
    # Query the model
    responses: Iterable[GenerationResponse] = model.generate_content("Who are you?", stream=True)
    # Process the response
    for response in responses:
        print(response.text, end="")
    print(" ")


# *********** Get project and location ***********
PROJECT_ID = getenv("PROJECT_ID")
LOCATION = getenv("LOCATION")

# *********** Run the example ***********
if __name__ == "__main__":
    generate(project=PROJECT_ID, location=LOCATION)
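The consumption pattern in `generate()` — iterate over the stream, print each chunk's `.text` as it arrives — can be illustrated offline with a stand-in for the streamed response chunks. `FakeChunk` and `fake_stream` below are hypothetical stand-ins, not Vertex AI APIs:

```python
from typing import Iterable, List


class FakeChunk:
    """Stand-in for one streamed response chunk, exposing only .text."""

    def __init__(self, text: str) -> None:
        self.text = text


def fake_stream() -> Iterable[FakeChunk]:
    # Stand-in for model.generate_content(..., stream=True).
    for piece in ["I am ", "a large ", "language model."]:
        yield FakeChunk(piece)


# Same consumption pattern as generate() above:
# print each chunk as it arrives, without waiting for the full response.
collected: List[str] = []
for chunk in fake_stream():
    collected.append(chunk.text)
    print(chunk.text, end="")
print()

full_text = "".join(collected)
```

Collecting the chunks alongside printing shows that the concatenated pieces reconstruct the full response text.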
16 changes: 16 additions & 0 deletions cookbook/gemini/tool_call.py
@@ -0,0 +1,16 @@
from os import getenv

import vertexai
from phi.assistant import Assistant
from phi.llm.gemini import Gemini
from phi.tools.duckduckgo import DuckDuckGo

# *********** Initialize VertexAI ***********
vertexai.init(project=getenv("PROJECT_ID"), location=getenv("LOCATION"))

assistant = Assistant(
    llm=Gemini(model="gemini-pro"),
    tools=[DuckDuckGo()],
    show_tool_calls=True,
)
assistant.print_response("What's happening in France? Summarize the top 10 stories with sources.", markdown=True)
78 changes: 78 additions & 0 deletions cookbook/mistral/README.md
@@ -0,0 +1,78 @@
# Mistral AI

> Note: Fork and clone this repository if needed

## RAG AI App with Mistral & PgVector

1. Create and activate a virtual environment

```shell
python3 -m venv ~/.venvs/aienv
source ~/.venvs/aienv/bin/activate
```

2. Export your Mistral API Key

```shell
export MISTRAL_API_KEY=xxx
```

3. Start pgvector

```shell
phi start cookbook/mistral/resources.py -y
```

4. Install libraries

```shell
pip install -r cookbook/mistral/requirements.txt
```

5. Run RAG App

```shell
streamlit run cookbook/mistral/app.py
```

6. Stop pgvector

```shell
phi stop cookbook/mistral/resources.py -y
```

## Build AI Assistants with Mistral

1. Install libraries

```shell
pip install -U mistralai phidata
```

2. Run Assistant

```shell
python cookbook/mistral/simple_assistant.py
```

3. Output Pydantic models

```shell
python cookbook/mistral/pydantic_output.py
```

4. Run Assistant with Tool calls

> NOTE: currently not working

```shell
pip install duckduckgo-search

python cookbook/mistral/tool_call.py
```

Optional: View Mistral models

```shell
python cookbook/mistral/list_models.py
```
Empty file added cookbook/mistral/__init__.py
Empty file.