Merge branch 'main' into update_signin_url
ashpreetbedi committed Jan 2, 2024
2 parents a0051ac + 073bc5f commit 9526989
Showing 6 changed files with 40 additions and 101 deletions.
127 changes: 31 additions & 96 deletions README.md
@@ -19,47 +19,52 @@
</a>
</p>

## 🎯 Goal: Provide a paved path to production-ready AI

Phidata is a toolkit for building AI-powered software. It enables you to build:
- **RAG Apps**: Connect LLMs to your knowledge base and build context-aware applications.
- **Autonomous Apps**: Give LLMs the ability to call functions and build autonomous applications.
- **Multi-Modal Apps**: Build apps that can process text, images, audio and video.
- **Workflow Specific AI**: Build AI for workflows like data engineering, customer support, sales, marketing, etc.
## ✨ What is phidata?

It achieves this by providing:
- Building blocks: `Conversations`, `Tools`, `Agents`, `KnowledgeBase`, `Storage`
- Tools for serving AI Apps: `FastApi`, `Django`, `Streamlit`, `PgVector`
- Infrastructure for running AI Apps: `Docker`, `AWS`
Phidata is a toolkit for building AI products. It gives you production-ready AI Apps that can run locally on docker or be deployed to AWS with 1 command.

To simplify development further, phidata provides pre-built templates for common use cases that you can clone and run with 1 command. ⭐️ it when you need to spin up an AI project quickly.
Its goal is to provide a paved path to building AI products for anyone with basic Python skills.

## ✨ Motivation
## 🎖 Use it to build

Most AI Apps are built as a house of cards because engineers have to build the Software, Application and Infrastructure layers separately and then glue them together.
This leads to brittle systems that are hard to maintain, monitor and productionize.
- **AI Apps** (RAG, autonomous or multimodal applications)
- **AI Assistants** (automate data engineering, Python or Snowflake tasks)
- **REST APIs** (with FastApi, PostgreSQL)
- **Web Apps** (with Django, PostgreSQL)
- **Data Platforms** (with Airflow, Superset, Jupyter)

Phidata bridges the 3 layers of software development and provides a paved path to production-ready AI.
## 💡 What you get

## 🚀 How it works
**Production-ready codebases** for AI Apps, Web Apps and REST APIs built with:

- **Building blocks** like conversations, agents, knowledge bases defined as pydantic objects
- **Applications** like FastApi, Streamlit, Django, Postgres defined as pydantic objects
- **Infrastructure** components (docker, AWS) also defined as pydantic objects

Phidata applications run locally using docker and can be deployed to AWS with 1 command.
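
A minimal sketch of the pydantic-object idea, using the `Conversation` building block from the quickstart below and the `DockerNetwork` resource touched by this commit; the constructor arguments are illustrative, not a definitive API:

```python
from phi.conversation import Conversation
from phi.docker.resource.network import DockerNetwork

# Building block: a Conversation is a pydantic model, configured declaratively
conversation = Conversation()

# Infrastructure: docker resources are pydantic models too
# (skip_delete is a real field, shown in this commit's diff)
network = DockerNetwork(skip_delete=True)
```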

## 👩‍💻 How it works

- Create your codebase using a template: `phi ws create`
- Run your app locally: `phi ws up dev:docker`
- Run your app on AWS: `phi ws up prd:aws`
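
Put together, a typical session looks like this (the `llm-app` template name is taken from the quickstart below):

```shell
# Create your codebase from a template
phi ws create -t llm-app -n llm-app

# Run the app locally on docker
phi ws up dev:docker

# Deploy it to AWS
phi ws up prd:aws
```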

## ⭐ Features:

- **Powerful:** Get a production-ready AI App with 1 command.
- **Simple:** Built using a human-like `Conversation` interface to language models.
- **Production Ready:** Your app can be deployed to AWS with 1 command.

## 📚 More Information:

- Read the <a href="https://docs.phidata.com" target="_blank" rel="noopener noreferrer">documentation</a>
- Chat with us on <a href="https://discord.gg/4MtYHHrgA8" target="_blank" rel="noopener noreferrer">Discord</a>
- Email us at <a href="mailto:[email protected]" target="_blank" rel="noopener noreferrer">[email protected]</a>

## 💻 Quickstart
## 🚀 Quickstart: Build a RAG LLM App

Let's build a **RAG LLM App** with GPT-4. We'll use:
- Streamlit for the front-end
- FastApi for the back-end
- PgVector for Knowledge Base and Storage
- Read the full tutorial <a href="https://docs.phidata.com/quickstart" target="_blank" rel="noopener noreferrer">here</a>.

> Install <a href="https://docs.docker.com/desktop/install/mac-install/" target="_blank" rel="noopener noreferrer">Docker Desktop</a> to run this app locally.
### Create a virtual environment

@@ -77,79 +82,9 @@ source aienv/bin/activate
Install phidata

```shell
pip install phidata
```

### Create a conversation

Conversations are a human-like interface to language models and the starting point for every AI App.
We send the LLM a message and get a model-generated output as a response.

Conversations come with built-in Memory, Knowledge, Storage and access to Tools.
Giving LLMs the ability to have long-term, knowledge-based Conversations is the first step in our journey to AGI.

- Copy the following code to a file `conversation.py`

```python
from phi.conversation import Conversation

conversation = Conversation()
conversation.print_response('Share a quick healthy breakfast recipe.')
```

- Install openai and upgrade phidata

```shell
pip install openai
pip install -U phidata
```

- Run your conversation

```shell
python conversation.py
```

### Get structured output from LLM

- Update the `conversation.py` file to:

```python
from pydantic import BaseModel, Field
from phi.conversation import Conversation
from rich.pretty import pprint

class Recipe(BaseModel):
    title: str = Field(..., description='Title of the recipe.')
    ingredients: str = Field(..., description='Ingredients for the recipe.')
    instructions: str = Field(..., description='Instructions for the recipe.')

conversation = Conversation(output_model=Recipe)
breakfast_recipe = conversation.run('Quick healthy breakfast recipe.')
pprint(breakfast_recipe)
```

- Run your conversation again:

```shell
python conversation.py

Recipe(
│ title='Banana and Almond Butter Toast',
│ ingredients='2 slices of whole-grain bread, 1 ripe banana, 2 tablespoons almond butter, 1 teaspoon chia seeds, 1 teaspoon honey (optional)',
│ instructions='Toast the bread slices to desired crispness. Spread 1 tablespoon of almond butter on each slice of toast. Slice the banana and arrange the slices on top of the almond butter. Sprinkle chia seeds over the banana slices. Drizzle honey on top if preferred. Serve immediately.'
)
```

## 🤖 Full Example: Build a RAG LLM App

Let's build a **RAG LLM App** with GPT-4. We'll use:
- PgVector for Knowledge Base and Storage
- Streamlit for the front-end
- FastApi for the back-end
- Read the full tutorial <a href="https://docs.phidata.com/examples/rag-llm-app" target="_blank" rel="noopener noreferrer">here</a>.

> Install <a href="https://docs.docker.com/desktop/install/mac-install/" target="_blank" rel="noopener noreferrer">Docker Desktop</a> to run this app locally.
### Create your codebase

Create your codebase using the `llm-app` template pre-configured with FastApi, Streamlit and PgVector.
@@ -158,7 +93,7 @@
phi ws create -t llm-app -n llm-app
```

This will create a folder named `llm-app` in the current directory.
This will create a folder `llm-app` with a pre-built LLM App that you can customize and make your own.

### Serve your LLM App using Streamlit
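
The serving steps themselves are collapsed in the hunk below. As a rough sketch, the dev app group is brought up with the workspace CLI and the Streamlit UI is then opened in a browser; the port is an assumption based on Streamlit's default, not something this diff confirms.

```shell
# Bring up the dev app group locally on docker
phi ws up dev:docker

# Then open the Streamlit UI in a browser (assumed default Streamlit port)
# http://localhost:8501
```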

@@ -237,7 +172,7 @@ phi ws down

### Run your LLM App on AWS

Read how to <a href="https://docs.phidata.com/guides/llm-app#run-on-aws" target="_blank" rel="noopener noreferrer">run your LLM App on AWS here</a>.
Read how to <a href="https://docs.phidata.com/quickstart/run-aws" target="_blank" rel="noopener noreferrer">run your LLM App on AWS</a>.

## 📚 More Information:

4 changes: 2 additions & 2 deletions phi/cli/operator.py
@@ -359,7 +359,7 @@ def patch_resources(
        print_info("No resources to patch")
        return

    logger.debug(f"Deploying {num_rgs_to_patch} resource groups")
    logger.debug(f"Patching {num_rgs_to_patch} resource groups")
    for rg in resource_groups_to_patch:
        _num_resources_patched, _num_resources_to_patch = rg.update_resources(
            group_filter=target_group,
@@ -373,7 +373,7 @@
        num_rgs_patched += 1
        num_resources_patched += _num_resources_patched
        num_resources_to_patch += _num_resources_to_patch
    logger.debug(f"Deployed {num_resources_patched} resources in {num_rgs_patched} resource groups")
    logger.debug(f"Patched {num_resources_patched} resources in {num_rgs_patched} resource groups")

    if dry_run:
        return
4 changes: 4 additions & 0 deletions phi/conversation/conversation.py
@@ -1027,6 +1027,10 @@ def print_response(
    from rich.box import ROUNDED
    from rich.markdown import Markdown

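    # When an output_model is set, the response is parsed into a pydantic object,
    # so markdown rendering and streaming do not apply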
    if self.output_model is not None:
        markdown = False
        stream = False

    if stream:
        response = ""
        with Live() as live_log:
1 change: 1 addition & 0 deletions phi/docker/resource/network.py
@@ -30,6 +30,7 @@ class DockerNetwork(DockerResource):

    # Set skip_delete=True so that the network is not deleted when the `phi ws down` command is run
    skip_delete: bool = True
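    # Set skip_update=True so the existing network is reused as-is; docker networks cannot be modified in place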
    skip_update: bool = True

    def _create(self, docker_client: DockerApiClient) -> bool:
        """Creates the Network on docker
2 changes: 1 addition & 1 deletion phi/memory/conversation.py
@@ -14,7 +14,7 @@ class ConversationMemory(BaseModel):
    # Messages between the user and the LLM.
    # Note: the actual prompts are stored in the llm_messages
    chat_history: List[Message] = []
    # Messages(prompts) sent to the LLM and the LLM responses.
    # Prompts sent to the LLM and the LLM responses.
    llm_messages: List[Message] = []
    # References from the vector database.
    references: List[References] = []
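
As a sketch of how this memory surfaces at runtime; the `conversation.memory` access path is an assumption, only the field names above are confirmed by this diff:

```python
from phi.conversation import Conversation

conversation = Conversation()
conversation.print_response('Share a quick healthy breakfast recipe.')

# Assumed access path to the ConversationMemory fields defined above
for message in conversation.memory.chat_history:  # user and LLM messages
    print(message)
for message in conversation.memory.llm_messages:  # prompts and raw LLM responses
    print(message)
```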
3 changes: 1 addition & 2 deletions phi/workspace/config.py
@@ -322,8 +322,7 @@ def get_resources_from_file(
    logger.debug(f"Loading .env from {resource_file_parent_dir}")
    load_env(dotenv_dir=resource_file_parent_dir)

    workspace_dir_name = resource_file_parent_dir.name
    temporary_ws_config = WorkspaceConfig(ws_dir_name=workspace_dir_name, ws_root_path=resource_file_parent_dir)
    temporary_ws_config = WorkspaceConfig(ws_root_path=resource_file_parent_dir)

    # NOTE: When loading a workspace, relative imports or package imports do not work.
    # This is a known problem in python
