# simple-async-openai-assistant

Asynchronous FastAPI wrapper for AsyncOpenAI and the OpenAI Assistants API.
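The core idea is an async endpoint that hands each query to the Assistants API without blocking the event loop, so many requests can be in flight on a single worker. Below is a minimal sketch of what such an endpoint could look like; the `QueryRequest` model, the `ASSISTANT_ID` environment variable, and the use of `create_and_poll` are illustrative assumptions, not necessarily how `app/main.py` is written.

```python
# Minimal sketch of an async Assistants API endpoint (illustrative only;
# the real app/main.py may differ). Assumes OPENAI_API_KEY and ASSISTANT_ID
# are available in the environment.
import os

from fastapi import FastAPI
from openai import AsyncOpenAI
from pydantic import BaseModel

app = FastAPI()
client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
ASSISTANT_ID = os.environ["ASSISTANT_ID"]  # hypothetical: id of a pre-created assistant


class QueryRequest(BaseModel):
    prompt: str


@app.post("/api/query")
async def query(req: QueryRequest) -> dict:
    # Each request gets its own thread; the awaited client calls let other
    # requests run concurrently on the same event loop.
    thread = await client.beta.threads.create()
    await client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=req.prompt
    )
    run = await client.beta.threads.runs.create_and_poll(
        thread_id=thread.id, assistant_id=ASSISTANT_ID
    )
    if run.status != "completed":
        return {"status": run.status}
    messages = await client.beta.threads.messages.list(thread_id=thread.id)
    # Messages are returned newest first, so data[0] is the assistant's reply.
    return {"status": "completed", "answer": messages.data[0].content[0].text.value}
```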

## Usage

### Local

We use Python 3.11 for this project.

1. Configure your venv and install the requirements:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows use `venv\Scripts\activate`
   pip install -r requirements.txt
   ```

2. Add a `.secrets.yaml` file to `config/`:

   ```yaml
   ---
   OPENAI_API_KEY: <YOUR_OPENAI_API_KEY>
   ```

3. Start the uvicorn ASGI server:

   ```bash
   uvicorn app.main:app --reload --host 0.0.0.0 --port 50050
   ```

4. Execute API calls concurrently (a sketch of what such a client might look like follows this list):

   ```bash
   python async_query_requests.py <N> http://localhost:50050/api/query
   ```
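The repository's `async_query_requests.py` is not reproduced here; the sketch below shows one way such a concurrent client could be written. It assumes `httpx` is available and that the endpoint accepts a JSON body with a `prompt` field, both of which are assumptions for illustration.

```python
# Illustrative sketch of a concurrent query client, in the spirit of
# async_query_requests.py (the actual script may differ). Assumes httpx
# is installed and the endpoint accepts {"prompt": ...} JSON bodies.
import asyncio
import sys

import httpx


async def query_one(client: httpx.AsyncClient, url: str, i: int) -> None:
    resp = await client.post(url, json={"prompt": f"request {i}"}, timeout=120)
    print(i, resp.status_code, resp.text[:80])


async def main(n: int, url: str) -> None:
    async with httpx.AsyncClient() as client:
        # Fire all N requests at once and wait for them to finish.
        await asyncio.gather(*(query_one(client, url, i) for i in range(n)))


if __name__ == "__main__":
    asyncio.run(main(int(sys.argv[1]), sys.argv[2]))
```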

### Docker

1. Build the image:

   ```bash
   docker build -t async-openai-assistant:latest .
   ```

2. Run the container:

   ```bash
   docker run -p 50000:50050 async-openai-assistant:latest
   ```

3. Don't forget to set `config/.secrets.yaml` inside the container to enable the OpenAI API:

   ```bash
   docker exec <container_id_or_name> sh -c 'echo "OPENAI_API_KEY: <YOUR_API_KEY_HERE>" > /app/config/.secrets.yaml'
   ```

4. Execute API calls concurrently (the host port is 50000, as mapped above):

   ```bash
   python async_query_requests.py <N> http://<YOUR_HOST>:50000/api/query
   ```
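As an alternative to writing the secrets file with `docker exec` after the container is up, a local `config/` directory that already contains `.secrets.yaml` can be bind-mounted at run time. This assumes the app reads its configuration from `/app/config` inside the container, as the path used above suggests.

```bash
# Mount a local config/ (already containing .secrets.yaml) into the container.
docker run -p 50000:50050 -v "$(pwd)/config:/app/config" async-openai-assistant:latest
```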