Asynchronous FastAPI wrapper for AsyncOpenAI and the OpenAI Assistants API resources
We use Python 3.11 for this project.
- Configure your virtual environment and install the requirements:
python -m venv venv
source venv/bin/activate # On Windows use `venv\Scripts\activate`
pip install -r requirements.txt
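For reference, the kind of dependencies a project like this needs is sketched below. This is an assumption, not the repository's actual `requirements.txt`, and version pins are omitted.

```
fastapi
uvicorn
openai
pydantic
httpx
pyyaml
```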
- Add a `.secrets.yaml` file to `config/`:
---
OPENAI_API_KEY: <YOUR_OPENAI_API_KEY>
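How the application consumes this file is not shown in this README. Below is a minimal sketch assuming a plain-YAML loader; the helper `load_openai_key` is hypothetical, and the project may instead use a settings library.

```python
# Hypothetical helper -- illustrative only, not the repository's actual code.
from pathlib import Path

import yaml


def load_openai_key(path: str = "config/.secrets.yaml") -> str:
    """Read OPENAI_API_KEY from the YAML secrets file."""
    data = yaml.safe_load(Path(path).read_text())
    return data["OPENAI_API_KEY"]
```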
- Start the Uvicorn ASGI server:
uvicorn app.main:app --reload --host 0.0.0.0 --port 50050
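The application module referenced above (`app.main:app`) is not reproduced in this README. The sketch below shows roughly how an async `/api/query` endpoint wrapping `AsyncOpenAI` and the Assistants API could look; the request schema, assistant-id handling, and polling loop are assumptions, not the repository's actual implementation.

```python
# app/main.py -- hypothetical sketch, not the repository's actual code.
import asyncio
import os

from fastapi import FastAPI
from openai import AsyncOpenAI
from pydantic import BaseModel

app = FastAPI()
# Reads OPENAI_API_KEY from the environment; the real app presumably
# loads it from config/.secrets.yaml instead.
client = AsyncOpenAI()

# Assumes an assistant already exists and its id is supplied externally.
ASSISTANT_ID = os.environ.get("ASSISTANT_ID", "asst_...")


class Query(BaseModel):
    text: str


@app.post("/api/query")
async def query(payload: Query) -> dict:
    # One thread per request; because every OpenAI call is awaited,
    # many requests can be in flight concurrently on a single worker.
    thread = await client.beta.threads.create()
    await client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=payload.text
    )
    run = await client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=ASSISTANT_ID
    )
    # Poll the run until it reaches a terminal state.
    while run.status not in ("completed", "failed", "cancelled", "expired"):
        await asyncio.sleep(0.5)
        run = await client.beta.threads.runs.retrieve(
            thread_id=thread.id, run_id=run.id
        )
    # Messages are returned newest-first, so the assistant's reply comes first.
    messages = await client.beta.threads.messages.list(thread_id=thread.id)
    return {"status": run.status, "answer": messages.data[0].content[0].text.value}
```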
- Execute API calls concurrently (`<N>` is the number of requests to send):
python async_query_requests.py <N> http://localhost:50050/api/query
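For context, `async_query_requests.py` could be implemented roughly as follows with `httpx`; the JSON payload and its `text` field are assumptions and must match whatever schema the server actually expects.

```python
# async_query_requests.py -- hypothetical sketch of a concurrent client.
# Usage: python async_query_requests.py <N> <URL>
import asyncio
import sys

import httpx


async def main(n: int, url: str) -> None:
    async with httpx.AsyncClient(timeout=120) as client:
        # Fire N POST requests at once and wait for all of them.
        tasks = [client.post(url, json={"text": f"request {i}"}) for i in range(n)]
        responses = await asyncio.gather(*tasks)
    for i, resp in enumerate(responses):
        print(i, resp.status_code)


if __name__ == "__main__":
    asyncio.run(main(int(sys.argv[1]), sys.argv[2]))
```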
- Build the image:
docker build -t async-openai-assistant:latest .
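The Dockerfile itself is not reproduced here; a minimal sketch of what it might contain is shown below. The base image, working directory, and start command are assumptions; only port 50050 is taken from the instructions above.

```dockerfile
# Hypothetical Dockerfile sketch -- the repository's actual file may differ.
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 50050
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "50050"]
```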
- Run the container (container port 50050 is published on host port 50000):
docker run -p 50000:50050 async-openai-assistant:latest
- Don't forget to set `config/.secrets.yaml` inside the container to enable the OpenAI API:
docker exec <container_id_or_name> sh -c 'echo "OPENAI_API_KEY: <YOUR_API_KEY_HERE>" > /app/config/.secrets.yaml'
- Execute API calls concurrently against the containerized service:
python async_query_requests.py <N> http://<YOUR_HOST>:50000/api/query