feat: Add ability to configure OpenAI base URL in ChatGPTAgentConfig #577
Conversation
- Added `base_url` parameter to `ChatGPTAgentConfig` to allow customization of the OpenAI API base URL.
- Updated `instantiate_openai_client` function to use the `base_url` parameter from the configuration.
- Modified `ChatGPTAgent` to utilize the updated `instantiate_openai_client` function.
- Added tests to verify the new `base_url` functionality in `tests/streaming/agent/test_base_agent.py`.

This enhancement allows users to specify a custom OpenAI API base URL, providing greater flexibility in agent configuration.
adding capability to use the openai compatible endpoint with token estimation for llama
@celmore25 this is an awesome change! should be simple to get in my suggestion and then let's get this in
vocode/streaming/models/agent.py (Outdated)
@@ -115,6 +115,7 @@ class ChatGPTAgentConfig(AgentConfig, type=AgentType.CHAT_GPT.value):  # type: ignore
     openai_api_key: Optional[str] = None
     prompt_preamble: str
     model_name: str = CHAT_GPT_AGENT_DEFAULT_MODEL_NAME
+    base_url: Optional[str] = None
let's either name this base_url_override
or default it to "https://api.openai.com/v1"
— i'd prefer the former since it changes the code less
Hi @ajar98 nice to meet you! Thanks for looking over this.
I just pushed some changes to go to the override option. Let me know if anything else needs to get changed!
thanks @celmore25 ! Just had to fix the test BTW - we have inbuilt support for groq - see
still think it's quite useful to support any vLLM compatible API with this change!
Great, this will allow you to use Ollama, for example.
Motivation
Groq currently has the highest-throughput models that are publicly available, enabling extremely low-latency voice interactions with LLMs. By allowing a custom OpenAI-API-compatible endpoint, users can take advantage of a broader set of hardware and model providers.
Changes
- Added `base_url` parameter to `ChatGPTAgentConfig` to allow customization of the OpenAI API base URL.
- Updated `instantiate_openai_client` function to use the `base_url` parameter from the configuration.
- Modified `ChatGPTAgent` to utilize the updated `instantiate_openai_client` function.
- Updated `get_tokenizer_info` function to allow for llama model usage with custom base URLs. This estimates the token usage and is not exact; future feature expansion would be needed here to allow more models with exact token counting.
- Added tests to verify the new `base_url` functionality in `tests/streaming/agent/test_base_agent.py`.

This enhancement allows users to specify a custom OpenAI API base URL, providing greater flexibility in agent configuration.
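The llama token estimation mentioned above is approximate by design. As a hypothetical illustration of that kind of heuristic (this is not vocode's actual `get_tokenizer_info` implementation, just a sketch of estimating tokens without the model's real tokenizer):

```python
def estimate_token_count(text: str) -> int:
    """Rough token estimate for models whose tokenizer isn't available locally.

    Uses the common ~4-characters-per-token rule of thumb for English text.
    This is an approximation, not an exact count, which matches the caveat
    in the PR description above.
    """
    return max(1, len(text) // 4)


print(estimate_token_count("hello world, this is a test"))
```

An exact count would require loading the model's tokenizer (e.g. via a tokenizer library), which is exactly what a custom base URL deployment may not expose.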
Usage
End-to-end example:
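The original example appears to have been cut off in this capture. Below is a minimal self-contained sketch of the idea: the `ChatGPTAgentConfig` stand-in mirrors only the fields visible in the (outdated) diff above, not vocode's real pydantic model, and `resolve_base_url` is a hypothetical helper illustrating the fallback behavior discussed in review, not vocode's actual `instantiate_openai_client`.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ChatGPTAgentConfig:
    # Simplified stand-in mirroring the fields from the diff above;
    # the real vocode config class has many more fields.
    prompt_preamble: str
    model_name: str = "gpt-3.5-turbo"
    openai_api_key: Optional[str] = None
    base_url: Optional[str] = None


def resolve_base_url(config: ChatGPTAgentConfig) -> str:
    # Per the review discussion: fall back to the standard OpenAI
    # endpoint when no override is supplied.
    return config.base_url or "https://api.openai.com/v1"


# Point the agent at an OpenAI-compatible provider (e.g. Groq).
config = ChatGPTAgentConfig(
    prompt_preamble="You are a helpful voice assistant.",
    model_name="llama3-8b-8192",
    openai_api_key="<PROVIDER_API_KEY>",
    base_url="https://api.groq.com/openai/v1",
)
print(resolve_base_url(config))
```

The same pattern covers the Ollama case mentioned in the comments by pointing `base_url` at a local Ollama server's OpenAI-compatible endpoint.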