Support passthrough of transport options #603
Conversation
```diff
@@ -52,6 +52,7 @@ async def send_request(
     payload: Dict[str, Any],
     signature_secret: str,
     additional_payload_values: Dict[str, Any] = {},
+    transport_options: Dict[str, Any] = {"retries": 2},
```
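For context, a `transport_options` dict like the default `{"retries": 2}` would typically be expanded into an httpx transport when the client is built. The actual body of `send_request` is not part of this hunk, so the sketch below is only an illustration of that assumption:

```python
# Sketch only: the real send_request body is not shown in this diff, so the
# client construction below is an assumption about how a transport_options
# dict such as {"retries": 2} could be consumed with httpx.
from typing import Any, Dict

import httpx


async def post_with_options(
    url: str,
    payload: Dict[str, Any],
    transport_options: Dict[str, Any] = {"retries": 2},
) -> httpx.Response:
    # Expand the option dict into an AsyncHTTPTransport; an unknown key would
    # raise a TypeError, which is part of why a typed transport object is nicer.
    transport = httpx.AsyncHTTPTransport(**transport_options)
    async with httpx.AsyncClient(transport=transport) as client:
        return await client.post(url, json=payload)
```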
Could we actually just pass through the AsyncHTTPTransport object? It's better for typing and configurability, I'd say.
I considered it originally but thought it would introduce potentially undesirable coupling. But I think it's good to be opinionated about some things even if it introduces some coupling -- made the change.
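A rough sketch of what the post-review approach looks like: accept the httpx transport object itself rather than a loose dict of options. The parameter names and defaults below are assumptions for illustration, not the exact code merged in #603:

```python
# Illustrative sketch of the post-review approach, assuming the library builds
# its client with httpx. Parameter names here are hypothetical.
from typing import Any, Dict, Optional

import httpx


async def send_request(
    url: str,
    payload: Dict[str, Any],
    signature_secret: str,
    additional_payload_values: Dict[str, Any] = {},
    transport: Optional[httpx.AsyncHTTPTransport] = None,
) -> httpx.Response:
    # A caller-supplied transport wins; otherwise fall back to a sensible default.
    # Payload signing with signature_secret is omitted for brevity.
    transport = transport or httpx.AsyncHTTPTransport(retries=2)
    async with httpx.AsyncClient(transport=transport) as client:
        return await client.post(url, json={**payload, **additional_payload_values})
```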
* [DOW-118] set up code linting and tests (#589)
* adds github workflow
* run black
* run isort
* adds precommit
* adds vscode settings
* adds pre-commit guidelines (#590)
* creates docker image, updates telephony app deps (#601)
* [DOW-105] refactor interruptions into the output device (#586)
* [DOW-105] refactor interruptions into the output device (#562)
* initial refactor works
* remove notion of UtteranceAudioChunk and put all of the state in the callback
* move per_chunk_allowance_seconds into output device
* onboard onto vonage
* rename to abstract output device and onboard other output devices
* initial work to onboard twilio output device
* twilio conversation works
* some cleanup with better comments
* unset poetry.lock
* move abstract play method into ratelimitoutputdevice + dispatch to thread in fileoutputdevice
* rename back to AsyncWorker
* comments
* work through a bit of mypy
* asyncio.gather is g2g
* create interrupt lock
* remove todo
* remove last todo
* remove log for interrupts
* fmt
* fix mypy
* fix mypy
* isort
* creates first test and adds scaffolding
* adds two other send_speech_to_output tests
* make send_speech_to_output more efficient
* adds tests for rate limit interruptions output device
* makes some variables private and also makes the chunk id coming back from the mark match the incoming audio chunk
* adds twilio output device tests
* make typing better for output devices
* fix mypy
* resolve PR comments
* resolve PR comments
* [DOW-101] LiveKit integration (#591)
* checkpoint
* livekit v0
* in progress changes
* integrate with worker
* fix import
* update deps and remove unneeded files
* integrate it properly into app
* fix interrupts
* make transcript publish work
* a confounding fix
* isort
* constants, some cleanup

Co-authored-by: Kian <[email protected]>

* upgrade to latest cartesia 1.0.3 (#587)
* upgrade to latest cartesia 1.0.3
* fixed linting conflict
* finish streaming
* make cartesia optional

Co-authored-by: Ajay Raj <[email protected]>

* poetry version prerelease (#602)
* feat: Add ability to configure OpenAI base URL in ChatGPTAgentConfig (#577)
* feat: Add ability to configure OpenAI base URL in ChatGPTAgentConfig
  - Added `base_url` parameter to `ChatGPTAgentConfig` to allow customization of the OpenAI API base URL.
  - Updated `instantiate_openai_client` function to use the `base_url` parameter from the configuration.
  - Modified `ChatGPTAgent` to utilize the updated `instantiate_openai_client` function.
  - Added tests to verify the new `base_url` functionality in `tests/streaming/agent/test_base_agent.py`.
  This enhancement allows users to specify a custom OpenAI API base URL, providing greater flexibility in agent configuration.
* adding capability to use the openai compatible endpoint with token estimation for llama
* lint fix
* changing openai base_url parameter for overall less code changes
* missed logging update
* Update vocode/streaming/agent/chat_gpt_agent.py
* Update tests/streaming/agent/test_base_agent.py
* fix test

Co-authored-by: Ajay Raj <[email protected]>

* Support passthrough of AsyncHTTPTransport (#603): Support passthrough of AsyncHTTPTransport object
* add script used to make PR
* adds test target for vocodehq-public
* Remove catch-all exception logger for asyncio tasks (#605)
* remove error log from exception for asyncio tasks
* remove log error on chatgpt query

Co-authored-by: Kian <[email protected]>
Co-authored-by: rjheeta <[email protected]>
Co-authored-by: Clay Elmore <[email protected]>
Co-authored-by: vocode-petern <[email protected]>
Co-authored-by: Adnaan Sachidanandan <[email protected]>
Summary
External actions simply POST to a configured URL. Whether that URL has a valid SSL certificate should be irrelevant to the library, since the URL is supplied by the user. This change lets consumers of the library pass their own transport options and decide whether or not to verify the URL.
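For example, a consumer posting to an internal endpoint with a self-signed certificate could build the transport themselves. The call site below is hypothetical (the exact function and argument names depend on the merged API), but the httpx pieces are standard:

```python
import httpx

# Caller-controlled transport: skip certificate verification for a self-signed
# endpoint, but keep a couple of connection retries.
transport = httpx.AsyncHTTPTransport(verify=False, retries=2)

# Hypothetical call into the library's external-action request helper;
# argument names are illustrative only.
# await send_request(
#     url="https://internal.example.com/action",
#     payload={"action": "ping"},
#     signature_secret="...",
#     transport=transport,
# )
```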