Initial commit of Amazon Q Business runnable for langchain #301
base: main
Conversation
Thank you for the contribution @tomron-aws ! Added a few comments.
Also, can you provide some additional context on what use-cases you're targeting with this ChatSync API integration?
Depending on your goals, it may be better to write this as a base Runnable interface, which can be simpler and more flexible to use than the specialized LLM interface that implements it.
Code changes should be minimal if you decide to switch (i.e. change _call/_acall to invoke/ainvoke, and remove _llm_type and _identifying_params).
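For illustration, a minimal sketch of what that switch might look like. The class, attribute, and response-field names below are assumptions made for the sketch, not the PR's actual code:

from typing import Any, Optional

from langchain_core.runnables import Runnable, RunnableConfig


class AmazonQRunnable(Runnable[str, str]):
    # Illustrative Runnable wrapping the Amazon Q Business ChatSync API.

    def __init__(self, client: Any, application_id: str, user_id: str) -> None:
        self.client = client  # boto3 "qbusiness" client created elsewhere
        self.application_id = application_id
        self.user_id = user_id

    def invoke(self, input: str, config: Optional[RunnableConfig] = None, **kwargs: Any) -> str:
        # The body of the former _call moves here essentially unchanged;
        # a default ainvoke is inherited from the Runnable base class.
        request = {
            "applicationId": self.application_id,
            "userId": self.user_id,
            "userMessage": input,
        }
        response = self.client.chat_sync(**request)
        return response["systemMessage"]

Usage then stays plain Runnable composition, e.g. AmazonQRunnable(client, app_id, user_id).invoke("What is Amazon Q?"), and the instance can be piped into other runnables.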
print("Prompt Length (Amazon Q ChatSync API takes a maximum of 7000 chars)") | ||
print(len(prompt)) |
These look like debug print statements. Should they be removed or replaced with logger statements?
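If they are kept, one option is a module-level logger instead of print, along these lines (a sketch only, reusing the prompt variable from the diff above; the 7000-character figure is the limit discussed below):

import logging

logger = logging.getLogger(__name__)

# Log at debug level instead of printing; the ChatSync limit makes the
# prompt length worth surfacing when troubleshooting oversized prompts.
logger.debug(
    "Prompt length: %d (Amazon Q ChatSync API takes a maximum of 7000 chars)",
    len(prompt),
)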
I've been debating that. Currently the Amazon Q ChatSync API only supports a maximum of 7000 characters. It's helpful for debugging when you are passing additional context along with the prompt.
I think a good place for that log would be in the error-handling block, which I added. Let me know what you think.
Thanks - let's remove L83/84 in favor of the new changes.
}

# Call Amazon Q
response = self.client.chat_sync(**request)
Before attempting to call the chat_sync API, can we run some validation code to check if the QBusiness boto client has already been defined externally? If it doesn't exist, we should attempt to create it.
Refer here for an example: https://github.com/langchain-ai/langchain-aws/blob/main/libs/aws/langchain_aws/llms/sagemaker_endpoint.py#L271
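Roughly along these lines, following the linked SageMaker endpoint pattern (a sketch assuming pydantic-v1-style fields; the class name and the client, region_name, and credentials_profile_name fields are borrowed from that example, not from this PR):

from typing import Any, Dict, Optional

from langchain_core.pydantic_v1 import BaseModel, root_validator


class AmazonQ(BaseModel):
    client: Any = None  # externally supplied boto3 "qbusiness" client, if any
    region_name: Optional[str] = None
    credentials_profile_name: Optional[str] = None

    @root_validator(pre=False, skip_on_failure=True)
    def validate_environment(cls, values: Dict) -> Dict:
        # Only create a client when one was not passed in by the caller.
        if values.get("client") is not None:
            return values
        try:
            import boto3
        except ImportError as e:
            raise ImportError(
                "Could not import boto3. Install it with `pip install boto3`."
            ) from e
        try:
            profile = values.get("credentials_profile_name")
            session = boto3.Session(profile_name=profile) if profile else boto3.Session()
            values["client"] = session.client(
                "qbusiness", region_name=values.get("region_name")
            )
        except Exception as e:
            raise ValueError(
                "Could not load credentials to authenticate with the AWS client."
            ) from e
        return values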
I implemented this as well. Let me know if that satisfies the requirements.
Looks good, thanks!
logging.info(f"Prompt Length: {len(prompt)}")
print(f"""Prompt:
{prompt}""")
raise ValueError(f"Error raised by Amazon Q service: {e}")
Remove this line, as it's redundant with L123. Or change the ValueError message here to specifically reference the prompt length constraint
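If the second option is taken, the message could reference the limit directly, e.g. (a sketch reusing the request and prompt variables from the diff above; the 7000-character limit is the one discussed earlier in this thread):

try:
    response = self.client.chat_sync(**request)
except Exception as e:
    # Oversized prompts are the most likely failure, so call out the
    # 7000-character ChatSync limit and the actual prompt length.
    raise ValueError(
        f"Error raised by Amazon Q service: {e}. "
        f"Prompt was {len(prompt)} characters; the ChatSync API accepts "
        f"a maximum of 7000 characters."
    ) from e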
…te client using credentials since anonymous invocation of chatsync with userId is no longer supported
Changed to runnable. Also finished implementing and testing the client validator
No description provided.