While these enhancements may not impact the runtime, they are important for the maintainability of the project and overall code quality.
Enhancements related to Typing
Since you require Python >= 3.11, you do not have to use List, Dict, Union, Optional, etc. from typing.
Simply use the modern language features, i.e. the built-in list, dict, etc.
For Optional you can do:
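```python
# instead of
from typing import Optional
name: Optional[str] = None

# you can write (Python 3.10+)
name: str | None = None
```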
For Union you can do:
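```python
# instead of
from typing import Union
value: Union[int, str] = 0

# you can write (Python 3.10+)
value: int | str = 0
```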
Enhancements related to Storage
The notion of storage should not include trimming the conversation. Not trimming would be useful if you want to keep all the messages in persistent storage, for example so the user can look back at old conversations. In your current architecture, only non-persistent sessions would make sense.
Even assuming you will only support non-persistent sessions whose storage is discarded after the session, it would be a good idea to pass the trimming function as an argument instead of hardcoding it in the abstract class. This gives the user the flexibility to choose between various trimming strategies, as in the sketch below.
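A minimal sketch of what this could look like; InMemoryChatStorage, TrimStrategy, and keep_last_n are illustrative names, not taken from the actual codebase:

```python
from typing import Callable

# A trimming strategy is just a function from a message list to a
# (possibly shorter) message list.
TrimStrategy = Callable[[list[dict]], list[dict]]

def keep_last_n(n: int) -> TrimStrategy:
    """Example strategy: keep only the n most recent messages."""
    def trim(messages: list[dict]) -> list[dict]:
        return messages[-n:]
    return trim

class InMemoryChatStorage:
    def __init__(self, trim_strategy: TrimStrategy | None = None):
        # None means "keep everything", which is what persistent storage
        # would want; trimming becomes an explicit, injectable choice.
        self._trim = trim_strategy or (lambda messages: messages)
        self._messages: dict[str, list[dict]] = {}

    def save_message(self, session_id: str, message: dict) -> None:
        history = self._messages.setdefault(session_id, [])
        history.append(message)
        self._messages[session_id] = self._trim(history)

    def fetch_messages(self, session_id: str) -> list[dict]:
        return self._messages.get(session_id, [])

# usage
storage = InMemoryChatStorage(trim_strategy=keep_last_n(20))
```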
Enhancements related to the Agent-related APIs (classifier, other agents, and orchestrator)
modelId as an optional parameter
The modelId is an optional parameter in your Agent construction API and is set to a default.
This is an important parameter, and it is very easy for the user to forget to set it and end up using the default model. Explicit is better than implicit most of the time for things that are very important. Yes, the framework should provide default implementations, but important choices such as modelId should not be hidden behind this kind of magic or made optional.
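One way to make this explicit (a sketch only; the class and parameter names here are illustrative, not the framework's actual signatures) is to require the model id at construction time:

```python
class ExampleBedrockAgent:
    def __init__(self, name: str, model_id: str, description: str = ""):
        # model_id is deliberately required: the caller must state which
        # model they intend to use instead of silently falling back to a
        # framework default.
        self.name = name
        self.model_id = model_id
        self.description = description

agent = ExampleBedrockAgent(
    name="tech-agent",
    model_id="anthropic.claude-3-haiku-20240307-v1:0",  # example Bedrock model id
)
```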
Possible issue when re-using the agents & orchestrator
In various places, your code updates instance variables. For example:
multi-agent-orchestrator/python/src/multi_agent_orchestrator/agents/bedrock_llm_agent.py, line 112 (commit 67a2944):
self.update_system_prompt()
The process_request and route_request style APIs require user_id, session_id, etc. Given this, your APIs seem to imply that an application can create one instance of the classifier, agents, etc. and reuse it across all users and sessions. However, that conflicts with updating instance variables from within these functions.
Workaround/Guidance:
A developer who is aware of all this should make sure to create a new orchestrator, classifier, and agent instance for every connection. Having to give this guidance means that the framework is inconsistent.
Your examples use Lambda functions where the instances are created globally. This only works out because of the way Lambdas behave in the first place.
At minimum, you should not modify instance variables from within the API. This would also keep things thread-safe (in case the developer creates multiple threads). A significant number of developers do not know what they are doing; they will follow your examples, which may not work in other setups.
If your intention is indeed that the developer creates a new instance of these for every user connection, then you should pass the boto3 client as a constructor argument; see the sketch below.
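A rough sketch of both points combined, assuming the boto3 bedrock-runtime client and its Converse API (the class name and method bodies are illustrative, not the framework's actual implementation): the client is injected through the constructor and the system prompt is computed per request rather than stored on the instance.

```python
import boto3

class StatelessBedrockAgent:
    def __init__(self, model_id: str, client=None):
        # The caller can inject a shared or per-connection boto3 client.
        self._client = client or boto3.client("bedrock-runtime")
        self._model_id = model_id

    def _build_system_prompt(self, chat_history: list[dict]) -> str:
        # Computed per request and returned, never written to self,
        # so one agent instance can safely serve many sessions/threads.
        return ("You are a helpful assistant. "
                f"The conversation so far has {len(chat_history)} messages.")

    def process_request(self, input_text: str, user_id: str,
                        session_id: str, chat_history: list[dict]) -> str:
        # user_id / session_id are accepted only to mirror the existing API shape.
        system_prompt = self._build_system_prompt(chat_history)
        response = self._client.converse(
            modelId=self._model_id,
            system=[{"text": system_prompt}],
            messages=chat_history + [
                {"role": "user", "content": [{"text": input_text}]}
            ],
        )
        return response["output"]["message"]["content"][0]["text"]
```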
Lots of template/code repetition
A significant portion of the agents could share code. If you had used LangChain or LlamaIndex, much of the manual creation of prompts, tool schemas, support for different LLM providers, etc. would have been done for you.
Overall, the APIs are confusing and can easily create bugs that are difficult to debug in a production environment.