I have some suggestions below. Do you have any optimizations planned for these?
1. Why not add a ReAct, chain-of-thought (CoT), or tree-of-thoughts approach to the demo? The reasoning module in the current agent only invokes the model's internal reasoning via the prompt and does not implement any iterative self-verification.
2. When orchestrating a workflow, how should the memory of each agent and the memory of the workflow as a whole be designed? How can the long response time of workflow orchestration be reduced, and how can data transfer between agents be made faster?
We have some chain-of-thought examples here. If you don't supply a model, the default model is used, which performs chain-of-thought reasoning. This is a beta feature, so we will expand it further in the future.
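The iterative self-verification the question asks about can be sketched as a generate-verify-revise loop. This is a minimal illustration, not the project's actual API: `call_model` is a hypothetical stand-in for whatever LLM client you use, stubbed here with canned responses so the example runs standalone.

```python
def call_model(prompt: str) -> str:
    # Stub standing in for a real LLM call; a real implementation would
    # send the prompt to your model of choice and return its completion.
    if prompt.startswith("VERIFY"):
        return "OK" if "draft v2" in prompt else "REVISE"
    return "draft v2" if "previous attempt" in prompt else "draft v1"

def solve_with_verification(task: str, max_rounds: int = 3) -> str:
    """Generate an answer, ask the model to verify it, retry on failure."""
    answer = call_model(f"Solve: {task}")
    for _ in range(max_rounds):
        verdict = call_model(f"VERIFY: does this answer solve '{task}'? {answer}")
        if verdict == "OK":
            return answer
        # Feed the rejected attempt back so the next draft can improve on it.
        answer = call_model(f"Solve: {task}. Improve on previous attempt: {answer}")
    return answer  # best effort after max_rounds

print(solve_with_verification("demo task"))
```

The same loop structure generalizes to ReAct (interleaving tool calls with reasoning) or tree-of-thoughts (branching on multiple candidate answers instead of revising one).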
We are working on the long-response problem: we want to revamp workflows to add polling for long-running workflows. As for memory, you can give each agent its own memory for now.
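"Give each agent its own memory" can be sketched as below. The `Agent` class and its fields are illustrative assumptions, not the framework's real types: the point is that each agent keeps a private history while only the message payload travels between agents in the workflow.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    memory: list = field(default_factory=list)  # private to this agent

    def run(self, message: str) -> str:
        self.memory.append(message)  # record what this agent saw
        return f"{self.name} handled: {message}"

def run_workflow(agents: list, task: str) -> str:
    # Pass each agent's output directly to the next one; memories stay
    # local to each agent rather than in shared workflow-level state.
    result = task
    for agent in agents:
        result = agent.run(result)
    return result

planner, writer = Agent("planner"), Agent("writer")
print(run_workflow([planner, writer], "draft a summary"))
```

Keeping memory per-agent like this also keeps inter-agent data transfer cheap, since only the current message is handed off rather than the full accumulated history.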