What I (and many others) seem to want is a way to truly put LLMs to work: to outsource higher volumes of work without explicit instruction or live oversight.
I think this is a fascinating problem, and in many ways how I hope to use gptme, but it's not really there yet.
Ideas:
- Ask it to try implementing feature/fix/refactor X, then typecheck/test/lint the result, then make a PR if successful.
  - If unsuccessful, attempt up to n retry strategies, possibly in a branching manner where we try to detect the best branch, since any step could go wrong and leave the workspace in an unusable state (see the sketch below).
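A minimal sketch of the linear part of that loop (without the branch-selection), assuming the project uses mypy/pytest/ruff and the GitHub CLI. `implement` stands in for whatever LLM call actually edits and commits the code; nothing here is an existing gptme API.

```python
import subprocess
from typing import Callable

def run_checks(cwd: str = ".") -> tuple[bool, str]:
    """Run typecheck, tests and lint; return (success, combined output)."""
    output = []
    for cmd in (["mypy", "."], ["pytest", "-q"], ["ruff", "check", "."]):
        proc = subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)
        output.append(proc.stdout + proc.stderr)
        if proc.returncode != 0:
            return False, "\n".join(output)
    return True, "\n".join(output)

def attempt_task(task: str, implement: Callable[[str, str], None], max_retries: int = 3) -> bool:
    """Try `task` up to `max_retries` times, each on a fresh branch,
    feeding the previous failure log back in as extra context."""
    feedback = ""
    for i in range(max_retries):
        branch = f"auto/{task}-{i}"  # naive branch name; assumes `task` is a short slug
        subprocess.run(["git", "checkout", "-B", branch], check=True)
        implement(task, feedback)    # the LLM call; assumed to commit its changes
        ok, log = run_checks()
        if ok:
            subprocess.run(["git", "push", "-u", "origin", branch], check=True)
            subprocess.run(["gh", "pr", "create", "--fill"], check=True)
            return True
        feedback = log               # carry the failure into the next attempt
    return False
```

Each failed attempt leaves its branch behind, which is also roughly what the branching variant would compare when trying to detect the best branch.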
It feels like a timer that executes a prompt, plus a tool that can spawn/control another instance, would be enough to pull all of this off.
Better would be a generic trigger that could be either a timer or an event fired from an agent, since then error handling and task iteration could be continuous rather than batched.
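Roughly what such a generic trigger could look like, with timers and agent-fired events sharing one queue. The names here are made up for illustration, not existing gptme APIs.

```python
import queue
import threading
import time
from dataclasses import dataclass, field

@dataclass
class Trigger:
    """A fired trigger: either a timer tick or an event emitted by an agent."""
    kind: str                      # "timer" or "event"
    payload: dict = field(default_factory=dict)

class TriggerBus:
    """Single queue that both timers and agents push triggers onto."""
    def __init__(self) -> None:
        self._q: queue.Queue[Trigger] = queue.Queue()

    def emit(self, trigger: Trigger) -> None:
        self._q.put(trigger)

    def every(self, seconds: float, payload: dict) -> None:
        """Start a background timer that fires a trigger on an interval."""
        def tick() -> None:
            while True:
                time.sleep(seconds)
                self.emit(Trigger("timer", payload))
        threading.Thread(target=tick, daemon=True).start()

    def wait(self, timeout: float | None = None) -> Trigger | None:
        try:
            return self._q.get(timeout=timeout)
        except queue.Empty:
            return None

# Example: run a prompt hourly, but react immediately to agent-emitted events too.
bus = TriggerBus()
bus.every(3600, {"prompt": "review open tasks"})
while True:
    trig = bus.wait(timeout=60)
    if trig is None:
        continue                   # nothing fired; keep waiting
    print(f"firing: {trig.kind} {trig.payload}")  # here you'd dispatch into an agent instance
```

The point of the shared interface is that an agent reporting an error and a cron-like schedule look identical to the dispatcher, so iteration stays continuous.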
Either way, you could build complex workflows interpreted/executed by the model (i.e. "if error, then execute error-flow") if the agent had an agent tool to manage context(s) and instances.
"When current task is completed, review next task, condense and extend a copy of the context for that task, hand it off to a new instance and wait for events from the instance or a timeout."
Ideas:
- Make an edit to the page -> view the web page -> make another edit (sketched below).
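A minimal version of that feedback loop, using Playwright to render a screenshot the model can "see" before proposing the next edit. `propose_edit` is a hypothetical model call, not part of gptme.

```python
from pathlib import Path
from typing import Callable
from playwright.sync_api import sync_playwright

def view_page(html_path: Path, shot_path: Path) -> Path:
    """Render the page and save a screenshot of the current state."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(html_path.resolve().as_uri())
        page.screenshot(path=str(shot_path), full_page=True)
        browser.close()
    return shot_path

def edit_view_loop(html_path: Path, propose_edit: Callable[[str, Path], str], rounds: int = 3) -> None:
    """Alternate edit -> view -> edit: `propose_edit` takes the current HTML
    plus a screenshot path and returns the next version of the HTML."""
    for i in range(rounds):
        shot = view_page(html_path, html_path.with_suffix(f".{i}.png"))
        html_path.write_text(propose_edit(html_path.read_text(), shot))
```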
What we should focus on:
What we'd need:
- --tools option in 48d559b

Issues: