Feature Request: Review Mode with Checklist for Aider #723
Comments
I agree totally with this concept. It seems to already exist in part: aider already prompts us to do things like add files that the LLM wants access to, or to let the LLM create new files.

I have more applications of this concept to add to the discussion. One very big bottleneck that high-level LLM tools like aider address well is repetitively copy-pasting code. We still have to repeat our code in prompts a significant amount, because that's what's required to update the LLM's understanding of the changes we, as the primary programmer, are making. Sometimes these updates can be compressed with descriptions or diffs, but just as with math, an LLM is poor at tracking mutated state.

Once we blow past this bottleneck, the tradeoff is API token consumption, with which costs scale directly. In the caveman copy-paste workflow, we manipulate text by hand and develop a deeply intuitive sense of token consumption. Granted, chatbot pricing isn't per-token, but we need to manage token consumption anyway to keep important information in context, since results drop off a cliff when we run afoul of that.

What I'm trying to get at is that with an open-source tool like aider we have a unique opportunity to offer excellent UX that no AI vendor will ever offer: something designed to help the user control resource consumption. So I would suggest expanding Review Mode's scope beyond just being an approval process for changes: I want to apply this approval process also to prompts whose token consumption (estimated heuristically or otherwise) exceeds a certain threshold, as one method to help us keep token-related costs in check.
Edit: OK, I just came across #127 (comment), which led me to discover the verbose flag. It is very useful in this regard, as it appears to do exactly what my "lite version" concept above describes: it lets me see exactly what's being sent and received, so I can gauge whether I'm being efficient with token usage. I'm REALLY impressed with this tool right now. I might suggest two minor quality-of-life things related to verbose:
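A minimal sketch of the threshold-gated approval idea from the comment above. The helper names and the 4-chars-per-token heuristic are illustrative assumptions, not aider's actual internals:

```python
# Hypothetical sketch: gate a prompt behind user approval when its
# estimated token count exceeds a budget. Not aider's real code.

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def confirm_send(prompt: str, threshold: int = 4000, ask=input) -> bool:
    """Return True if the prompt is under budget or the user approves it."""
    tokens = estimate_tokens(prompt)
    if tokens <= threshold:
        return True
    answer = ask(f"Prompt is ~{tokens} tokens (over {threshold}). Send anyway? [y/N] ")
    return answer.strip().lower().startswith("y")
```

A real implementation would presumably use the model's own tokenizer rather than a character-count heuristic, but the approval flow would look the same.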
You might find these docs helpful: https://aider.chat/docs/config/options.html#--llm-history-file-llm_history_file If you run aider with `--llm-history-file`, it will log the messages sent to and received from the LLM in that file.
Is there a way to get it to display the cost even when streaming?

Unfortunately, no. The streaming API doesn't return cost info.
Context:
When I ask Aider to review my code, it often identifies issues and suggests corrections. While some of these corrections are accurate, others target deliberate design choices that do not need to be changed. Currently, Aider goes ahead and implements all suggested changes and commits them, which sometimes results in unnecessary modifications that I then need to undo, leading to diminishing returns.
Request:
I propose adding a "Review Mode" switch in the Aider browser interface, with an accompanying checklist feature. This mode would allow users to control how Aider behaves when reviewing code. In Review Mode, Aider would:
Workflow:
Benefits:
Implementing this feature would streamline the code review process, ensuring that changes made by Aider align more closely with the user's intentions and providing a more interactive and user-friendly experience.
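One way to picture the proposed Review Mode is a per-change approval loop over a checklist. This is only a sketch of the idea; the `Change` type and the `ask`/`apply` hooks are hypothetical, not part of aider:

```python
# Hypothetical sketch of the proposed Review Mode checklist:
# present each suggested change and apply only the approved ones.
from dataclasses import dataclass

@dataclass
class Change:
    path: str         # file the change touches
    description: str  # one-line summary shown in the checklist
    diff: str         # the proposed edit itself

def review_changes(changes, ask=input, apply=print):
    """Walk the checklist; apply and return only the accepted changes."""
    accepted = []
    for change in changes:
        answer = ask(f"Apply to {change.path}? {change.description} [y/N] ")
        if answer.strip().lower().startswith("y"):
            apply(change.diff)
            accepted.append(change)
    return accepted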
Aider v0.40.0
Models: claude-3-5-sonnet-20240620 with diff edit format, weak model claude-3-haiku-20240307
Git repo: \\192.168.0.35\YonaVmDataShare\Projects\posexport.git with 28 files
Repo-map: using 1024 tokens
Restored previous conversation history.