Addition of pull-precheck logic #1
base: development
Conversation
Any code changes you guys may want?
@0x4007 I think using Claude 3.5 Sonnet is a bad choice of model for reviewing pull requests because it doesn't support structured outputs. Structured outputs let a model return a typed JSON object on every call. I just tried https://openrouter.ai/google/gemini-2.0-flash-thinking-exp:free, which worked wonderfully.
Then study cline's implementation. Claude is much better at dealing with code changes, and it's used in similar tools like cline. There's no chance we are using Gemini; I've never heard any compliments about it in the context of dealing with code.
The current implementation can and does generate proper JSON outputs, but my thinking was that it would be better to use a model that supports structured outputs.
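For context, a rough sketch of what requesting a structured output through an OpenAI-compatible client pointed at OpenRouter could look like; the model slug is the one mentioned above, and the schema, prompt, and field names are illustrative assumptions rather than the plugin's actual code:

```typescript
import OpenAI from "openai";

// Illustrative sketch only: an OpenAI-compatible client pointed at OpenRouter.
const client = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
});

async function reviewDiff(diff: string) {
  const completion = await client.chat.completions.create({
    model: "google/gemini-2.0-flash-thinking-exp:free",
    messages: [{ role: "user", content: `Review this diff and answer in JSON:\n${diff}` }],
    // Structured outputs: supporting models are constrained to JSON matching this schema.
    response_format: {
      type: "json_schema",
      json_schema: {
        name: "pull_precheck_review",
        strict: true,
        schema: {
          type: "object",
          properties: {
            passed: { type: "boolean" },
            reviewComment: { type: "string" },
          },
          required: ["passed", "reviewComment"],
          additionalProperties: false,
        },
      },
    },
  });
  return JSON.parse(completion.choices[0].message.content ?? "{}");
}
```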
I was testing the plugin and noticed that if a pull request is marked ready for review, the bot posts an error message and converts it back to a draft. While I think this can be good behavior for outside contributors, I believe the core team should be able to override it, because sometimes we just bump packages or make quick fixes that do not relate to any task. @0x4007 Thoughts?
Also, I tried to use this with a …
You can easily add the token limits for said model. I made it configurable so that supporting future Claude models becomes easier.
This seems to be hard-coded within the plugin, so any modification implies re-compiling it, which doesn't seem very intuitive.
I am going to add this to the config; it would look like tokenLimits: { "anthropic/claude-3.5-sonnet": 200000, "gpt-4o": 128000 }
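A short sketch of how that setting could be typed and read, assuming a plugin settings object; the names here are illustrative, not the plugin's actual code:

```typescript
// Hypothetical shape for the proposed setting: keys are OpenRouter model slugs,
// values are each model's context window in tokens.
interface PluginSettings {
  model: string;
  tokenLimits: Record<string, number>;
}

const defaultSettings: PluginSettings = {
  model: "anthropic/claude-3.5-sonnet",
  tokenLimits: {
    "anthropic/claude-3.5-sonnet": 200000,
    "gpt-4o": 128000,
  },
};

// Fall back to a conservative limit when the configured model is not listed.
function getTokenLimit(settings: PluginSettings): number {
  return settings.tokenLimits[settings.model] ?? 128000;
}
```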
Currently, if someone who is not the PR author converts it to ready for review, the bot skips the review, but as you said, a lot of the time the PR is created by the core team for quick fixes.
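For illustration, one possible shape for the override discussed above, assuming the bot can read the sender and author logins from the ready_for_review webhook payload and a hypothetical coreTeam allowlist in the plugin config:

```typescript
// Hypothetical override: skip the pre-check (and do not convert back to draft)
// when the PR author is on a configured core-team allowlist, e.g. for quick fixes
// that are not tied to a task.
function shouldRunPrecheck(senderLogin: string, authorLogin: string, coreTeam: string[]): boolean {
  if (coreTeam.includes(authorLogin)) return false; // core-team override
  return senderLogin === authorLogin; // current behavior: only the author's own conversion triggers a review
}
```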
Can you show me the link to your QA pull, @gentlementlegen? It's easier for me to follow along with what the problem is.
Also, here's the QA using meta-llama/llama-3.1-70b-instruct.
The "This pull request has passed the automated review, a reviewer will review this pull request shortly" seems unnecessary. You're not handling the error "unexpected no response from LLM" |
Changing it to "Pull request has passed the automated review".
That's the error when OpenRouter hits a resource-exhausted error, which gives no response from the LLM. I'll change it so that it also outputs the underlying error that caused the empty output.
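A small sketch of what surfacing that underlying error could look like; the function and variable names are illustrative assumptions:

```typescript
// Illustrative sketch: wrap the LLM call so the posted error includes the underlying
// cause instead of the bare "unexpected no response from LLM" message.
async function getReview(callLlm: () => Promise<string | null>): Promise<string> {
  try {
    const answer = await callLlm();
    if (!answer) {
      // e.g. OpenRouter relayed a resource-exhausted upstream error with no completion text
      throw new Error("LLM returned an empty response");
    }
    return answer;
  } catch (err) {
    const reason = err instanceof Error ? err.message : String(err);
    throw new Error(`Unexpected no response from LLM: ${reason}`);
  }
}
```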
I am thinking about how we can indicate that the review has passed without cluttering the timeline with comments. I want to show that the checking logic ran successfully, but I can imagine that will get repetitive and verbose as they continue pushing commits.
Just an approving review with no comment should be enough. The logic only runs when the pull request is converted from draft to ready for review, not on every commit.
The possible problem with an approval is that it implies the pull can be merged. This is intended to act as a "pre-check", not to signal to the assignee that their work has been accepted. Furthermore, we have the daemon-merging plugin, which would automatically merge it after a week even without human review. Maybe we can emoji-react with a thumbs up on the pull body to indicate a successful run. 👍
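A minimal sketch of that idea using Octokit; reactions on a pull request body go through the issue reactions endpoint, and the owner/repo/number values are placeholders:

```typescript
import { Octokit } from "@octokit/rest";

// Minimal sketch: react with 👍 on the pull request body to signal a successful pre-check run.
async function markPrecheckPassed(octokit: Octokit, owner: string, repo: string, pullNumber: number) {
  await octokit.rest.reactions.createForIssue({
    owner,
    repo,
    issue_number: pullNumber, // PR bodies are reacted to via the issue reactions endpoint
    content: "+1",
  });
}
```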
Resolves #45