Revise GPT prompt #147
Conversation
Reviewer's Guide by Sourcery

This pull request revises the GPT prompt in the test_gpt.py file to address a reported issue. The changes include updating the test prompt for more precise output and enhancing the assertion message for better error reporting. No diagrams were generated, as the changes are simple and do not need a visual representation.
👋 Hi! Thanks for submitting your first pull request!
• We appreciate your effort to improve this project.
• If you're enjoying your experience, please consider giving us a star ⭐
• It helps us grow and motivates us to keep improving! 🚀
Hey @macnult - I've reviewed your changes - here's some feedback:
Overall Comments:
- Consider adding a brief comment in the code explaining the reason for changing the GPT prompt. This would help future maintainers understand the context of the change.
Here's what I looked at during the review
- 🟢 General issues: all looks good
- 🟢 Security: all looks good
- 🟡 Testing: 1 issue found
- 🟢 Complexity: all looks good
- 🟢 Documentation: all looks good
The test still passes/fails inconsistently. When it does fail, it returns the following:

`assert gpt_response == expected_response, f"Expected '{expected_response}', but got: {gpt_response}"`
`tests\test_gpt.py:22: AssertionError`

The same thing happens with test_helper.py, although not as frequently. Locally I changed gpt_prompt in test_helper.py, which helps during some tests, but isn't consistent enough that I can call it resolved.
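For reference, the assertion pattern quoted in that traceback can be reproduced in isolation. This is a minimal sketch: the helper name `check_gpt_response` is illustrative only, not a function from the repo.

```python
def check_gpt_response(gpt_response: str, expected_response: str) -> None:
    # Mirrors the assertion quoted above: the f-string message surfaces
    # the actual model output whenever the comparison fails, which is
    # what makes intermittent failures diagnosable.
    assert gpt_response == expected_response, (
        f"Expected '{expected_response}', but got: {gpt_response}"
    )

# A matching response passes silently; a mismatch raises AssertionError
# with both strings in the message.
check_gpt_response("sunny", "sunny")
```

Because the message embeds `gpt_response`, each flaky run records exactly what the model returned, instead of a bare `AssertionError`.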
The GPT is definitely finicky, but your prompt seems to be the better choice. Might have to rethink this going forward, but I'll merge your PR as it's an improvement!
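One common way to make this kind of comparison less finicky (a sketch of a possible mitigation, not something this PR implements) is to normalize both strings before asserting equality, so trivial drift in casing, whitespace, or trailing punctuation doesn't fail the test:

```python
import re

def normalize(text: str) -> str:
    # Lowercase, trim, collapse internal whitespace runs, and drop
    # trailing sentence punctuation before comparing model output
    # against the expected string.
    text = re.sub(r"\s+", " ", text.strip().lower())
    return text.rstrip(".!?")

# normalize("  Sunny. ") and normalize("sunny") both yield "sunny",
# so equality survives minor formatting drift in the model output.
```

The test would then assert `normalize(gpt_response) == normalize(expected_response)`, which tolerates cosmetic variation while still catching genuinely wrong answers.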
Codecov Report: All modified and coverable lines are covered by tests ✅
@ryansurf Sure thing! Should be good to go now. |
@all-contributors please add @macnult for code |
I've put up a pull request to add @macnult! 🎉 |
General:

Code:
[✓] Does your submission pass tests?

Documentation:
Revised the GPT prompt in response to this reported issue: #146
I also extended the assertion message to show what's returned if the test fails.
Summary by Sourcery
Revise the GPT prompt in the test to ensure precise response matching and enhance the assertion to provide detailed feedback on test failures.