
Revise GPT prompt #147

Merged: 3 commits into ryansurf:main on Oct 21, 2024
Conversation

@macnult (Contributor) commented Oct 8, 2024

General:

  • [✓] Have you followed the guidelines in our Contributing document?
  • [✓] Have you checked to ensure there aren't other open Pull Requests for the same update/change?

Code:

  • [✓] Does your submission pass tests?

Documentation:

Revised the GPT prompt in response to this reported issue: #146
I also expanded the assertion so that the actual response is shown when the test fails.

Summary by Sourcery

Revise the GPT prompt in the test to ensure precise response matching and enhance the assertion to provide detailed feedback on test failures.

Enhancements:

  • Improve the assertion in the test to provide more detailed feedback when the test fails.

Tests:

  • Revise the GPT prompt in the test to ensure the response strictly matches the expected phrase.

sourcery-ai bot (Contributor) commented Oct 8, 2024

Reviewer's Guide by Sourcery

This pull request revises the GPT prompt in the test_gpt.py file to address a reported issue. The changes include updating the test prompt for more precise output and enhancing the assertion message for better error reporting.

No diagrams generated as the changes look simple and do not need a visual representation.

File-Level Changes

Revised GPT prompt for more precise output (tests/test_gpt.py):

  • Updated the surf_summary variable with a more specific instruction
  • Changed from "Please only output: 'gpt works!'" to "Please respond with the exact phrase 'gpt works'. Do not include any additional text or context."

Enhanced assertion message for better error reporting (tests/test_gpt.py):

  • Modified the assert statement to include the actual response in case of failure
  • Added f-string to display the unexpected response: f"Expected 'gpt works', but got: {gpt_response}"
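The two changes above can be sketched as follows. This is a minimal sketch, not the repository's exact code; `get_gpt_response` is a hypothetical stand-in for the project's real GPT call.

```python
# Minimal sketch of the revised test in tests/test_gpt.py.
# get_gpt_response is a hypothetical stand-in for the project's real GPT call.

def get_gpt_response(prompt: str) -> str:
    # Stub so the sketch runs on its own; the real function queries the GPT API.
    return "gpt works"

def test_gpt():
    # Revised prompt: asks for the exact phrase and nothing else.
    surf_summary = (
        "Please respond with the exact phrase 'gpt works'. "
        "Do not include any additional text or context."
    )
    expected_response = "gpt works"
    gpt_response = get_gpt_response(surf_summary)
    # Enhanced assertion: the failure message now shows the actual response.
    assert gpt_response == expected_response, (
        f"Expected '{expected_response}', but got: {gpt_response}"
    )
```

With the f-string in the assertion, a failure now prints the model's actual reply instead of a bare AssertionError.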


github-actions bot (Contributor) left a comment

👋 Hi! Thanks for submitting your first pull request!
• We appreciate your effort to improve this project.
• If you're enjoying your experience, please consider giving us a star ⭐
• It helps us grow and motivates us to keep improving! 🚀

sourcery-ai bot left a comment
Hey @macnult - I've reviewed your changes - here's some feedback:

Overall Comments:

  • Consider adding a brief comment in the code explaining the reason for changing the GPT prompt. This would help future maintainers understand the context of the change.
Here's what I looked at during the review
  • 🟢 General issues: all looks good
  • 🟢 Security: all looks good
  • 🟡 Testing: 1 issue found
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good


Review comment on tests/test_gpt.py (outdated, resolved).
@macnult (Contributor, Author) commented Oct 9, 2024

The test still passes/fails inconsistently. When it does fail, it returns the following:


assert gpt_response == expected_response, f"Expected '{expected_response}', but got: {gpt_response}"
E AssertionError: Expected 'gpt works', but got: I'm here to help with any inquiries you may have. How can I assist you today?
E assert "I'm here to ...st you today?" == 'gpt works'
E
E - gpt works
E + I'm here to help with any inquiries you may have. How can I assist you today?

tests\test_gpt.py:22: AssertionError


The same thing happens with test_helper.py, although not as frequently. Locally I changed gpt_prompt in test_helper.py, which helps in some runs, but it isn't consistent enough for me to call it resolved.

@ryansurf (Owner) commented:

> The test still passes/fails inconsistently. When it does fail, it returns the following:
>
> assert gpt_response == expected_response, f"Expected '{expected_response}', but got: {gpt_response}"
> E AssertionError: Expected 'gpt works', but got: I'm here to help with any inquiries you may have. How can I assist you today?
> E assert "I'm here to ...st you today?" == 'gpt works'
> E - gpt works
> E + I'm here to help with any inquiries you may have. How can I assist you today?
>
> tests\test_gpt.py:22: AssertionError
>
> The same thing happens with test_helper.py, although not as frequently. Locally I changed gpt_prompt in test_helper.py, which helps in some runs, but it isn't consistent enough to call it resolved.

The GPT is definitely finicky, but your prompt seems to be the better choice. Might have to rethink this going forward, but I'll merge your PR as it's an improvement!
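One possible direction for that rethink, offered purely as a sketch and not part of this PR: normalize the model's reply before comparing, so differences in casing, surrounding whitespace, or trailing punctuation don't fail the test. `matches_expected` below is a hypothetical helper, not an existing function in the repository.

```python
def matches_expected(gpt_response: str, phrase: str = "gpt works") -> bool:
    """Tolerant check: ignore case, surrounding whitespace, and trailing punctuation."""
    # Strip whitespace, lowercase, then drop trailing punctuation/quote characters.
    normalized = gpt_response.strip().lower().rstrip(".!?'\"")
    return normalized == phrase
```

A test could then assert matches_expected(gpt_response) instead of strict equality, while keeping the detailed f-string failure message. This tolerates replies like "Gpt works." but still fails on an unrelated response such as the chatty fallback quoted above.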

codecov bot commented Oct 16, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

@ryansurf (Owner) commented:

@macnult whoops, actually the linter failed. Can you run make lint to fix the error?

@macnult (Contributor, Author) commented Oct 17, 2024

@ryansurf Sure thing! Should be good to go now.

@ryansurf (Owner) commented:

@all-contributors please add @macnult for code

Bot reply (Contributor):
@ryansurf

I've put up a pull request to add @macnult! 🎉

@ryansurf ryansurf merged commit f7f9676 into ryansurf:main Oct 21, 2024
3 of 4 checks passed