This repository has been archived by the owner on Sep 19, 2024. It is now read-only.

Ask ChatGPT /ask #291

Open
0x4007 opened this issue May 4, 2023 · 39 comments · Fixed by #663

Comments

@0x4007
Member

0x4007 commented May 4, 2023

There have been several instances (including with myself) where I would answer a question presented in a pull request review, or in an issue conversation, by asking ChatGPT and pasting in the results.

It could be very nice to see exactly what the original prompt was inside the conversation, for full context. Imagine if we could simply handle this with a /ask command: any words following the command would be passed to GPT-4.
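As a rough sketch of the idea (TypeScript, with a hypothetical function name; the actual forwarding to GPT-4 is omitted), the command parsing could look like:

```typescript
// Hypothetical sketch of /ask command parsing: everything after the command
// keyword becomes the prompt that would be forwarded to the model.
// Returns null when the comment is not an /ask invocation.
function parseAskCommand(body: string): string | null {
  const match = body.trim().match(/^\/ask\s+([\s\S]+)/);
  return match ? match[1].trim() : null;
}
```

This only shows how the words following the command would be captured; the model call and comment posting would sit on top of it.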

On one hand, it feels a bit extraneous as a feature. On the other hand, we plan to lean pretty heavily into AI-powered features for version one of the bot, so I feel this idea is not totally off course.

I got this from GPT
@pavlovcik

One possible solution to this problem is to modify the foundry test scenario so that it checks for a range of acceptable facet numbers, instead of a fixed number. For example, instead of testing for an exact number of facets, the test could be modified to check that the number of facets falls within a certain range, such as between a minimum and maximum value. This would allow for some flexibility in the number of facets that are acceptable and would be less likely to break with each pull request.

Another possible solution is to automate the process of updating the foundry test scenario with each pull request. This could be done using a script or tool that automatically reads the number of facets from the 3D model or other source and updates the test scenario accordingly. This would ensure that the test scenario always matches the actual number of facets in the model and would reduce the likelihood of errors or discrepancies.

It may also be possible to use a combination of these approaches, such as setting a range of acceptable facet numbers and automating the process of updating the test scenario with each pull request. This would provide both flexibility and consistency in the testing process.

Originally posted by @FibrinLab in ubiquity/ubiquity-dollar#629 (comment)
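The range-check suggestion above could be sketched like this (TypeScript with hypothetical names, purely illustrative; the actual foundry tests are written in Solidity):

```typescript
// Sketch of the range-check idea: assert that the facet count falls within
// an acceptable window instead of matching an exact number, so routine
// pull requests that add or remove a facet don't break the test.
function assertFacetCountInRange(count: number, min: number, max: number): void {
  if (count < min || count > max) {
    throw new Error(`facet count ${count} is outside the acceptable range [${min}, ${max}]`);
  }
}
```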

@Keyrxng
Member

Keyrxng commented Aug 23, 2023

/start

@ubiquibot

ubiquibot bot commented Aug 23, 2023

Deadline Wed, 23 Aug 2023 17:36:41 UTC
Registered Wallet 0xAe5D1F192013db889b1e2115A370aB133f359765
Tips:
  • Use /wallet 0x0000...0000 if you want to update your registered payment wallet address @user.
  • Be sure to open a draft pull request as soon as possible to communicate updates on your progress.
  • Be sure to provide timely updates to us when requested, or you will be automatically unassigned from the bounty.

    @Keyrxng Keyrxng mentioned this issue Aug 23, 2023
    @Keyrxng
    Member

    Keyrxng commented Aug 23, 2023

    Take a look [here](https://github.com/Keyrxng/didactic-octo-train/issues/8)

    @0x4007
    Member Author

    0x4007 commented Aug 26, 2023

    https://api.github.com/repos/ubiquity/ubiquibot/issues/291/comments

    Scrubbed:

    [
      {
        "id": 1690282995,
        "user": {
          "login": "Keyrxng",
          "id": 106303466,
          "type": "User"
        },
        "created_at": "2023-08-23T16:36:38Z",
        "updated_at": "2023-08-23T16:36:38Z",
        "body": "/start"
      },
      {
        "id": 1690283105,
        "user": {
          "login": "ubiquibot[bot]",
          "id": 113181824,
          "type": "Bot"
        },
        "created_at": "2023-08-23T16:36:43Z",
        "updated_at": "2023-08-23T16:36:43Z",
        "body": "\n<code>\n\n  <table>\n  <tr>\n    <td></td>\n    <td></td>\n  </tr>\n  <tr>\n    <td>Deadline</td>\n    <td>Wed, 23 Aug 2023 17:36:41 UTC</td>\n  </tr>\n  <tr>\n    <td>Registered Wallet</td>\n    <td>0xAe5D1F192013db889b1e2115A370aB133f359765</td>\n  </tr>\n  \n  \n  \n  </table>\n</code><h6>Tips:</h6>\n    <ul>\n    <li>Use <code>/wallet 0x0000...0000</code> if you want to update your registered payment wallet address @user.</li>\n    <li>Be sure to open a draft pull request as soon as possible to communicate updates on your progress.</li>\n    <li>Be sure to provide timely updates to us when requested, or you will be automatically unassigned from the bounty.</li>\n    <ul>"
      },
      {
        "id": 1690302671,
        "user": {
          "login": "Keyrxng",
          "id": 106303466,
          "type": "User"
        },
        "created_at": "2023-08-23T16:50:15Z",
        "updated_at": "2023-08-23T16:50:15Z",
        "body": "Take a look [here](https://github.com/Keyrxng/didactic-octo-train/issues/8)"
      }
    ]

    @Keyrxng
    Member

    Keyrxng commented Sep 23, 2023

    I don't think there's a default config anymore; I had a hard time using my repo after a recent update.

    It kept telling me to remove some data from the config that's no longer needed and to add some new entries, and it also renamed some before it started working again.

    It did the same for me, but looking at your repo config you only have two here.

    Where is it reading the rest from for you?

    Scratch that, I just noticed your dev branch is 90 commits behind, and I'm not sure which is your most recent branch to check myself.

    @Keyrxng
    Member

    Keyrxng commented Sep 23, 2023

    Ahhh okay, maybe I'll have to do the same then if that's the only way things are working at the moment, but it's not ideal.

    @seprintour
    Contributor

    I'm going to test with my repo now. Besides, I think #796 needs to be fixed. It's not easily noticeable, but new updates will not be reflected while this bug exists.

    @Keyrxng
    Member

    Keyrxng commented Sep 23, 2023

    I spotted whilefoo raise this on TG at the time and tried to reproduce it, but could not. To this day I still haven't, and I've done a fresh install multiple times (again about 10 minutes ago), so #796 doesn't affect me somehow.

    This was referenced Sep 23, 2023
    @ubiquibot

    ubiquibot bot commented Sep 27, 2023

    Do you have any updates @Keyrxng? If you would like to release the bounty back to the DevPool, please comment /stop
    Last activity time: Sat Sep 23 2023 18:19:11 GMT+0000 (Coordinated Universal Time)

    @ubiquibot ubiquibot bot unassigned Keyrxng Oct 1, 2023
    @Keyrxng
    Member

    Keyrxng commented Oct 2, 2023

    /start

    @ubiquibot

    ubiquibot bot commented Oct 2, 2023

    Deadline Mon, 09 Oct 2023 18:16:41 UTC
    Registered Wallet 0xAe5D1F192013db889b1e2115A370aB133f359765
    Tips:
    • Use /wallet 0x0000...0000 if you want to update your registered payment wallet address @user.
    • Be sure to open a draft pull request as soon as possible to communicate updates on your progress.
    • Be sure to provide timely updates to us when requested, or you will be automatically unassigned from the bounty.

      @Keyrxng
      Member

      Keyrxng commented Oct 2, 2023

      /start

      @ubiquibot

      ubiquibot bot commented Oct 2, 2023

      Skipping /start since the issue is already assigned

      @Keyrxng
      Member

      Keyrxng commented Oct 2, 2023

      It's working on my org repo now that I've worked around the default config still not working.

      All I've done is pass in 4000 via the private settings repo at the path configRepo/.github/config.yml.

      Is this still relevant? Busy week here, but I didn't think it was resolved.

      Using LangChain we can pass -1 for the token count, which requests however many tokens remain in the context window as the response, just a little FYI:

      this.llm = new OpenAI({
        openAIApiKey: this.apiKey,
        modelName: 'gpt-3.5-turbo-16k',
        maxTokens: -1,
      });

      This would be an easy and simple fix for our token problems

      I'm still getting the "no such file" error for \lib\assets\images\pmg.png']. I've created /lib/assets/pmg.png and still no joy; it's exiting the process after responding for me. Any fix on this?

      @Keyrxng
      Member

      Keyrxng commented Oct 2, 2023

      I'm assuming that you've probably used an API key that is shared with other things, but @pavlovcik, by any chance have you created a key whose usage we can check, to see whether the /ask call is actually being made when invoked from this issue?

      I cannot reproduce the non-response: I either get errors or it responds to me as it should.

      @Keyrxng
      Member

      Keyrxng commented Oct 2, 2023

      Introduced `tokenLimit: openAITokenLimit || 0`, which will fail if the token limit is undefined, since 0 is an invalid value for max_tokens. So we'll have to define a reasonable value, as rndquu requested before; my recommendation is probably about 60/40 for the size of the issues being parsed and linked in this org.
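A minimal sketch of the fallback (TypeScript; the default value and names here are assumptions, not the agreed number):

```typescript
// 0 is an invalid value for max_tokens, so fall back to a sane positive
// default when no limit is configured, instead of `openAITokenLimit || 0`.
const DEFAULT_TOKEN_LIMIT = 4000; // hypothetical default, to be agreed on

function resolveTokenLimit(openAITokenLimit?: number): number {
  return openAITokenLimit && openAITokenLimit > 0
    ? openAITokenLimit
    : DEFAULT_TOKEN_LIMIT;
}
```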

      @Keyrxng
      Member

      Keyrxng commented Oct 2, 2023

      It's probably not the case, but I'm sure I said it before: parallel /asks on different repos would fail for me on at least one of them. That would imply every call so far happened at the same time as someone else's, which is absurd, but it's still something to consider down the road, I think.

      @Keyrxng
      Member

      Keyrxng commented Oct 4, 2023

      @pavlovcik @rndquu bump

      Bumping, as I feel I've dragged this out far longer than acceptable by being MIA last week; hoping to sort the three out this week ASAP.

      @ubiquibot

      ubiquibot bot commented Oct 8, 2023

      Do you have any updates @Keyrxng? If you would like to release the bounty back to the DevPool, please comment /stop
      Last activity time: Wed Oct 04 2023 09:36:18 GMT+0000 (Coordinated Universal Time)

      @rndquu
      Member

      rndquu commented Oct 10, 2023

      The /ask command is working fine in the latest development branch.

      The thing is that right now the bot's production build is pinned to the 1st of September, which doesn't have the /ask command codebase, so we can't use it in production until we switch to the bot's latest build.

      Anyway, this issue can be closed as completed, because all we need to do is:

      1. Switch to the latest bot's build
      2. Set openai bot params (already set)

      @Keyrxng
      Member

      Keyrxng commented Oct 10, 2023

      Yes @rndquu, love to hear it! So everything is all good with this, and I can finish the rest of the PRs that rely on and/or use the same functionality. I knew there wasn't anything on my end; I did everything I could think of to debug and resolve this, and truth be told I was rather worried, so I'm glad you have put this to bed.

      That `tokenLimit || 0` needs to be updated, as it'll cause headaches, but all good otherwise.

      @ubiquibot

      ubiquibot bot commented Oct 14, 2023

      Do you have any updates @Keyrxng? If you would like to release the bounty back to the DevPool, please comment /stop
      Last activity time: Tue Oct 10 2023 14:07:18 GMT+0000 (Coordinated Universal Time)

      @Keyrxng
      Member

      Keyrxng commented Oct 14, 2023

      Everything is working as it should be

      I removed my assignment to avoid any more bot updates.
