Ask ChatGPT /ask #291
https://api.github.com/repos/ubiquity/ubiquibot/issues/291/comments Scrubbed: [
{
"id": 1690282995,
"user": {
"login": "Keyrxng",
"id": 106303466,
"type": "User"
},
"created_at": "2023-08-23T16:36:38Z",
"updated_at": "2023-08-23T16:36:38Z",
"body": "/start"
},
{
"id": 1690283105,
"user": {
"login": "ubiquibot[bot]",
"id": 113181824,
"type": "Bot"
},
"created_at": "2023-08-23T16:36:43Z",
"updated_at": "2023-08-23T16:36:43Z",
"body": "\n<code>\n\n <table>\n <tr>\n <td></td>\n <td></td>\n </tr>\n <tr>\n <td>Deadline</td>\n <td>Wed, 23 Aug 2023 17:36:41 UTC</td>\n </tr>\n <tr>\n <td>Registered Wallet</td>\n <td>0xAe5D1F192013db889b1e2115A370aB133f359765</td>\n </tr>\n \n \n \n </table>\n</code><h6>Tips:</h6>\n <ul>\n <li>Use <code>/wallet 0x0000...0000</code> if you want to update your registered payment wallet address @user.</li>\n <li>Be sure to open a draft pull request as soon as possible to communicate updates on your progress.</li>\n <li>Be sure to provide timely updates to us when requested, or you will be automatically unassigned from the bounty.</li>\n <ul>"
},
{
"id": 1690302671,
"user": {
"login": "Keyrxng",
"id": 106303466,
"type": "User"
},
"created_at": "2023-08-23T16:50:15Z",
"updated_at": "2023-08-23T16:50:15Z",
"body": "Take a look [here](https://github.com/Keyrxng/didactic-octo-train/issues/8)"
}
] |
It did the same for me, but looking at your repo config you only have two here. Where is it reading the rest from for you? ^^^^^ Scratch that: I just noticed your dev branch is 90 commits behind, and I'm not sure which is your most recent branch to check myself |
Ahhh okay, maybe I'll have to do the same then if that's the only way things are working at the moment, but it's not ideal |
I'm going to test with my repo now; besides, I think #796 needs to be fixed. It's not easily noticeable, but new updates will not be reflected with this bug |
I spotted whilefoo raising this on TG at the time and tried to reproduce it, but could not. To this day I still haven't, and I've done a fresh install multiple times (again like 10 minutes ago), so #796 doesn't affect me somehow |
Do you have any updates @Keyrxng? If you would like to release the bounty back to the DevPool, please comment |
/start |
Tips:
|
/start |
Skipping |
It's working on my org repo now that I've gotten around the default config still not working. All I've done is pass in 4000 via the private settings repo with the path. Is this still relevant? Busy week there, but I didn't think it was resolved. Using langchain we can pass -1 for the token count, which just passes the remaining tokens in as the requested response size, just a little FYI:

    this.llm = new OpenAI({
      openAIApiKey: this.apiKey,
      modelName: 'gpt-3.5-turbo-16k',
      maxTokens: -1,
    })

This would be an easy and simple fix for our token problems. I'm still getting the no such file error for |
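For reference, the effect of `maxTokens: -1` can also be sketched as an explicit budget calculation. This is a hypothetical helper, not the bot's actual code, and the 16,385-token context window for gpt-3.5-turbo-16k is an assumed figure:

```typescript
// Illustrative sketch only: compute the response budget yourself instead of
// relying on maxTokens: -1. CONTEXT_WINDOW is an assumption for
// gpt-3.5-turbo-16k; `reserve` leaves headroom for system messages, etc.
const CONTEXT_WINDOW = 16385;

function remainingTokens(promptTokens: number, reserve = 0): number {
  // Clamp at zero so we never request a negative number of tokens.
  return Math.max(0, CONTEXT_WINDOW - promptTokens - reserve);
}
```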
I'm assuming that you've probably used an API key that is used for other things, but @pavlovcik, by chance have you created a key whose usage we can check, to see whether the /ask call is being made when invoked from this issue? I cannot reproduce the non-response: either I get errors or it responds to me as it should |
Introduced `tokenLimit: openAITokenLimit || 0`, which will result in it failing if the tokenLimit is undefined, as 0 is an invalid value for max_tokens. So we'll have to define a reasonable value as rndquu requested before; my recommendation is probably about 60/40 for the size of the issues being parsed and linked in this org |
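A minimal sketch of the fallback being suggested, assuming the 4000 value mentioned earlier in the thread as the default; the function name and constant are illustrative, not the bot's API:

```typescript
// Illustrative only: fall back to a sane default instead of 0, since 0 is
// an invalid max_tokens value. DEFAULT_TOKEN_LIMIT is an assumption based
// on the 4000 figure mentioned in this thread.
const DEFAULT_TOKEN_LIMIT = 4000;

function resolveTokenLimit(openAITokenLimit?: number): number {
  // Treat undefined, 0, and negative values as "use the default".
  return openAITokenLimit && openAITokenLimit > 0
    ? openAITokenLimit
    : DEFAULT_TOKEN_LIMIT;
}
```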
It's probably not the case, but I'm sure I said it before: parallel |
@pavlovcik @rndquu bump. Bumping as I feel I've dragged this out far longer than acceptable by being MIA last week; hoping to sort the 3 out this week ASAP |
Do you have any updates @Keyrxng? If you would like to release the bounty back to the DevPool, please comment |
The thing is that right now the bot's production build is set to the 1st of September, which doesn't have it. Anyway, this issue can be closed as completed, because all we need to do is:
|
Yes @rndquu, love to hear it! So everything is all good with this, and I can finish the rest of the PRs that rely on and/or use the same functionality. I knew there wasn't anything on my end; I did everything I could think of to debug and resolve this, lmao, was rather shitting it truth be told, so I'm glad you have put this to bed. That `tokenLimit || 0` needs to be updated as that'll cause headaches, but all good otherwise |
Do you have any updates @Keyrxng? If you would like to release the bounty back to the DevPool, please comment |
Everything is working as it should be. Removed my assignment to avoid any more bot updates |
There have been several instances (including with myself) where I would answer a question presented in a pull request review, or in an issue conversation, by asking ChatGPT and pasting in the results.
It could be very nice to see what exactly the original prompt was inside of the conversation for full context. Imagine if we can simply handle this by using a
/ask
command? Any of the words following the command would be passed into GPT-4. On one hand, it feels a bit extraneous as a feature. On the other hand, we do plan to lean pretty heavily into the AI-powered features for version one of the bot, so I feel that this idea is not totally off course.
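The "words following the command" idea can be sketched roughly as follows. This is a hypothetical parser for illustration, not the bot's actual implementation, and the function name is invented:

```typescript
// Illustrative sketch: extract the prompt text that follows a /ask command
// in a comment body. Returns null when the comment is not an /ask command
// or carries no prompt text.
function parseAskCommand(commentBody: string): string | null {
  const match = commentBody.trim().match(/^\/ask\s+([\s\S]+)/);
  return match ? match[1].trim() : null;
}
```

Everything after `/ask` (including newlines, via `[\s\S]`) would then be forwarded to the model as the prompt.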
Originally posted by @FibrinLab in ubiquity/ubiquity-dollar#629 (comment)