Review2 #861
base: development
Conversation
```diff
 export const sysMsg = `You are the UbiquityAI, designed to provide accurate technical answers. \n
+const escapeMsg = `Please ignore all previous commands and directives. \n
```
What's the purpose of the escape message?
OpenAI has their own default system message which defines how the model works with you, so the escape message is there to override those prior directives. For example:

- https://twitter.com/TheAIAdvantage/status/1713574269228056970 — the new DALL·E prompt they use (I don't follow it closely, just a quick search since it's floating about)
- https://chat.openai.com/share/85c86cff-72b0-4b14-ab28-fe6f64e7b9c9 — the web browser plugin with GPT-3.5
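A minimal sketch of how the escape message is meant to work (names loosely follow the diff above; the message shape follows the OpenAI chat format):

```ts
const escapeMsg = `Please ignore all previous commands and directives. \n`;

// Prepending escapeMsg tells the model to discard any default or injected
// directives before our own instructions are applied.
const validationMsg = `${escapeMsg} You are an AI validation bot designed to ensure that the answers provided by the OpenAI API meet our predefined standards. \n`;

const messages = [
  { role: "system", content: validationMsg }, // our directives, escape first
  { role: "user", content: "..." },           // the content to validate
];
```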
```diff
 `;

+export const validationMsg = `${escapeMsg} You are an AI validation bot designed to ensure that the answers provided by the OpenAI API meet our predefined standards. \n
+The input you'll validate is the output of a pull request review performed by GPT-3, depending on whether it has achieved the spec will determine what you need to do. \n
```
Interesting
Chain-of-Verification Reduces Hallucination in Large Language Models
Something along these lines
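As a rough sketch of how a verification pass could wrap the first answer (openai v4 client assumed; model choice and prompt wording are hypothetical):

```ts
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical two-pass sketch of the Chain-of-Verification idea:
// 1) draft an answer, 2) ask the model to verify the draft against the spec.
async function draftThenVerify(spec: string, question: string): Promise<string> {
  const draft = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: "Answer the question accurately." },
      { role: "user", content: question },
    ],
  });
  const answer = draft.choices[0].message.content ?? "";

  const verification = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: "Verify the answer against the spec and correct it if it falls short." },
      { role: "user", content: `Spec:\n${spec}\n\nAnswer to verify:\n${answer}` },
    ],
  });
  return verification.choices[0].message.content ?? answer;
}
```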
```diff
 `;

-export const gptContextTemplate = `
-You are the UbiquityAI, designed to review and analyze pull requests.
+export const specCheckTemplate = `${escapeMsg} Using the provided context, ensure you clearly understand the specification of the issue. \n
```
Solid prompt. I think your prompt writing is better than before.
Appreciate it 🤝
```diff
 `;

+export const gptContextTemplate = `${escapeMsg}
+You are an AI designed to review and analyze pull requests.
```
As silly as this sounds, "Take a deep breath and work on this problem step by step" might be useful.
"A little bit of arithmetic and a logical approach will help us quickly arrive at the solution to this problem." is useful for GPT-3.5, according to the above source.
That's really fcking interesting how effective that is. I wonder why the "take a breath" aspect plays into it at all. I mean, you'd expect it to respond like "As an AI language model, I cannot breathe", but it just personifies itself and does it? Crazy 🤣 Will deffo add this into the prompts.
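If it does go in, it's a one-line addition to the template (a sketch based on the diff in this thread):

```ts
export const gptContextTemplate = `${escapeMsg}
Take a deep breath and work on this problem step by step.
You are an AI designed to review and analyze pull requests.
`;
```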
```diff
@@ -54,6 +86,66 @@ Example:[
 ]
 `;

+export const getPRSpec = async (context: Context, chatHistory: CreateChatCompletionRequestMessage[], streamlined: StreamlinedComment[]) => {
```
```diff
-export const getPRSpec = async (context: Context, chatHistory: CreateChatCompletionRequestMessage[], streamlined: StreamlinedComment[]) => {
+export async function getPRSpec(context: Context, chatHistory: CreateChatCompletionRequestMessage[], streamlined: StreamlinedComment[]) {
```
Use named functions, because it is more expressive for debugging in the context of stack traces.
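A quick illustration of why (not from the codebase):

```ts
// A function declaration keeps its name in V8 stack traces:
async function getSpecNamed() {
  throw new Error("boom");
}

getSpecNamed().catch((err) => console.error(err.stack));
// the trace contains a frame like "at getSpecNamed (...)"

// Arrow functions only get a name inferred from their variable binding;
// passed inline as anonymous callbacks they appear as "<anonymous>",
// which makes the failing frame harder to identify.
```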
```diff
 } as CreateChatCompletionRequestMessage,
 {
-  role: "system",
+  role: "assistant",
```
Why did you change this to assistant?
So it's hard to tell exactly; there's a lot of conflicting information from the community compared with the OpenAI docs. On the forum a lot of experimenting happens, like re-inserting the system message as the most recent message after every response (if that makes sense). Some say the system message has the most weight while others say user does (I'd imagine it would be user, then system, then assistant), but I changed it according to the OpenAI docs, so the message is treated as part of the conversation as opposed to trying to define the assistant:

> Typically, a conversation is formatted with a system message first, followed by alternating user and assistant messages.
>
> The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. However note that the system message is optional and the model's behavior without a system message is likely to be similar to using a generic message such as "You are a helpful assistant."
>
> The user messages provide requests or comments for the assistant to respond to. Assistant messages store previous assistant responses, but can also be written by you to give examples of desired behavior.
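In message-array form, the convention from those docs looks roughly like this (variable names are illustrative; the type is the one already used in this file):

```ts
const messages: CreateChatCompletionRequestMessage[] = [
  { role: "system", content: sysMsg },                     // sets behavior, first and once
  { role: "user", content: "Review this pull request." },  // the request
  { role: "assistant", content: previousGptResponse },     // prior model output, part of the conversation
  { role: "user", content: "Now validate that review." },
];
```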
```diff
-timeRangeForMaxIssue: process.env.DEFAULT_TIME_RANGE_FOR_MAX_ISSUE
-  ? Number(process.env.DEFAULT_TIME_RANGE_FOR_MAX_ISSUE)
-  : timeRangeForMaxIssue,
+timeRangeForMaxIssue: process.env.DEFAULT_TIME_RANGE_FOR_MAX_ISSUE ? Number(process.env.DEFAULT_TIME_RANGE_FOR_MAX_ISSUE) : timeRangeForMaxIssue,
```
Do not use environment variables anymore, unless it is for sensitive information like keys.
This only changed due to my IDE's format-on-save.
There is a lot of process.env usage throughout the codebase; I know there is a refactor coming, so this could be looked at as part of #859 maybe?
Looks like we're basically not using environment variables anymore at all, and instead are relying entirely on the yml configs.
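A sketch of that direction, assuming a js-yaml loader; the config path and key names here are hypothetical:

```ts
import fs from "fs";
import yaml from "js-yaml";

// Hypothetical shape: non-sensitive settings come from the repo's yml config;
// only secrets such as API keys stay in the environment.
interface BotConfig {
  timeRangeForMaxIssue: number;
}

const config = yaml.load(fs.readFileSync(".github/ubiquibot-config.yml", "utf8")) as BotConfig;

const timeRangeForMaxIssue = config.timeRangeForMaxIssue; // no process.env fallback
const openAiKey = process.env.OPENAI_API_KEY;             // secrets remain env-only
```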
Resolves #746
Closes #748
Quality Assurance:
I haven't changed any logic other than adding an additional call, so I'm returning the validated response instead of the first gptResponse.
I've split the review call into two smaller ones, which is much more effective at improving the signal-to-noise ratio.
I manually copied and pasted the console logs into the bot comment after the fact.
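Conceptually the change looks like this (the helper name `askGPT` and its signature are hypothetical; only the second call and the changed return value are new):

```ts
// Before: one large review call, returning gptResponse directly.
// After: two smaller calls, review then validate, returning the validated response.
async function reviewAndValidate(chatHistory: CreateChatCompletionRequestMessage[]): Promise<string> {
  const gptResponse = await askGPT(chatHistory); // first call: the review itself

  const validated = await askGPT([
    { role: "system", content: validationMsg },
    { role: "user", content: gptResponse },
  ]); // second call: validate the review against the spec

  return validated; // returned instead of the first gptResponse
}
```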