chore: refactor to recent kernel logic
Showing 15 changed files with 550 additions and 375 deletions.
@@ -0,0 +1,14 @@

```yaml
plugins:
  issue_comment.created:
    - uses:
        - plugin: ubq-testing/ubiquibot-ask-plugin:compute.yml@development
          name: Research
          id: research-command
          type: github
          description: "Query GPT-4 direct from issues and pull requests."
          command: "/research"
          example: "/research The spec for this issue is unclear. Can you explain it in simpler terms?"
          with:
            keys:
              openAi: ""
            disabledCommands: []
```
@@ -0,0 +1,42 @@

```yaml
name: Research Command
on:
  workflow_dispatch:
    inputs:
      stateId:
        description: "State Id"
      eventName:
        description: "Event Name"
      eventPayload:
        description: "Event Payload"
      settings:
        description: "Settings"
      authToken:
        description: "Auth Token"
      ref:
        description: "Ref"

jobs:
  compute:
    name: Research
    runs-on: ubuntu-latest
    permissions: write-all
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: "20.10.0"

      - name: Yarn Install
        run: yarn install

      - name: Research
        run: npx tsx ./src/main.ts
        id: research
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
@@ -1,16 +1,30 @@

```diff
-# `@ubiquibot/ask-plugin`
+# `@ubiquibot-plugins/research`

-- The purpose of this module is to isolate the ``/ask`` command from the rest of the bot.
-- This module is a plugin for the [Ubiquibot](https://github.com/ubiquibot)
+This plugin integrates OpenAI's GPT-4 into any issue or pull request in a repository where your Ubiquibot is installed. It allows you to ask questions and get answers from the GPT-4 model.

-### Abilities
-- This module can be used to ask GPT-3.5-turbo questions on an issue or pull request where the bot is installed.
-- It will parse any linked issues or PRs from within the body of the issue or pull request and use that as context for the question.
-- It can be used to summarize discussions, provide insight into issues such as expanding or breaking down the specification or requirements, or to provide a summary of the current state of the issue or pull request.
+## Usage

-### Usage on an issue or pull request
+To use this plugin, an end user must invoke the `/research` command. Any text following the command is treated as the question context for the LLM. In addition to the direct question, the LLM is provided with all conversational context from the current issue/pull request as well as any linked issues/pull requests. This enables a highly context-aware response from the LLM.

+![alt text](invokePreview.png)

-### Kernel usage
+## [Configuration](/src/plugin-config.yml)

+To configure your Ubiquibot to run this plugin, add the following to your [`.ubiquibot-config.yml`](./.github/.ubiquibot-config.yml) file at either the organization or repository level:

+```yaml
+plugins:
+  issue_comment.created:
+    - uses:
+        - plugin: ubiquibot-plugins/ubiquibot-ask-plugin:compute.yml@development
+          name: Research
+          id: research-command
+          type: github
+          description: "Access a highly context-aware GPT-4 embedded directly into your issues and pull requests."
+          command: "/research"
+          example: "/research The spec for this issue is unclear. Can you explain it in simpler terms?"
+          with:
+            keys:
+              openAi: ""
+            disabledCommands: []
+```
```
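The `/research` invocation described above implies a small parsing step: everything after the command keyword becomes the question for the LLM. A minimal sketch of that step follows; the helper name and regex are illustrative assumptions, not the plugin's actual implementation.

```typescript
// Hypothetical helper: extract the question text that follows the /research command.
// The function name and regex are assumptions for illustration only.
function parseResearchCommand(commentBody: string): string | null {
  const match = commentBody.trim().match(/^\/research\s+([\s\S]+)/);
  return match ? match[1].trim() : null;
}
```

Comments that do not begin with the command would simply be ignored by the handler.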
@@ -0,0 +1,172 @@

```typescript
import OpenAI from "openai";
import { CreateChatCompletionRequestMessage } from "openai/resources/chat";
import { errorDiff } from "../utils/errorDiff";
import { getAllIssueComments, getAllLinkedIssuesAndPullsInBody } from "../utils/getIssueComments";
import { StreamlinedComment, UserType } from "../types/response";
import { Issue } from "@octokit/webhooks-types";
import { Context } from "../types/context";
import { ResearchSettings } from "../types/plugin-input";

export const sysMsg = `You are the UbiquityAI, designed to provide accurate technical answers. \n
Whenever appropriate, format your response using GitHub Flavored Markdown. Utilize tables, lists, and code blocks for clear and organized answers. \n
Do not make up answers. If you are unsure, say so. \n
Original Context exists only to provide you with additional information to the current question, use it to formulate answers. \n
Infer the context of the question from the Original Context using your best judgement. \n
All replies MUST end with "\n\n <!--- { 'UbiquityAI': 'answer' } ---> ".\n
`;

export const gptContextTemplate = `
You are the UbiquityAI, designed to review and analyze pull requests.
You have been provided with the spec of the issue and all linked issues or pull requests.
Using this full context, reply in pure JSON format, with the following structure, omitting irrelevant information pertaining to the specification.
You MUST provide the following structure, but you may add additional information if you deem it relevant.
Example: [
  {
    "source": "issue #123",
    "spec": "This is the issue spec",
    "relevant": [
      {
        "login": "user",
        "body": "This is the relevant context",
        "relevancy": "Why is this relevant to the spec?"
      },
      {
        "login": "other_user",
        "body": "This is other relevant context",
        "relevancy": "Why is this relevant to the spec?"
      }
    ]
  },
  {
    "source": "Pull #456",
    "spec": "This is the pull request spec",
    "relevant": [
      {
        "login": "user",
        "body": "This is the relevant context",
        "relevancy": "Why is this relevant to the spec?"
      },
      {
        "login": "other_user",
        "body": "This is other relevant context",
        "relevancy": "Why is this relevant to the spec?"
      }
    ]
  }
]
`;

/**
 * @notice best used alongside getAllLinkedIssuesAndPullsInBody() in helpers/issue
 * @param chatHistory the conversational context to provide to GPT
 * @param streamlined an array of comments in the form of { login: string, body: string }
 * @param linkedPRStreamlined an array of comments in the form of { login: string, body: string }
 * @param linkedIssueStreamlined an array of comments in the form of { login: string, body: string }
 */
export async function decideContextGPT(
  context: Context,
  repository: Context["payload"]["repository"],
  issue: Context["payload"]["issue"] | Issue | undefined,
  chatHistory: CreateChatCompletionRequestMessage[],
  streamlined: StreamlinedComment[],
  linkedPRStreamlined: StreamlinedComment[],
  linkedIssueStreamlined: StreamlinedComment[]
) {
  const logger = console;
  if (!issue) {
    return `Payload issue is undefined`;
  }

  // standard comments
  const comments = await getAllIssueComments(context, repository, issue.number);

  if (!comments) {
    logger.info(`Error getting issue comments`);
    return `Error getting issue comments`;
  }

  // add the first comment of the issue/pull request
  streamlined.push({
    login: issue.user.login,
    body: issue.body ?? "",
  });

  // add the rest: human comments, plus the bot's own marked answers
  comments.forEach((comment) => {
    if (comment.user.type === UserType.User || comment.body.includes("<!--- { 'UbiquityAI': 'answer' } --->")) {
      streamlined.push({
        login: comment.user.login,
        body: comment.body,
      });
    }
  });

  // returns the conversational context from all linked issues and PRs
  const links = await getAllLinkedIssuesAndPullsInBody(context, repository, issue.number);

  if (typeof links === "string" || !links) {
    logger.info(`Error getting linked issues or PRs: ${links}`);
    return `Error getting linked issues or PRs: ${links}`;
  }

  linkedIssueStreamlined = links.linkedIssues;
  linkedPRStreamlined = links.linkedPrs;

  chatHistory.push(
    {
      role: "system",
      content: "This issue/PR context: \n" + JSON.stringify(streamlined),
      name: "UbiquityAI",
    } as CreateChatCompletionRequestMessage,
    {
      role: "system",
      content: "Linked issue(s) context: \n" + JSON.stringify(linkedIssueStreamlined),
      name: "UbiquityAI",
    } as CreateChatCompletionRequestMessage,
    {
      role: "system",
      content: "Linked PR(s) context: \n" + JSON.stringify(linkedPRStreamlined),
      name: "UbiquityAI",
    } as CreateChatCompletionRequestMessage
  );

  // we'll use the first response to determine the context of future calls
  return await askGPT(context.config, "", chatHistory);
}

/**
 * @notice base askGPT function
 * @param question the question to ask
 * @param chatHistory the conversational context to provide to GPT
 */
export async function askGPT(config: ResearchSettings, question: string, chatHistory: CreateChatCompletionRequestMessage[]) {
  if (!config.keys.openAi) {
    console.error(`No OpenAI API Key provided`);
    return errorDiff("You must configure the `openai-api-key` property in the bot configuration in order to use AI powered features.");
  }

  const openAI = new OpenAI({
    apiKey: config.keys.openAi,
  });

  const res: OpenAI.Chat.Completions.ChatCompletion = await openAI.chat.completions.create({
    messages: chatHistory,
    model: "gpt-3.5-turbo-16k",
    temperature: 0,
  });

  const answer = res.choices[0]?.message.content;

  // guard before reading usage, rather than after (the original checked `res` too late)
  if (!answer) {
    console.info(`No answer found for question: ${question}`);
    return `No answer found for question: ${question}`;
  }

  const tokenUsage = {
    output: res.usage?.completion_tokens,
    input: res.usage?.prompt_tokens,
    total: res.usage?.total_tokens,
  };

  return { answer, tokenUsage };
}
```
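The comment-filtering rule in `decideContextGPT` (keep human comments, plus the bot's own answers detected via the marker string) can be isolated as a pure function. The sketch below uses simplified, assumed shapes for the comment objects and is not code from this commit:

```typescript
// Sketch of the filtering rule used in decideContextGPT; shapes are illustrative assumptions.
type StreamlinedComment = { login: string; body: string };
type RawComment = { user: { login: string; type: string }; body: string };

const ANSWER_MARKER = "<!--- { 'UbiquityAI': 'answer' } --->";

// Keep comments written by humans, plus the bot's own answers (identified by the marker),
// and reduce each to the { login, body } shape sent to the model.
function streamlineComments(comments: RawComment[]): StreamlinedComment[] {
  return comments
    .filter((c) => c.user.type === "User" || c.body.includes(ANSWER_MARKER))
    .map((c) => ({ login: c.user.login, body: c.body }));
}
```

This keeps the token budget focused on conversational content while still letting the model see its own earlier answers.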
@@ -0,0 +1,11 @@

```typescript
import * as core from "@actions/core";
import { run } from "./plugin";

run()
  .then((result) => {
    core.setOutput("result", result);
  })
  .catch((error) => {
    console.error(error);
    core.setFailed(error);
  });
```
@@ -0,0 +1,14 @@

```yaml
plugins:
  issue_comment.created:
    - uses:
        - plugin: ubq-testing/ubiquibot-ask-plugin:compute.yml@development
          name: Research
          id: research-command
          type: github
          description: "Query GPT-4 direct from issues and pull requests."
          command: "/research"
          example: "/research The spec for this issue is unclear. Can you explain it in simpler terms?"
          with:
            keys:
              openAi: ""
            disabledCommands: []
```