Openai function #96

Open · wants to merge 3 commits into main
39 changes: 38 additions & 1 deletion docs/src/content/docs/agents/built-in/openai-agent.mdx
@@ -13,6 +13,8 @@ The `OpenAIAgent` is a powerful agent class in the Multi-Agent Orchestrator fram
- Customizable inference configuration
- Handles conversation history for context-aware responses
- Customizable system prompts
- Optional integration with retrieval systems for enhanced context (TypeScript only)
- Support for tool use within the conversation flow (TypeScript only)

## Creating an OpenAIAgent

@@ -39,7 +41,41 @@ const agent = new OpenAIAgent({
topP: 0.9,
stopSequences: ['Human:', 'AI:']
},
  systemPrompt: 'You are a helpful AI assistant specialized in answering questions about technology.',
  toolConfig: {
    tool: [{
      type: 'function',
      function: {
        name: "Weather_Tool",
        description: "Get the current weather for a given location, based on its WGS84 coordinates.",
        parameters: {
          additionalProperties: false,
          type: "object",
          properties: {
            latitude: {
              type: "string",
              description: "Geographical WGS84 latitude of the location.",
            },
            longitude: {
              type: "string",
              description: "Geographical WGS84 longitude of the location.",
            },
          },
          required: ["latitude", "longitude"],
        },
        strict: true,
      },
    }],
    useToolHandler: (response, conversation) => {
      // Process the tool call: run the tool, then return a tool-result message.
      const toolCall = response.tool_calls[0];
      const result = executeTool(toolCall); // your tool execution logic (hypothetical helper)
      return {
        role: 'tool' as const,
        tool_call_id: toolCall.id,
        content: JSON.stringify(result),
      };
    },
    toolMaxRecursions: 5,
  },
});
```
</TabItem>
@@ -62,6 +98,7 @@ The `OpenAIAgentOptions` extends the base `AgentOptions` and includes the follow
- `topP` (optional): Controls diversity of output generation.
- `stopSequences` (optional): An array of sequences that, when generated, will stop the generation process.
- `systemPrompt` (optional): A string representing the initial system prompt for the agent.
- `toolConfig` (optional): Defines tools the agent can use and how to handle their responses.
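The `useToolHandler` contract can be sketched in isolation: it receives the assistant response containing `tool_calls` and must return a `role: 'tool'` message carrying the matching `tool_call_id`. The following is a simplified standalone illustration (the canned weather result and argument shape are assumptions for the example, not part of the library):

```typescript
// Standalone sketch of a useToolHandler implementation.
// A real handler would call an actual weather API instead of returning a canned result.
type ToolCall = { id: string; function: { name: string; arguments: string } };
type ToolMessage = { role: 'tool'; tool_call_id: string; content: string };

function useToolHandler(response: { tool_calls: ToolCall[] }): ToolMessage {
  const call = response.tool_calls[0];
  const args = JSON.parse(call.function.arguments);
  // Hypothetical tool execution: echo the coordinates with a fixed temperature.
  const result = { latitude: args.latitude, longitude: args.longitude, temperatureC: 21 };
  return { role: 'tool', tool_call_id: call.id, content: JSON.stringify(result) };
}
```

The agent appends the returned message to the conversation and re-invokes the model, repeating up to `toolMaxRecursions` times.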

## Setting the System Prompt

249 changes: 223 additions & 26 deletions typescript/src/agents/openAIAgent.ts
@@ -1,7 +1,8 @@
import { Agent, AgentOptions } from './agent';
import { ConversationMessage, OPENAI_MODEL_ID_GPT_O_MINI, ParticipantRole, TemplateVariables } from '../types';
import OpenAI from 'openai';
import { Logger } from '../utils/logger';
import { Retriever } from '../retrievers/retriever';

export interface OpenAIAgentOptions extends AgentOptions {
apiKey: string;
@@ -13,6 +14,15 @@ export interface OpenAIAgentOptions extends AgentOptions {
topP?: number;
stopSequences?: string[];
};
customSystemPrompt?: {
template: string, variables?: TemplateVariables
};
retriever?: Retriever;
toolConfig?: {
tool: OpenAI.ChatCompletionTool[];
useToolHandler: (response: any, conversation: any[]) => any;
toolMaxRecursions?: number;
};
}

const DEFAULT_MAX_TOKENS = 1000;
@@ -28,6 +38,18 @@ export class OpenAIAgent extends Agent {
stopSequences?: string[];
};

protected retriever?: Retriever;

private toolConfig?: {
tool: OpenAI.ChatCompletionTool[];
useToolHandler: (response: any, conversation: any[]) => any;
toolMaxRecursions?: number;
};

private promptTemplate: string;
private systemPrompt: string;
private customVariables: TemplateVariables;

constructor(options: OpenAIAgentOptions) {
super(options);
this.openai = new OpenAI({ apiKey: options.apiKey });
@@ -39,6 +61,35 @@
topP: options.inferenceConfig?.topP,
stopSequences: options.inferenceConfig?.stopSequences,
};

this.retriever = options.retriever;
    this.toolConfig = options.toolConfig;

this.systemPrompt = '';
this.customVariables = {};

this.promptTemplate = `You are a ${this.name}. ${this.description} Provide helpful and accurate information based on your expertise.
You will engage in an open-ended conversation, providing helpful and accurate information based on your expertise.
The conversation will proceed as follows:
- The human may ask an initial question or provide a prompt on any topic.
- You will provide a relevant and informative response.
- The human may then follow up with additional questions or prompts related to your previous response, allowing for a multi-turn dialogue on that topic.
- Or, the human may switch to a completely new and unrelated topic at any point.
- You will seamlessly shift your focus to the new topic, providing thoughtful and coherent responses based on your broad knowledge base.
Throughout the conversation, you should aim to:
- Understand the context and intent behind each new question or prompt.
- Provide substantive and well-reasoned responses that directly address the query.
- Draw insights and connections from your extensive knowledge when appropriate.
- Ask for clarification if any part of the question or prompt is ambiguous.
- Maintain a consistent, respectful, and engaging tone tailored to the human's communication style.
- Seamlessly transition between topics as the human introduces new subjects.`;

if (options.customSystemPrompt) {
this.setSystemPrompt(
options.customSystemPrompt.template,
options.customSystemPrompt.variables
);
}
}

/* eslint-disable @typescript-eslint/no-unused-vars */
@@ -49,8 +100,6 @@
chatHistory: ConversationMessage[],
additionalParams?: Record<string, string>
): Promise<ConversationMessage | AsyncIterable<any>> {


const messages = [
...chatHistory.map(msg => ({
role: msg.role.toLowerCase() as OpenAI.Chat.ChatCompletionMessageParam['role'],
@@ -59,6 +108,16 @@
{ role: 'user' as const, content: inputText }
] as OpenAI.Chat.ChatCompletionMessageParam[];

    this.updateSystemPrompt();

let systemPrompt = this.systemPrompt;

if (this.retriever) {
const response = await this.retriever.retrieveAndCombineResults(inputText);
const contextPrompt = "\nHere is the context to use to answer the user's question:\n" + response;
systemPrompt = systemPrompt + contextPrompt;
}

const { maxTokens, temperature, topP, stopSequences } = this.inferenceConfig;

const requestOptions: OpenAI.Chat.ChatCompletionCreateParams = {
@@ -69,18 +128,51 @@
temperature,
top_p: topP,
stop: stopSequences,
tools: this.toolConfig?.tool || undefined,
};

try {

if (this.streaming) {
return this.handleStreamingResponse(requestOptions);
} else {
let finalMessage: string = '';
let toolUse = false;
let recursions = this.toolConfig?.toolMaxRecursions || 5;

do {
const response = await this.handleSingleResponse(requestOptions);

if (response.tool_calls) {
messages.push(response);

if (!this.toolConfig) {
throw new Error('No tools available for tool use');
}

const toolResponse = await this.toolConfig.useToolHandler(response, messages);
messages.push(toolResponse);
toolUse = true;
} else {
finalMessage = response.content;
toolUse = false;
}

recursions--;
} while (toolUse && recursions > 0);

return {
role: ParticipantRole.ASSISTANT,
content: [{ text: finalMessage }],
};
}
} catch (error) {
Logger.logger.error('Error in OpenAI API call:', error);
throw error;
}
}

  private async handleSingleResponse(input: any): Promise<OpenAI.Chat.ChatCompletionMessage> {
try {
const nonStreamingOptions = { ...input, stream: false };
const chatCompletion = await this.openai.chat.completions.create(nonStreamingOptions);
@@ -89,33 +181,138 @@ throw new Error('No choices returned from OpenAI API');
throw new Error('No choices returned from OpenAI API');
}

const message = chatCompletion.choices[0].message;
return message as OpenAI.Chat.ChatCompletionMessage;
} catch (error) {
Logger.logger.error('Error in OpenAI API call:', error);
throw error;
}
}

setSystemPrompt(template?: string, variables?: TemplateVariables): void {
if (template) {
this.promptTemplate = template;
}

if (variables) {
this.customVariables = variables;
}

this.updateSystemPrompt();
}

private async * handleStreamingResponse(options: OpenAI.Chat.ChatCompletionCreateParams): AsyncIterable<string> {
let recursions = this.toolConfig?.toolMaxRecursions || 5;

while (recursions > 0) {
// Add tool calls to messages before creating stream
const messagesWithToolCalls = [...options.messages];

const stream = await this.openai.chat.completions.create({
...options,
messages: messagesWithToolCalls,
stream: true
});

let currentToolCalls: any[] = [];
let hasToolCalls = false;

for await (const chunk of stream) {
const toolCalls = chunk.choices[0]?.delta?.tool_calls;

if (toolCalls) {
for (const toolCall of toolCalls) {
if (toolCall.id) {
currentToolCalls.push({
id: toolCall.id,
function: toolCall.function,
});
}

if (toolCall.function?.arguments) {
const lastToolCall = currentToolCalls[currentToolCalls.length - 1];
lastToolCall.function.arguments = (lastToolCall.function.arguments || '') + toolCall.function.arguments;
}
}
}

if (chunk.choices[0]?.finish_reason === 'tool_calls') {
hasToolCalls = true;
const toolCallResults = [];

// Add tool calls to messages before processing
messagesWithToolCalls.push({
role: 'assistant',
tool_calls: currentToolCalls.map(tc => ({
id: tc.id,
type: 'function',
function: tc.function
}))
});

          for (const toolCall of currentToolCalls) {
            try {
              if (!this.toolConfig) {
                throw new Error('No tools available for tool use');
              }
              const toolResponse = await this.toolConfig.useToolHandler(
                { tool_calls: [toolCall] },
                messagesWithToolCalls
              );

toolCallResults.push({
role: 'tool',
tool_call_id: toolCall.id,
content: JSON.stringify(toolResponse)
});
} catch (error) {
console.error('Tool call error', error);
}
}

// Append tool call results to messages
messagesWithToolCalls.push(...toolCallResults);

// Update options for next iteration
options.messages = messagesWithToolCalls;

          currentToolCalls = [];
        }

        const content = chunk.choices[0]?.delta?.content;
        if (content) {
          yield content;
        }
      }

// Break if no tool calls were found
if (!hasToolCalls) break;

recursions--;
}
}

private updateSystemPrompt(): void {
const allVariables: TemplateVariables = {
...this.customVariables
};

    this.systemPrompt = this.replacePlaceholders(
      this.promptTemplate,
      allVariables
    );
  }

  private replacePlaceholders(
    template: string,
    variables: TemplateVariables
  ): string {
return template.replace(/{{(\w+)}}/g, (match, key) => {
if (key in variables) {
const value = variables[key];
if (Array.isArray(value)) {
return value.join("\n");
}
return value;
}
return match; // If no replacement found, leave the placeholder as is
});
}
}
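The `{{variable}}` substitution used to build the system prompt can be illustrated with a standalone sketch (a simplified re-implementation for illustration, not the library's own export):

```typescript
// Sketch of {{placeholder}} substitution: unknown keys are left intact,
// and array values are joined with newlines.
type TemplateVariables = Record<string, string | string[]>;

function replacePlaceholders(template: string, variables: TemplateVariables): string {
  return template.replace(/{{(\w+)}}/g, (match, key) => {
    const value = variables[key];
    if (value === undefined) return match; // leave unknown placeholders as-is
    return Array.isArray(value) ? value.join('\n') : value;
  });
}

// Example: "You are {{name}}." with { name: "TechBot" } becomes "You are TechBot."
```

Leaving unknown placeholders untouched (rather than substituting an empty string) makes missing variables visible in the rendered prompt instead of silently dropping text.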