# Brace/anthropic tools #4978

Merged Apr 4, 2024 · 9 commits · Changes from 5 commits
34 changes: 34 additions & 0 deletions docs/core_docs/docs/integrations/chat/anthropic.mdx
@@ -48,3 +48,37 @@ You can pass custom headers in your requests like this:
import AnthropicCustomHeaders from "@examples/models/chat/integration_anthropic_custom_headers.ts";

<CodeBlock language="typescript">{AnthropicCustomHeaders}</CodeBlock>

## Tools

The Anthropic API supports tool calling, including calling multiple tools in a single response. The following examples demonstrate how to use tools:

### Single Tool

import AnthropicSingleTool from "@examples/models/chat/integration_anthropic_single_tool.ts";

<CodeBlock language="typescript">{AnthropicSingleTool}</CodeBlock>

:::tip
See the LangSmith trace [here](https://smith.langchain.com/public/90c03ed0-154b-4a50-afbf-83dcbf302647/r)
:::

### Multi-Tool

import AnthropicMultiTool from "@examples/models/chat/integration_anthropic_multi_tool.ts";

<CodeBlock language="typescript">{AnthropicMultiTool}</CodeBlock>

:::tip
See the LangSmith trace [here](https://smith.langchain.com/public/1349bb57-df1b-48b5-89c2-e6d5bd8f694a/r)
:::

### `withStructuredOutput`

import AnthropicWSA from "@examples/models/chat/integration_anthropic_wsa.ts";

<CodeBlock language="typescript">{AnthropicWSA}</CodeBlock>

:::tip
See the LangSmith trace [here](https://smith.langchain.com/public/efbd11c5-886e-4e07-be1a-951690fa8a27/r)
:::
5 changes: 5 additions & 0 deletions docs/core_docs/docs/integrations/chat/anthropic_tools.mdx
@@ -1,7 +1,12 @@
---
sidebar_label: Anthropic Tools
sidebar_class_name: hidden
---

:::warning
This API is deprecated as Anthropic now officially supports tools. [Click here to read the documentation](/docs/integrations/chat/anthropic#tools).
:::

# Anthropic Tools

LangChain offers an experimental wrapper around Anthropic that gives it the same API as OpenAI Functions.
100 changes: 100 additions & 0 deletions examples/src/models/chat/integration_anthropic_multi_tool.ts
@@ -0,0 +1,100 @@
import { ChatAnthropic } from "@langchain/anthropic";

> **Review comment:** Hey there! 👋 I've flagged this PR for your review because it adds code that explicitly accesses an environment variable using `process.env`. Please take a look and ensure that the handling of environment variables aligns with best practices.

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

const calculatorSchema = z.object({
operation: z
.enum(["add", "subtract", "multiply", "divide", "average"])
.describe("The type of operation to execute."),
numbers: z.array(z.number()).describe("The numbers to operate on."),
});

const weatherSchema = z
.object({
location: z.string().describe("The name of city to get the weather for."),
})
.describe(
"Get the weather of a specific location and return the temperature in Celsius."
);

const tools = [
{
name: "calculator",
description: "A simple calculator tool.",
input_schema: zodToJsonSchema(calculatorSchema),
},
{
name: "get_weather",
description: "Get the weather of a location",
input_schema: zodToJsonSchema(weatherSchema),
},
];

const model = new ChatAnthropic({
anthropicApiKey: process.env.ANTHROPIC_API_KEY,
modelName: "claude-3-opus-20240229",
}).bind({
tools,
});

const prompt = ChatPromptTemplate.fromMessages([
[
"system",
"You are a helpful assistant who always uses tools to ensure you provide accurate, up to date information.",
],
["human", "{input}"],
]);

// Chain your prompt and model together
const chain = prompt.pipe(model);

const response = await chain.invoke({
input:
"What is the current weather in new york, and san francisco? Also, what is the average of these numbers: 2273,7192,272,92737?",
});
console.log(JSON.stringify(response, null, 2));
/*
{
"kwargs": {
"content": "<thinking>\nTo answer this query, there are two relevant tools:\n\n1. get_weather - This can be used to get the current weather for New York and San Francisco. It requires a \"location\" parameter. Since the user provided \"new york\" and \"san francisco\" as locations, we have the necessary information to call this tool twice - once for each city.\n\n2. calculator - This can be used to calculate the average of the provided numbers. It requires a \"numbers\" parameter which is an array of numbers, and an \"operation\" parameter. The user provided the numbers \"2273,7192,272,92737\" which we can split into an array, and they asked for the \"average\", so we have the necessary information to call this tool.\n\nSince we have the required parameters for both relevant tools, we can proceed with the function calls.\n</thinking>",
"additional_kwargs": {
"id": "msg_013AgVS83LU6fWRHbykfvbYS",
"type": "message",
"role": "assistant",
"model": "claude-3-opus-20240229",
"stop_reason": "tool_use",
"usage": {
"input_tokens": 714,
"output_tokens": 336
},
"tool_calls": [
{
"id": "toolu_01NHY2v7kZx8WqAvGzBuCu4h",
"type": "function",
"function": {
"arguments": "{\"location\":\"new york\"}",
"name": "get_weather"
}
},
{
"id": "toolu_01PVCofvgkbnD4NfWfvXdsPC",
"type": "function",
"function": {
"arguments": "{\"location\":\"san francisco\"}",
"name": "get_weather"
}
},
{
"id": "toolu_019AVVNUyCYnvsVdpkGKVDdv",
"type": "function",
"function": {
"arguments": "{\"operation\":\"average\",\"numbers\":[2273,7192,272,92737]}",
"name": "calculator"
}
}
]
},
}
}
*/
64 changes: 64 additions & 0 deletions examples/src/models/chat/integration_anthropic_single_tool.ts
@@ -0,0 +1,64 @@
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

const calculatorSchema = z.object({
operation: z
.enum(["add", "subtract", "multiply", "divide"])
.describe("The type of operation to execute."),
number1: z.number().describe("The first number to operate on."),
number2: z.number().describe("The second number to operate on."),
});

const tool = {
name: "calculator",
description: "A simple calculator tool",
input_schema: zodToJsonSchema(calculatorSchema),
};

const model = new ChatAnthropic({
anthropicApiKey: process.env.ANTHROPIC_API_KEY,
modelName: "claude-3-haiku-20240307",
}).bind({
tools: [tool],
});

const prompt = ChatPromptTemplate.fromMessages([
[
"system",
"You are a helpful assistant who always needs to use a calculator.",
],
["human", "{input}"],
]);

// Chain your prompt and model together
const chain = prompt.pipe(model);

const response = await chain.invoke({
input: "What is 2 + 2?",
});
console.log(JSON.stringify(response, null, 2));
/*
{
"kwargs": {
"content": "Okay, let's calculate that using the calculator tool:",
"additional_kwargs": {
"id": "msg_01YcT1KFV8qH7xG6T6C4EpGq",
"role": "assistant",
"model": "claude-3-haiku-20240307",
"tool_calls": [
{
"id": "toolu_01UiqGsTTH45MUveRQfzf7KH",
"type": "function",
"function": {
"arguments": "{\"number1\":2,\"number2\":2,\"operation\":\"add\"}",
"name": "calculator"
}
}
]
},
"response_metadata": {}
}
}
*/
90 changes: 90 additions & 0 deletions examples/src/models/chat/integration_anthropic_wsa.ts
@@ -0,0 +1,90 @@
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const calculatorSchema = z
.object({
operation: z
.enum(["add", "subtract", "multiply", "divide"])
.describe("The type of operation to execute."),
number1: z.number().describe("The first number to operate on."),
number2: z.number().describe("The second number to operate on."),
})
.describe("A simple calculator tool");

const model = new ChatAnthropic({
anthropicApiKey: process.env.ANTHROPIC_API_KEY,
modelName: "claude-3-haiku-20240307",
});

// Pass the schema and tool name to the withStructuredOutput method
const modelWithTool = model.withStructuredOutput(calculatorSchema);

const prompt = ChatPromptTemplate.fromMessages([
[
"system",
"You are a helpful assistant who always needs to use a calculator.",
],
["human", "{input}"],
]);

// Chain your prompt and model together
const chain = prompt.pipe(modelWithTool);

const response = await chain.invoke({
input: "What is 2 + 2?",
});
console.log(response);
/*
{ operation: 'add', number1: 2, number2: 2 }
*/

/**
* You can supply a "name" field to give the LLM additional context
* around what you are trying to generate. You can also pass
* 'includeRaw' to get the raw message back from the model too.
*/
const includeRawModel = model.withStructuredOutput(calculatorSchema, {
name: "calculator",
includeRaw: true,
});
const includeRawChain = prompt.pipe(includeRawModel);

const includeRawResponse = await includeRawChain.invoke({
input: "What is 2 + 2?",
});
console.log(JSON.stringify(includeRawResponse, null, 2));
/*
{
"raw": {
"kwargs": {
"content": "Okay, let me use the calculator tool to find the result of 2 + 2:",
"additional_kwargs": {
"id": "msg_01HYwRhJoeqwr5LkSCHHks5t",
"type": "message",
"role": "assistant",
"model": "claude-3-haiku-20240307",
"usage": {
"input_tokens": 458,
"output_tokens": 109
},
"tool_calls": [
{
"id": "toolu_01LDJpdtEQrq6pXSqSgEHErC",
"type": "function",
"function": {
"arguments": "{\"number1\":2,\"number2\":2,\"operation\":\"add\"}",
"name": "calculator"
}
}
]
},
}
},
"parsed": {
"operation": "add",
"number1": 2,
"number2": 2
}
}
*/
8 changes: 7 additions & 1 deletion langchain-core/src/messages/index.ts
@@ -49,7 +49,13 @@ export type MessageContentImageUrl = {
image_url: string | { url: string; detail?: ImageDetail };
};

export type MessageContentComplex = MessageContentText | MessageContentImageUrl;
export type MessageContentComplex =
| MessageContentText
| MessageContentImageUrl
// eslint-disable-next-line @typescript-eslint/no-explicit-any
| (Record<string, any> & { type?: "text" | "image_url" | string })
// eslint-disable-next-line @typescript-eslint/no-explicit-any
| (Record<string, any> & { type?: never });

export type MessageContent = string | MessageContentComplex[];

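
The effect of the widened `MessageContentComplex` union can be exercised in isolation. The following is a standalone sketch with assumed, simplified shapes (not the library's exact definitions), showing that a provider-specific part such as Anthropic's `tool_use` block now type-checks alongside the existing text and image parts:

```typescript
// Simplified stand-ins for the library types (assumed shapes for illustration).
type MessageContentText = { type: "text"; text: string };
type MessageContentImageUrl = {
  type: "image_url";
  image_url: string | { url: string };
};
// eslint-disable-next-line @typescript-eslint/no-explicit-any
type MessageContentComplex =
  | MessageContentText
  | MessageContentImageUrl
  | (Record<string, any> & { type?: string });

const parts: MessageContentComplex[] = [
  { type: "text", text: "Let me check the weather." },
  // Under the old two-member union this object would not type-check;
  // the catch-all member now allows it:
  {
    type: "tool_use",
    id: "toolu_01NHY2v7kZx8WqAvGzBuCu4h",
    name: "get_weather",
    input: { location: "new york" },
  },
];

console.log(parts.map((p) => p.type).join(",")); // text,tool_use
```

Downstream consumers that only understand `text` and `image_url` must now handle unknown part types explicitly, which is what the `StringOutputParser` change in this PR does.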
18 changes: 14 additions & 4 deletions langchain-core/src/output_parsers/string.ts
@@ -63,15 +63,25 @@ export class StringOutputParser extends BaseTransformOutputParser<string> {
): string {
switch (content.type) {
case "text":
return this._textContentToString(content);
if ("text" in content) {
// Type guard for MessageContentText
return this._textContentToString(content as MessageContentText);
}
break;
case "image_url":
return this._imageUrlContentToString(content);
if ("url" in content) {
// Type guard for MessageContentImageUrl
return this._imageUrlContentToString(
content as MessageContentImageUrl
);
}
break;
default:
throw new Error(
// eslint-disable-next-line @typescript-eslint/no-explicit-any
`Cannot coerce "${(content as any).type}" message part into a string.`
`Cannot coerce "${content.type}" message part into a string.`
);
}
throw new Error(`Invalid content type: ${content.type}`);
}

protected _baseMessageContentToString(
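
The type-guarded coercion above can be sketched as a standalone function (assumed, simplified shapes — not the library's exact class): known part types must carry their expected field, and unrecognized types such as `tool_use` fail loudly instead of being silently stringified.

```typescript
// Simplified message-part shape (assumed for illustration).
type MessageContentPart = { type?: string; [key: string]: unknown };

function messageContentPartToString(content: MessageContentPart): string {
  switch (content.type) {
    case "text":
      // Guard: a "text" part must actually carry a text field.
      if ("text" in content) {
        return String(content.text);
      }
      break;
    case "image_url":
      // Guard: an "image_url" part must actually carry an image_url field.
      if ("image_url" in content) {
        const image = content.image_url as string | { url: string };
        return typeof image === "string" ? image : image.url;
      }
      break;
    default:
      // Unknown part types (e.g. "tool_use") cannot be coerced to a string.
      throw new Error(
        `Cannot coerce "${content.type}" message part into a string.`
      );
  }
  // A known type missing its expected field is also an error.
  throw new Error(`Invalid content type: ${content.type}`);
}

console.log(messageContentPartToString({ type: "text", text: "2 + 2 = 4" }));
```

This mirrors the two-stage check in the diff: a `switch` on `type` for dispatch, then an `in` check as a structural guard before the cast.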
25 changes: 22 additions & 3 deletions langchain-core/src/prompts/chat.ts
@@ -704,12 +704,31 @@ function _coerceMessagePromptTemplateLike(
return new MessagesPlaceholder({ variableName, optional: true });
}
const message = coerceMessageLikeToMessage(messagePromptTemplateLike);
let templateData:
| string
| (string | _TextTemplateParam | _ImageTemplateParam)[];

if (typeof message.content === "string") {
templateData = message.content;
} else {
// Assuming message.content is an array of complex objects, transform it.
templateData = message.content.map((item) => {
if ("text" in item) {
return { text: item.text };
} else if ("image_url" in item) {
return { image_url: item.image_url };
} else {
throw new Error("Invalid message content");
}
});
}

if (message._getType() === "human") {
return HumanMessagePromptTemplate.fromTemplate(message.content);
return HumanMessagePromptTemplate.fromTemplate(templateData);
} else if (message._getType() === "ai") {
return AIMessagePromptTemplate.fromTemplate(message.content);
return AIMessagePromptTemplate.fromTemplate(templateData);
} else if (message._getType() === "system") {
return SystemMessagePromptTemplate.fromTemplate(message.content);
return SystemMessagePromptTemplate.fromTemplate(templateData);
} else if (ChatMessage.isInstance(message)) {
return ChatMessagePromptTemplate.fromTemplate(
message.content as string,
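
The `templateData` coercion introduced in `_coerceMessagePromptTemplateLike` can be sketched in isolation (assumed, simplified types — not the library's exact signatures): string content passes through untouched, array content is mapped to text / image_url template params, and unrecognized part shapes are rejected.

```typescript
// Simplified stand-ins for _TextTemplateParam / _ImageTemplateParam
// (assumed shapes for illustration).
type TextTemplateParam = { text: string };
type ImageTemplateParam = { image_url: string | { url: string } };

function coerceTemplateData(
  content: string | Array<Record<string, unknown>>
): string | Array<TextTemplateParam | ImageTemplateParam> {
  // Plain string content needs no transformation.
  if (typeof content === "string") {
    return content;
  }
  // Complex content: keep only the fields the template layer understands.
  return content.map((item) => {
    if ("text" in item) {
      return { text: String(item.text) };
    } else if ("image_url" in item) {
      return { image_url: item.image_url as ImageTemplateParam["image_url"] };
    } else {
      throw new Error("Invalid message content");
    }
  });
}

console.log(JSON.stringify(coerceTemplateData([{ text: "{input}" }])));
```

The resulting value is what the diff then feeds to `HumanMessagePromptTemplate.fromTemplate` and its AI/system counterparts, which previously accepted only string content.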