Releases: davidmigloz/langchain_dart
v0.7.5
2024-08-22
What's New?
✨ OpenAI's Structured Outputs
The ChatOpenAI
wrapper now supports OpenAI's Structured Outputs, which lets you provide a JSON Schema to guide the model's responses and ensure they adhere to your desired JSON format. You can provide the schema for guidance only, or enable strict mode to guarantee that the model's output matches the schema exactly, so you don't need to worry about the model omitting a required key or hallucinating an invalid enum value. Note that only a subset of JSON Schema is supported when strict mode is enabled.
final chatModel = ChatOpenAI(
  apiKey: openaiApiKey,
  defaultOptions: ChatOpenAIOptions(
    model: 'gpt-4o',
    responseFormat: ChatOpenAIResponseFormat.jsonSchema(
      ChatOpenAIJsonSchema(
        name: 'Companies',
        description: 'A list of companies',
        strict: true,
        schema: {
          'type': 'object',
          'properties': {
            // your schema definition
          },
          'additionalProperties': false,
        },
      ),
    ),
  ),
);
Strict mode is also available when working with tools. ToolSpec now has a strict field that you can set to true to enable it.
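For illustration, a tool definition with strict mode enabled might look like this (a sketch: the tool and its schema are made up, and the field names follow the LangChain.dart tools API):

```dart
// Hypothetical weather tool; with strict enabled, the tool-call
// arguments are guaranteed to match the schema exactly.
const getWeatherTool = ToolSpec(
  name: 'get_weather',
  description: 'Returns the current weather for a given city',
  inputJsonSchema: {
    'type': 'object',
    'properties': {
      'city': {'type': 'string'},
    },
    'required': ['city'],
    'additionalProperties': false,
  },
  strict: true,
);
```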
Read more about how it works internally in their announcement blog post and how to use it in LangChain.dart in the ChatOpenAI documentation.
🤖 ToolsAgent: A Generic Tool-Calling Agent
While we're working on porting LangGraph to Dart, we've refactored OpenAIToolsAgent into a new, more versatile ToolsAgent. This generic agent can be used with any model that supports tool calling, including ChatOllama, ChatOpenAI, ChatAnthropic, and more. Check out the docs for more info.
To migrate from OpenAIToolsAgent to ToolsAgent, just run:
dart fix --apply
🛠️ Other Improvements
- All RunnableOptions subclasses now have a convenient copyWith method for easier modification.
- ChatOpenAI now includes log probabilities (logprobs) in the result metadata, offering more context for analysis.
- Removed the OpenAI-Beta header in ChatOpenAI to prevent CORS issues when using OpenRouter.
- Added gpt-4o-2024-08-06 and chatgpt-4o-latest to the model catalog in ChatOpenAI.
- Added support for min_p in ChatOllama, providing more control over the sampling process.
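The new copyWith can be sketched like this (the fields shown are existing ChatOpenAIOptions fields; the values are illustrative):

```dart
const baseOptions = ChatOpenAIOptions(
  model: 'gpt-4o',
  temperature: 0,
);

// Derive a variant that overrides a single field while
// keeping every other field from baseOptions.
final creative = baseOptions.copyWith(temperature: 0.9);
```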
💡 Introducing Code Assist AI
Thanks to the team at CommandDash, we now have a dedicated AI chatbot to help you with LangChain.dart! This chatbot can answer your questions, provide documentation links, and even generate code snippets. Try it out here:
🔧 API Clients Updates
- openai_dart: Now supports OpenAI's Structured Outputs.
- ollama_dart: Now supports the min_p parameter for finer control over sampling.
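On the ChatOllama side, the new sampling option can be sketched as follows (assuming the Dart-side parameter is exposed as minP on ChatOllamaOptions; the values are illustrative):

```dart
final chatModel = ChatOllama(
  defaultOptions: const ChatOllamaOptions(
    model: 'llama3.1',
    // Filter out tokens whose probability is less than
    // minP times the probability of the most likely token.
    minP: 0.05,
  ),
);
```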
Changes
Packages with breaking changes:
- There are no breaking changes in this release.
Packages with other changes:
langchain
-v0.7.5
langchain_core
-v0.3.5
langchain_community
-v0.3.1
langchain_openai
-v0.7.1
langchain_ollama
-v0.3.1
langchain_google
-v0.6.2
langchain_mistralai
-v0.2.3
ollama_dart
-v0.2.1
openai_dart
-v0.4.1
langchain_firebase
-v0.2.1+1
langchain_supabase
-v0.1.1+2
langchain_pinecone
-v0.1.0+8
langchain_anthropic
-v0.1.1+1
langchain_chroma
-v0.2.1+2
Packages with dependency updates only:
Packages listed below depend on other packages in this workspace that have had changes. Their versions have been incremented to bump the minimum dependency versions of the packages they depend upon in this project.
langchain_firebase
-v0.2.1+1
langchain_supabase
-v0.1.1+2
langchain_pinecone
-v0.1.0+8
langchain_anthropic
-v0.1.1+1
langchain_chroma
-v0.2.1+2
langchain
- v0.7.5
- FEAT: Add ToolsAgent for models with tool-calling support (#530). (f3ee5b44)
- FEAT: Deprecate OpenAIToolsAgent in favour of ToolsAgent (#532). (68d8011a)
- DOCS: Add Code Assist AI in README and documentation (#538). (e752464c)
langchain_core
- v0.3.5
- FEAT: Add copyWith method to all RunnableOptions subclasses (#531). (42c8d480)
- FEAT: Support OpenAI's strict mode for tool calling in ChatOpenAI (#536). (71623f49)
- FEAT: Deprecate OpenAIToolsAgent in favour of ToolsAgent (#532). (68d8011a)
langchain_community
- v0.3.1
langchain_openai
- v0.7.1
- FEAT: Add support for Structured Outputs in ChatOpenAI (#526). (c5387b5d)
- FEAT: Handle refusal in OpenAI's Structured Outputs API (#533). (f4c4ed99)
- FEAT: Include logprobs in result metadata from ChatOpenAI (#535). (1834b3ad)
- FEAT: Add chatgpt-4o-latest to model catalog (#527). (ec82c760)
- FEAT: Add gpt-4o-2024-08-06 to model catalog (#522). (563200e0)
- FEAT: Deprecate OpenAIToolsAgent in favour of ToolsAgent (#532). (68d8011a)
- REFACTOR: Don't send OpenAI-Beta header in ChatOpenAI (#511). (0e532bab)
langchain_ollama
- v0.3.1
- FEAT: Add support for min_p in Ollama (#512). (e40d54b2)
- FEAT: Add copyWith method to all RunnableOptions subclasses (#531). (42c8d480)
langchain_google
- v0.6.2
langchain_mistralai
- v0.2.3
openai_dart
- v0.4.1
v0.7.4
2024-07-26
What's New?
🔥 Ollama Tool Support
ChatOllama now offers support for native tool calling with popular models such as Llama 3.1, the latest state-of-the-art model from Meta. This enables a model to answer a given prompt using the tools it knows about, making it possible for models to perform more complex tasks or interact with the outside world. It follows the standard LangChain.dart tools API, so you can use it in the same way as you would with other providers that support tool calling (e.g., ChatOpenAI, ChatAnthropic, etc.).
Explore more:
- ChatOllama documentation
- Example: Answering questions with data from an external API
- Example: Extracting structured data with tools
🔄 Runnable Fallbacks
When working with language models, you may encounter issues from the underlying APIs, such as rate limits or downtime. To enhance reliability in production environments, we've introduced Runnable Fallbacks. This feature allows you to define alternative models or even entirely different chains to use when the primary option fails. Create fallbacks using the withFallbacks() function:
final chatModel = chatOpenAI.withFallbacks([chatAnthropic, chatMistral]);
Check out the full documentation on how to use fallbacks.
🔗 Bind Improvements
We've enhanced the bind functionality to merge options additively across multiple cascade bind calls. This change preserves existing options while allowing specific overrides, providing more flexibility in configuration. Also, LanguageModelOptions (e.g., ChatOpenAIOptions, ChatMistralAIOptions, etc.) no longer specify the default model in the constructor, preventing the model from being reset when using the bind operator.
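The additive behaviour can be sketched as follows (assuming chatModel is an existing ChatOpenAI instance; the option values are illustrative):

```dart
final boundModel = chatModel
    .bind(const ChatOpenAIOptions(temperature: 0))
    .bind(const ChatOpenAIOptions(stop: ['END']));
// boundModel now uses both temperature: 0 and stop: ['END'];
// only fields set in a later bind override earlier ones,
// instead of the second call discarding the first's options.
```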
🌟 OpenAI GPT-4o-mini
A week ago, OpenAI released GPT-4o mini, their most cost-efficient small model. We have now made it the default model for the ChatOpenAI
wrapper.
Additionally, you can now disable parallel tool calls and define the service tier in ChatOpenAI
.
✨ Mistral New Models
Mistral recently released several powerful models: Mistral Large 2, Mistral NeMo, and Codestral Mamba. To take advantage of these models, use the ChatMistralAI wrapper from the langchain_mistralai package.
final chatModel = ChatMistralAI(
  apiKey: 'your-api-key',
  defaultOptions: ChatMistralAIOptions(
    model: 'mistral-large-latest',
  ),
);
🔧 API Clients Updates
- ollama_dart: Now supports tool calls, a new API for retrieving the Ollama server version, and extended model information in the model API.
- openai_dart: Adds support for disabling parallel tool calls and specifying the service tier in the chat completions API, as well as customising the chunking strategy in the file_search tool and overrides in the Assistants API. You can now also customise the OpenAI-Beta header for beta features.
Changes
Packages with breaking changes:
langchain_community
-v0.3.0
langchain_ollama
-v0.3.0
langchain_openai
-v0.7.0
ollama_dart
-v0.2.0
openai_dart
-v0.4.0
Packages with other changes:
langchain
-v0.7.4
langchain_anthropic
-v0.1.1
langchain_chroma
-v0.2.1+1
langchain_core
-v0.3.4
langchain_firebase
-v0.2.1
langchain_google
-v0.6.1
langchain_mistralai
-v0.2.2
langchain_pinecone
-v0.1.0+7
langchain_supabase
-v0.1.1+1
langchain
- v0.7.4
- FEAT: Add Fallback support for Runnables (#501). (5887858d)
- FEAT: Implement additive options merging for cascade bind calls (#500). (8691eb21)
- REFACTOR: Remove default model from the language model options (#498). (44363e43)
- REFACTOR: Depend on exact versions for internal 1st party dependencies (#484). (244e5e8f)
- DOCS: Update README.md with Ollama tool call support. (e016b0bd)
langchain_core
- v0.3.4
- FEAT: Add Fallback support for Runnables (#501). (5887858d)
- FEAT: Implement additive options merging for cascade bind calls (#500). (8691eb21)
- REFACTOR: Remove default model from the language model options (#498). (44363e43)
langchain_community
- v0.3.0
- FEAT: Implement additive options merging for cascade bind calls (#500). (8691eb21)
- REFACTOR: Depend on exact versions for internal 1st party dependencies (#484). (244e5e8f)
langchain_ollama
- v0.3.0
- FEAT: Add tool calling support in ChatOllama (#505). (6ffde204)
- BREAKING FEAT: Update Ollama default model to llama-3.1 (#506). (b1134bf1)
- FEAT: Implement additive options merging for cascade bind calls (#500). (8691eb21)
- REFACTOR: Remove default model from the language model options (#498). (44363e43)
- REFACTOR: Depend on exact versions for internal 1st party dependencies (#484). (244e5e8f)
- DOCS: Update Ollama request options default values in API docs (#479). (e1f93366)
langchain_openai
- v0.7.0
- BREAKING FEAT: Update ChatOpenAI default model to gpt-4o-mini (#507). (c7b8ce91)
- FEAT: Add support for disabling parallel tool calls in ChatOpenAI (#493). (c46d676d)
- FEAT: Add GPT-4o-mini to model catalog (#497). (faa23aee)
- FEAT: Add support for service tier in ChatOpenAI (#495). (af79a4ff)
- FEAT: Implement additive options merging for cascade bind calls...
v0.7.3
2024-07-02
What's New?
🔥 Anthropic Integration
Introducing the new langchain_anthropic package, which provides the ChatAnthropic chat model wrapper for consuming Anthropic's Messages API. This integration gives you access to cutting-edge models such as Claude 3.5 Sonnet, which sets new standards in reasoning, knowledge, and coding, while offering enhanced capabilities in image understanding, data analysis, and writing.
final chatModel = ChatAnthropic(
  apiKey: 'yourApiKey',
  defaultOptions: ChatAnthropicOptions(
    model: 'claude-3-5-sonnet-20240620',
  ),
);
ChatAnthropic supports streaming and tool calling. For more information, check out the docs.
🔍 Tavily Search Integration
Connect your LLMs to the web with the new Tavily integration, a search engine optimized for LLMs and RAG.
- TavilySearchResultsTool: returns a list of real-time, accurate, and factual search results for a query.
- TavilyAnswerTool: returns a direct answer for a query.
🤖 Google AI and VertexAI for Firebase
- Both ChatFirebaseVertexAI and ChatGoogleGenerativeAI now use the gemini-1.5-flash model by default.
- Added MIME type support, allowing you to force the model to reply using JSON.
- ChatFirebaseVertexAI now supports Firebase Auth.
- ChatFirebaseVertexAI now correctly reports usage metadata.
🛠 Tool calling improvements
- You can now use ChatToolChoice.required to enforce the use of at least one tool, without specifying a particular one.
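As a sketch (assuming a chatModel and a calculator tool are already defined, as in the tool-calling docs):

```dart
final res = await chatModel.invoke(
  PromptValue.string('What is 3 * 12?'),
  options: ChatOpenAIOptions(
    tools: [calculator],
    // Require the model to call at least one of the provided
    // tools, without pinning it to a specific one.
    toolChoice: ChatToolChoice.required,
  ),
);
```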
📚 Documentation Updates
- We've heard your feedback about the difficulty in finding all supported integrations and their corresponding packages. Now, you can easily locate this information in one place.
🧩 API Clients Releases
- A new tavily_dart client is available for consuming the Tavily API.
- The anthropic_sdk_dart client now supports tool use, including streaming tools.
Changes
New packages:
Packages with breaking changes:
Packages with other changes:
langchain
-v0.7.3
langchain_core
-v0.3.3
langchain_community
-v0.2.2
langchain_chroma
-v0.2.1
langchain_mistralai
-v0.2.1
langchain_ollama
-v0.2.2+1
langchain_openai
-v0.6.3
langchain_pinecone
-v0.1.0+6
langchain_supabase
-v0.1.1
anthropic_sdk_dart
-v0.1.0
googleai_dart
-v0.1.0+2
mistralai_dart
-v0.0.3+3
ollama_dart
-v0.1.2
openai_dart
-v0.3.3+1
langchain
- v0.7.3
Note: the Anthropic integration (ChatAnthropic) is available in the new langchain_anthropic package.
- FEAT: Add support for TavilySearchResultsTool and TavilyAnswerTool (#467). (a9f35755)
- DOCS: Document existing integrations in README.md. (cc4246c8)
langchain_core
- v0.3.3
- FEAT: Add support for ChatToolChoiceRequired (#474). (bf324f36)
- FEAT: Update ChatResult.id concat logic (#477). (44c7fafd)
langchain_community
- v0.2.2
langchain_anthropic
- v0.1.0
langchain_firebase
- v0.2.0
Note: ChatFirebaseVertexAI now uses the gemini-1.5-flash model by default.
- BREAKING FEAT: Update ChatFirebaseVertexAI default model to gemini-1.5-flash (#458). (d3c96c52)
- FEAT: Add support for ChatToolChoiceRequired (#474). (bf324f36)
- FEAT: Support response MIME type in ChatFirebaseVertexAI (#461) (#463). (c3452721)
- FEAT: Add support for Firebase Auth in ChatFirebaseVertexAI (#460). (6d137290)
- FEAT: Add support for usage metadata in ChatFirebaseVertexAI (#457). (2587f9e2)
- REFACTOR: Simplify how tools are passed to the internal Firebase client (#459). (7f772396)
langchain_google
- v0.6.0
Note: ChatGoogleGenerativeAI now uses the gemini-1.5-flash model by default.
- BREAKING FEAT: Update ChatGoogleGenerativeAI default model to gemini-1.5-flash (#462). (c8b30c90)
- FEAT: Add support for ChatToolChoiceRequired (#474). (bf324f36)
- FEAT: Support response MIME type and schema in ChatGoogleGenerativeAI (#461). (e258399e)
- REFACTOR: Migrate conditional imports to js_interop (#453). (a6a78cfe)
langchain_openai
- v0.6.3
langchain_ollama
- v0.2.2+1
- DOCS: Update ChatOllama API docs. (cc4246c8)
langchain_chroma
- v0.2.1
- Update a dependency to the latest release.
langchain_mistralai
- v0.2.1
- Update a dependency to the latest release.
langchain_pinecone
- v0.1.0+6
- Update a dependency to the latest release.
langchain_supabase
- v0.1.1
- Update a dependency to the latest release.
anthropic_sdk_dart
- v0.1.0
v0.7.2
2024-06-01
What's New?
🔥 ObjectBox Vector Search
We are excited to announce that Langchain.dart now supports ObjectBox as a vector store!
ObjectBox is an embedded database that runs inside your application. With the release of v4.0.0, it now supports storing and querying vectors. Leveraging the HNSW algorithm, ObjectBox provides fast and efficient vector search without keeping all the vectors in-memory, making it the first scalable on-device vector database for Dart/Flutter applications.
Check out the ObjectBoxVectorStore documentation to learn how to use it.
final vectorStore = ObjectBoxVectorStore(
  embeddings: OllamaEmbeddings(model: 'jina/jina-embeddings-v2-small-en'),
  dimensions: 512,
);
We have also introduced a new example showcasing a fully local Retrieval Augmented Generation (RAG) pipeline with Llama 3, utilizing ObjectBox and Ollama:
✨ Runnable.close
You can now close any resources associated with a Runnable by invoking its close method. For instance, if you have a chain like:
final chain = promptTemplate
    .pipe(model)
    .pipe(outputParser);
// ...
chain.close();
Calling close() will propagate the close() call to each Runnable instance within the chain. In this example, it won't affect promptTemplate and outputParser, as they have no associated resources to close, but it will effectively close the HTTP client of the model.
🚚 Documentation Migration: langchaindart.dev
We have successfully migrated our documentation to a new domain: langchaindart.dev.
🛠️ Bugfixes
- Errors are now correctly propagated to the stream listener when streaming a chain that uses a StringOutputParser.
- The Ollama client now properly handles buffered stream responses, such as when utilizing Cloudflare Tunnels.
🆕 anthropic_sdk_dart client
We are working on integrating Anthropic into LangChain.dart. As part of this effort, we have released a new client for the Anthropic API: anthropic_sdk_dart. In the next release, we will add support for tool calling and further integrate it into LangChain.dart.
Changes
New packages:
Packages with other changes:
langchain
-v0.7.2
langchain_core
-v0.3.2
langchain_community
-v0.2.1
langchain_chroma
-v0.2.0+5
langchain_firebase
-v0.1.0+2
langchain_google
-v0.5.1
langchain_mistralai
-v0.2.1
langchain_ollama
-v0.2.2
langchain_openai
-v0.6.2
langchain_pinecone
-v0.1.0+5
langchain_supabase
-v0.1.0+5
chromadb
-v0.2.0+1
googleai_dart
-v0.1.0+1
mistralai_dart
-v0.0.3+2
ollama_dart
-v0.1.1
openai_dart
-v0.3.3
vertex_ai
-v0.1.0+1
langchain
- v0.7.2
- FEAT: Add support for ObjectBoxVectorStore (#438). (81e167a6)
- Check out the ObjectBoxVectorStore documentation
- REFACTOR: Migrate to langchaindart.dev domain (#434). (358f79d6)
langchain_core
- v0.3.2
- FEAT: Add Runnable.close() to close any resources associated with it (#439). (4e08cced)
- FIX: Stream errors are not propagated by StringOutputParser (#440). (496b11cc)
langchain_community
- v0.2.1
- FEAT: Add support for ObjectBoxVectorStore (#438). (81e167a6)
- Check out the ObjectBoxVectorStore documentation
langchain_openai
- v0.6.2
anthropic_sdk_dart
- v0.0.1
ollama_dart
- v0.1.1
openai_dart
- v0.3.3
- FEAT: Support FastChat OpenAI-compatible API (#444). (ddaf1f69)
- FIX: Make vector store name optional (#436). (29a46c7f)
- FIX: Fix deserialization of sealed classes (#435). (7b9cf223)
New Contributors
- @alfredobs97 made their first contribution in #433
📣 Check out the #announcements channel in the LangChain.dart Discord server for more details.
v0.7.1
2024-05-14
What's New?
🔥 VertexAI for Firebase
We are excited to announce day-0 support for Vertex AI for Firebase with the introduction of the new langchain_firebase package.
If you need to call the Vertex AI Gemini API directly from your mobile or web app, you can now use the ChatFirebaseVertexAI class. This class is specifically designed for mobile and web apps, offering enhanced security options against unauthorized clients (via Firebase App Check) and seamless integration with other Firebase services. It supports the latest models (gemini-1.5-pro and gemini-1.5-flash) as well as tool calling.
await Firebase.initializeApp();
final chatModel = ChatFirebaseVertexAI(
defaultOptions: ChatFirebaseVertexAIOptions(
model: 'gemini-1.5-pro-preview-0514',
),
);
Check out the documentation and the sample project (a port of the official firebase_vertexai
sample).
⚡️ Google AI for Developers (Upgrade)
ChatGoogleGenerativeAI and GoogleGenerativeAIEmbeddings have been upgraded to use version v1beta of the Gemini API (previously v1), which supports the latest models (gemini-1.5-pro-latest and gemini-1.5-flash-latest).
ChatGoogleGenerativeAI now includes support for tool calling, including parallel tool calling.
Under the hood, we have migrated the client from googleai_dart to the official google_generative_ai package.
✨ OpenAI (Enhancements)
You can now use OpenAI's new GPT-4o model. Additionally, usage statistics are included when streaming with OpenAI and ChatOpenAI.
🦙 Ollama
The default models for Ollama, ChatOllama, and OllamaEmbeddings have been updated to llama3. ChatOllama now returns a finishReason, and OllamaEmbeddings now supports keepAlive.
🛠️ openai_dart
The Assistants API has been enhanced to support different content types, and several bug fixes have been implemented. The batch API now supports completions and embeddings.
🔧 ollama_dart
The client has been aligned with the Ollama v0.1.36 API.
Changes
Packages with breaking changes:
Packages with other changes:
langchain
-v0.7.1
langchain_core
-v0.3.1
langchain_community
-v0.2.0+1
langchain_firebase
-v0.1.0
langchain_openai
-v0.6.1
langchain_ollama
-v0.2.1
langchain_chroma
-v0.2.0+4
langchain_mistralai
-v0.2.0+1
langchain_pinecone
-v0.1.0+4
langchain_supabase
-v0.1.0+4
openai_dart
-v0.3.2
langchain
- v0.7.1
Note: VertexAI for Firebase (ChatFirebaseVertexAI) is available in the new langchain_firebase package.
- DOCS: Add docs for ChatFirebaseVertexAI (#422). (8d0786bc)
- DOCS: Update ChatOllama docs (#417). (9d30b1a1)
langchain_core
- v0.3.1
- FEAT: Add equals to ChatToolChoiceForced (#422). (8d0786bc)
- FIX: Fix finishReason null check (#406). (5e2b0ecc)
langchain_community
- v0.2.0+1
- Update a dependency to the latest release.
langchain_google
- v0.5.0
Note: ChatGoogleGenerativeAI and GoogleGenerativeAIEmbeddings now use version v1beta of the Gemini API (instead of v1), which supports the latest models (gemini-1.5-pro-latest and gemini-1.5-flash-latest).
VertexAI for Firebase (ChatFirebaseVertexAI) is available in the new langchain_firebase package.
- FEAT: Add support for tool calling in ChatGoogleGenerativeAI (#419). (df41f38a)
- DOCS: Add Gemini 1.5 Flash to models list (#423). (40f4c9de)
- BREAKING FEAT: Migrate internal client from googleai_dart to google_generative_ai (#407). (fa4b5c37)
langchain_firebase
- v0.1.0
- FEAT: Add support for ChatFirebaseVertexAI (#422). (8d0786bc)
- DOCS: Add Gemini 1.5 Flash to models list (#423). (40f4c9de)
langchain_openai
- v0.6.1
- FEAT: Add GPT-4o to model catalog (#420). (96214307)
- FEAT: Include usage stats when streaming with OpenAI and ChatOpenAI (#406). (5e2b0ecc)
langchain_ollama
- v0.2.1
- FEAT: Handle finish reason in ChatOllama (#416). (a5e1af13)
- FEAT: Add keepAlive option to OllamaEmbeddings (#415). (32e19028)
- FEAT: Update Ollama default model from llama2 to llama3 (#417). (9d30b1a1)
- REFACTOR: Remove deprecated Ollama options (#414). (861a2b74)
openai_dart
- v0.3.2
- FEAT: Add GPT-4o to model catalog (#420). (96214307)
- FEAT: Add support for different content types in Assistants API and other fixes (#412). (97acab45)
- FEAT: Add support for completions and embeddings in batch API in openai_dart (#425). (16fe4c68)
- FEAT: Add incomplete status to RunObject in openai_dart (#424). (71b116e6)
ollama_dart
- v0.1.0
- BREAKING FEAT: Align Ollama client to the Ollama v0.1.36 API (#411). (326212ce)
- FEAT: Update Ollama default model from llama2 to llama3 (#417). (9d30b1a1)
- FEAT: Add support for done reason (#413). (cc5b1b02)
googleai_dart
- v0.1.0
- REFACTOR: Minor changes (#407). ([fa4b5c3](https://github.co...
v0.7.0
2024-05-05
What's New?
This update introduces a standardised interface for tool calling (also known as function calling), allowing models to interact more effectively with external tools.
Previously, our function-calling capability was tightly integrated with the OpenAI provider. The new interface decouples this by providing an abstraction layer over the tool-calling APIs of different vendors. This enhancement makes it easier to switch providers without modifying your existing code.
We have also improved integration with LangChain tools. Now you can seamlessly integrate these tools into your models without the need to convert data formats.
Models can now call multiple tools in a single request, an improvement over the previous limit of one tool per request.
A new output parser, ToolsOutputParser, has been introduced to extract tool calls from the model response:
final calculator = CalculatorTool();
final model = ChatOpenAI(
  apiKey: openAiApiKey,
  defaultOptions: ChatOpenAIOptions(
    model: 'gpt-4-turbo',
    tools: [calculator],
  ),
);
final chain = model.pipe(ToolsOutputParser());
final res = await chain.invoke(
PromptValue.string('Calculate 3 * 12 and 11 + 49'),
);
print(res);
// [ParsedToolCall{
// id: call_p4GmED1My56vV6XZi9ChljJN,
// name: calculator,
// arguments: {
// input: 3 * 12
// },
// }, ParsedToolCall{
// id: call_eLJo7nII9EanFUcxy42WA5Pm,
// name: calculator,
// arguments: {
// input: 11 + 49
// },
// }]
It effectively handles streaming by progressively concatenating chunks and completing partial JSONs into valid ones:
final stream = chain.stream(
  PromptValue.string('Calculate 3 * 12 and 11 + 49'),
);
await stream.forEach(print);
// []
// [ParsedToolCall{ id: call_gGXYQDJj9ZG4YmvLhZyLD442, name: calculator, arguments: {}, }]
// [ParsedToolCall{ id: call_gGXYQDJj9ZG4YmvLhZyLD442, name: calculator, arguments: {input: 3 * }, }]
// [ParsedToolCall{ id: call_gGXYQDJj9ZG4YmvLhZyLD442, name: calculator, arguments: {input: 3 * 12}, }]
// [ParsedToolCall{ id: call_gGXYQDJj9ZG4YmvLhZyLD442, name: calculator, arguments: {input: 3 * 12}, }, ParsedToolCall{ id: call_axZ3Q5Ve8ZvLUB9NDXdwuUVh, name: calculator, arguments: {}, }]
// [ParsedToolCall{ id: call_gGXYQDJj9ZG4YmvLhZyLD442, name: calculator, arguments: {input: 3 * 12}, }, ParsedToolCall{ id: call_axZ3Q5Ve8ZvLUB9NDXdwuUVh, name: calculator, arguments: {input: 11 +}, }]
// [ParsedToolCall{ id: call_gGXYQDJj9ZG4YmvLhZyLD442, name: calculator, arguments: {input: 3 * 12}, }, ParsedToolCall{ id: call_axZ3Q5Ve8ZvLUB9NDXdwuUVh, name: calculator, arguments: {input: 11 + 49}, }]
Finally, the OpenAIFunctionsAgent has been renamed to OpenAIToolsAgent and updated to work with the new standardised tool calling interface. We plan to extend this functionality in future updates by introducing a ToolsAgent that is compatible with any vendor that supports tool calling.
Refer to the Tool Calling and ToolsOutputParser documentation for more details.
To migrate from the previous function call paradigm to the new standard tool call interface, see this migration guide. We have also improved the tool abstractions, see here for all the changes.
Changes
Packages with breaking changes:
langchain
-v0.7.0
langchain_core
-v0.3.0
langchain_community
-v0.2.0
langchain_openai
-v0.6.0
langchain_google
-v0.4.0
langchain_mistralai
-v0.2.0
langchain_ollama
-v0.2.0
langchain
- v0.7.0
- BREAKING FEAT: Migrate from function calling to tool calling (#400). (44413b83)
- BREAKING REFACTOR: Improve Tool abstractions (#398). (2a50aec2)
langchain_core
- v0.3.0
- BREAKING FEAT: Migrate from function calling to tool calling (#400). (44413b83)
- BREAKING REFACTOR: Improve Tool abstractions (#398). (2a50aec2)
langchain_community
- v0.2.0
- BREAKING FEAT: Migrate from function calling to tool calling (#400). (44413b83)
- BREAKING REFACTOR: Improve Tool abstractions (#398). (2a50aec2)
langchain_openai
- v0.6.0
- BREAKING FEAT: Migrate from function calling to tool calling (#400). (44413b83)
- BREAKING REFACTOR: Improve Tool abstractions (#398). (2a50aec2)
langchain_google
- v0.4.0
langchain_mistralai
- v0.2.0
langchain_ollama
- v0.2.0
📣 Check out the #announcements channel in the LangChain.dart Discord server for more details.
v0.6.0
2024-04-30
What's New?
This release focuses on enhancing and expanding the capabilities of LangChain Expression Language (LCEL):
🚦 RunnableRouter
RunnableRouter enables the creation of non-deterministic chains where the output of a previous step determines the next step. This feature allows you to use an LLM to dynamically select the appropriate prompt, chain, LLM, or other components based on some input. A particularly effective technique is combining RunnableRouter with embedding models to route a query to the most relevant (semantically similar) prompt.
final router = Runnable.fromRouter((Map<String, dynamic> input, _) {
  final topic = input['topic'] as String;
  if (topic.contains('langchain')) {
    return langchainChain;
  } else if (topic.contains('anthropic')) {
    return anthropicChain;
  } else {
    return generalChain;
  }
});
For more details and examples, please refer to the router documentation.
🌟 JsonOutputParser
In certain scenarios, it is useful to ask the model to respond in JSON format, which makes the response easier to parse. Many vendors even offer a JSON mode that guarantees valid JSON output. With the new JsonOutputParser, you can now easily parse the output of a runnable as a JSON map. It also supports streaming, returning valid JSON from the incomplete JSON chunks streamed by the model.
final model = ChatOpenAI(
  apiKey: openAiApiKey,
  defaultOptions: ChatOpenAIOptions(
    responseFormat: ChatOpenAIResponseFormat(
      type: ChatOpenAIResponseFormatType.jsonObject,
    ),
  ),
);
final parser = JsonOutputParser<ChatResult>();
final chain = model.pipe(parser);
final stream = chain.stream(
  PromptValue.string(
    'Output a list of the countries france, spain and japan and their '
    'populations in JSON format. Use a dict with an outer key of '
    '"countries" which contains a list of countries. '
    'Each country should have the key "name" and "population"',
  ),
);
await stream.forEach((final chunk) => print('$chunk|'));
// {}|
// {countries: []}|
// {countries: [{name: France}]}|
// {countries: [{name: France, population: 67076000}, {}]}|
// {countries: [{name: France, population: 67076000}, {name: Spain}]}|
// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}]}|
// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {name: Japan}]}|
// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {name: Japan, population: 126476461}]}|
🗺️ Mapping input values
Mapping the output value of a previous runnable to a new value that aligns with the input requirements of the next runnable is a common task. Previous versions provided the Runnable.mapInput method for custom mapping logic, but it lacked control over the stream of input values when using streaming. With this release, you can now utilize Runnable.mapInputStream to have full control over the input stream.
For example, you may want to output only the last element of the input stream: (full code)
final mapper = Runnable.mapInputStream((Stream<String> inputStream) async* {
  yield await inputStream.last;
});
If you need to define separate logic for invoke and stream operations, Runnable.fromFunction has been updated to allow you to specify the invoke logic, the stream logic, or both, providing greater flexibility. This refactoring of Runnable.fromFunction resulted in a minor breaking change; see the migration guide for more information.
In this example, we create a runnable that we can use in our chains to debug the output of the previous step. It prints different information when the chain is invoked vs streamed. (full code)
Runnable<T, RunnableOptions, T> logOutput<T extends Object>(String stepName) {
  return Runnable.fromFunction<T, T>(
    invoke: (input, options) {
      print('Output from step "$stepName":\n$input\n---');
      return Future.value(input);
    },
    stream: (inputStream, options) {
      return inputStream.map((input) {
        print('Chunk from step "$stepName":\n$input\n---');
        return input;
      });
    },
  );
}
final chain = Runnable.getMapFromInput<String>('equation_statement')
    .pipe(logOutput('getMapFromInput'))
    .pipe(promptTemplate)
    .pipe(logOutput('promptTemplate'))
    .pipe(ChatOpenAI(apiKey: openaiApiKey))
    .pipe(logOutput('chatModel'))
    .pipe(StringOutputParser())
    .pipe(logOutput('outputParser'));
🙆 Non-streaming components
Previously, all LangChain.dart components processed a streaming input item by item. This made sense for some components, such as output parsers, but was problematic for others. For example, you don't want a retriever to retrieve documents for each streamed chunk, instead you want to wait for the full query to be received before performing the search.
This has been fixed in this release, as from now on the following components will reduce/aggregate the streaming input from the previous step into a single value before processing it:
- PromptTemplate
- ChatPromptTemplate
- LLM
- ChatModel
- Retriever
- Tool
- RunnableFunction
- RunnableRouter
📚 Improved LCEL docs
We have revamped the LangChain Expression Language documentation. It now includes a dedicated section explaining the different primitives available in LCEL. Also, a new page has been added specifically covering streaming.
- Sequence: Chaining runnables
- Map: Formatting inputs & concurrency
- Passthrough: Passing inputs through
- Mapper: Mapping inputs
- Function: Run custom logic
- Binding: Configuring runnables
- Router: Routing inputs
Changes
Packages with breaking changes:
Packages with other changes:
langchain
- v0.6.0+1
- FEAT: Add support for Runnable.mapInputStream (#393). (a2b6bbb5)
- FEAT: Add support for JsonOutputParser (#392). (c6508f0f)
- FEAT: Reduce input stream for PromptTemplate, LLM, ChatModel, Retriever and Tool (#388). (b59bcd40)
- BREAKING FEAT: Support different logic for streaming in RunnableFunction (#394). (8bb2b8ed)
- FIX: Allow async functions in Runnable.mapInput (#396). (e4c35092)
- DOCS: Update LangChain Expression Language documentation (#395). (6ce75e5f)
langchain_core
- v0.2.0+1
- FEAT: Add support for Runnable.mapInputStream (#393). (a2b6bbb5)
- FEAT: Add support for JsonOutputParser (#392). (c6508f0f)
- FEAT: Reduce input stream for PromptTemplate, LLM, ChatModel, Retriever and Tool (#388). (b59bcd40)
- BREAKING FEAT: Support different logic for streaming in RunnableFunction (#394). (8bb2b8ed)
- FIX: Allow async functions in Runnable.mapInput (#396). (e4c35092)
openai_dart
- v0.2.2
v0.5.0
2024-04-10
What's New?
We're excited to announce a major update with a focus on enhancing the project's scalability and improving the developer experience. Here are the key enhancements:
🛠️ Restructured package organization:
LangChain.dart's main package has been divided into multiple packages to simplify usage and contribution to the project.
- `langchain_core`: includes only the core abstractions and the LangChain Expression Language as a way to compose them together.
  - Depend on this package to build frameworks on top of LangChain.dart or to interoperate with it.
- `langchain`: features higher-level components and use-case specific frameworks crucial to the application's cognitive architecture.
  - Depend on this package to build LLM applications with LangChain.dart.
  - This package exposes `langchain_core`, so you don't need to depend on it explicitly.
- `langchain_community`: houses community-contributed components and third-party integrations not included in the main LangChain.dart API.
  - Depend on this package if you want to use any of the integrations or components it provides.
- Integration-specific packages such as `langchain_openai` and `langchain_google`: these enable independent imports of popular third-party integrations without a full dependency on the `langchain_community` package.
  - Depend on an integration-specific package if you want to use that specific integration.
✨ Enhanced APIs and new `.batch` API:
The `LanguageModelResult` class structure (including its child classes `LLMResult` and `ChatResult`) has been simplified, with each `LanguageModelResult` now storing a single output directly.
To generate multiple outputs, use the new `.batch` API (instead of `.invoke` or `.stream`), which batches the invocation of a `Runnable` on a list of inputs. If the underlying provider supports batching, this method will attempt to batch the calls to the provider. Otherwise, it will concurrently call `invoke` on each input (you can configure the `concurrencyLimit`).
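A hedged sketch of the new `.batch` API, following the `ChatOpenAI` usage shown elsewhere in these notes (exact option fields may differ):

```dart
// Invoke a chat model once per input; the provider is batched when
// supported, otherwise invocations run concurrently.
final model = ChatOpenAI(apiKey: openaiApiKey);
final results = await model.batch([
  PromptValue.string('Tell me a joke'),
  PromptValue.string('Tell me a fact'),
]);
// results is a list with one ChatResult per input, each storing a
// single output directly.
```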
Output parsers have been revamped to provide greater flexibility and compatibility with any type of `Runnable`. `StringOutputParser` now supports reducing the output of a stream to a single value, which is useful when the next step in the chain expects a single input value instead of a stream.
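A hedged sketch of this reduction in a chain; the `reduceOutputStream` flag appears in this release's changelog, but treat the exact constructor usage as an assumption (the `promptTemplate`, `chatModel`, and `summaryStep` runnables are placeholders):

```dart
// Collapse the model's streamed chunks into one String before passing
// it to a step that expects a single value rather than a stream.
final chain = promptTemplate
    .pipe(chatModel)
    .pipe(StringOutputParser(reduceOutputStream: true))
    .pipe(summaryStep); // a step that expects one String, not a stream
```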
The deprecated `generate` and `predict` APIs have been removed in favour of the LCEL APIs (`invoke`, `stream`, and `batch`).
The internal implementation of the `stream` API has been optimized, providing clearer error messages in case of issues.
🆕 Google AI streaming and embeddings:
`ChatGoogleGenerativeAI` (used for interacting with Gemini models) now supports streaming and tuned models.
Support for Google AI embedding models has been added through the `GoogleGenerativeAIEmbeddings` class, compatible with the latest `text-embedding-004` embedding model. Specifying the number of output dimensions is also supported.
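A hedged sketch of the embeddings class; the `dimensions` parameter name is an assumption based on "specifying the number of output dimensions is also supported" (check the `langchain_google` API docs for the exact field):

```dart
// Embed a query with Google's text-embedding-004 model, requesting a
// reduced number of output dimensions.
final embeddings = GoogleGenerativeAIEmbeddings(
  apiKey: googleApiKey,
  model: 'text-embedding-004',
  dimensions: 256, // parameter name assumed, see package docs
);
final vector = await embeddings.embedQuery('Hello world');
```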
🚚 Migration guide:
We have compiled a migration guide to assist you in updating your code to the new version. You can find it here. For any questions or assistance, please reply to this discussion or reach out to us on Discord.
Changes
Packages with breaking changes:
- `langchain` - v0.5.0
- `langchain_chroma` - v0.2.0
- `langchain_community` - v0.1.0
- `langchain_core` - v0.1.0
- `langchain_google` - v0.3.0
- `langchain_mistralai` - v0.1.0
- `langchain_ollama` - v0.1.0
- `langchain_openai` - v0.5.0
- `langchain_pinecone` - v0.1.0
- `langchain_supabase` - v0.1.0
- `chromadb` - v0.2.0
- `openai_dart` - v0.2.0
- `vertex_ai` - v0.1.0
Packages with other changes:
langchain
- v0.5.0
- BREAKING REFACTOR: Introduce langchain_core and langchain_community packages (#328). (5fa520e6)
- BREAKING REFACTOR: Simplify LLMResult and ChatResult classes (#363). (ffe539c1)
- BREAKING REFACTOR: Simplify Output Parsers (#367). (f24b7058)
- BREAKING REFACTOR: Remove deprecated generate and predict APIs (#335). (c55fe50f)
- REFACTOR: Simplify internal .stream implementation (#364). (c83fed22)
- FEAT: Implement .batch support (#370). (d254f929)
- FEAT: Add reduceOutputStream option to StringOutputParser (#368). (7f9a9fae)
- DOCS: Update LCEL docs. (ab3ab573)
- DOCS: Add RAG example using OllamaEmbeddings and ChatOllama (#337). (8bddc6c0)
langchain_community
- v0.1.0
langchain_core
- v0.1.0
- BREAKING REFACTOR: Introduce langchain_core and langchain_community packages (#328). (5fa520e6)
- BREAKING REFACTOR: Simplify LLMResult and ChatResult classes (#363). (ffe539c1)
- BREAKING REFACTOR: Simplify Output Parsers (#367). (f24b7058)
- REFACTOR: Simplify internal .stream implementation (#364). (c83fed22)
- FEAT: Implement .batch support (#370). (d254f929)
- FEAT: Add reduceOutputStream option to StringOutputParser (#368). (7f9a9fae)
langchain_chroma
- v0.2.0
langchain_google
- v0.3.0
- BREAKING REFACTOR: Introduce langchain_core and langchain_community packages (#328). (5fa520e6)
- BREAKING REFACTOR: Simplify LLMResult and ChatResult classes (#363). (ffe539c1)
- BREAKING REFACTOR: Simplify Output Parsers (#367). (f24b7058)
- BREAKING REFACTOR: Remove deprecated generate and predict APIs (#335). (c55fe50f)
- REFACTOR: Simplify internal .stream impleme...
v0.4.2
2024-02-15
What's new?
This release contains some minor improvements:
- Ollama `keep_alive`: users can now control the duration for which models remain active in memory when using Ollama (which, by the way, just added Windows support).
- Streaming support for `googleai_dart`: the client has been upgraded to support streaming functionality. However, with the release of Google's official google_generative_ai client, we are evaluating the potential deprecation of `googleai_dart` in favour of the official client if they reach feature parity.
- Custom instance configuration for OpenAI.

In addition to these updates, we've also started the work to split the `langchain` package into 3 packages. This refactoring aims to enhance modularity and facilitate community contributions.
- `langchain_core`: will contain the core abstractions (i.e. language models, document loaders, embedding models, vector stores, retrievers, etc.), as well as the LangChain Expression Language as a way to compose these components together. The community can depend on this package to build frameworks on top of LangChain.dart or to interoperate with it.
- `langchain_community`: will contain third-party integrations that don't have a dedicated package.
- `langchain`: will depend on and expose `langchain_core`, and contain higher-level and use-case specific chains, agents, and retrieval algorithms that are at the core of the application's cognitive architecture.
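The Ollama `keep_alive` option mentioned above might be set along these lines; a hedged sketch, assuming the option is exposed as `keepAlive` on `ChatOllamaOptions` (name and units inferred from the underlying `keep_alive` API parameter, so verify against the `langchain_ollama` docs):

```dart
// Keep the model loaded in memory between requests to avoid
// reload latency on each call.
final chatModel = ChatOllama(
  defaultOptions: ChatOllamaOptions(
    model: 'llama2',
    keepAlive: 30, // field name and units assumed, see package docs
  ),
);
```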
Changes
Packages with breaking changes:
- There are no breaking changes in this release.
Packages with other changes:
- `chromadb` - v0.1.2
- `googleai_dart` - v0.0.3
- `langchain` - v0.4.2
- `langchain_chroma` - v0.1.1
- `langchain_google` - v0.2.4
- `langchain_mistralai` - v0.0.3
- `langchain_ollama` - v0.0.4
- `langchain_openai` - v0.4.1
- `langchain_pinecone` - v0.0.7
- `langchain_supabase` - v0.0.1+1
- `mistralai_dart` - v0.0.3
- `ollama_dart` - v0.0.3
- `openai_dart` - v0.1.7
- `vertex_ai` - v0.0.10
googleai_dart
- v0.0.3
- FEAT: Add streaming support to googleai_dart client (#299). (2cbd538a)
- FEAT: Update meta and test dependencies (#331). (912370ee)
- DOCS: Update pubspecs. (d23ed89a)
openai_dart
- v0.1.7
- FEAT: Allow to specify OpenAI custom instance (#327). (4744648c)
- FEAT: Update meta and test dependencies (#331). (912370ee)
- DOCS: Update pubspecs. (d23ed89a)
langchain_openai
- v0.4.1
- FEAT: Allow to specify OpenAI custom instance (#327). (4744648c)
- FEAT: Update meta and test dependencies (#331). (912370ee)
- DOCS: Update pubspecs. (d23ed89a)
ollama_dart
- v0.0.3
- FEAT: Add Ollama keep_alive param to control how long models stay loaded (#319). (3b86e227)
- FEAT: Update meta and test dependencies (#331). (912370ee)
- DOCS: Update pubspecs. (d23ed89a)
langchain_ollama
- v0.0.4
- FEAT: Add Ollama keep_alive param to control how long models stay loaded (#319). (3b86e227)
- FEAT: Update meta and test dependencies (#331). (912370ee)
- DOCS: Update pubspecs. (d23ed89a)
chromadb
- v0.1.2
langchain
- v0.4.2
langchain_chroma
- v0.1.1
langchain_google
- v0.2.4
langchain_mistralai
- v0.0.3
langchain_pinecone
- v0.0.7
langchain_supabase
- v0.0.1+1
- DOCS: Update pubspecs. (d23ed89a)
mistralai_dart
- v0.0.3
vertex_ai
- v0.0.10
Contributors
📣 Check out the #announcements channel in the LangChain.dart Discord server for more details.
v0.4.1
2024-01-31
What's new?
🆕 Supabase Vector integration
Now you can use Supabase Vector to store, query, and index vector embeddings in your Supabase Postgres database.
final vectorStore = Supabase(
embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
supabaseUrl: 'https://xyzcompany.supabase.co',
supabaseKey: 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...',
);
final res = await vectorStore.similaritySearch(
query: 'Where is the cat?',
config: SupabaseSimilaritySearch(
k: 5,
filter: {
'category': {r'$ne': 'person'},
},
),
);
Check out the docs for more info.
Changes
Packages with breaking changes:
- There are no breaking changes in this release.
Packages with other changes:
langchain
- v0.4.1
langchain_supabase
- v0.0.1
New Contributors
- @matteodg made their first contribution in #318
- @luisredondo made their first contribution in #318
📣 Check out the #announcements channel in the LangChain.dart Discord server for more details.