
Releases: davidmigloz/langchain_dart

v0.7.3

04 Jul 07:34
9a27506

2024-07-02

What's New?

🔥 Anthropic Integration

Introducing the new langchain_anthropic package, which provides the ChatAnthropic chat model wrapper for consuming Anthropic's Messages API. This integration gives you access to cutting-edge models such as Claude 3.5 Sonnet, which sets new standards in reasoning, knowledge and coding, while offering enhanced capabilities in image understanding, data analysis and writing.

final chatModel = ChatAnthropic(
  apiKey: 'yourApiKey',
  defaultOptions: ChatAnthropicOptions(
    model: 'claude-3-5-sonnet-20240620',
  ),
);

ChatAnthropic supports streaming and tool calling. For more information, check out the docs.
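For instance, streaming works through the standard Runnable interface. A minimal sketch (the API key and prompt are illustrative):

```dart
// Stream a ChatAnthropic response chunk by chunk.
// Requires the langchain and langchain_anthropic packages.
final chatModel = ChatAnthropic(
  apiKey: 'yourApiKey',
  defaultOptions: ChatAnthropicOptions(
    model: 'claude-3-5-sonnet-20240620',
  ),
);
final stream = chatModel.stream(
  PromptValue.string('Why is the sky blue? Answer in one sentence.'),
);
await stream.forEach((chunk) => print(chunk.output.content));
```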

🔍 Tavily Search Integration

Connect your LLMs to the web with the new Tavily integration, a search engine optimized for LLMs and RAG.

  • TavilySearchResultsTool: returns a list of real-time, accurate, and factual search results for a query.
  • TavilyAnswerTool: returns a direct answer for a query.
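A quick sketch of how these tools might be used (the constructor and input parameters shown here are assumptions, not necessarily the exact released API; check the package docs):

```dart
// Hypothetical usage of the Tavily tools from langchain_community.
final answerTool = TavilyAnswerTool(apiKey: 'yourTavilyApiKey');
final answer = await answerTool.invoke('Where were the 2024 Olympics held?');
print(answer);

final searchTool = TavilySearchResultsTool(apiKey: 'yourTavilyApiKey');
final results = await searchTool.invoke('Latest news about LangChain');
print(results);
```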

🤖 Google AI and VertexAI for Firebase

  • Both ChatFirebaseVertexAI and ChatGoogleGenerativeAI now utilize the gemini-1.5-flash model by default.
  • Added MIME type support, allowing you to force the model to reply using JSON.
  • ChatFirebaseVertexAI now supports Firebase Auth.
  • ChatFirebaseVertexAI now correctly reports usage metadata.
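Forcing a JSON reply might look like the following sketch, assuming the option is exposed as responseMimeType in the chat model options:

```dart
// Ask the Gemini model to reply with JSON.
// responseMimeType is the assumed option name.
final chatModel = ChatGoogleGenerativeAI(
  apiKey: googleApiKey,
  defaultOptions: ChatGoogleGenerativeAIOptions(
    model: 'gemini-1.5-flash',
    responseMimeType: 'application/json',
  ),
);
```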

🛠 Tool calling improvements

  • You can now use ChatToolChoice.required to enforce the use of at least one tool, without specifying a particular one.
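With ChatOpenAI, for example, this could look like the following sketch (calculatorTool and searchTool stand in for any tools you have defined; the toolChoice option name is assumed):

```dart
// Require the model to call at least one of the attached tools,
// without forcing a specific one.
final model = ChatOpenAI(
  apiKey: openAiApiKey,
  defaultOptions: ChatOpenAIOptions(
    tools: [calculatorTool, searchTool],
    toolChoice: ChatToolChoice.required,
  ),
);
```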

📚 Documentation Updates

  • We've heard your feedback about the difficulty in finding all supported integrations and their corresponding packages. Now, you can easily locate this information in one place.

🧩 API Clients Releases

  • A new tavily_dart client is available for consuming the Tavily API.
  • The anthropic_sdk_dart client now supports tool use, including streaming tools.

Changes


New packages:

Packages with breaking changes:

Packages with other changes:


langchain - v0.7.3

Note: Anthropic integration (ChatAnthropic) is available in the new langchain_anthropic package.

  • FEAT: Add support for TavilySearchResultsTool and TavilyAnswerTool (#467). (a9f35755)
  • DOCS: Document existing integrations in README.md. (cc4246c8)

langchain_core - v0.3.3

  • FEAT: Add support for ChatToolChoiceRequired (#474). (bf324f36)
  • FEAT: Update ChatResult.id concat logic (#477). (44c7fafd)

langchain_community - v0.2.2

  • FEAT: Add support for TavilySearchResultsTool and TavilyAnswerTool (#467). (a9f35755)

langchain_anthropic - v0.1.0

langchain_firebase - v0.2.0

Note: ChatFirebaseVertexAI now uses the gemini-1.5-flash model by default.

  • BREAKING FEAT: Update ChatFirebaseVertexAI default model to gemini-1.5-flash (#458). (d3c96c52)
  • FEAT: Add support for ChatToolChoiceRequired (#474). (bf324f36)
  • FEAT: Support response MIME type in ChatFirebaseVertexAI (#461) (#463). (c3452721)
  • FEAT: Add support for Firebase Auth in ChatFirebaseVertexAI (#460). (6d137290)
  • FEAT: Add support for usage metadata in ChatFirebaseVertexAI (#457). (2587f9e2)
  • REFACTOR: Simplify how tools are passed to the internal Firebase client (#459). (7f772396)

langchain_google - v0.6.0

Note: ChatGoogleGenerativeAI now uses the gemini-1.5-flash model by default.

  • BREAKING FEAT: Update ChatGoogleGenerativeAI default model to gemini-1.5-flash (#462). (c8b30c90)
  • FEAT: Add support for ChatToolChoiceRequired (#474). (bf324f36)
  • FEAT: Support response MIME type and schema in ChatGoogleGenerativeAI (#461). (e258399e)
  • REFACTOR: Migrate conditional imports to js_interop (#453). (a6a78cfe)

langchain_openai - v0.6.3

  • FEAT: Add support for ChatToolChoiceRequired (#474). (bf324f36)

langchain_ollama - v0.2.2+1

  • DOCS: Update ChatOllama API docs. (cc4246c8)

langchain_chroma - v0.2.1

  • Update a dependency to the latest release.

langchain_mistralai - v0.2.1

  • Update a dependency to the latest release.

langchain_pinecone - v0.1.0+6

  • Update a dependency to the latest release.

langchain_supabase - v0.1.1

  • Update a dependency to the latest release.

anthropic_sdk_dart - v0.1.0

  • FEAT: Add support for tool use in anthropic_sdk_dart client (#469). (81896cfd)
  • FEAT: Add extensions on ToolResultBlockContent in anthropic_sdk_dart (#476). (8d92d9b0)
  • REFACTOR: Improve schemas names in anthropic_sdk_dart (#475). (8ebeacde)
  • REFACTOR: Migrate ...

v0.7.2

01 Jun 09:02
b93c25d

2024-06-01

What's New?

🔥 ObjectBox Vector Search

We are excited to announce that Langchain.dart now supports ObjectBox as a vector store!

ObjectBox is an embedded database that runs inside your application. With the release of v4.0.0, it now supports storing and querying vectors. Leveraging the HNSW algorithm, ObjectBox provides fast and efficient vector search without keeping all the vectors in-memory, making it the first scalable on-device vector database for Dart/Flutter applications.

Check out the ObjectBoxVectorStore documentation to learn how to use it.

final vectorStore = ObjectBoxVectorStore(
  embeddings: OllamaEmbeddings(model: 'jina/jina-embeddings-v2-small-en'),
  dimensions: 512,
);

We have also introduced a new example showcasing a fully local Retrieval Augmented Generation (RAG) pipeline with Llama 3, utilizing ObjectBox and Ollama.

✨ Runnable.close

You now have the ability to close any resources associated with a Runnable by invoking the close method. For instance, if you have a chain like:

final chain = promptTemplate
    .pipe(model)
    .pipe(outputParser);
// ...
chain.close();

Calling close() will propagate the close() call to each Runnable instance within the chain. In this example, it won't affect promptTemplate and outputParser as they have no associated resources to close, but it will effectively close the HTTP client of the model.

🚚 Documentation Migration: langchaindart.dev

We have successfully migrated our documentation to a new domain: langchaindart.dev.

🛠️ Bugfixes

  • Errors are now correctly propagated to the stream listener when streaming a chain that uses a StringOutputParser.
  • The Ollama client now properly handles buffered stream responses, such as when utilizing Cloudflare Tunnels.

🆕 anthropic_sdk_dart client

We are working on integrating Anthropic into LangChain.dart. As part of this effort, we have released a new client for the Anthropic API: anthropic_sdk_dart. In the next release, we will add support for tool calling and further integrate it into LangChain.dart.

Changes


New packages:

Packages with other changes:


langchain - v0.7.2

langchain_core - v0.3.2

  • FEAT: Add Runnable.close() to close any resources associated with it (#439). (4e08cced)
  • FIX: Stream errors are not propagated by StringOutputParser (#440). (496b11cc)

langchain_community - v0.2.1

langchain_openai - v0.6.2

  • DOCS: Document tool calling with OpenRouter (#437). (47986592)

anthropic_sdk_dart - v0.0.1

  • FEAT: Implement anthropic_sdk_dart, a Dart client for Anthropic API (#433). (e5412b)

ollama_dart - v0.1.1

  • FEAT: Support buffered stream responses (#445). (ce2ef30c)

openai_dart - v0.3.3

  • FEAT: Support FastChat OpenAI-compatible API (#444). (ddaf1f69)
  • FIX: Make vector store name optional (#436). (29a46c7f)
  • FIX: Fix deserialization of sealed classes (#435). (7b9cf223)

New Contributors


📣 Check out the #announcements channel in the LangChain.dart Discord server for more details.

v0.7.1

14 May 22:09
621663a

2024-05-14

What's New?

🔥 VertexAI for Firebase

We are excited to announce 0-day support for Vertex AI for Firebase with the introduction of the new langchain_firebase package.

If you need to call the Vertex AI Gemini API directly from your mobile or web app, you can now use the ChatFirebaseVertexAI class. This class is specifically designed for mobile and web apps, offering enhanced security options against unauthorized clients (via Firebase App Check) and seamless integration with other Firebase services. It supports the latest models (gemini-1.5-pro and gemini-1.5-flash) as well as tool calling.

await Firebase.initializeApp();
final chatModel = ChatFirebaseVertexAI(
  defaultOptions: ChatFirebaseVertexAIOptions(
    model: 'gemini-1.5-pro-preview-0514',
  ),
);

Check out the documentation and the sample project (a port of the official firebase_vertexai sample).


⚡️ Google AI for Developers (Upgrade)

ChatGoogleGenerativeAI and GoogleGenerativeAIEmbeddings have been upgraded to use version v1beta of the Gemini API (previously v1), which supports the latest models (gemini-1.5-pro-latest and gemini-1.5-flash-latest).

ChatGoogleGenerativeAI now includes support for tool calling, including parallel tool calling.

Under the hood, we have migrated the client from googleai_dart to the official google_generative_ai package.

✨ OpenAI (Enhancements)

You can now use OpenAI's new GPT-4o model. Additionally, usage statistics are now included when streaming with OpenAI and ChatOpenAI.

🦙 Ollama

The default models for Ollama, ChatOllama, and OllamaEmbeddings have been updated to llama3. ChatOllama now returns a finishReason. OllamaEmbeddings now supports keepAlive.
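A sketch of the updated defaults and the new option (the keepAlive parameter name and unit are assumed here):

```dart
// ChatOllama now defaults to llama3 and reports a finish reason.
final chatModel = ChatOllama(
  defaultOptions: ChatOllamaOptions(model: 'llama3'),
);
// keepAlive controls how long the model stays loaded in memory.
final embeddings = OllamaEmbeddings(
  model: 'llama3',
  keepAlive: 10, // assumed to be minutes
);
```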

🛠️ openai_dart

The Assistant API has been enhanced to support different content types, and several bug fixes have been implemented.
The batch API now supports completions and embeddings.

🔧 ollama_dart

The client has been aligned with the Ollama v0.1.36 API.

Changes


Packages with breaking changes:

Packages with other changes:


langchain - v0.7.1

Note: VertexAI for Firebase (ChatFirebaseVertexAI) is available in the new langchain_firebase package.

langchain_core - v0.3.1

langchain_community - v0.2.0+1

  • Update a dependency to the latest release.

langchain_google - v0.5.0

Note: ChatGoogleGenerativeAI and GoogleGenerativeAIEmbeddings now use version v1beta of the Gemini API (instead of v1), which supports the latest models (gemini-1.5-pro-latest and gemini-1.5-flash-latest).

VertexAI for Firebase (ChatFirebaseVertexAI) is available in the new langchain_firebase package.

  • FEAT: Add support for tool calling in ChatGoogleGenerativeAI (#419). (df41f38a)
  • DOCS: Add Gemini 1.5 Flash to models list (#423). (40f4c9de)
  • BREAKING FEAT: Migrate internal client from googleai_dart to google_generative_ai (#407). (fa4b5c37)

langchain_firebase - v0.1.0

  • FEAT: Add support for ChatFirebaseVertexAI (#422). (8d0786bc)
  • DOCS: Add Gemini 1.5 Flash to models list (#423). (40f4c9de)

langchain_openai - v0.6.1

  • FEAT: Add GPT-4o to model catalog (#420). (96214307)
  • FEAT: Include usage stats when streaming with OpenAI and ChatOpenAI (#406). (5e2b0ecc)

langchain_ollama - v0.2.1

  • FEAT: Handle finish reason in ChatOllama (#416). (a5e1af13)
  • FEAT: Add keepAlive option to OllamaEmbeddings (#415). (32e19028)
  • FEAT: Update Ollama default model from llama2 to llama3 (#417). (9d30b1a1)
  • REFACTOR: Remove deprecated Ollama options (#414). (861a2b74)

openai_dart - v0.3.2

  • FEAT: Add GPT-4o to model catalog (#420). (96214307)
  • FEAT: Add support for different content types in Assistants API and other fixes (#412). (97acab45)
  • FEAT: Add support for completions and embeddings in batch API in openai_dart (#425). (16fe4c68)
  • FEAT: Add incomplete status to RunObject in openai_dart (#424). (71b116e6)

ollama_dart - v0.1.0

  • BREAKING FEAT: Align Ollama client to the Ollama v0.1.36 API (#411). (326212ce)
  • FEAT: Update Ollama default model from llama2 to llama3 (#417). (9d30b1a1)
  • FEAT: Add support for done reason (#413). (cc5b1b02)

googleai_dart - v0.1.0


v0.7.0

06 May 09:53
c7c9791

2024-05-05

What's New?

This update introduces a standardised interface for tool calling (also known as function calling), allowing models to interact more effectively with external tools.

Previously, our function-calling capability was tightly integrated with the OpenAI provider. The new interface decouples this by providing an abstraction layer over the tool-calling APIs of different vendors. This enhancement makes it easier to switch providers without modifying your existing code.

We have also improved integration with LangChain tools. Now you can seamlessly integrate these tools into your models without the need to convert data formats.

Models can now call multiple tools in a single request, an improvement over the previous limit of one tool per request.

A new output parser, ToolsOutputParser, has been introduced to extract tool calls from the model response:

final calculator = CalculatorTool();
final model = ChatOpenAI(
  apiKey: openAiApiKey,
  defaultOptions: ChatOpenAIOptions(
    model: 'gpt-4-turbo',
    tools: [calculator],
  ),
);
final chain = model.pipe(ToolsOutputParser());
final res = await chain.invoke(
  PromptValue.string('Calculate 3 * 12 and 11 + 49'),
);
print(res);
// [ParsedToolCall{
//   id: call_p4GmED1My56vV6XZi9ChljJN,
//   name: calculator,
//   arguments: {
//     input: 3 * 12
//   },
// }, ParsedToolCall{
//   id: call_eLJo7nII9EanFUcxy42WA5Pm,
//   name: calculator,
//   arguments: {
//     input: 11 + 49
//   },
// }]

It effectively handles streaming by progressively concatenating chunks and completing partial JSONs into valid ones:

final stream = chain.stream(
  PromptValue.string('Calculate 3 * 12 and 11 + 49'),
);
await stream.forEach(print);
// [] 
// [ParsedToolCall{ id: call_gGXYQDJj9ZG4YmvLhZyLD442, name: calculator, arguments: {}, }] 
// [ParsedToolCall{ id: call_gGXYQDJj9ZG4YmvLhZyLD442, name: calculator, arguments: {input: 3 * }, }] 
// [ParsedToolCall{ id: call_gGXYQDJj9ZG4YmvLhZyLD442, name: calculator, arguments: {input: 3 * 12}, }] 
// [ParsedToolCall{ id: call_gGXYQDJj9ZG4YmvLhZyLD442, name: calculator, arguments: {input: 3 * 12}, }, ParsedToolCall{ id: call_axZ3Q5Ve8ZvLUB9NDXdwuUVh, name: calculator, arguments: {}, }] 
// [ParsedToolCall{ id: call_gGXYQDJj9ZG4YmvLhZyLD442, name: calculator, arguments: {input: 3 * 12}, }, ParsedToolCall{ id: call_axZ3Q5Ve8ZvLUB9NDXdwuUVh, name: calculator, arguments: {input: 11 +}, }] 
// [ParsedToolCall{ id: call_gGXYQDJj9ZG4YmvLhZyLD442, name: calculator, arguments: {input: 3 * 12}, }, ParsedToolCall{ id: call_axZ3Q5Ve8ZvLUB9NDXdwuUVh, name: calculator, arguments: {input: 11 + 49}, }]

Finally, the OpenAIFunctionsAgent has been renamed to OpenAIToolsAgent and updated to work with the new standardised tool calling interface. We plan to extend this functionality in future updates by introducing a ToolsAgent that is compatible with any vendor that supports tool calling.

Refer to the Tool Calling and ToolsOutputParser documentation for more details.

To migrate from the previous function call paradigm to the new standard tool call interface, see this migration guide. We have also improved the tool abstractions, see here for all the changes.

Changes


Packages with breaking changes:


langchain - v0.7.0

  • BREAKING FEAT: Migrate from function calling to tool calling (#400). (44413b83)
  • BREAKING REFACTOR: Improve Tool abstractions (#398). (2a50aec2)

langchain_core - v0.3.0

  • BREAKING FEAT: Migrate from function calling to tool calling (#400). (44413b83)
  • BREAKING REFACTOR: Improve Tool abstractions (#398). (2a50aec2)

langchain_community - v0.2.0

  • BREAKING FEAT: Migrate from function calling to tool calling (#400). (44413b83)
  • BREAKING REFACTOR: Improve Tool abstractions (#398). (2a50aec2)

langchain_openai - v0.6.0

  • BREAKING FEAT: Migrate from function calling to tool calling (#400). (44413b83)
  • BREAKING REFACTOR: Improve Tool abstractions (#398). (2a50aec2)

langchain_google - v0.4.0

  • BREAKING FEAT: Migrate from function calling to tool calling (#400). (44413b83)

langchain_mistralai - v0.2.0

  • BREAKING FEAT: Migrate from function calling to tool calling (#400). (44413b83)

langchain_ollama - v0.2.0

  • BREAKING FEAT: Migrate from function calling to tool calling (#400). (44413b83)

📣 Check out the #announcements channel in the LangChain.dart Discord server for more details.

v0.6.0

30 Apr 20:51
c7dc979

2024-04-30

What's New?

This release focuses on enhancing and expanding the capabilities of LangChain Expression Language (LCEL):

🚦 RunnableRouter

RunnableRouter enables the creation of non-deterministic chains where the output of a previous step determines the next step. This feature allows you to use an LLM to dynamically select the appropriate prompt, chain, LLM, or other components based on some input. A particularly effective technique is combining RunnableRouter with embedding models to route a query to the most relevant (semantically similar) prompt.

final router = Runnable.fromRouter((Map<String, dynamic> input, _) {
  final topic = input['topic'] as String;
  if (topic.contains('langchain')) {
    return langchainChain;
  } else if (topic.contains('anthropic')) {
    return anthropicChain;
  } else {
    return generalChain;
  }
});

For more details and examples, please refer to the router documentation.

🌟 JsonOutputParser

In certain scenarios, it is useful to ask the model to respond in JSON format, which makes it easier to parse the response. Many vendors even offer a JSON mode that guarantees valid JSON output. With the new JsonOutputParser you can now easily parse the output of a runnable as a JSON map. It also supports streaming, returning valid JSON from the incomplete JSON chunks streamed by the model.

final model = ChatOpenAI(
  apiKey: openAiApiKey,
  defaultOptions: ChatOpenAIOptions(
    responseFormat: ChatOpenAIResponseFormat(
      type: ChatOpenAIResponseFormatType.jsonObject,
    ),
  ),
);
final parser = JsonOutputParser<ChatResult>();
final chain = model.pipe(parser);
final stream = chain.stream(
  PromptValue.string(
    'Output a list of the countries france, spain and japan and their '
    'populations in JSON format. Use a dict with an outer key of '
    '"countries" which contains a list of countries. '
    'Each country should have the key "name" and "population"',
  ),
);
await stream.forEach((final chunk) => print('$chunk|'));
// {}|
// {countries: []}|
// {countries: [{name: France}]}|
// {countries: [{name: France, population: 67076000}, {}]}|
// {countries: [{name: France, population: 67076000}, {name: Spain}]}|
// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}]}|
// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {name: Japan}]}|
// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {name: Japan, population: 126476461}]}|

🗺️ Mapping input values

Mapping the output value of a previous runnable to a new value that aligns with the input requirements of the next runnable is a common task. Previous versions provided the Runnable.mapInput method for custom mapping logic, but it lacked control over the stream of input values when using streaming. With this release, you can now utilize Runnable.mapInputStream to have full control over the input stream.

For example, you may want to output only the last element of the input stream: (full code)

final mapper = Runnable.mapInputStream((Stream<String> inputStream) async* {
  yield await inputStream.last;
});

If you need to define separate logic for invoke and stream operations, Runnable.fromFunction has been updated to allow you to specify the invoke logic, the stream logic, or both, providing greater flexibility. This refactoring of Runnable.fromFunction resulted in a minor breaking change, see the migration guide for more information.

In this example, we create a runnable that we can use in our chains to debug the output of the previous step. It prints different information when the chain is invoked vs streamed. (full code)

Runnable<T, RunnableOptions, T> logOutput<T extends Object>(String stepName) {
  return Runnable.fromFunction<T, T>(
    invoke: (input, options) {
      print('Output from step "$stepName":\n$input\n---');
      return Future.value(input);
    },
    stream: (inputStream, options) {
      return inputStream.map((input) {
        print('Chunk from step "$stepName":\n$input\n---');
        return input;
      });
    },
  );
}

final chain = Runnable.getMapFromInput<String>('equation_statement')
    .pipe(logOutput('getMapFromInput'))
    .pipe(promptTemplate)
    .pipe(logOutput('promptTemplate'))
    .pipe(ChatOpenAI(apiKey: openaiApiKey))
    .pipe(logOutput('chatModel'))
    .pipe(StringOutputParser())
    .pipe(logOutput('outputParser'));

🙆 Non-streaming components

Previously, all LangChain.dart components processed a streaming input item by item. This made sense for some components, such as output parsers, but was problematic for others. For example, you don't want a retriever to retrieve documents for each streamed chunk; instead, you want to wait for the full query to be received before performing the search.

This has been fixed in this release, as from now on the following components will reduce/aggregate the streaming input from the previous step into a single value before processing it:

  • PromptTemplate
  • ChatPromptTemplate
  • LLM
  • ChatModel
  • Retriever
  • Tool
  • RunnableFunction
  • RunnableRouter

📚 Improved LCEL docs

We have revamped the LangChain Expression Language documentation. It now includes a dedicated section explaining the different primitives available in LCEL. Also, a new page has been added specifically covering streaming.

Changes


Packages with breaking changes:

Packages with other changes:


langchain - v0.6.0+1

  • FEAT: Add support for Runnable.mapInputStream (#393). (a2b6bbb5)
  • FEAT: Add support for JsonOutputParser (#392). (c6508f0f)
  • FEAT: Reduce input stream for PromptTemplate, LLM, ChatModel, Retriever and Tool (#388). (b59bcd40)
  • BREAKING FEAT: Support different logic for streaming in RunnableFunction (#394). (8bb2b8ed)
  • FIX: Allow async functions in Runnable.mapInput (#396). (e4c35092)
  • DOCS: Update LangChain Expression Language documentation (#395). (6ce75e5f)

langchain_core - v0.2.0+1

  • FEAT: Add support for Runnable.mapInputStream (#393). (a2b6bbb5)
  • FEAT: Add support for JsonOutputParser (#392). (c6508f0f)
  • FEAT: Reduce input stream for PromptTemplate, LLM, ChatModel, Retriever and Tool (#388). (b59bcd40)
  • BREAKING FEAT: Support different logic for streaming in RunnableFunction (#394). (8bb2b8ed)
  • FIX: Allow async functions in Runnable.mapInput (#396). (e4c35092)

openai_dart - v0.2.2

  • FEAT: Add temperature, top_p and response format to Assistants API (#384). ([1d18290](1d182...

v0.5.0

10 Apr 17:04
0f268f0

2024-04-10

What's New?

We're excited to announce a major update with a focus on enhancing the project's scalability and improving the developer experience. Here are the key enhancements:

🛠️ Restructured package organization:

LangChain.dart's main package has been divided into multiple packages to simplify usage and contribution to the project.

  • langchain_core: Includes only the core abstractions and the LangChain Expression Language as a way to compose them together.
    • Depend on this package to build frameworks on top of LangChain.dart or to interoperate with it.
  • langchain: Features higher-level components and use-case specific frameworks crucial to the application's cognitive architecture.
    • Depend on this package to build LLM applications with LangChain.dart.
    • This package exposes langchain_core so you don't need to depend on it explicitly.
  • langchain_community: Houses community-contributed components and third-party integrations not included in the main LangChain.dart API.
    • Depend on this package if you want to use any of the integrations or components it provides.
  • Integration-specific packages such as langchain_openai and langchain_google: These enable independent imports of popular third-party integrations without full dependency on the langchain_community package.
    • Depend on an integration-specific package if you want to use the specific integration.

✨ Enhanced APIs and New .batch API:

The LanguageModelResult class structure (including its child classes LLMResult and ChatResult) has been simplified, with each LanguageModelResult now storing a single output directly.

To generate multiple outputs, use the new .batch API (instead of .invoke or .stream), which batches the invocation of a Runnable on a list of inputs. If the underlying provider supports batching, this method will attempt to batch the calls to the provider. Otherwise, it will concurrently call invoke on each input (you can configure the concurrencyLimit).
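A minimal sketch of the new API:

```dart
// Invoke the model on several inputs in one call.
final model = ChatOpenAI(apiKey: openAiApiKey);
final results = await model.batch([
  PromptValue.string('Tell me a joke'),
  PromptValue.string('Tell me another joke'),
]);
// results contains one ChatResult per input, in the same order.
```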

Output parsers have been revamped to provide greater flexibility and compatibility with any type of Runnable. StringOutputParser now supports reducing the output of a stream to a single value, which is useful when the next step in the chain expects a single input value instead of a stream.
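For instance, the stream-reducing behaviour might be enabled like this (reduceOutputStream is the assumed option name, and chatModel stands in for any chat model):

```dart
// Collapse the streamed chunks into a single final string.
final parser = StringOutputParser<ChatResult>(reduceOutputStream: true);
final chain = chatModel.pipe(parser);
// When streaming, the chain now emits one concatenated string
// once the model is done, instead of one chunk per token.
```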

The deprecated generate and predict APIs have been removed, favouring the LCEL APIs (invoke, stream, and batch).

The internal implementation of the stream API has been optimized, providing clearer error messages in case of issues.

🆕 Google AI streaming and embeddings:

ChatGoogleGenerativeAI (used for interacting with Gemini models) now supports streaming and tuned models.

Support for Google AI embedding models has been added through the GoogleGenerativeAIEmbeddings class, compatible with the latest text-embedding-004 embedding model. Specifying the number of output dimensions is also supported.
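A sketch of the new embeddings class (parameter names as assumed here):

```dart
// Embed a query with Google AI, optionally shortening the output vectors.
final embeddings = GoogleGenerativeAIEmbeddings(
  apiKey: googleApiKey,
  model: 'text-embedding-004',
  dimensions: 256,
);
final vector = await embeddings.embedQuery('Hello world');
```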

🚚 Migration guide:

We have compiled a migration guide to assist you in updating your code to the new version. You can find it here. For any questions or assistance, please reply to this discussion or reach out to us on Discord.

Changes


Packages with breaking changes:

Packages with other changes:


langchain - v0.5.0

  • BREAKING REFACTOR: Introduce langchain_core and langchain_community packages (#328). (5fa520e6)
  • BREAKING REFACTOR: Simplify LLMResult and ChatResult classes (#363). (ffe539c1)
  • BREAKING REFACTOR: Simplify Output Parsers (#367). (f24b7058)
  • BREAKING REFACTOR: Remove deprecated generate and predict APIs (#335). (c55fe50f)
  • REFACTOR: Simplify internal .stream implementation (#364). (c83fed22)
  • FEAT: Implement .batch support (#370). (d254f929)
  • FEAT: Add reduceOutputStream option to StringOutputParser (#368). (7f9a9fae)
  • DOCS: Update LCEL docs. (ab3ab573)
  • DOCS: Add RAG example using OllamaEmbeddings and ChatOllama (#337). (8bddc6c0)

langchain_community - v0.1.0

  • BREAKING REFACTOR: Introduce langchain_core and langchain_community packages (#328). (5fa520e6)

langchain_core - v0.1.0

  • BREAKING REFACTOR: Introduce langchain_core and langchain_community packages (#328). (5fa520e6)
  • BREAKING REFACTOR: Simplify LLMResult and ChatResult classes (#363). (ffe539c1)
  • BREAKING REFACTOR: Simplify Output Parsers (#367). (f24b7058)
  • REFACTOR: Simplify internal .stream implementation (#364). (c83fed22)
  • FEAT: Implement .batch support (#370). (d254f929)
  • FEAT: Add reduceOutputStream option to StringOutputParser (#368). (7f9a9fae)

langchain_chroma - v0.2.0

  • BREAKING REFACTOR: Introduce langchain_core and langchain_community packages (#328). (5fa520e6)

langchain_google - v0.3.0

  • BREAKING REFACTOR: Introduce langchain_core and langchain_community packages (#328). (5fa520e6)
  • BREAKING REFACTOR: Simplify LLMResult and ChatResult classes (#363). (ffe539c1)
  • BREAKING REFACTOR: Simplify Output Parsers (#367). (f24b7058)
  • BREAKING REFACTOR: Remove deprecated generate and predict APIs (#335). (c55fe50f)
  • REFACTOR: Simplify internal .stream impleme...

v0.4.2

15 Feb 21:51
bdc6a76

2024-02-15

What's new?

This release contains some minor improvements:

  • Ollama keep_alive: users can now control how long models remain loaded in memory when using Ollama (which, by the way, just added Windows support).
  • Streaming support for googleai_dart: the client has been upgraded to support streaming functionality. However, with the release of Google's official google_generative_ai client, we are evaluating the potential deprecation of googleai_dart in favour of the official client once they reach feature parity.
  • Custom instance configuration for OpenAI.

In addition to these updates, we've also started the work to split langchain package into 3 packages. This refactoring aims to enhance modularity and facilitate community contributions.

  • langchain_core: will contain the core abstractions (i.e. language models, document loaders, embedding models, vector stores, retrievers, etc.), as well as LangChain Expression Language as a way to compose these components together. The community can depend on this package to build frameworks on top of LangChain.dart or to interoperate with it.
  • langchain_community: will contain third-party integrations that don't have a dedicated package.
  • langchain: will depend and expose langchain_core and contain higher-level and use-case specific chains, agents, and retrieval algorithms that are at the core of the application's cognitive architecture.

Changes


Packages with breaking changes:

  • There are no breaking changes in this release.

Packages with other changes:


googleai_dart - v0.0.3

  • FEAT: Add streaming support to googleai_dart client (#299). (2cbd538a)
  • FEAT: Update meta and test dependencies (#331). (912370ee)
  • DOCS: Update pubspecs. (d23ed89a)

openai_dart - v0.1.7

langchain_openai - v0.4.1

ollama_dart - v0.0.3

  • FEAT: Add Ollama keep_alive param to control how long models stay loaded (#319). (3b86e227)
  • FEAT: Update meta and test dependencies (#331). (912370ee)
  • DOCS: Update pubspecs. (d23ed89a)

langchain_ollama - v0.0.4

  • FEAT: Add Ollama keep_alive param to control how long models stay loaded (#319). (3b86e227)
  • FEAT: Update meta and test dependencies (#331). (912370ee)
  • DOCS: Update pubspecs. (d23ed89a)

chromadb - v0.1.2

  • FEAT: Update meta and test dependencies (#331). (912370ee)

langchain - v0.4.2

  • FEAT: Update meta and test dependencies (#331). (912370ee)

langchain_chroma - v0.1.1

langchain_google - v0.2.4

langchain_mistralai - v0.0.3

langchain_pinecone - v0.0.7

langchain_supabase - v0.0.1+1

mistralai_dart - v0.0.3

vertex_ai - v0.0.10

Contributors


📣 Check out the #announcements channel in the LangChain.dart Discord server for more details.

v0.4.1

31 Jan 21:26
4100023

2024-01-31

What's new?


🆕 Supabase Vector integration

Now you can use Supabase Vector to store, query, and index vector embeddings in your Supabase Postgres database.

final vectorStore = Supabase(
  embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
  supabaseUrl: 'https://xyzcompany.supabase.co',
  supabaseKey: 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...',
);
final res = await vectorStore.similaritySearch(
  query: 'Where is the cat?',
  config: SupabaseSimilaritySearch(
    k: 5,
    filter: {
      'category': {r'$ne': 'person'},
    },
  ),
);

Check out the docs for more info.

Changes


Packages with breaking changes:

  • There are no breaking changes in this release.

Packages with other changes:


langchain - v0.4.1

langchain_supabase - v0.0.1

  • FEAT: Add support for Supabase VectorStore (#69). (be9e72bc)

New Contributors


📣 Check out the #announcements channel in the LangChain.dart Discord server for more details.

v0.4.0

26 Jan 21:58
668c9de

2024-01-26

What's new?

🆕 New OpenAI embedding models

You can now use the new generation of OpenAI embedding models:

  • text-embedding-3-small: smaller and highly efficient. It provides a significant upgrade over its predecessor (text-embedding-ada-002) and is 5x cheaper.
  • text-embedding-3-large: larger and more powerful. It creates embeddings with up to 3072 dimensions.

Breaking change: text-embedding-3-small is now the default model of the OpenAIEmbeddings wrapper.
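Because of this change, constructing OpenAIEmbeddings without specifying a model now uses text-embedding-3-small. A minimal sketch (openaiApiKey is assumed to hold your API key):

```dart
// With no model specified, OpenAIEmbeddings now defaults to
// text-embedding-3-small (previously text-embedding-ada-002).
final embeddings = OpenAIEmbeddings(
  apiKey: openaiApiKey,
);
final vector = await embeddings.embedQuery('Where is the cat?');
```

If you relied on the old default, pin `model: 'text-embedding-ada-002'` explicitly to keep the previous behavior.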

🆕 Support for shortening OpenAI embeddings

Using larger embeddings (for example, storing them in a vector store for retrieval) generally costs more and consumes more compute, memory, and storage than using smaller embeddings. The new embedding models support shortening embeddings (i.e. removing some numbers from the end of the sequence) without the embedding losing its concept-representing properties.

For example, on the MTEB benchmark, a text-embedding-3-large embedding can be shortened to a size of 256 while still outperforming an unshortened text-embedding-ada-002 embedding with a size of 1536.

For example:

final embeddings = OpenAIEmbeddings(
  apiKey: openaiApiKey,
  model: 'text-embedding-3-large',
  dimensions: 256,
);

Changes


Packages with breaking changes:

Packages with other changes:

Packages with dependency updates only:

Packages listed below depend on other packages in this workspace that have had changes. Their versions have been incremented to bump the minimum dependency versions of the packages they depend upon in this project.

  • langchain_ollama - v0.0.3+2
  • langchain_mistralai - v0.0.2+2
  • langchain_pinecone - v0.0.6+13
  • langchain_chroma - v0.1.0+14
  • langchain_google - v0.2.3+2

langchain - v0.4.0

langchain_openai - v0.4.0

  • BREAKING FEAT: Update OpenAIEmbeddings' default model to text-embedding-3-small (#313). (43463481)
  • FEAT: Add support for shortening embeddings in OpenAIEmbeddings (#312). (5f5eb54f)

openai_dart - v0.1.6

  • FEAT: Add gpt-4-0125-preview and gpt-4-turbo-preview in model catalog (#309). (f5a78867)
  • FEAT: Add text-embedding-3-small and text-embedding-3-large in model catalog (#310). (fda16024)
  • FEAT: Add support for shortening embeddings (#311). (c725db0b)

📣 Check out the #announcements channel in the LangChain.dart Discord server for more details.

v0.3.3

20 Jan 11:44
95919ce

2024-01-20

What's new?


🆕 Together AI support

Together AI offers a unified OpenAI-compatible API for a broad range of models running serverless or on your own dedicated instances. It also allows you to fine-tune models on your data or train new models from scratch.

You can now consume Chat and Embeddings models from Together AI using the ChatOpenAI and OpenAIEmbeddings wrappers.

For example:

final chatModel = ChatOpenAI(
  apiKey: togetherAiApiKey,
  baseUrl: 'https://api.together.xyz/v1',
  defaultOptions: const ChatOpenAIOptions(
    model: 'NousResearch/Nous-Hermes-2-Yi-34B',
  ),
);

🆕 Anyscale support

Like Together AI, Anyscale offers an OpenAI-compatible API for a wide range of chat and embedding models.

For example:

final chatModel = ChatOpenAI(
  apiKey: anyscaleApiKey,
  baseUrl: 'https://api.endpoints.anyscale.com/v1',
  defaultOptions: const ChatOpenAIOptions(
    model: 'meta-llama/Llama-2-70b-chat-hf',
  ),
);

Other fixes and improvements

  • The Mistral client is now aligned with the latest spec of their API
  • The OpenAI client for the Assistant API now returns the usage data for Run and RunStep
  • VertexAI / ChatVertexAI wrappers now count tokens using the countTokens API instead of tiktoken
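With the last change, ChatVertexAI reports token counts via Vertex AI's countTokens endpoint rather than a local tiktoken approximation. A hedged sketch of calling it (the authHttpClient and project constructor parameter names are assumptions for this version; check the docs):

```dart
// Sketch only: constructor parameter names may differ in your version.
final chatModel = ChatVertexAI(
  httpClient: authHttpClient,
  project: 'your-project-id',
);
// Counts tokens server-side via the countTokens API
// instead of approximating locally with tiktoken.
final tokens = await chatModel.countTokens(
  PromptValue.string('Tell me a joke'),
);
```

Server-side counting keeps the numbers consistent with what the Vertex AI API actually bills, at the cost of an extra network call.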

Changes


Packages with breaking changes:

  • There are no breaking changes in this release.

Packages with other changes:

Packages with dependency updates only:

Packages listed below depend on other packages in this workspace that have had changes. Their versions have been incremented to bump the minimum dependency versions of the packages they depend upon in this project.

  • langchain_pinecone - v0.0.6+12
  • langchain_ollama - v0.0.3+1
  • langchain_chroma - v0.1.0+13

langchain - v0.3.3

  • DOCS: Add Anyscale and Together AI documentation (#305). (7daa3eb0)

langchain_openai - v0.3.3

  • FEAT: Support Anyscale in ChatOpenAI and OpenAIEmbeddings wrappers (#305). (7daa3eb0)
  • FEAT: Support Together AI in ChatOpenAI wrapper (#297). (28ab56af)
  • FEAT: Support Together AI in OpenAIEmbeddings wrapper (#304). (ddc761d6)

langchain_google - v0.2.3+1

  • REFACTOR: Remove tiktoken in favour of countTokens API on VertexAI (#307). (8158572b)

langchain_mistralai - v0.0.2+1

  • REFACTOR: Update safe_mode and max temperature in Mistral chat (#300). (1a4ccd1e)

openai_dart - v0.1.5

  • FEAT: Support Anyscale API in openai_dart client (#303). (e0a3651c)
  • FEAT: Support Together AI API (#296). (ca6f23d5)
  • FEAT: Support Together AI Embeddings API in openai_dart client (#301). (4a6e1045)
  • FEAT: Add usage to Run/RunStep in openai_dart client (#302). (cc6538b5)

vertex_ai - v0.0.9

  • FEAT: Add count tokens method to vertex_ai client (#306). (54ae317d)

mistralai_dart - v0.0.2+2

  • REFACTOR: Update safe_mode and max temperature in Mistral chat (#300). (1a4ccd1e)

New Contributors


📣 Check out the #announcements channel in the LangChain.dart Discord server for more details.