-
LangChain is a framework for developing applications powered by language models. (1) Be data-aware: connect a language model to other sources of data. (2) Be agentic: allow a language model to interact with its environment. doc:ref / blog:ref / git
-
It highlights two main value props of the framework:
- Components: modular abstractions and implementations for working with language models, with easy-to-use features.
- Use-Case Specific Chains: chains of components that assemble in different ways to achieve specific use cases, with customizable interfaces. cite: ref
-
LangChain 0.2: full separation of langchain and langchain-community. ref [May 2024]
-
Towards LangChain 0.1 ref [Dec 2023]
-
Basic LangChain building blocks ref [2023]
''' LLMChain: An LLMChain is the most common type of chain. It consists of a PromptTemplate, a model (either an LLM or a ChatModel), and an optional output parser. ''' `chain = prompt | model | parser`
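The `prompt | model | parser` composition above can be sketched in plain Python. The `Step` class below is a hypothetical stand-in for LangChain's Runnable classes, included only to illustrate how `|` chains the three components; it is not the real API.

```python
# Toy sketch of `chain = prompt | model | parser`. Each Step wraps a function
# and `|` composes two steps into a new one (hypothetical names, not LangChain).

class Step:
    """Wraps a function so instances can be chained with `|`."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Feed this step's output into the next step.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Step(lambda topic: f"Tell me a joke about {topic}")  # PromptTemplate stand-in
model = Step(lambda p: f"LLM OUTPUT: {p}")                    # LLM/ChatModel stand-in
parser = Step(lambda out: out.removeprefix("LLM OUTPUT: "))   # output parser stand-in

chain = prompt | model | parser
print(chain.invoke("bears"))  # -> "Tell me a joke about bears"
```

In real LangChain code the same shape applies, with `PromptTemplate`, a chat model, and an output parser in place of these stand-ins.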
- Macro-orchestration in LLM pipelines involves high-level design and management of complex workflows, integrating multiple LLMs and other components.
- Micro-orchestration in LLM pipelines covers the fine-grained handling of a single model interaction, such as prompt formatting and output parsing. x-ref
- LangGraph in LangChain, and Burr
- Feature Matrix: LangChain Features
- Awesome LangChain: Curated list of tools and projects using LangChain.
- Cheatsheet: LangChain CheatSheet
- LangChain Cheatsheet (KDnuggets): doc [Aug 2023]
- LangChain AI Handbook: published by Pinecone
- LangChain Tutorial: A Complete LangChain Guide
- RAG From Scratch💡[Feb 2024]
- DeepLearning.AI short course: LangChain for LLM Application Development ref / LangChain: Chat with Your Data ref
- LangChain Streamlit agent examples: Implementations of several LangChain agents as Streamlit apps. [Jun 2023]
- LangChain/cache: Reducing the number of API calls
- LangChain/context-aware-splitting: Splits a file into chunks while keeping metadata
- LangChain Expression Language: A declarative way to easily compose chains together [Aug 2023]
- LangSmith: A platform for debugging, testing, and evaluating LLM applications. [Jul 2023]
- LangChain Templates: LangChain reference architectures and samples, e.g., RAG Conversation Template [Oct 2023]
- OpenGPTs: An open-source effort to create a similar experience to OpenAI's GPTs [Nov 2023]
- LangGraph:💡Build and navigate language agents as graphs ref [Aug 2023]
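The graph idea behind LangGraph can be illustrated without the library: nodes are functions that update a shared state, and edges decide which node runs next. The sketch below is a hand-rolled illustration under that assumption, not the `langgraph` API.

```python
# Minimal graph executor: run nodes following static edges until END.
END = "__end__"

def run_graph(nodes, edges, entry, state):
    """Execute node functions in edge order, threading a shared state dict."""
    current = entry
    while current != END:
        state = nodes[current](state)
        current = edges[current]
    return state

def draft(state):
    state["text"] = f"draft about {state['topic']}"
    return state

def review(state):
    state["text"] += " (reviewed)"
    return state

nodes = {"draft": draft, "review": review}
edges = {"draft": "review", "review": END}

result = run_graph(nodes, edges, "draft", {"topic": "agents"})
print(result["text"])  # -> "draft about agents (reviewed)"
```

LangGraph adds conditional edges, cycles, and checkpointing on top of this basic node-and-edge execution model.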
- Chains ref
- SimpleSequentialChain: A sequence of steps with single input and output. Output of one step is input for the next.
- SequentialChain: Like SimpleSequentialChain but handles multiple inputs and outputs at each step.
- MultiPromptChain: Routes inputs to specialized sub-chains based on content. Ideal for different prompts for different tasks.
- Summarizer
- stuff: Sends everything to the LLM in a single call. If the input exceeds the context window, an error occurs.
- map_reduce: Summarizes each chunk separately, then summarizes the combined summaries.
- refine: (Summary + Next document) => Summary
- map_rerank: Scores the result for each document and keeps the highest-ranked output.
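The strategies above can be contrasted with a toy sketch. Here `summarize` is a stand-in for an LLM summarization call (it just keeps the first few words), so only the control flow of stuff, map_reduce, and refine is illustrative.

```python
# Stand-in for an LLM summarization call: keep the first `limit` words.
def summarize(text, limit=5):
    return " ".join(text.split()[:limit])

docs = ["alpha beta gamma delta epsilon zeta", "one two three four five six"]

# stuff: concatenate everything and summarize once (fails if too long for the model).
stuff_summary = summarize(" ".join(docs))

# map_reduce: summarize each doc, then summarize the combined summaries.
mapped = [summarize(d, limit=3) for d in docs]
map_reduce_summary = summarize(" ".join(mapped))

# refine: fold each new document into the running summary.
refine_summary = ""
for d in docs:
    refine_summary = summarize((refine_summary + " " + d).strip())

# map_rerank (not shown) would instead score each per-document answer
# and return only the highest-ranked one.
print(stuff_summary, "|", map_reduce_summary, "|", refine_summary)
```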
- If you're using a text LLM, first try zero-shot-react-description.
- If you're using a Chat Model, try chat-zero-shot-react-description.
- If you're using a Chat Model and want to use memory, try conversational-react-description.
- self-ask-with-search: Measuring and Narrowing the Compositionality Gap in Language Models [7 Oct 2022]
- react-docstore: ReAct: Synergizing Reasoning and Acting in Language Models [6 Oct 2022]
- Agent Type
class AgentType(str, Enum):
"""Enumerator with the Agent types."""
ZERO_SHOT_REACT_DESCRIPTION = "zero-shot-react-description"
REACT_DOCSTORE = "react-docstore"
SELF_ASK_WITH_SEARCH = "self-ask-with-search"
CONVERSATIONAL_REACT_DESCRIPTION = "conversational-react-description"
CHAT_ZERO_SHOT_REACT_DESCRIPTION = "chat-zero-shot-react-description"
CHAT_CONVERSATIONAL_REACT_DESCRIPTION = "chat-conversational-react-description"
STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION = (
"structured-chat-zero-shot-react-description"
)
OPENAI_FUNCTIONS = "openai-functions"
OPENAI_MULTI_FUNCTIONS = "openai-multi-functions"
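The selection guidance above (text LLM vs. Chat Model, with or without memory) can be encoded as a small helper. This is a sketch of the decision rule, not part of LangChain.

```python
# Pick an agent type string from the model kind and memory requirement,
# following the guidance in the text above (hypothetical helper).
def pick_agent_type(chat_model: bool, needs_memory: bool) -> str:
    if chat_model and needs_memory:
        return "conversational-react-description"
    if chat_model:
        return "chat-zero-shot-react-description"
    return "zero-shot-react-description"

print(pick_agent_type(chat_model=True, needs_memory=True))
# -> "conversational-react-description"
```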
-
ReAct is inspired by the synergies between "acting" and "reasoning" which allow humans to learn new tasks and make decisions or reasoning.
MRKL stands for Modular Reasoning, Knowledge and Language and is a neuro-symbolic architecture that combines large language models, external knowledge sources, and discrete reasoning
cite: ref [28 Apr 2023]
- zero-shot-react-description: Uses ReAct to select tools based on their descriptions. Any number of tools can be used, each requiring a description.
- react-docstore: Uses ReAct to manage a docstore with two required tools: Search and Lookup. These tools must be named exactly as specified. It follows the original ReAct paper's Wikipedia example.
- MRKL in LangChain uses zero-shot-react-description, implementing ReAct. The original ReAct framework is used in the react-docstore agent. MRKL was published on May 1, 2022, earlier than ReAct on October 6, 2022.
- Memory types
  - ConversationBufferMemory: Stores the entire conversation history.
  - ConversationBufferWindowMemory: Stores only the most recent messages of the conversation history.
  - Entity Memory: Stores and retrieves entity-related information.
  - Conversation Knowledge Graph Memory: Stores entities and the relationships between them.
  - ConversationSummaryMemory: Stores summarized information about the conversation.
  - ConversationSummaryBufferMemory: Stores summarized information about the conversation, with a token limit.
  - ConversationTokenBufferMemory: Stores the most recent tokens of the conversation, up to a token limit.
  - VectorStore-Backed Memory: Leverages vector stores for storing and retrieving information.
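The buffer-style memories above differ mainly in what they keep. The sketch below mimics a windowed buffer (ConversationBufferWindowMemory-like) with a fixed-size deque; it is a toy illustration, not the LangChain classes.

```python
from collections import deque

class WindowMemory:
    """Keeps only the last k (user, ai) exchanges; older ones are dropped."""
    def __init__(self, k: int):
        self.messages = deque(maxlen=k)

    def save(self, user: str, ai: str):
        self.messages.append((user, ai))

    def load(self):
        return list(self.messages)

memory = WindowMemory(k=2)
memory.save("hi", "hello")
memory.save("how are you?", "fine")
memory.save("bye", "goodbye")
print(memory.load())  # only the last 2 exchanges remain
```

A full buffer memory would simply use an unbounded list, while summary and token-buffer variants replace or trim the stored history instead of dropping whole exchanges.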
- The Problem With LangChain: ref / git [14 Jul 2023]
- What’s your biggest complaint about langchain?: ref [May 2023]
- LangChain Is Pointless: ref [Jul 2023]
LangChain has been criticized for making simple things relatively complex, which creates unnecessary complexity and tribalism that hurts the up-and-coming AI ecosystem as a whole. The documentation is also criticized for being bad and unhelpful.
- How to Build Ridiculously Complex LLM Pipelines with LangGraph! [17 Sep 2024]
LangChain does too much, and as a consequence, it does many things badly. Scaling beyond the basic use cases with LangChain is a challenge that is often better served with building things from scratch by using the underlying APIs.
- LangChain [Oct 2022] | LlamaIndex [Nov 2022] | Microsoft Semantic Kernel [Feb 2023] | Microsoft guidance [Nov 2022] | Azure ML Prompt flow [Jun 2023] | DSPy [Jan 2023]
- Prompting Framework (PF): Prompting Frameworks for Large Language Models: A Survey git
- What Are Tools Anyway?: 1. For a small number of tools (e.g., 5–10), LMs can select directly from context. With a larger number (e.g., hundreds), an additional retrieval step using a retriever model is often necessary. 2. LM-used tools include tool creation and reuse. Tools are not useful for tasks such as machine translation, summarization, and sentiment analysis. 3. Evaluation metrics [18 Mar 2024]
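The retrieval step mentioned above can be sketched simply: with many tools, first retrieve the few whose descriptions best match the query, then let the LM choose among only those. Here relevance is naive word overlap, standing in for an embedding-based retriever; the tool names are made up for illustration.

```python
# Retrieve the top-n tools whose descriptions share the most words with the query.
def retrieve_tools(query: str, tools: dict, top_n: int = 2):
    q = set(query.lower().split())
    scored = sorted(
        tools.items(),
        key=lambda kv: len(q & set(kv[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in scored[:top_n]]

tools = {
    "calculator": "evaluate math expressions and numbers",
    "web_search": "search the web for current information",
    "weather": "get the weather forecast for a city",
}
print(retrieve_tools("what is the weather forecast in Paris", tools))
```

In practice the scoring would use embedding similarity between the query and tool descriptions, but the two-stage structure (retrieve, then select) is the same.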
-
Basically LlamaIndex is a smart storage mechanism, while LangChain is a tool to bring multiple tools together. cite [14 Apr 2023]
-
LangChain offers many features and focuses on using chains and agents to connect with external APIs. In contrast, LlamaIndex is more specialized and excels at indexing data and retrieving documents.
| LangChain | Semantic Kernel |
| --- | --- |
| Memory | Memory |
| Toolkit | Plugin (previously Skill) |
| Tool | LLM prompts (semantic functions) or native C# / Python code (native functions) |
| Agent | Planner (deprecated) -> Agent |
| Chain | Steps, Pipeline |
| Tool | Connector (deprecated) -> Plugin |
-
What's the difference between LangChain and Semantic Kernel?
LangChain provides many agents, tools, plugins, etc. out of the box. Moreover, LangChain has roughly 10x the popularity, and therefore about 10x the developer activity improving it. On the other hand, Semantic Kernel's architecture and code quality are better, which is promising for Semantic Kernel. ref [11 May 2023]
-
What's the difference between Azure Machine Learning Prompt flow and Semantic Kernel?
- Low/no code vs. C#, Python, Java
- Focused on prompt orchestration vs. integrating LLMs into an existing app
-
Promptflow is not intended to replace chat conversation flow. Instead, it’s an optimized solution for integrating Search and Open Source Language Models. By default, it supports Python, LLM, and the Prompt tool as its fundamental building blocks.
-
Using Prompt flow with Semantic Kernel: ref [07 Sep 2023]
| | Handlebars.js | Jinja2 | Prompt Template |
| --- | --- | --- | --- |
| Conditions | {{#if user}} Hello {{user}}! {{else}} Hello Stranger! {{/if}} | {% if user %} Hello {{ user }}! {% else %} Hello Stranger! {% endif %} | Branching features such as "if", "for", and code blocks are not part of SK's template language. |
| Loop | {{#each items}} Hello {{this}} {{/each}} | {% for item in items %} Hello {{ item }} {% endfor %} | By using a simple language, the kernel can also avoid complex parsing and external dependencies. |
| LangChain library | guidance, LangChain.js | LangChain, Azure ML prompt flow | Semantic Kernel |
| URL | ref | ref | ref |
- Semantic Kernel supports HandleBars and Jinja2. [Mar 2024]