refactor: switch langchain imports to core #805

Open · wants to merge 2 commits into main
2 changes: 1 addition & 1 deletion langfuse/Sampler.py
@@ -37,7 +37,7 @@ def sample_event(self, event: dict):
return True

def deterministic_sample(self, trace_id: str, sample_rate: float):
"""determins if an event should be sampled based on the trace_id and sample_rate. Event will be sent to server if True"""
"""Determines if an event should be sampled based on the trace_id and sample_rate. Event will be sent to server if True"""
log.debug(
f"Applying deterministic sampling to trace_id: {trace_id} with rate {sample_rate}"
)
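The `deterministic_sample` docstring above says the decision depends only on the `trace_id` and `sample_rate`, so the same trace always gets the same verdict across SDK instances. The method body is not shown in this hunk; a minimal sketch of the usual technique, assuming SHA-256 hashing and hex bucketing (both are assumptions for illustration, not taken from this diff):

```python
import hashlib


def deterministic_sample(trace_id: str, sample_rate: float) -> bool:
    """Return True if the event should be sent, based only on trace_id."""
    # Hash the trace_id so the same trace always yields the same decision
    digest = hashlib.sha256(trace_id.encode("utf-8")).hexdigest()
    # Map the hash onto [0, 1) and compare against the sampling rate
    bucket = int(digest, 16) / 16 ** len(digest)
    return bucket < sample_rate
```

Because the bucket is derived from the id rather than from a random draw, retries and parallel workers agree on whether a given trace is sampled.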
13 changes: 7 additions & 6 deletions langfuse/callback/langchain.py
@@ -5,8 +5,8 @@

import pydantic

try: # Test that langchain is installed before proceeding
import langchain # noqa
try: # Test that langchain core is installed before proceeding
import langchain_core # noqa
except ImportError as e:
log = logging.getLogger("langfuse")
log.error(
@@ -24,11 +24,11 @@
from langfuse.utils.base_callback_handler import LangfuseBaseCallbackHandler

try:
from langchain.callbacks.base import (
from langchain_core.callbacks.base import (
BaseCallbackHandler as LangchainBaseCallbackHandler,
)
from langchain.schema.agent import AgentAction, AgentFinish
from langchain.schema.document import Document
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.documents import Document
from langchain_core.outputs import (
ChatGeneration,
LLMResult,
@@ -44,7 +44,8 @@
)
except ImportError:
raise ModuleNotFoundError(
"Please install langchain to use the Langfuse langchain integration: 'pip install langchain'"
"Please install 'langchain core' to use the Langfuse langchain integration:"
" 'pip install langchain-core'"
)


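The guarded import above probes for `langchain_core` before any integration code runs, so users get an actionable install hint instead of a bare `ImportError` deep in a callback. The same pattern can be factored into a small helper; this sketch uses `importlib.util.find_spec`, which checks importability without actually importing the package (the helper name is illustrative, not part of the diff):

```python
import importlib.util


def require_package(name: str, install_hint: str) -> None:
    """Raise a helpful error when an optional dependency is missing."""
    # find_spec returns None if the top-level package cannot be imported
    if importlib.util.find_spec(name) is None:
        raise ModuleNotFoundError(
            f"Please install '{name}' to use this integration: '{install_hint}'"
        )
```

For example, `require_package("langchain_core", "pip install langchain-core")` at module import time fails fast with the same message style the diff introduces.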
2 changes: 1 addition & 1 deletion langfuse/serializer.py
@@ -15,7 +15,7 @@

# Attempt to import Serializable
try:
from langchain.load.serializable import Serializable
from langchain_core.load.serializable import Serializable
except ImportError:
# If Serializable is not available, set it to NoneType
Serializable = type(None)
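The fallback in this hunk binds `Serializable` to `type(None)` when `langchain_core` is absent, so later `isinstance` checks degrade gracefully instead of raising a `NameError`. A self-contained sketch of the pattern (the `is_serializable` helper is illustrative, not part of the diff):

```python
try:
    from langchain_core.load.serializable import Serializable
except ImportError:
    # Without langchain-core installed, match nothing except None
    Serializable = type(None)


def is_serializable(obj) -> bool:
    # Safe to call whether or not langchain-core is installed
    return isinstance(obj, Serializable)
```

The choice of `type(None)` (rather than `object`) matters: it keeps the check narrow, so ordinary values are never mistaken for langchain serializables.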
3,078 changes: 1,613 additions & 1,465 deletions poetry.lock

Large diffs are not rendered by default.

10 changes: 6 additions & 4 deletions pyproject.toml
@@ -13,7 +13,7 @@ pydantic = ">=1.10.7, <3.0"
backoff = ">=1.10.0"
openai = { version = ">=0.27.8", optional = true }
wrapt = "^1.14"
langchain = { version = ">=0.0.309", optional = true }
langchain-core = { version = ">=0.2.0", optional = true }
Member:
Thanks a lot for the contribution!
Do you know how backwards compatible this is? We might have users who explicitly installed langchain version 0.0.310. I assume that the upgrade would not work for them, correct?

Member:

If yes, is there a way to make this backwards compatible?

Contributor Author:

  • Users should install langfuse with the langchain extra (which ensures the langchain-core package is installed in their env). If they also have the langchain package v0.0.310 installed explicitly, or for some other reason, that's fine; the two do not clash.

  • However, if users do not install langfuse with the langchain extra and do not have langchain-core in the env for some other reason, the upgrade will not work for them. I would say that is expected and correct though (if you don't specify you want to use langfuse with langchain, then langfuse<>langchain might not be compatible).


Bump on this? It's a minor inconvenience, but I'd love to see it merged.
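The contributor's point about extras can be made concrete. Under the changed `pyproject.toml`, the `langchain` extra pulls in `langchain-core`, and a separately pinned monolithic `langchain` install can coexist with it (the version pins below are illustrative):

```shell
# Install langfuse together with the langchain extra so that
# langchain-core is resolved automatically
pip install "langfuse[langchain]"

# A standalone langchain install does not clash with langchain-core;
# users who pinned it explicitly can keep it
pip install "langchain>=0.2"
```

Only users who skip the extra and have no other source of `langchain-core` in their environment would see the integration fail, which is the expected behavior the author describes.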

llama-index = {version = ">=0.10.12, <2.0.0", optional = true}
packaging = "^24.1"
idna = "^3.7"
@@ -48,14 +48,16 @@ bson = "^0.5.10"
langchain-anthropic = "^0.1.4"
langchain-groq = "^0.1.3"
langchain-aws = "^0.1.3"

langchain = "^0.2.9"
langchain-cohere = "^0.1.9"
langchain-community = "^0.2.14"

[tool.poetry.group.docs.dependencies]
pdoc = "^14.4.0"

[tool.poetry.extras]
openai = ["openai"]
langchain = ["langchain"]
langchain = ["langchain-core"]
llama-index = ["llama-index"]

[build-system]
@@ -68,4 +70,4 @@ log_cli = true
[tool.poetry_bumpversion.file."langfuse/version.py"]

[tool.poetry.scripts]
release = "scripts.release:main"
release = "scripts.release:main"
2 changes: 1 addition & 1 deletion tests/test_decorators.py
@@ -4,7 +4,7 @@
from concurrent.futures import ThreadPoolExecutor
import pytest

from langchain_community.chat_models import ChatOpenAI
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langfuse.openai import AsyncOpenAI
from langfuse.decorators import langfuse_context, observe
4 changes: 2 additions & 2 deletions tests/test_langchain.py
@@ -1370,7 +1370,7 @@ def get_word_length(word: str) -> int:
# },
# "required": [
# "Main character",
# "Cummary",
# "Summary",
# ],
# }
# chain = create_extraction_chain(schema, llm)
@@ -1401,7 +1401,7 @@ def test_aws_bedrock_chain():
import os

import boto3
from langchain.llms.bedrock import Bedrock
from langchain_aws import Bedrock

api_wrapper = LangfuseAPI()
handler = CallbackHandler(debug=False)
6 changes: 3 additions & 3 deletions tests/test_langchain_integration.py
@@ -1,14 +1,14 @@
from langchain_openai import ChatOpenAI, OpenAI
from langchain.prompts import ChatPromptTemplate, PromptTemplate
from langchain.schema import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, PromptTemplate
from langchain_core.output_parsers import StrOutputParser
import pytest
import types
from langfuse.callback import CallbackHandler
from tests.utils import get_api
from .utils import create_uuid


# to avoid the instanciation of langfuse in side langfuse.openai.
# to avoid the instantiation of langfuse in side langfuse.openai.
def _is_streaming_response(response):
return isinstance(response, types.GeneratorType) or isinstance(
response, types.AsyncGeneratorType