Context: This issue was separated from a previous bug report. @tconley1428 confirmed that the plugin approach is the only way to run agents in Temporal workflows, but the tracing/observability issue remains unresolved and I think it deserves a separate issue.
What are you really trying to do?
I'm building a multi-tenant application using Temporal workflows with OpenAI agents, and I need observability/tracing to work properly with Langfuse (through logfire, as explained here). I want to trace agent execution, tool calls, and model interactions in Langfuse via logfire.
Describe the bug
When using Temporal workflows with OpenAI agents and the required OpenAIAgentsPlugin, the Langfuse tracing instrumentation stops working: the tracing setup captures no agent execution traces in Langfuse.
Expected Behavior:
- Agent execution should be traced and visible in Langfuse.
Current Behavior:
- No traces appear in Langfuse when using the plugin approach
- Agent execution happens but is not observable through Langfuse
- Tracing works fine when running agents outside of Temporal workflows
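For reference, a minimal sketch of the working baseline outside Temporal (model name and prompt are placeholders; init_tracing() is the function from step 1 below):

import asyncio
from agents import Agent, Runner

async def baseline() -> None:
    init_tracing()  # same tracing setup as in step 1 below
    agent = Agent(
        name="Triage Agent",
        instructions="Your instructions here",
        model="gpt-4o-mini",
    )
    result = await Runner.run(agent, "Your message here")
    print(result.final_output)  # this run is visible in Langfuse

asyncio.run(baseline())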
Minimal Reproduction
1. Tracing Setup (that works outside workflows):
import logging
import os

import logfire
import nest_asyncio
from agents import set_trace_processors

logger = logging.getLogger(__name__)

def init_tracing():
    """Initialize tracing and observability."""
    # Set Langfuse env vars from settings (constants loaded from app config)
    os.environ.setdefault("LANGFUSE_PUBLIC_KEY", LANGFUSE_PUBLIC_KEY)
    os.environ.setdefault("LANGFUSE_SECRET_KEY", LANGFUSE_SECRET_KEY)
    os.environ.setdefault("LANGFUSE_HOST", LANGFUSE_HOST)
    set_trace_processors([])  # only disable the Agents SDK's built-in OpenAI trace export
    # Instrument the OpenAI Agents SDK via logfire
    try:
        nest_asyncio.apply()
        logfire.configure(service_name="temporal-demo", send_to_logfire=False)
        # This patches the OpenAI Agents SDK to send spans via OTLP to Langfuse.
        logfire.instrument_openai_agents()
    except Exception as exc:
        logger.error(f"Logfire instrumentation not available: {exc}")
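For completeness, a hedged sketch of the env wiring this setup relies on: with send_to_logfire=False, logfire exports via the standard OTel env vars, and Langfuse accepts OTLP at its /api/public/otel endpoint with Basic auth built from the key pair (per Langfuse's OTel docs; exact env-var handling may differ across SDK versions):

import base64
import os

# Point the OTLP exporter at Langfuse's OTel endpoint
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = f"{LANGFUSE_HOST}/api/public/otel"
# Basic auth: base64("public_key:secret_key"); some OTel SDK versions need
# the space percent-encoded as %20 inside this env var
token = base64.b64encode(
    f"{LANGFUSE_PUBLIC_KEY}:{LANGFUSE_SECRET_KEY}".encode()
).decode()
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"Authorization=Basic {token}"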
2. Worker Setup with Plugin (required for Temporal workflows):

import asyncio
import concurrent.futures
from datetime import timedelta

from temporalio.client import Client
from temporalio.contrib.openai_agents import ModelActivityParameters, OpenAIAgentsPlugin
from temporalio.worker import Worker

async def main():
    # Initialize tracing (this is what conflicts with the plugin)
    init_tracing()  # Sets up logfire → OTLP → Langfuse
    # Create a Temporal client with the plugin installed (required for agents)
    plugins = [
        OpenAIAgentsPlugin(
            model_params=ModelActivityParameters(
                start_to_close_timeout=timedelta(seconds=30)
            ),
            model_provider=CustomLitellmProvider(
                base_url=PROXY_BASE_URL,
                api_key=PROXY_API_KEY,
            ),
        ),
    ]
    client = await Client.connect("localhost:7233", plugins=plugins)
    # Run the worker
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as activity_executor:
        worker = Worker(
            client,
            task_queue="demo-task-queue",
            workflows=[MyWorkflow],
            activities=[simple_tool_activity],
            activity_executor=activity_executor,
        )
        await worker.run()

if __name__ == "__main__":
    asyncio.run(main())
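For completeness, a minimal sketch of how the workflow is started from a separate process (the workflow id is a placeholder; the plugins list is the same as in the worker):

from temporalio.client import Client

async def start_workflow():
    client = await Client.connect("localhost:7233", plugins=plugins)
    result = await client.execute_workflow(
        MyWorkflow.run,
        id="demo-workflow-1",
        task_queue="demo-task-queue",
    )
    print(result)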
3. Workflow with Agent:

from agents import Agent, Runner
from temporalio import workflow

@workflow.defn
class MyWorkflow:
    @workflow.run
    async def run(self) -> str:
        # Create the agent with a string model name (required with the plugin)
        agent = Agent(
            name="Triage Agent",
            instructions="Your instructions here",
            model="gpt-4o-mini",  # string model name required with plugins
            tools=tools,  # tools defined at module scope (omitted here)
        )
        # This executes but doesn't appear in Langfuse traces
        result = await Runner.run(agent, "Your message here")
        return result.final_output
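The tools list is elided above; a hypothetical stand-in just to exercise tool-call tracing (simple_tool is made up for illustration):

from agents import function_tool

@function_tool
def simple_tool(query: str) -> str:
    """Hypothetical tool used only to generate tool-call spans."""
    return f"echo: {query}"

tools = [simple_tool]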
4. Custom Model Provider (for proxy configuration):

from agents.extensions.models.litellm_model import LitellmModel
from agents.models.interface import Model, ModelProvider

class CustomLitellmProvider(ModelProvider):
    """Custom ModelProvider that uses LiteLLM with a configurable base_url and api_key."""

    def __init__(self, base_url: str | None = None, api_key: str | None = None):
        self.base_url = base_url
        self.api_key = api_key

    @property
    def model_class(self) -> type[Model]:
        return LitellmModel

    @property
    def provider_name(self) -> str:
        return "CustomLitellmProvider"

    def get_model(self, model_name: str) -> Model:
        return LitellmModel(
            model=model_name,
            base_url=self.base_url,
            api_key=self.api_key,
        )
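As a quick sanity check, the provider can be exercised on its own, outside Temporal (URL and key are placeholders):

provider = CustomLitellmProvider(
    base_url="http://localhost:4000",  # placeholder proxy URL
    api_key="sk-proxy-key",            # placeholder key
)
model = provider.get_model("gpt-4o-mini")  # LitellmModel bound to the proxy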
Environment/Versions
- OS and processor: Linux
- Temporal Version: temporalio==1.18.0
- OpenAI SDK: openai==1.109.0
- OpenAI Agents: openai-agents==0.3.2
- Python: 3.11
- Langfuse: Latest version
- logfire: Latest version
- Are you using Docker or Kubernetes or building Temporal from source? Using Docker
Thanks 🙏