feat(llmobs): add span processor #13426
base: main
Conversation
```python
traces = events[0]
assert len(traces) == 2
assert "scrub_values:1" in traces[0]["spans"][0]["tags"]
assert traces[0]["spans"][0]["meta"]["input"]["messages"][0]["content"] == "scrubbed"
```
gonna follow up to make fetching the spans / traces from events less implementation dependent
```python
env["DD_LLMOBS_AGENTLESS_ENABLED"] = "0"
env["DD_TRACE_ENABLED"] = "0"
env["DD_TRACE_AGENT_URL"] = llmobs_backend.url()
env["DD_TRACE_LOGGING_RATE"] = "0"
```
took me way too long to learn that

> 1 log record per name/level/pathname/lineno every 60 seconds by default

is the default logging configuration for the library 🙂
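That default ("one record per name/level/pathname/lineno every 60 seconds") can be sketched as a standard `logging.Filter`. This is an illustrative implementation of the behavior, not ddtrace's actual code; the class name and interval parameter are made up here:

```python
import logging
import time


class RateLimitFilter(logging.Filter):
    """Allow at most one record per (name, level, pathname, lineno) per interval."""

    def __init__(self, interval=60.0):
        super().__init__()
        self.interval = interval
        self._last_emitted = {}

    def filter(self, record):
        key = (record.name, record.levelno, record.pathname, record.lineno)
        now = time.monotonic()
        last = self._last_emitted.get(key)
        if last is not None and now - last < self.interval:
            return False  # drop: same call site already logged within the interval
        self._last_emitted[key] = now
        return True
```

Note that an interval of 0 (the effect of `DD_TRACE_LOGGING_RATE=0` above) disables the limit, since no record can arrive "less than 0 seconds" after the previous one.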
Bootstrap import analysis

Comparison of import times between this PR and base.

Summary

The average import time from this PR is 232 ± 3 ms. The average import time from base is 233 ± 2 ms. The import time difference between this PR and base is -0.9 ± 0.1 ms.

Import time breakdown

The following import paths have shrunk:
Benchmarks

Benchmark execution time: 2025-05-15 06:49:43

Comparing candidate commit 35b25f4 in PR branch. Found 0 performance improvements and 2 performance regressions! Performance is the same for 525 metrics, 9 unstable metrics.

scenario: iast_aspects-ospathjoin_aspect
scenario: iast_aspects-ospathnormcase_aspect
Add capability to add a span processor. The processor can be used to mutate or redact sensitive data contained in inputs and outputs from LLM calls.

```python
def my_processor(span):
    for message in span.output_messages:
        message["content"] = ""

LLMObs.enable(span_processor=my_processor)

LLMObs.add_processor(my_processor)
```
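A slightly fuller sketch of such a processor, assuming the span shape discussed in this PR (`input_messages`/`output_messages` as lists of dicts with a `content` key). The redaction rule and names below are illustrative, not part of the SDK:

```python
import re

# Hypothetical pattern; real redaction rules are application-specific.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def redact_processor(span):
    """Mask email addresses in every input/output message on the span."""
    for message in (span.input_messages or []) + (span.output_messages or []):
        message["content"] = EMAIL_RE.sub("<redacted>", message["content"])
    return span
```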
force-pushed from 35b25f4 to 230b1ca
the api, logic, and cases tested LGTM! nice job on telemetry as well 😎 just a couple questions, will approve after resolving them 😄

also - i think it should be `register_processor` instead of `add_processor` in the PR description code block, for clarity for folks who come to the PR looking at the changes
```python
if llmobs_span.input_messages is not None:
    meta["input"]["messages"] = llmobs_span.input_messages
if llmobs_span.output_messages is not None:
    meta["output"]["messages"] = llmobs_span.output_messages
```
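The guarded assignments above mean only fields the processor actually populated reach the serialized `meta` payload. A minimal self-contained sketch of that idea, with an illustrative function name:

```python
def build_meta(llmobs_span):
    """Assemble a meta dict, omitting message lists that were left unset."""
    meta = {"input": {}, "output": {}}
    if llmobs_span.input_messages is not None:
        meta["input"]["messages"] = llmobs_span.input_messages
    if llmobs_span.output_messages is not None:
        meta["output"]["messages"] = llmobs_span.output_messages
    return meta
```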
also just want to double check that we don't need to apply the functions on the value/document fields (or if we will but just not in scope for now).
it seems like the value field is implicitly deprecated (from what i can tell) @Yun-Kim can you confirm? for documents i think we can follow up to add them. I omitted them to keep the PR size and scope small.
yeah will let Yun chime in - we still set them here and here for non-llm spans (similarly for Node.js), although it could be the case that they are deprecated/not used in our pipelines outside of the SDKs (i would need to verify tho)

will let Yun weigh in, but will otherwise treat this as resolved regardless bc i think the main use case is LLM spans 👍
Yeah, the input.value/output.value fields aren't deprecated, but they're only used on non-llm-kind spans - although our backend/UI does fall back to the value fields for llm spans if messages aren't provided. Since some of our integrations generate I/O info for non-LLM span kinds (crewai, langchain, langgraph, openai agents), it might be worth adding span processing for the value fields as well. (None of our integrations generate documents fields, so no need to worry about those for now.)
> none of our integrations generate documents fields so no need to worry here about it for now
i think langchain retrieval operations are the exception for generating output documents, and we also capture input documents for embedding spans as the input to whatever operation we trace (eg `openai.embeddings.create("some text that we'll turn into a document")`) - but agree that if we do need/want to add it, it can be done in a follow up.
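If the value fields were later brought into scope as discussed above, a processor could handle both shapes. This is a hypothetical extension sketched for illustration, not part of this PR; the `scrub` rule and `input_value`/`output_value` attribute names are assumptions:

```python
def scrub(text):
    # Placeholder scrubbing rule; real logic is application-specific.
    return text.replace("secret", "***")


def process_io(span):
    """Apply scrubbing to messages, and to plain value fields when present."""
    for message in span.input_messages or []:
        message["content"] = scrub(message["content"])
    for message in span.output_messages or []:
        message["content"] = scrub(message["content"])
    # Hypothetical value fields used by non-llm span kinds:
    if getattr(span, "input_value", None) is not None:
        span.input_value = scrub(span.input_value)
    if getattr(span, "output_value", None) is not None:
        span.output_value = scrub(span.output_value)
    return span
```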
```
@@ -97,14 +102,32 @@
 }

 @dataclass
 class LLMObsSpan:
```
not sure if this would benefit from some typedocs here - i do feel it's pretty self-explanatory so might not be necessary, but up to you!
yeah i was midway through documenting and questioned what i could meaningfully write that isn't already described by the fields. Maybe we could include an example usage? Might be worth saying that the fields are mutable 🤷‍♂️
yeah i think an example usage would be good enough for this, yeah the fields themselves are self-explanatory 😂
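A docstring along these lines would cover both points raised (mutability plus an example usage). The field set below is a guess at the dataclass in this PR, not copied from it:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class LLMObsSpan:
    """Mutable view of an LLM Observability span passed to user processors.

    Fields may be modified in place to redact or rewrite span I/O::

        def my_processor(span):
            for message in span.output_messages or []:
                message["content"] = ""
            return span
    """

    # Field names here are assumed from the review context.
    input_messages: Optional[List[Dict[str, str]]] = None
    output_messages: Optional[List[Dict[str, str]]] = None
    tags: Dict[str, str] = field(default_factory=dict)
```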
Add capability to add a span processor. The processor can be used to mutate or redact sensitive data contained in inputs and outputs from LLM calls.
Public docs: DataDog/documentation#29365
Shared tests: TODO
Closes: #11179