feat(llmobs): add span processor #13426


Draft: wants to merge 1 commit into main from kylev/io-processor

Conversation

@Kyle-Verhoog Kyle-Verhoog (Member) commented May 15, 2025

Add the capability to register a span processor. The processor can be used to mutate or redact sensitive data contained in the inputs and outputs of LLM calls.

```python
from ddtrace.llmobs import LLMObs

def my_processor(span):
    # Redact the content of every output message before the span is submitted.
    for message in span.output_messages:
        message["content"] = ""

# Register the processor when enabling LLM Observability...
LLMObs.enable(span_processor=my_processor)

# ...or register it at any point after enabling.
LLMObs.register_processor(my_processor)
```

Public docs: DataDog/documentation#29365
Shared tests: TODO

Closes: #11179

Checklist

  • PR author has checked that all the criteria below are met
  • The PR description includes an overview of the change
  • The PR description articulates the motivation for the change
  • The change includes tests OR the PR description describes a testing strategy
  • The PR description notes risks associated with the change, if any
  • Newly-added code is easy to change
  • The change follows the library release note guidelines
  • The change includes or references documentation updates if necessary
  • Backport labels are set (if applicable)

Reviewer Checklist

  • Reviewer has checked that all the criteria below are met
  • Title is accurate
  • All changes are related to the pull request's stated goal
  • Avoids breaking API changes
  • Testing strategy adequately addresses listed risks
  • Newly-added code is easy to change
  • Release note makes sense to a user of the library
  • If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  • Backport labels are set in a manner that is consistent with the release branch maintenance policy


CODEOWNERS have been resolved as:

```
releasenotes/notes/llmobs-processor-d5cb47b12bc3bbd1.yaml               @DataDog/apm-python
ddtrace/llmobs/__init__.py                                              @DataDog/ml-observability
ddtrace/llmobs/_llmobs.py                                               @DataDog/ml-observability
ddtrace/llmobs/_telemetry.py                                            @DataDog/ml-observability
tests/llmobs/_utils.py                                                  @DataDog/ml-observability
tests/llmobs/conftest.py                                                @DataDog/ml-observability
tests/llmobs/test_llmobs.py                                             @DataDog/ml-observability
```

```python
traces = events[0]
assert len(traces) == 2
assert "scrub_values:1" in traces[0]["spans"][0]["tags"]
assert traces[0]["spans"][0]["meta"]["input"]["messages"][0]["content"] == "scrubbed"
```
Kyle-Verhoog (Member, Author):

gonna follow up to make fetching the spans / traces from events less implementation dependent
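
For context, a sketch of what such a helper might look like (the name and the payload shape here are assumptions based on the assertions above, not the library's API):

```python
# Hypothetical helper sketch: decouple test assertions from the exact shape
# of the captured payloads. Assumes each captured event decodes to a list of
# traces, each carrying a "spans" list, as in the assertions above.
def spans_from_events(events):
    spans = []
    for traces in events:
        for trace in traces:
            spans.extend(trace.get("spans", []))
    return spans
```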

env["DD_LLMOBS_AGENTLESS_ENABLED"] = "0"
env["DD_TRACE_ENABLED"] = "0"
env["DD_TRACE_AGENT_URL"] = llmobs_backend.url()
env["DD_TRACE_LOGGING_RATE"] = "0"
Kyle-Verhoog (Member, Author):

took me way too long to learn that "1 log record per name/level/pathname/lineno every 60 seconds" is the default logging configuration for the library 🙂
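
In other words, the env snippet above sets DD_TRACE_LOGGING_RATE=0 to disable that rate limiting so the test sees every log record. A minimal sketch of the idea, assuming the variable is read when the library is imported:

```python
import os

# ddtrace rate-limits its own log output: by default at most one record
# per (name, level, pathname, lineno) every DD_TRACE_LOGGING_RATE seconds.
# Setting the rate to 0 disables the limiting entirely.
os.environ["DD_TRACE_LOGGING_RATE"] = "0"

import ddtrace  # noqa: E402 - set the env var before the library reads its config
```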

github-actions bot (Contributor) commented May 15, 2025

Bootstrap import analysis

Comparison of import times between this PR and base.

Summary

The average import time from this PR is: 232 ± 3 ms.

The average import time from base is: 233 ± 2 ms.

The import time difference between this PR and base is: -0.9 ± 0.1 ms.

Import time breakdown

The following import paths have shrunk:

ddtrace.auto 1.855 ms (0.80%)
ddtrace.bootstrap.sitecustomize 1.186 ms (0.51%)
ddtrace.bootstrap.preload 1.186 ms (0.51%)
ddtrace.internal.remoteconfig.client 0.611 ms (0.26%)
ddtrace 0.669 ms (0.29%)

pr-commenter bot commented May 15, 2025

Benchmarks

Benchmark execution time: 2025-05-15 06:49:43

Comparing candidate commit 35b25f4 in PR branch kylev/io-processor with baseline commit 83dea4c in branch main.

Found 0 performance improvements and 2 performance regressions! Performance is the same for 525 metrics, 9 unstable metrics.

scenario:iast_aspects-ospathjoin_aspect

  • 🟥 execution_time [+871.486ns; +942.201ns] or [+14.380%; +15.546%]

scenario:iast_aspects-ospathnormcase_aspect

  • 🟥 execution_time [+431.795ns; +503.884ns] or [+12.722%; +14.847%]

@sabrenner sabrenner (Contributor) left a comment

the api, logic, and cases tested LGTM! nice job on telemetry as well 😎 just a couple questions, will approve after resolving them 😄

also - i think it should be register_processor instead of add_processor in the PR description code block, for clarity for folks who come to the PR looking at the changes

Comment on lines +244 to +247:

```python
if llmobs_span.input_messages is not None:
    meta["input"]["messages"] = llmobs_span.input_messages
if llmobs_span.output_messages is not None:
    meta["output"]["messages"] = llmobs_span.output_messages
```
Contributor:

also just want to double check that we don't need to apply the processor functions to the value/document fields (or whether we will, just not in scope for now).

Kyle-Verhoog (Member, Author):

it seems like the value field is implicitly deprecated (from what i can tell) - @Yun-Kim can you confirm? for documents i think we can follow up and add them; i omitted them to keep the PR size and scope small.

Contributor:

yeah will let Yun chime in - we still set them here and here for non-llm spans (similarly for Node.js), although it could be the case that they are deprecated/not used in our pipelines outside of the SDKs (i would need to verify tho)

will let Yun weigh in, but will otherwise treat this as resolved regardless bc i think the main use case is LLM spans 👍

@Yun-Kim Yun-Kim (Contributor) commented May 15, 2025:

Yeah, the input.value/output.value fields aren't deprecated, but they're only used on non-llm-kind spans, although our backend/UI depends on the value fields for llm spans if messages aren't provided. Since some of our integrations generate I/O info for non-LLM span kinds (crewai, langchain, langgraph, openai agents), it might be worth adding the span processing for the value fields as well (none of our integrations generate documents fields, so no need to worry about it here for now)

Contributor:

> none of our integrations generate documents fields so no need to worry here about it for now

i think langchain retrieval operations are the exception for generating output documents, and we also capture input documents for embedding spans as the input to whatever operation we trace (eg openai.embeddings.create("some text that we'll turn into a document")) - but agree that if we do need/want to add it, it can be done in a follow-up.
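
To make that follow-up concrete, here is a sketch of what processing the value fields might look like (the output_value attribute is hypothetical; this PR only exposes the message fields):

```python
def scrub(span):
    # Message fields: covered by this PR.
    for message in span.output_messages or []:
        message["content"] = "<redacted>"
    # Value fields: a possible follow-up for the non-LLM span kinds
    # (crewai, langchain, langgraph, openai agents) that set
    # input.value/output.value instead of messages.
    # NOTE: output_value is a hypothetical attribute, not part of this PR.
    if getattr(span, "output_value", None) is not None:
        span.output_value = "<redacted>"
```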

```
@@ -97,14 +102,32 @@
}


@dataclass
class LLMObsSpan:
```
Contributor:

not sure if this would benefit from some typedocs here - i do feel it's pretty self-explanatory so might not be necessary, but up to you!

Kyle-Verhoog (Member, Author):

yeah i was midway through documenting and questioned what i would meaningfully write that isn't described by the fields. Maybe we could include an example usage? Might be worth saying that the fields are mutable 🤷‍♂️

Contributor:

yeah i think an example usage would be good enough for this, yeah the fields themselves are self-explanatory 😂
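
For illustration, one way that example-usage doc could read (a sketch only: the Optional defaults are assumed from the meta-mapping excerpt above, not confirmed against the final code):

```python
from dataclasses import dataclass
from typing import Any, Dict, List, Optional

@dataclass
class LLMObsSpan:
    """I/O view of an LLMObs span passed to registered processors.

    The fields are mutable; changes made by a processor are reflected
    in the span that gets submitted. Example::

        def redact(span):
            for message in span.output_messages or []:
                message["content"] = ""
    """

    input_messages: Optional[List[Dict[str, Any]]] = None
    output_messages: Optional[List[Dict[str, Any]]] = None
```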

Development

Successfully merging this pull request may close these issues.

Need the option to mask the input and output of the LLM API in Datadog LLM observability
3 participants