feat(node): Instrument stream responses for openai #17110
base: develop
Conversation
This reverts commit 1607304.
```diff
@@ -195,6 +143,9 @@ function addRequestAttributes(span: Span, params: Record<string, unknown>): void
   if ('input' in params) {
     span.setAttributes({ [GEN_AI_REQUEST_MESSAGES_ATTRIBUTE]: JSON.stringify(params.input) });
   }
+  if ('stream' in params) {
+    span.setAttributes({ [OPENAI_RESPONSE_STREAM_ATTRIBUTE]: Boolean(params.stream) });
+  }
```
Bug: Incorrect Attribute Assignment in Function

The `addRequestAttributes` function incorrectly sets `OPENAI_RESPONSE_STREAM_ATTRIBUTE` (a response attribute) based on a request parameter. This function is intended for request attributes, and the `stream` request parameter is already correctly captured as `GEN_AI_REQUEST_STREAM_ATTRIBUTE` by `extractRequestAttributes`.
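A minimal sketch of the flagged behavior and its fix: record the `stream` flag under a request-side attribute key rather than the response-side one. The attribute string and the `SpanLike` type below are simplified stand-ins for the SDK's real constants and `Span` type.

```typescript
// Hedged sketch: the attribute key is assumed to mirror the SDK's
// request-side constant; SpanLike is a stand-in for the real Span type.
const GEN_AI_REQUEST_STREAM_ATTRIBUTE = 'gen_ai.request.stream';

interface SpanLike {
  setAttributes(attrs: Record<string, unknown>): void;
}

function addRequestAttributes(span: SpanLike, params: Record<string, unknown>): void {
  if ('stream' in params) {
    // Record the request-side flag only; response attributes should be
    // set later, once the stream has actually been consumed.
    span.setAttributes({ [GEN_AI_REQUEST_STREAM_ATTRIBUTE]: Boolean(params.stream) });
  }
}

// Tiny in-memory span to demonstrate the behavior:
const recorded: Record<string, unknown> = {};
const span: SpanLike = { setAttributes: attrs => Object.assign(recorded, attrs) };
addRequestAttributes(span, { model: 'gpt-4', stream: true });
```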
```js
'openai.response.timestamp': '2023-03-01T06:31:50.000Z',
'openai.usage.completion_tokens': 0,
'openai.usage.prompt_tokens': 0,
},
```
Bug: Inconsistent Metadata Across PII Configurations

The test expectations for OpenAI streaming responses are inconsistent between the `sendDefaultPii: false` and `sendDefaultPii: true` configurations. When `sendDefaultPii: false`, `gen_ai.response.finish_reasons` is expected as `["in_progress"]` and the token usage fields (`gen_ai.usage.*_tokens`) are expected to be `0`. However, when `sendDefaultPii: true`, `finish_reasons` is `["in_progress","completed"]` and token usage is correctly captured. These metadata fields should be collected consistently regardless of PII settings, as the mock streaming implementation emits all relevant data in both scenarios.
```ts
    return;
  }

  const { response } = event as { response: OpenAIResponseObject };
```
Bug: Event Handling Error: Missing Response Property

The `processResponsesApiEvent` function unsafely attempts to destructure a `response` property from the `event` object. Events of type `response.output_text.delta` (e.g. `ResponseOutputTextDeltaEvent`) do not contain a `response` property. While these events are typically handled by an early return, if `recordOutputs` is false or the `delta` property is missing, they will proceed to the destructuring, causing a runtime error. This also applies to other unknown event types that lack a `response` property. A check for the existence of the `response` property is required before destructuring.
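The proposed guard can be sketched as follows; the event and response types below are simplified stand-ins for the SDK's real types, and the function body is illustrative rather than the PR's actual code.

```typescript
// Simplified stand-ins for the SDK's event/response types.
interface OpenAIResponseObject {
  id: string;
  status: string;
}

type ResponsesApiEvent = {
  type: string;
  delta?: string;
  response?: OpenAIResponseObject;
};

function processResponsesApiEvent(event: ResponsesApiEvent): OpenAIResponseObject | undefined {
  // Delta events carry no `response`; they are handled by the text-output path.
  if (event.type === 'response.output_text.delta') {
    return undefined;
  }
  // Guard: unknown event types may also lack a `response` property, so
  // check for it before destructuring instead of casting blindly.
  if (!('response' in event) || event.response === undefined) {
    return undefined;
  }
  const { response } = event as { response: OpenAIResponseObject };
  return response;
}
```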
```ts
} catch (error) {
  captureException(error);
  span.end();
  throw error;
```
Bug: Streaming Error Reporting Inconsistency

OpenAI streaming calls that error before the stream begins incorrectly report `status: 'ok'`. This occurs because `startSpanManual`, used for streaming, does not automatically set the span status to an error on exceptions, unlike `startSpan` for non-streaming calls. This results in inconsistent error reporting and an incorrect test expectation.
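One way to make the streaming path match the non-streaming behavior is to set the span status explicitly in the catch block before ending the span. This is a hedged sketch with a simplified span type; the real instrumentation uses Sentry's `startSpanManual` and its `Span` API.

```typescript
// SpanLike is a simplified stand-in; the real code works with spans
// obtained from startSpanManual, which does not set an error status
// automatically when the wrapped call throws.
interface SpanLike {
  setStatus(status: { code: 'ok' | 'error'; message?: string }): void;
  end(): void;
}

async function instrumentedStreamCall<T>(span: SpanLike, call: () => Promise<T>): Promise<T> {
  try {
    return await call();
  } catch (error) {
    // Mark the span as errored before ending it and rethrowing, so a
    // failure before the stream starts is not reported as 'ok'.
    span.setStatus({ code: 'error', message: 'internal_error' });
    span.end();
    throw error;
  }
}

// Tiny in-memory span to demonstrate the behavior:
let lastStatus: { code: 'ok' | 'error'; message?: string } | undefined = undefined;
let ended = false;
const span: SpanLike = {
  setStatus: status => {
    lastStatus = status;
  },
  end: () => {
    ended = true;
  },
};
```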
I'd like us to add some more JSDoc explanations to `packages/core/src/utils/openai/streaming.ts`.
```ts
  return;
}
if (streamEvent instanceof Error) {
  captureException(streamEvent);
```
m: when we capture an exception here, should we also set span status to errored?
also, I think we need to mark this as unhandled.
Potentially, yeah. It's a series of events we're capturing in a single span, which makes things a bit vague. I guess if one fails it's fair to assume the span is in an error state; will update.
```ts
  setTokenUsageAttributes,
} from './utils';

interface StreamingState {
```
m: It would be nice to have some doc strings about this interface and its fields.

l: I'm not the biggest fan of having a big interface with a bunch of nullable fields. Can we type this a bit stronger? For example, if `chunk.usage` is present, we should always have `promptTokens`, `completionTokens`, and `totalTokens` defined. Having some helpers that construct state from input (builder pattern?) might also be a useful abstraction.
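One way to realize this suggestion is to group the usage counters into a single optional object so they are always defined together. The field and helper names below are hypothetical, not the PR's actual `StreamingState`:

```typescript
// Hypothetical sketch: names do not match the PR's actual code. Grouping
// the counters means `usage` is either fully present or entirely absent,
// instead of three independently nullable fields.
interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
}

interface StreamingState {
  responseId?: string;
  finishReasons: string[];
  usage?: TokenUsage;
}

// Helper in the spirit of the suggested builder pattern: derive the next
// state from a chunk, setting all three counters together when usage arrives.
function withChunkUsage(
  state: StreamingState,
  usage?: { prompt_tokens: number; completion_tokens: number; total_tokens: number },
): StreamingState {
  if (!usage) {
    return state;
  }
  return {
    ...state,
    usage: {
      promptTokens: usage.prompt_tokens,
      completionTokens: usage.completion_tokens,
      totalTokens: usage.total_tokens,
    },
  };
}
```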
This adds support for OpenAI streaming responses in the Node.js SDK.
What's new