Yamlify GenAI events

lmolkova committed Oct 11, 2024
1 parent d3a09c3 commit e225e77
Showing 2 changed files with 247 additions and 5 deletions.
10 changes: 5 additions & 5 deletions docs/gen-ai/gen-ai-events.md
@@ -114,7 +114,7 @@ The event name MUST be `gen_ai.user.message`.
| Body Field | Type | Description | Examples | Requirement Level |
|---|---|---|---|---|
| `role` | string | The actual role of the message author as passed in the message. | `"user"`, `"customer"` | `Conditionally Required`: if available and if not equal to `user` |
-| `content` | `AnyValue` | The contents of the user message. | `What's the weather in Paris` | `Opt-In` |
+| `content` | `AnyValue` | The contents of the user message. | `What's the weather in Paris?` | `Opt-In` |

## Assistant event

@@ -277,7 +277,7 @@ sequenceDiagram
A->>+I: #U+200D
I->>M: gen_ai.user.message: What's the weather in Paris?<br/>gen_ai.assistant.message: get_weather tool call<br/>gen_ai.tool.message: rainy, 57°F
Note left of I: GenAI Client span 2
-I-->M: gen_ai.choice: The weather in Paris is rainy and overcast, with temperatures around 57°F.
+I-->M: gen_ai.choice: The weather in Paris is rainy and overcast, with temperatures around 57°F
I-->>-A: #U+200D
```
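To make the two spans in the diagram concrete before the per-step tables, here is a minimal sketch of the same flow in application code. It is plain Python with stub functions: `call_model`, `get_weather`, and the `emit_event` helper are hypothetical stand-ins (not part of any SDK), and the bodies are simplified; only the event names and their ordering are taken from the diagram above.

```python
def emit_event(name, body):
    """Illustrative stand-in for reporting a GenAI event on the current span."""
    print(f"{name}: {body}")


def get_weather(location):
    """Stub tool; a real application would call a weather service here."""
    return "rainy, 57°F"


def call_model(messages):
    """Stub model call: returns a tool call first, then a final answer."""
    if not any(m.get("role") == "tool" for m in messages):
        return {"finish_reason": "tool_calls",
                "tool_calls": [{"id": "call_1", "name": "get_weather",
                                "arguments": {"location": "Paris"}}]}
    return {"finish_reason": "stop",
            "content": "The weather in Paris is rainy and overcast, "
                       "with temperatures around 57°F"}


# GenAI Client span 1: the user prompt goes in, a tool call comes back.
messages = [{"role": "user", "content": "What's the weather in Paris?"}]
emit_event("gen_ai.user.message", messages[0])
reply = call_model(messages)
emit_event("gen_ai.choice", reply)

# The application runs the tool, then GenAI Client span 2 sends the whole
# history: user message, assistant tool call, and tool result.
call = reply["tool_calls"][0]
messages += [{"role": "assistant", "tool_calls": reply["tool_calls"]},
             {"role": "tool", "id": call["id"],
              "content": get_weather(**call["arguments"])}]
for message in messages:
    emit_event(f"gen_ai.{message['role']}.message", message)
final = call_model(messages)
emit_event("gen_ai.choice", final)
```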
Here's the telemetry generated for each step in this scenario:
@@ -313,7 +313,7 @@ Here's the telemetry generated for each step in this scenario:
| Property | Value |
|---------------------|-------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
-| Event body (with content) | `{"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather","arguments":"{\"location\":\"Paris\"}"},"type":"function"}]}` |
+| Event body (with content) | `{"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather","arguments":{"location":"Paris"}},"type":"function"}]}}` |
| Event body (without content) | `{"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather"},"type":"function"}]}}` |

**GenAI Client span 2:**
@@ -348,7 +348,7 @@ Here's the telemetry generated for each step in this scenario:
| Property | Value |
|----------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
-| Event body (content enabled) | `{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather","arguments":"{\"location\":\"Paris\"}"},"type":"function"}]}` |
+| Event body (content enabled) | `{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather","arguments":{"location":"Paris"}},"type":"function"}]}` |
| Event body (content not enabled) | `{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather"},"type":"function"}]}` |

3. `gen_ai.tool.message`
@@ -364,7 +364,7 @@ Here's the telemetry generated for each step in this scenario:
| Property | Value |
|----------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
-| Event body (content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"The weather in Paris is rainy and overcast, with temperatures around 57°F."}}` |
+| Event body (content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"The weather in Paris is rainy and overcast, with temperatures around 57°F"}}` |
| Event body (content not enabled) | `{"index":0,"finish_reason":"stop","message":{}}` |
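
As an informal cross-check of the schema being yamlified below, here is a minimal sketch (plain Python standard library only; the helper names are illustrative) that assembles the `gen_ai.assistant.message` and `gen_ai.choice` bodies shown in the tables above, with and without content capture.

```python
import json


def assistant_message_body(tool_calls, capture_content=True):
    """Build a gen_ai.assistant.message body as shown in the tables above.

    When content capture is disabled, function arguments are dropped but
    call ids, function names, and types are kept.
    """
    calls = []
    for call in tool_calls:
        function = {"name": call["name"]}
        if capture_content:
            function["arguments"] = call["arguments"]
        calls.append({"id": call["id"], "function": function, "type": "function"})
    return {"tool_calls": calls}


def choice_body(index, finish_reason, content=None, capture_content=True):
    """Build a gen_ai.choice body; message content is included only when enabled."""
    message = {}
    if capture_content and content is not None:
        message["content"] = content
    return {"index": index, "finish_reason": finish_reason, "message": message}


# Reproduces the "GenAI Client span 2" bodies from the scenario above.
calls = [{"id": "call_VSPygqKTWdrhaFErNvMV18Yl",
          "name": "get_weather",
          "arguments": {"location": "Paris"}}]
print(json.dumps(assistant_message_body(calls, capture_content=False)))
print(json.dumps(choice_body(0, "stop",
                             "The weather in Paris is rainy and overcast, "
                             "with temperatures around 57°F")))
```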

### Chat completion with multiple choices
242 changes: 242 additions & 0 deletions model/gen-ai/events.yaml
@@ -14,6 +14,26 @@ groups:
brief: >
This event describes the instructions passed to the GenAI system inside the prompt.
extends: gen_ai.common.event.attributes
body:
id: gen_ai.system.message
requirement_level: opt_in
type: map
fields:
- id: content
type: undefined
stability: experimental
brief: >
The contents of the system message.
examples: ["You're a helpful bot"]
requirement_level: opt_in
- id: role
type: string
stability: experimental
brief: >
The actual role of the message author as passed in the message.
examples: ["system", "instruction"]
requirement_level:
        conditionally_required: if available and not equal to `system`.

- id: gen_ai.user.message
name: gen_ai.user.message
@@ -22,6 +42,26 @@ groups:
brief: >
This event describes the prompt message specified by the user.
extends: gen_ai.common.event.attributes
body:
id: gen_ai.user.message
requirement_level: opt_in
type: map
fields:
- id: content
type: undefined
stability: experimental
brief: >
The contents of the user message.
examples: ["What's the weather in Paris?"]
requirement_level: opt_in
- id: role
type: string
stability: experimental
brief: >
The actual role of the message author as passed in the message.
examples: ["user", "customer"]
requirement_level:
conditionally_required: if available and not equal to `user`.

- id: gen_ai.assistant.message
name: gen_ai.assistant.message
@@ -30,6 +70,73 @@ groups:
brief: >
This event describes the assistant message passed to GenAI system or received from it.
extends: gen_ai.common.event.attributes
body:
id: gen_ai.assistant.message
requirement_level: opt_in
type: map
fields:
- id: content
type: undefined
stability: experimental
brief: >
        The contents of the assistant message.
examples: ["The weather in Paris is rainy and overcast, with temperatures around 57°F"]
requirement_level: opt_in
- id: role
type: string
stability: experimental
brief: >
The actual role of the message author as passed in the message.
examples: ["assistant", "bot"]
requirement_level:
conditionally_required: if available and not equal to `assistant`.
- id: tool_calls
type: map # TODO: it's an array
stability: experimental
brief: >
The tool calls generated by the model, such as function calls.
requirement_level:
conditionally_required: if available
fields:
- id: id
type: string
stability: experimental
brief: >
The id of the tool call.
examples: ["call_mszuSIzqtI65i1wAUOE8w5H4"]
requirement_level: required
- id: type
type: enum
members:
- id: function
value: 'function'
brief: Function
stability: experimental
brief: >
The type of the tool.
examples: ["function"]
requirement_level: required
- id: function
type: map
stability: experimental
brief: >
The function call.
requirement_level: required
fields:
- id: name
type: string
stability: experimental
brief: >
The name of the function.
examples: ["get_weather"]
requirement_level: required
- id: arguments
type: undefined
stability: experimental
brief: >
The arguments of the function.
examples: ['{"location": "Paris"}']
requirement_level: opt_in

- id: gen_ai.tool.message
name: gen_ai.tool.message
@@ -38,6 +145,33 @@ groups:
brief: >
This event describes the tool or function response message.
extends: gen_ai.common.event.attributes
body:
id: gen_ai.tool.message
requirement_level: opt_in
type: map
fields:
- id: content
type: undefined
stability: experimental
brief: >
The contents of the tool message.
examples: ["rainy, 57°F"]
requirement_level: opt_in
- id: role
type: string
stability: experimental
brief: >
The actual role of the message author as passed in the message.
examples: ["tool", "function"]
requirement_level:
conditionally_required: if available and not equal to `tool`.
- id: id
type: string
stability: experimental
brief: >
Tool call id that this message is responding to.
examples: ["call_mszuSIzqtI65i1wAUOE8w5H4"]
requirement_level: required

- id: gen_ai.choice
name: gen_ai.choice
@@ -46,3 +180,111 @@ groups:
brief: >
This event describes the Gen AI response message.
extends: gen_ai.common.event.attributes
body:
id: gen_ai.choice
requirement_level: opt_in
type: map
    note: >
      If the GenAI model returns multiple choices, each choice SHOULD be recorded as an individual event.
      When the response is streamed, instrumentations that report response events MUST reconstruct and report
      the full message and MUST NOT report individual chunks as events.
      If the request to the GenAI model fails with an error before content is received,
      instrumentation SHOULD report an event with truncated content (if enabled).
      If `finish_reason` was not received, it MUST be set to `error`.
fields:
- id: index
type: int
stability: experimental
brief: >
The index of the choice in the list of choices.
examples: [0, 1]
requirement_level: required
- id: finish_reason
type: enum
members:
- id: stop
value: 'stop'
brief: Stop
- id: tool_calls
value: 'tool_calls'
brief: Tool Calls
- id: content_filter
value: 'content_filter'
brief: Content Filter
- id: length
value: 'length'
brief: Length
- id: error
value: 'error'
brief: Error
stability: experimental
brief: >
The reason the model stopped generating tokens.
requirement_level: required
- id: message
type: map
stability: experimental
brief: >
GenAI response message.
requirement_level: recommended
fields:
- id: content
type: undefined
stability: experimental
brief: >
The contents of the assistant message.
examples: ["The weather in Paris is rainy and overcast, with temperatures around 57°F"]
requirement_level: opt_in
- id: role
type: string
stability: experimental
brief: >
The actual role of the message author as passed in the message.
examples: ["assistant", "bot"]
requirement_level:
conditionally_required: if available and not equal to `assistant`.
- id: tool_calls
type: map # TODO: it's an array
stability: experimental
brief: >
The tool calls generated by the model, such as function calls.
requirement_level:
conditionally_required: if available
fields:
- id: id
type: string
stability: experimental
brief: >
The id of the tool call.
examples: ["call_mszuSIzqtI65i1wAUOE8w5H4"]
requirement_level: required
- id: type
type: enum
members:
- id: function
value: 'function'
brief: Function
stability: experimental
brief: >
The type of the tool.
requirement_level: required
- id: function
type: map
stability: experimental
brief: >
The function that the model called.
requirement_level: required
fields:
- id: name
type: string
stability: experimental
brief: >
The name of the function.
examples: ["get_weather"]
requirement_level: required
- id: arguments
type: undefined
stability: experimental
brief: >
The arguments of the function.
requirement_level: opt_in
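
The note on `gen_ai.choice` above requires instrumentations that report response events for streamed responses to reconstruct the full message rather than report individual chunks, and to fall back to `finish_reason: error` when the stream fails before a finish reason arrives. A minimal sketch of that accumulation logic follows (plain Python; the chunk shape with `content` and `finish_reason` keys is an assumption for illustration, not part of the convention):

```python
def reconstruct_choice(chunks, capture_content=True):
    """Accumulate streamed chunks into a single gen_ai.choice event body.

    Per the note on gen_ai.choice: individual chunks MUST NOT be reported as
    events; if the stream fails before a finish_reason arrives, the event is
    still reported (with whatever content was received, if content capture is
    enabled) and finish_reason falls back to "error".
    """
    parts = []
    finish_reason = None
    try:
        for chunk in chunks:  # chunk shape ({"content", "finish_reason"}) is illustrative
            parts.append(chunk.get("content", ""))
            finish_reason = chunk.get("finish_reason") or finish_reason
    except Exception:
        pass  # keep whatever content arrived before the failure
    message = {}
    if capture_content and parts:
        message["content"] = "".join(parts)
    return {"index": 0,
            "finish_reason": finish_reason or "error",
            "message": message}


# Example: a stream that stops before delivering a finish_reason.
chunks = [{"content": "The weather in Paris is rainy"}, {"content": " and overcast"}]
print(reconstruct_choice(chunks))  # finish_reason falls back to "error"
```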
