
Commit

README nit change
jjti committed Dec 27, 2023
1 parent 211f8b0 commit 04f61f0
Showing 2 changed files with 8 additions and 9 deletions.
9 changes: 4 additions & 5 deletions README.md
@@ -9,7 +9,7 @@ Checks usage of [OpenTelemetry spans](https://pkg.go.dev/go.opentelemetry.io/ote

## Problem Statement

-Tracing is an -- often celebrated [[1](https://andydote.co.uk/2023/09/19/tracing-is-better/), [2](https://charity.wtf/2022/08/15/live-your-best-life-with-structured-events/)] -- pillar of observability. But it's easy to shoot yourself in the foot when creating and managing OTEL spans. For two reasons:
+Tracing is an often celebrated [[1](https://andydote.co.uk/2023/09/19/tracing-is-better/), [2](https://charity.wtf/2022/08/15/live-your-best-life-with-structured-events/)] pillar of observability. But it's easy to shoot yourself in the foot when creating and managing OTEL spans. For two reasons:

### Forgetting to call `span.End()`
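This section's own example is collapsed in the diff view below. Purely as a hedged illustration of the pattern being linted, assuming the standard go.opentelemetry.io/otel API rather than the repository's exact snippet:

```go
package example

import (
	"context"

	"go.opentelemetry.io/otel"
)

func task(ctx context.Context) error {
	ctx, span := otel.Tracer("example").Start(ctx, "task")
	// Forgetting this deferred call means the span is never ended, so it is
	// never exported and its recorded duration is meaningless.
	defer span.End()

	return doWork(ctx)
}

func doWork(ctx context.Context) error { return nil }
```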

@@ -29,11 +29,10 @@ func task(ctx context.Context) error {

### Forgetting to call `span.SetStatus(codes.Error, "msg")`

-Setting spans' status to `codes.Error` matters for a couple reasons.
+Setting spans' status to `codes.Error` matters for a couple reasons:

-First, observability platforms and APMs differentiate "success" vs "failure" using [span's status codes](https://docs.datadoghq.com/tracing/metrics/).
-
-Second, telemetry collector agents, like the [Open Telemetry Collector](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README.md#:~:text=Sampling%20Processor.-,status_code,-%3A%20Sample%20based%20upon), are configurable to sample `Error` spans at a higher rate than `OK` spans. Similarly, observability platforms like DataDog support trace retention filters based on spans' status. In other words, `Error` spans often receive special treatment with the assumption they are more useful for debugging.
+1. observability platforms and APMs differentiate "success" vs "failure" using [span's status codes](https://docs.datadoghq.com/tracing/metrics/).
+1. telemetry collector agents, like the [Open Telemetry Collector](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README.md#:~:text=Sampling%20Processor.-,status_code,-%3A%20Sample%20based%20upon), are configurable to sample `Error` spans at a higher rate than `OK` spans. Similarly, observability platforms like DataDog support trace retention filters based on spans' status. In other words, `Error` spans often receive special treatment with the assumption they are more useful for debugging.

```go
func _() error {
	// … (rest of the README's example is collapsed in this diff view)
```
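As a hedged sketch only of the error-path pattern the two points above describe (assuming the standard otel API rather than the repository's exact, collapsed example):

```go
package example

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/codes"
)

func fetch(ctx context.Context) error {
	ctx, span := otel.Tracer("example").Start(ctx, "fetch")
	defer span.End()

	if err := query(ctx); err != nil {
		// Mark the span as failed so APMs count it as an error and tail-based
		// samplers and retention filters can treat it preferentially.
		span.SetStatus(codes.Error, err.Error())
		span.RecordError(err)
		return err
	}

	return nil
}

func query(ctx context.Context) error { return nil }
```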
8 changes: 4 additions & 4 deletions testdata/test.go
@@ -9,9 +9,9 @@ import (
"go.opentelemetry.io/otel/codes"
)

-type testErr struct{}
+type testError struct{}

-func (e *testErr) Error() string {
+func (e *testError) Error() string {
return "foo"
}

@@ -68,7 +68,7 @@ func _() error {
defer span.End()

if true {
-return &testErr{} // want "this return statement may be reached without calling span.SetStatus"
+return &testError{} // want "this return statement may be reached without calling span.SetStatus"
}

return nil
@@ -79,7 +79,7 @@ func _() (string, error) {
defer span.End()

if true {
return "", &testErr{} // want "this return statement may be reached without calling span.SetStatus"
return "", &testError{} // want "this return statement may be reached without calling span.SetStatus"
}

return "", nil
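For context on the `// want "..."` comments in this fixture: they follow the convention used by golang.org/x/tools/go/analysis/analysistest, which loads packages under a testdata directory and checks the analyzer's reported diagnostics against those comments. A generic, hedged sketch of such a test driver follows; the import path, the `NewAnalyzer` constructor, and the testdata layout are assumptions for illustration, not taken from this commit:

```go
package spancheck_test

import (
	"testing"

	"golang.org/x/tools/go/analysis/analysistest"

	spancheck "github.com/jjti/go-spancheck" // assumed import path and API
)

func TestAnalyzer(t *testing.T) {
	// analysistest.TestData() resolves ./testdata; the last argument names the
	// package under testdata/src to analyze. Each diagnostic the analyzer emits
	// must match a `// want "regexp"` comment on the offending line, and vice versa.
	analysistest.Run(t, analysistest.TestData(), spancheck.NewAnalyzer(), "a")
}
```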
