This is an example Phoenix application that uses OpenTelemetry tracing and sends traces to Datadog via the OpenTelemetry Collector.
You'll need Elixir, Docker and PostgreSQL to run the example app.
First, set up the app:

```shell
mix setup
```
Then export your Datadog configuration:

```shell
export DD_ENV=`whoami`-local
export DD_SERVICE=test-service
export DD_VERSION=1.0
export DD_API_KEY=<your Datadog API key>
```
In the same shell, start the collector using:

```shell
docker-compose up -d
```
Finally, run the application:

```shell
mix phx.server
```
Visit http://localhost:4000/posts a few times to generate traces, then observe them in Datadog's APM page.
Here's an outline of the important parts of this example.
Most of the source code is that of a Phoenix application with OpenTelemetry configured. To do that, we need the following dependencies:

- `opentelemetry_api` - instruments our application,
- `opentelemetry` - collects traces within our runtime and passes them to the exporter,
- `opentelemetry_exporter` - exports traces using the OpenTelemetry Protocol (OTLP) to the OpenTelemetry Collector.
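These dependencies might be declared in `mix.exs` roughly as follows; the version requirements shown here are illustrative, not taken from this repository:

```elixir
# In mix.exs — the three OpenTelemetry packages (versions illustrative):
defp deps do
  [
    {:opentelemetry_api, "~> 1.0"},
    {:opentelemetry, "~> 1.0"},
    {:opentelemetry_exporter, "~> 1.0"}
  ]
end
```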
Instrumented code can be seen in `PostController`'s `index` function.
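For a sense of what such instrumentation looks like, here is a hypothetical sketch using the OpenTelemetry API's `Tracer` macros; the controller and `Blog.list_posts/0` names are placeholders, and the actual implementation is in this repository's `PostController`:

```elixir
# Hypothetical sketch of manual instrumentation; see PostController for
# the real code. Blog.list_posts/0 is a placeholder context function.
require OpenTelemetry.Tracer, as: Tracer

def index(conn, _params) do
  # Wrap the database work in a custom span; it nests under the
  # span created for the Phoenix request.
  posts =
    Tracer.with_span "fetch_posts" do
      Tracer.set_attributes(%{"posts.source" => "database"})
      Blog.list_posts()
    end

  render(conn, "index.html", posts: posts)
end
```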
The exporter is configured in `config/config.exs` and by default reports traces to `localhost:9090`.
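A minimal sketch of that configuration, assuming the standard `:opentelemetry` and `:opentelemetry_exporter` config keys and the `localhost:9090` endpoint mentioned above (check `config/config.exs` for the authoritative version):

```elixir
# config/config.exs — sketch of the exporter setup (assumed keys):
config :opentelemetry,
  span_processor: :batch,
  traces_exporter: :otlp

config :opentelemetry_exporter,
  otlp_protocol: :http_protobuf,
  otlp_endpoint: "http://localhost:9090"
```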
The `docker-compose.yaml` file specifies an `otel-collector` service based on the `opentelemetry-collector-contrib` image. The `-contrib` version of the collector is used because it includes the Datadog exporter.
We're passing our environment variables to the service and mounting the collector config file in a volume accessible to it.
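An illustrative sketch of what that service definition might look like; the image tag, mount path, and port mapping are assumptions, and the repository's `docker-compose.yaml` is the authoritative version:

```yaml
# Illustrative sketch of the otel-collector service (paths and
# port mapping are assumptions — see docker-compose.yaml).
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    environment:
      - DD_ENV
      - DD_SERVICE
      - DD_VERSION
      - DD_API_KEY
    ports:
      - "9090:9090"
```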
The OpenTelemetry Collector is configured via `otel-collector-config.yaml` according to the Configuration guide.
In this simple setup, we're using:
- the OTLP receiver using an HTTP interface to receive traces from our application,
- the batch processor with a timeout of 10s as suggested by the Datadog exporter,
- and the Datadog exporter that actually sends traces to Datadog.
All of the above are used in a single `traces` pipeline.
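Put together, a collector configuration matching the pipeline described above might look roughly like this; the receiver endpoint is an assumption based on the `localhost:9090` address mentioned earlier, and the repository's `otel-collector-config.yaml` is the authoritative version:

```yaml
# Sketch of otel-collector-config.yaml (endpoint assumed from the README).
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:9090

processors:
  batch:
    timeout: 10s

exporters:
  datadog:
    api:
      key: ${DD_API_KEY}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog]
```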