This is an external scaler for KEDA that integrates with the OpenTelemetry (OTEL) Collector. The Helm chart also deploys the OTEL Collector (using the upstream Helm chart), where one can set up filtering so that the scaler receives only the metrics needed for scaling decisions (example).
The application consists of three parts:
- receiver
- simple metric storage
- scaler
This component is an implementation of the OTLP Receiver spec: it spawns a gRPC server (by default on port 4317) and stores all incoming metrics in short-term storage, the simple metric storage.
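On the collector side, metrics are typically forwarded to this receiver with an OTLP exporter. A minimal sketch of such a pipeline follows; the `keda-otel-scaler` Service name is an assumption about how the chart exposes the scaler, so adjust it to your deployment:

```yaml
exporters:
  otlp/scaler:
    endpoint: keda-otel-scaler:4317   # hypothetical Service name of the add-on's receiver
    tls:
      insecure: true                  # plain gRPC inside the cluster
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlp/scaler]
```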
A very simple metric storage designed to remember the last couple of measurements (~10-100) for each metric vector. It can be configured with the number of seconds to remember; during each write operation it removes stale measurements, so it effectively works as a cyclic buffer. Metrics are stored together with their labels (key-value pairs) for later querying.
This component also spawns a gRPC server (by default on port 4318) and talks to the KEDA operator by implementing the External Scaler contract.
It queries the internal in-memory metric storage for a metric value and sends it to the KEDA operator. The metric query is specified as metadata on KEDA's ScaledObject CR, and it provides a limited subset of PromQL's features.
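A ScaledObject wired to this scaler might look like the sketch below. The trigger metadata keys (`scalerAddress`, `metricQuery`, `targetValue`), the query syntax, and the Service address are assumptions based on KEDA's external scaler convention; consult `examples/so.yaml` for the authoritative fields:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: app-scaler
spec:
  scaleTargetRef:
    name: my-app                                      # hypothetical Deployment name
  triggers:
    - type: external
      metadata:
        scalerAddress: keda-otel-scaler.keda.svc:4318 # assumed Service name and port
        metricQuery: "avg(http_requests_total)"       # illustrative PromQL-like query
        targetValue: "100"
```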
- (diagram link)
- [1] OTLP format
- [2] OTLP metric receiver
- [3] processors
- [4] https://opencensus.io - obsolete, will be replaced by OTEL
- [5] OpenCensus receiver
- [6] Prometheus receiver
- [7] OTLP exporter
By specifying an OpenCensus receiver in the Helm chart values for the OTEL Collector, we gain the ability to get those metrics into our scaler.
The OTEL Collector contains numerous integrations on the receiver side. Each of these receivers opens a new way to turn a metric from an OTEL receiver into a KEDA scaler. For instance, by using sqlqueryreceiver, one can achieve goals similar to the MySQL or PostgreSQL scalers; by using githubreceiver, one can hook into metrics from GitHub, etc.
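As an illustration, a sqlqueryreceiver configuration could periodically turn a SQL query into a metric that the scaler can then act on. The datasource, query, and metric names below are hypothetical:

```yaml
receivers:
  sqlquery:
    driver: postgres
    datasource: "host=db port=5432 user=metrics password=secret sslmode=disable"
    queries:
      - sql: "SELECT COUNT(*) AS queue_depth FROM jobs WHERE state = 'pending'"
        metrics:
          - metric_name: jobs.queue_depth   # exported metric name (illustrative)
            value_column: queue_depth
```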
The OTEL Collector provides various processors that are applied to all incoming metrics/spans/traces; one can achieve metric filtering this way, for instance, so that not all metric data is passed to the scaler's short-term memory. This keeps the OTEL scaler pretty lightweight.
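For example, a filter processor can drop every metric the scaler does not need. The condition below is written in OTTL, and the metric name is only an illustration:

```yaml
processors:
  filter/keep-scaling-metrics:
    metrics:
      metric:
        # drop any metric that is not used for scaling decisions
        - 'name != "http_requests_total"'
```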
OTTL lang:
If the simple metric query is not enough and one needs to combine multiple metric vectors into one or perform simple arithmetic operations on the metrics, the Metrics Generation Processor is available as an option.
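A sketch of such a rule follows; the two input metrics are hypothetical, and the rule derives a new utilization metric by dividing them:

```yaml
processors:
  metricsgeneration:
    rules:
      - name: queue.utilization   # the derived metric
        type: calculate
        metric1: queue.size       # illustrative input metrics
        metric2: queue.capacity
        operation: divide
```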
Basically any scenario described in the OTEL patterns or architecture documentation should be supported. So no matter how the OTEL Collectors are deployed, whether as a fleet of sidecar containers alongside each workload or as a complex pipeline spanning multiple Kubernetes clusters, you will be covered.
```shell
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm upgrade -i keda kedacore/keda --namespace keda --create-namespace

helm repo add kedify-otel https://kedify.github.io/otel-add-on/
helm repo update
helm upgrade -i keda-otel kedify-otel/otel-add-on --version=v0.0.1-2

kubectl apply -f examples/so.yaml
```