This repository has been archived by the owner on Jun 20, 2024. It is now read-only.

First user guides (local setup with kind+Jaeger) #49

Merged (2 commits, Nov 9, 2021)
305 changes: 305 additions & 0 deletions GUIDE_KIND.md
@@ -0,0 +1,305 @@
## Basic Setup

Create a 2-node cluster using kind:
```
cat > kind-config.yaml << EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.21.1@sha256:fae9a58f17f18f06aeac9772ca8b5ac680ebbed985e266f711d936e91d113bad
  kubeadmConfigPatches:
  - |
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    nodeRegistration:
      taints: []
- role: worker
  image: kindest/node:v1.21.1@sha256:fae9a58f17f18f06aeac9772ca8b5ac680ebbed985e266f711d936e91d113bad
networking:
  disableDefaultCNI: true
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.245.0.0/16"
EOF
kind create cluster --config kind-config.yaml
```
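
Once `kind create cluster` returns, it's worth confirming that both nodes registered. Note that they will stay `NotReady` until a CNI is installed, since the default CNI is disabled in the config above:
```
kubectl get nodes -o wide
```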

Install Cilium with Hubble enabled:
```
cilium install && cilium hubble enable
```
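
Before moving on, you can wait for the Cilium installation to settle; `cilium status --wait` blocks until the agent and operator report ready:
```
cilium status --wait
```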

Install cert-manager as it's a dependency of the OpenTelemetry operator:
```
kubectl apply -k github.com/cilium/kustomize-bases/cert-manager
```

Wait for cert-manager to become ready:
```
(
set -e
kubectl wait deployment --namespace="cert-manager" --for="condition=Available" cert-manager-webhook cert-manager-cainjector cert-manager --timeout=3m
kubectl wait pods --namespace="cert-manager" --for="condition=Ready" --all --timeout=3m
kubectl wait apiservice --for="condition=Available" v1.cert-manager.io v1.acme.cert-manager.io --timeout=3m
until kubectl get secret --namespace="cert-manager" cert-manager-webhook-ca 2> /dev/null ; do sleep 0.5 ; done
)
```

Deploy Jaeger operator:

> **Review comment:** is Jaeger a dependency for OpenTelemetry?
>
> **Author:** Not generally. It's just part of the OpenTelemetry world, and it can be used more or less independently. You would probably use the OTel SDK with Jaeger anyway, but the OTel collector is not essential for using Jaeger. Does this make sense?
>
> **Author:** If you read all of the configs in both of the OpenTelemetryCollector CRs that are defined here, you should be able to get an idea of how the pieces connect together. I would be happy to get on a call and draw some kind of an arch diagram together also!
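
For reference, here is a rough sketch of how the pieces connect, based on my own reading of the two collector configs in this guide rather than any official diagram:
```
bookinfo pods ──(Jaeger thrift)──> otelcol-bookinfo sidecar ──(OTLP gRPC :55690)──> otelcol-hubble DaemonSet
Hubble API (per node, :4244) ──────────────────────────────────────────────────────> otelcol-hubble DaemonSet
otelcol-hubble DaemonSet ──(Jaeger gRPC :14250)──> jaeger-default-collector ──> Jaeger UI (:16686)
```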

```
kubectl apply -k github.com/cilium/kustomize-bases/jaeger
```
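
Before creating a Jaeger instance, you may want to confirm the operator pod is running (the kustomize base is assumed to install it into the `jaeger` namespace, matching the CR below):
```
kubectl get pods -n jaeger
```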

Configure a memory-backed Jaeger instance:
```
cat > jaeger.yaml << EOF
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-default
  namespace: jaeger
spec:
  strategy: allInOne
  storage:
    type: memory
    options:
      memory:
        max-traces: 100000
  ingress:
    enabled: false
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
EOF
kubectl apply -f jaeger.yaml
```
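
The all-in-one instance should come up within a few seconds; the CR itself reports its status:
```
kubectl get jaegers -n jaeger
```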


Deploy OpenTelemetry operator:
```
kubectl apply -k github.com/cilium/kustomize-bases/opentelemetry
```
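
As with cert-manager, give the operator's admission webhook a moment to become ready before applying any `OpenTelemetryCollector` resources; a quick sanity check (the operator's namespace depends on the kustomize base) is to look for its pods:
```
kubectl get pods --all-namespaces | grep opentelemetry
```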

Configure a collector with Hubble receiver and Jaeger exporter:
```
cat > otelcol.yaml << EOF
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otelcol-hubble
  namespace: kube-system
spec:
  mode: daemonset
  image: ghcr.io/cilium/hubble-otel/otelcol:v0.1.0-rc.1
  env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  volumes:
  #- name: cilium-run
  #  hostPath:
  #    path: /var/run/cilium
  #    type: Directory
  - name: hubble-tls
    projected:
      defaultMode: 256
      sources:
      - secret:
          name: hubble-relay-client-certs
          items:
          - key: tls.crt
            path: client.crt
          - key: tls.key
            path: client.key
          - key: ca.crt
            path: ca.crt
  volumeMounts:
  #- name: cilium-run
  #  mountPath: /var/run/cilium
  - name: hubble-tls
    mountPath: /var/run/hubble-tls
    readOnly: true
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:55690
      hubble:
        endpoint: \${NODE_NAME}:4244 # unix:///var/run/cilium/hubble.sock
        buffer_size: 100
        tls:
          insecure_skip_verify: true
          ca_file: /var/run/hubble-tls/ca.crt
          cert_file: /var/run/hubble-tls/client.crt
          key_file: /var/run/hubble-tls/client.key
    processors:
      resource:
        attributes:
        # Jaeger UI doesn't allow searching traces without the 'service.name'
        # resource attribute (see https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/6048c3ff61ff97e93be6426093731713aba1eec2/pkg/translator/jaeger/traces_to_jaegerproto.go#L101-L104)
        - key: service.name
          value: hubble-otel
          action: insert
      batch:
        timeout: 30s
        send_batch_size: 100

    exporters:
      logging:
        loglevel: debug
      jaeger:
        endpoint: jaeger-default-collector.jaeger.svc.cluster.local:14250
        tls:
          insecure: true

    service:
      telemetry:
        logs:
          level: info # debug
      pipelines:
        traces:
          receivers: [hubble, otlp]
          processors: [batch, resource]
          exporters: [jaeger]
EOF
kubectl apply -f otelcol.yaml
```

This configuration deploys the collector as a DaemonSet; you can see its pods by running:
```
kubectl get pod -n kube-system -l app.kubernetes.io/name=otelcol-hubble-collector
```

To view the logs, run:
```
kubectl logs -n kube-system -l app.kubernetes.io/name=otelcol-hubble-collector
```

You should now be able to view traces produced by Hubble in the Jaeger UI. To access it, port-forward the query service and open http://localhost:16686 in your browser:
```
kubectl port-forward svc/jaeger-default-query -n jaeger 16686
```

## Getting More Visibility

The basic setup is done now. However, you probably won't find anything interesting just yet.
Let's get more traces generated, and enable DNS & HTTP visibility in Cilium.

First, deploy bookinfo app:
```
kubectl apply -k github.com/cilium/kustomize-bases/bookinfo
```
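
Wait until the application pods are ready (the base is assumed to deploy into the `bookinfo` namespace, which the policies below also target):
```
kubectl wait pods --namespace="bookinfo" --for="condition=Ready" --all --timeout=3m
```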

Enable HTTP visibility for the bookinfo app and all of DNS traffic:
```
cat > visibility-policies.yaml << EOF
---
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: default-allow
spec:
  endpointSelector: {}
  egress:
  - toEntities:
    - cluster
    - world
  - toEndpoints:
    - {}
---
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: dns-visibility
spec:
  endpointSelector: {}
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
  - toFQDNs:
    - matchPattern: "*"
  - toEndpoints:
    - {}
---
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: http-visibility
  namespace: bookinfo
spec:
  endpointSelector: {}
  egress:
  - toPorts:
    - ports:
      - port: "9080"
        protocol: TCP
      rules:
        http:
        - method: ".*"
  - toEndpoints:
    - {}
EOF
kubectl apply -f visibility-policies.yaml
```
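
To confirm that Cilium accepted the policies, list both the cluster-wide and the namespaced CRs:
```
kubectl get ciliumclusterwidenetworkpolicies
kubectl get ciliumnetworkpolicies -n bookinfo
```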

The bookinfo app already produces Jaeger traces; however, to collect these
a sidecar is [recommended](https://github.com/jaegertracing/jaeger-client-python/issues/47#issuecomment-303119229).

Add the sidecar config:
```
cat > otelcol-bookinfo.yaml << EOF
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otelcol-bookinfo
  namespace: bookinfo
spec:
  mode: sidecar
  env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  config: |
    receivers:
      jaeger:
        protocols:
          thrift_binary: {}
          thrift_compact: {}
    exporters:
      otlp:
        endpoint: \${NODE_NAME}:55690

    service:
      telemetry:
        logs:
          level: info
      pipelines:
        traces:
          receivers: [jaeger]
          exporters: [otlp]
EOF
kubectl apply -f otelcol-bookinfo.yaml
```

Re-create bookinfo pods to add the sidecars:
```
kubectl delete pods -n bookinfo --all --wait=false
```
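
The operator injects the sidecar at pod creation time, which is why the pods need re-creating; the bookinfo manifests are assumed to carry the `sidecar.opentelemetry.io/inject` annotation that triggers injection. Once the new pods are up, you can check that each one gained an extra collector container:
```
# Print container names per pod; an injected collector container
# should appear alongside each application container.
kubectl get pods -n bookinfo -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'
```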

Generate some load on the bookinfo app, so that there are plenty of traces:
```
while true ; do kubectl -n bookinfo exec "$(kubectl -n bookinfo get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"; done
```
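
Leave the loop running for a minute or two, then return to the Jaeger UI: you should now see the bookinfo services alongside `hubble-otel`, with DNS and HTTP flows from Hubble showing up next to the application's own spans.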
6 changes: 3 additions & 3 deletions README.md
@@ -8,9 +8,9 @@ automatically for related flows, based on source/destination address, port and C
When an L7 flow contains common trace ID headers, those will be respected.

The functionality is package as a standalone program (see [`main.go`](main.go)) that can speak Hubble
-API as well as OTLP, as well as an reciever for OpenTelemetry collector (see [`receiver/`](receiver)).
+API as well as OTLP, as well as a receiver for OpenTelemetry collector (see [`receiver/`](receiver)).
Eventually it might prove that custom OpenTelemetry collector is the most suitable way of running the
-Hubble adaptor, but at the time of writing prior to the initial realese it wasn't abundantly clear.
+Hubble adaptor, but at the time of writing prior to the initial release it wasn't abundantly clear.

## Getting Started

@@ -73,7 +73,7 @@ and can be updated easily.

## Running `hubble-otel` (standalone)

-To start Hubble adaptor in stanalone mode against a local collector, run:
+To start Hubble adaptor in standalone mode against a local collector, run:

```
./dev-script/run-hubble-otel.sh [<flags>]