Another data collector that I wanted to explore as part of this observability section is Fluentd, an open-source unified logging layer.
Fluentd has four key features that make it suitable for building clean, reliable logging pipelines:
Unified Logging with JSON: Fluentd tries to structure data as JSON as much as possible. This allows Fluentd to unify all facets of processing log data: collecting, filtering, buffering, and outputting logs across multiple sources and destinations. The downstream data processing is much easier with JSON, since it has enough structure to be accessible without forcing rigid schemas.
Pluggable Architecture: Fluentd has a flexible plugin system that allows the community to extend its functionality. Over 300 community-contributed plugins connect dozens of data sources to dozens of data outputs, manipulating the data as needed. By using plugins, you can make better use of your logs right away.
Minimum Resources Required: A data collector should be lightweight so that it runs comfortably on a busy machine. Fluentd is written in a combination of C and Ruby, and requires minimal system resources. The vanilla instance runs on 30-40MB of memory and can process 13,000 events/second/core.
Built-in Reliability: Data loss should never happen. Fluentd supports memory- and file-based buffering to prevent inter-node data loss. Fluentd also supports robust failover and can be set up for high availability.
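To illustrate the JSON point above: inside Fluentd, every event carries a tag, a timestamp, and a JSON record. As a purely illustrative sketch (the tag and values here are hypothetical), a single NGINX access-log line might become an event like this:

```
tag:    nginx.access
time:   1672531200
record: {"remote": "192.168.0.1", "method": "GET", "path": "/index.html", "code": 200, "size": 777}
```

Because every source is normalised into this shape, the same filters and outputs can operate on logs from any application.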
Consider how applications typically handle their logs:

- Writing to `.log` files (difficult to analyse without a tool and at scale)
- Logging directly to a database (each application must be configured with the correct format)
- Third-party applications (NodeJS, NGINX, PostgreSQL), each logging in its own way
This is why we want a unified logging layer.
Fluentd supports all three of the logging data types shown above and gives us the ability to collect, process, and send those logs to a destination, for example Elasticsearch, MongoDB, or Kafka.

Any data from any data source can be sent to Fluentd, and Fluentd can forward it to any destination; Fluentd is not tied to any particular source or destination.
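To make that concrete, here is a minimal sketch of a Fluentd configuration that tails an application log, buffers events to disk (the file-based buffering mentioned earlier), and ships them to Elasticsearch. The paths, tag, and host are assumptions for illustration, and the Elasticsearch output requires the fluent-plugin-elasticsearch plugin:

```
<source>
  @type tail                          # follow a log file as it grows
  path /var/log/app/app.log           # hypothetical application log
  pos_file /var/log/fluentd/app.pos   # remembers the read position across restarts
  tag app.logs
  <parse>
    @type json                        # expect one JSON object per line
  </parse>
</source>

<match app.logs>
  @type elasticsearch                 # requires fluent-plugin-elasticsearch
  host elasticsearch-master           # hypothetical destination host
  port 9200
  logstash_format true                # write Logstash-style daily indices
  <buffer>
    @type file                        # file-based buffering to survive restarts
    path /var/log/fluentd/buffer      # hypothetical buffer directory
    flush_interval 5s
  </buffer>
</match>
```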
In my research of Fluentd I kept stumbling across Fluent Bit as another option. It looks like if you were looking to deploy a logging tool into your Kubernetes environment, Fluent Bit would give you that capability, even though Fluentd can also be deployed to containers as well as servers.
Fluentd and Fluent Bit both use input plugins to collect data and transform it into their internal format, and output plugins to send it on to whatever the output target is, such as Elasticsearch.

We can also use tags and matches in the configuration to control which inputs are routed to which outputs.
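As a small sketch of that routing (the paths and host are assumptions), an input's Tag labels its events and an output's Match pattern decides which events it receives:

```
[INPUT]
    Name   tail
    Path   /var/log/app/*.log    # hypothetical application logs
    Tag    app.*                 # label every event from this input

[OUTPUT]
    Name   es
    Match  app.*                 # only app.* events go to Elasticsearch
    Host   elasticsearch-master  # hypothetical destination host

[OUTPUT]
    Name   stdout
    Match  *                     # echo everything for debugging
```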
I cannot see a strong reason for choosing Fluentd for a new setup, and it seems that Fluent Bit is the best way to get started, although the two can be used together in some architectures.
Fluent Bit in Kubernetes is deployed as a DaemonSet, which means it runs on each node in the cluster. The Fluent Bit pod on each node then reads the logs of every container on that node, gathering all of the logs available, and it also gathers metadata from the Kubernetes API server.
Kubernetes annotations can also be used within the configuration YAML of our applications to control how Fluent Bit processes their logs.
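For example, Fluent Bit's kubernetes filter recognises pod annotations such as `fluentbit.io/parser` and `fluentbit.io/exclude` (enabled by the K8S-Logging.Parser and K8S-Logging.Exclude settings in the ConfigMap we will look at shortly). A hypothetical pod using them might look like:

```
apiVersion: v1
kind: Pod
metadata:
  name: web                         # hypothetical pod
  annotations:
    fluentbit.io/parser: "nginx"    # parse this pod's logs with the built-in nginx parser
    # fluentbit.io/exclude: "true"  # uncomment to drop this pod's logs entirely
spec:
  containers:
    - name: web
      image: nginx
```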
First of all, we can add the Fluent Helm repository with `helm repo add fluent https://fluent.github.io/helm-charts` and then install using the `helm install fluent-bit fluent/fluent-bit` command.
In my cluster I am also running Prometheus in the default namespace (for test purposes). We need to make sure our fluent-bit pod is up and running, which we can check with `kubectl get all | grep fluent`. This will show us the running pod, service, and DaemonSet that we mentioned earlier.
So that Fluent Bit knows where to get logs from, we have a configuration file; in this Kubernetes deployment of Fluent Bit, that configuration is held in a ConfigMap.
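We can inspect it by describing the ConfigMap, whose name matches the `fluent-bit` release we installed above:

```
kubectl describe configmap fluent-bit
```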
That ConfigMap will look something like:
```
Name:         fluent-bit
Namespace:    default
Labels:       app.kubernetes.io/instance=fluent-bit
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=fluent-bit
              app.kubernetes.io/version=1.8.14
              helm.sh/chart=fluent-bit-0.19.21
Annotations:  meta.helm.sh/release-name: fluent-bit
              meta.helm.sh/release-namespace: default

Data
====
custom_parsers.conf:
----
[PARSER]
    Name        docker_no_time
    Format      json
    Time_Keep   Off
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L

fluent-bit.conf:
----
[SERVICE]
    Daemon Off
    Flush 1
    Log_Level info
    Parsers_File parsers.conf
    Parsers_File custom_parsers.conf
    HTTP_Server On
    HTTP_Listen 0.0.0.0
    HTTP_Port 2020
    Health_Check On

[INPUT]
    Name tail
    Path /var/log/containers/*.log
    multiline.parser docker, cri
    Tag kube.*
    Mem_Buf_Limit 5MB
    Skip_Long_Lines On

[INPUT]
    Name systemd
    Tag host.*
    Systemd_Filter _SYSTEMD_UNIT=kubelet.service
    Read_From_Tail On

[FILTER]
    Name kubernetes
    Match kube.*
    Merge_Log On
    Keep_Log Off
    K8S-Logging.Parser On
    K8S-Logging.Exclude On

[OUTPUT]
    Name es
    Match kube.*
    Host elasticsearch-master
    Logstash_Format On
    Retry_Limit False

[OUTPUT]
    Name es
    Match host.*
    Host elasticsearch-master
    Logstash_Format On
    Logstash_Prefix node
    Retry_Limit False

Events:  <none>
```
We can now port-forward our pod to our localhost to ensure that we have connectivity. First, get the name of your pod with `kubectl get pods | grep fluent`, then run `kubectl port-forward fluent-bit-8kvl4 2020:2020` (substituting your own pod name) and open a web browser to http://localhost:2020/.
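With the port-forward in place, the built-in HTTP server (the `HTTP_Server On` setting in the ConfigMap above) also exposes a few monitoring endpoints we can query, for example:

```
curl http://localhost:2020/               # Fluent Bit build and version info as JSON
curl http://localhost:2020/api/v1/uptime  # how long the service has been running
curl http://localhost:2020/api/v1/metrics # per-plugin input/output metrics
```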
I also found a really great Medium article covering more about Fluent Bit.
- Understanding Logging: Containers & Microservices
- The Importance of Monitoring in DevOps
- Understanding Continuous Monitoring in DevOps?
- DevOps Monitoring Tools
- Top 5 - DevOps Monitoring Tools
- How Prometheus Monitoring works
- Introduction to Prometheus monitoring
- Promql cheat sheet with examples
- Log Management for DevOps | Manage application, server, and cloud logs with Site24x7
- Log Management what DevOps need to know
- What is ELK Stack?
- Fluentd simply explained
- Fluent Bit explained | Fluent Bit vs Fluentd
See you on Day 82