Hi,
I am using this plugin to collect logs from journald and to separate the logs for docker, kubelet, and all other services. In addition, I use the concat filter to combine multiline logs into a single event.
All of this works fine when log generation is normal, but a few applications generate a huge volume of logs (including debug logs), and for those particular applications we see log delays of up to one day.
I also tried adding multiple workers to improve performance, but that does not seem to be supported by the systemd plugin. Is there a workaround for this?
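For the multi-worker part, one possible workaround (a sketch only, assuming Fluentd v1.x multi-process workers; the worker count here is illustrative) is to enable workers globally but pin the in_systemd source to a single worker with a <worker> directive, so that plugins which do support multiple workers can use the rest:

```
# Sketch only: enable two workers and pin the systemd input
# (which does not support multiple workers) to worker 0.
<system>
  workers 2
</system>

<worker 0>
  <source>
    @type systemd
    path /run/log/journal
    tag system_journal
  </source>
</worker>
```

Note that this only avoids the "not supported" error: the journald pipeline itself still runs in a single worker, so on its own it will not clear the backlog.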
Below is my configuration file:
<source>
  @type systemd
  path /run/log/journal
  #matches [{ "_SYSTEMD_UNIT": "syslog.service" }]
  read_from_head false #Set to true if you want to read logs from starting
  <storage>
    @type local
    persistent true
    path /run/log/fluent/syslog.pos
  </storage>
  tag system_journal
</source>

<filter system_journal*>
  @type record_transformer
  enable_ruby
  <record>
    _SYSTEMD_UNIT ${if record.has_key?('_SYSTEMD_UNIT'); record['_SYSTEMD_UNIT'].gsub(/^(docker|kubelet|[a-z]*)/,'\1'); else 'infra'; end}
  </record>
</filter>

<match system_journal*>
  @type rewrite_tag_filter
  <rule>
    key _SYSTEMD_UNIT
    pattern (docker|kubelet)
    tag $1.${tag}
  </rule>
  <rule>
    key _SYSTEMD_UNIT
    pattern .+
    tag infra
  </rule>
</match>

<filter docker.system_journal*>
  @type kubernetes_metadata
  @id filter_kube_metadata
  kubernetes_url "#{ENV['FLUENT_FILTER_KUBERNETES_URL'] || 'https://' + ENV.fetch('KUBERNETES_SERVICE_HOST') + ':' + ENV.fetch('KUBERNETES_SERVICE_PORT') + '/api'}"
  verify_ssl "#{ENV['KUBERNETES_VERIFY_SSL'] || false}"
  ca_file "#{ENV['KUBERNETES_CA_FILE']}"
  skip_labels "#{ENV['FLUENT_KUBERNETES_METADATA_SKIP_LABELS'] || 'false'}"
  skip_container_metadata "#{ENV['FLUENT_KUBERNETES_METADATA_SKIP_CONTAINER_METADATA'] || 'false'}"
  skip_master_url "#{ENV['FLUENT_KUBERNETES_METADATA_SKIP_MASTER_URL'] || 'false'}"
  skip_namespace_metadata "#{ENV['FLUENT_KUBERNETES_METADATA_SKIP_NAMESPACE_METADATA'] || 'false'}"
</filter>

<filter docker.system_journal*>
  @type record_transformer
  renew_record false
  enable_ruby
  <record>
    k8s_clustername ${if record.has_key?('kubernetes'); record["kubernetes"]["host"].gsub(/^(.+)(master|worker).+/,'\1'); end}
  </record>
</filter>

<match docker.system_journal*>
  @type rewrite_tag_filter
  <rule>
    key _TRANSPORT
    pattern ^(journal)
    tag $1.docker
  </rule>
</match>

<filter **>
  @type concat
  key MESSAGE
  timeout_label @NORMAL
  multiline_start_regexp /^(\d{4}[-/]\d{1,2}[-/]\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}.\d{0,3}|[IEW]\d{4} \d{1,2}:\d{1,2}:\d{1,2}.\d{0,3})/
  multiline_end_regexp /\n$/
  stream_identity_key CONTAINER_ID
  flush_interval 60s
</filter>

<match **>
  @type relabel
  @label @NORMAL
</match>

<label @NORMAL>
  <match **>
    @type vmware_loginsight
    @id out_vmw_li_all_container_logs
    scheme https
    ssl_verify false
    # Loginsight host: one may use an IP address or CNAME
    path api/v1/events/ingest
    host loginight.example.com
    port 9543
    # agent_id XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
    agent_id aexample-uuid-4b7a-8b09-fbfac4b46fd9
    # Keys from the log event whose values should be added as log message/text to
    # Loginsight. Note these key/value pairs won't be added as metadata/fields.
    log_text_keys ["log","msg","message"]
    # Use this flag if you want to enable HTTP debug logs
    http_conn_debug false
  </match>
</label>
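If the backlog is on the output side rather than the input, tuning the output buffer may help more than extra workers. A minimal sketch, assuming the vmware_loginsight output accepts Fluentd's standard <buffer> section (worth confirming in the plugin's docs); the buffer path and all numeric values below are illustrative assumptions, not recommendations:

```
<label @NORMAL>
  <match **>
    @type vmware_loginsight
    # same vmware_loginsight settings as in the config above
    <buffer>
      @type file
      path /var/log/fluent/li-buffer   # hypothetical buffer path
      flush_interval 5s
      flush_thread_count 4             # flush chunks in parallel
      chunk_limit_size 8m
      total_limit_size 1g
      overflow_action block            # apply backpressure instead of dropping
    </buffer>
  </match>
</label>
```

Also note that the concat filter's flush_interval 60s adds up to a minute of latency per stream on its own, separate from any output backlog.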