throttling config
<filter k8s_log.**>
  @type throttle
  group_key $.kubernetes.namespace_name
  group_bucket_period_s 60
  group_bucket_limit 6000
  group_reset_rate_s 100
</filter>
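Here group_key points at the nested kubernetes.namespace_name field, so events are rate-limited per Kubernetes namespace: with group_bucket_period_s 60 and group_bucket_limit 6000, each namespace gets at most 6000 records per 60-second window. A minimal Ruby sketch of the lookup the group key implies (the record below is hypothetical, not from this report):

    # "$.kubernetes.namespace_name" names the nested namespace field, i.e.
    # record["kubernetes"]["namespace_name"] on a kubernetes-enriched event.
    record = {
      "log" => "example line",
      "kubernetes" => { "namespace_name" => "my-namespace", "pod_name" => "my-pod" }
    }
    bucket_key = record.dig("kubernetes", "namespace_name")  # => "my-namespace"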
<match **.k8s_log.**>
  @id copy_k8s_log
  log_level trace
  @type copy
  <store>
    @id kafka_buffered_k8s_log
    reserve_data true
    @log_level trace
    @type kafka_buffered
    brokers {brokers list}
    default_topic fluent_data
    output_include_tag true
    #output_include_time true
    required_acks 1
    kafka_agg_max_bytes 10000000
    kafka_agg_max_messages 1000000
    max_send_limit_bytes 9000000000 # to avoid MessageSizeTooLarge
    get_kafka_client_log true
    #<buffer>
    #  @type memory
    #  flush_mode immediate
    #  flush_thread_count 20
    #  chunk_limit_size 8MB
    #  total_limit_size 64MB
    #  overflow_action drop_oldest_chunk
    #</buffer>
  </store>
  <store>
    @id out_prometheus_k8s_log
    @type prometheus
    <metric>
      name fluentd_output_status_num_records_total
      type counter
      desc The total number of outgoing records
      <labels>
        tag ${tag}
        hostname ${hostname}
      </labels>
    </metric>
  </store>
</match>
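Because the output is @type copy, each matched event should be handed to every <store> in turn: the buffered Kafka producer first, then the Prometheus counter. A rough Ruby sketch of that fan-out (the store stand-ins are hypothetical, not plugin code):

    # Each configured <store> receives its own copy of the event.
    stores = [
      ->(record) { puts "kafka_buffered_k8s_log <- #{record["log"]}" },
      ->(record) { puts "out_prometheus_k8s_log <- #{record["log"]}" },
    ]
    event = { "log" => "example line" }  # hypothetical record
    stores.each { |store| store.call(event) }

Note that with the <buffer> section commented out, the Kafka output runs on its buffering defaults; the memory_chunk frames in the backtrace below are consistent with an in-memory buffer. With this config, fluentd logs the following: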
failed to flush the buffer. retry_time=2 next_retry_seconds=2018-09-03 07:18:49 +0000 chunk="574f2567d5ab677a1bd50654f855d45d" error_class=ArgumentError error="wrong number of arguments (given 6, expected 0)"
2018-09-03 07:18:49 +0000 [warn]: #0 suppressed same stacktrace
2018-09-03 07:18:50 +0000 [info]: #0 following tail of /applog/container/logs/json/splunk_2018-09-03.0718.log
2018-09-03 07:18:50 +0000 [trace]: #0 [kafka_buffered_k8s_log] enqueueing all chunks in buffer instance=47336657585960
2018-09-03 07:18:53 +0000 [warn]: #0 [kafka_buffered_fluent_logs] Send exception occurred: wrong number of arguments (given 6, expected 0)
2018-09-03 07:18:53 +0000 [warn]: #0 [kafka_buffered_fluent_logs] Exception Backtrace : /usr/lib/ruby/gems/2.4.0/gems/ruby-kafka-0.7.0/lib/kafka/pending_message.rb:7:in `initialize'
/usr/lib/ruby/gems/2.4.0/gems/fluent-plugin-kafka-0.6.6/lib/fluent/plugin/kafka_producer_ext.rb:16:in `new'
/usr/lib/ruby/gems/2.4.0/gems/fluent-plugin-kafka-0.6.6/lib/fluent/plugin/kafka_producer_ext.rb:16:in `produce2'
/usr/lib/ruby/gems/2.4.0/gems/fluent-plugin-kafka-0.6.6/lib/fluent/plugin/out_kafka_buffered.rb:322:in `block in write'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/event.rb:323:in `each'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/event.rb:323:in `block in each'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/plugin/buffer/memory_chunk.rb:80:in `open'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/plugin/buffer/memory_chunk.rb:80:in `open'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/event.rb:322:in `each'
/usr/lib/ruby/gems/2.4.0/gems/fluent-plugin-kafka-0.6.6/lib/fluent/plugin/out_kafka_buffered.rb:284:in `write'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/compat/output.rb:131:in `write'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/plugin/output.rb:1094:in `try_flush'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/plugin/output.rb:1319:in `flush_thread_run'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/plugin/output.rb:439:in `block (2 levels) in start'
/usr/lib/ruby/gems/2.4.0/gems/fluentd-1.1.0/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
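The version pair in this backtrace (ruby-kafka 0.7.0 driven by fluent-plugin-kafka 0.6.6) points at a known incompatibility: ruby-kafka 0.7.0 changed Kafka::PendingMessage#initialize to keyword arguments, while fluent-plugin-kafka 0.6.6's kafka_producer_ext.rb still passes six positional arguments, hence "wrong number of arguments (given 6, expected 0)". A minimal Ruby sketch of the mismatch (the class below is a stand-in, not the gem's actual code):

    # Stand-in for ruby-kafka 0.7.0's Kafka::PendingMessage: initialize accepts
    # keyword arguments only, so it expects 0 positional arguments.
    class PendingMessage
      def initialize(value:, key:, topic:, partition:, partition_key:, create_time:)
        @value, @key, @topic = value, key, topic
        @partition, @partition_key, @create_time = partition, partition_key, create_time
      end
    end

    # fluent-plugin-kafka 0.6.6 still constructs it positionally (see produce2
    # in the backtrace above), which reproduces the error:
    begin
      PendingMessage.new("payload", nil, "fluent_data", nil, nil, Time.now)
    rescue ArgumentError => e
      puts e.message  # => wrong number of arguments (given 6, expected 0)
    end

If that is the cause, pinning ruby-kafka to a pre-0.7 release or upgrading to a fluent-plugin-kafka version that supports ruby-kafka 0.7 should avoid the error.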