/var/fluent-bit/state/flb-storage/tail.0
sh-4.2$ ls -ltr |wc
9902
sh-4.2$ ls -ltr| more
-rw------- 1 root root 2048984 Dec 12 18:46 1-1734028879.597843005.flb
-rw------- 1 root root 2048788 Dec 12 18:48 1-1734028951.455391458.flb
-rw------- 1 root root 2049120 Dec 12 18:48 1-1734028999.60364959.flb
-rw------- 1 root root 2048756 Dec 12 18:49 1-1734028949.226740813.flb
-rw------- 1 root root 2059566 Dec 12 18:50 1-1734029020.484330725.flb
/var/fluent-bit/state/flb-storage/emitter.9
sh-4.2$ ls -ltr|wc
1104
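For reference, the size of this on-disk backlog can be quantified with standard tools (a small sketch; the paths are taken from the listings above):

# Count pending chunk files, then total the size of each storage directory.
find /var/fluent-bit/state/flb-storage -name '*.flb' | wc -l
du -sh /var/fluent-bit/state/flb-storage/*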
From only the information provided, I'm not sure what is happening in your application.
Can you work through all of the recommendations below, and let us know if any of them end up solving your issue?
Review the full fluent bit logs to see if our debugging guide applies to anything there
Check if there are lines saying "failed to flush chunk", and if so trace the specific plugin worker that saw that failure to see why the flush failed (a grep sketch follows this list)
If this behavior occurs at some point after the output buffer is filled, provide the fluent bit logs a few minutes around the point where the buffer is filled
Provide the fluent bit logs from a few minutes before to a few minutes after the point where the S3 plugin starts misbehaving
Check (e.g. via utilization metrics) if the filesystem and memory buffers are static or steadily increasing (if the former, it's likely Fluent Bit is not receiving any logs in the first place, in which case the issue would likely be elsewhere); a polling sketch follows this list
Clarify the meaning of "not sending data"; i.e. is there 0 data sent, is there a trickle of data but not at the same rate as the input, does Fluent Bit start sending data again if you cut off the input data and let it process its backlog?
Verify that logs are reaching the Fluent Bit process at all, or whether your application itself is the source of the misbehavior
Provide reproduction steps for the issue, starting from a clean state
Try using other input plugins (e.g. tcp) to see if this issue occurs there as well; a minimal tcp test is sketched after this list
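For the "failed to flush chunk" check, a rough sketch (the pod name and namespace are placeholders, not taken from this report):

# Hypothetical pod/namespace; adjust to your daemonset.
kubectl logs -n <namespace> <fluent-bit-pod> | grep -E "failed to flush chunk|\[error\]|\[ warn\]"

Matching retry lines typically name the input and output instance involved (e.g. input=tail.0 > output=s3.0), which points at the plugin worker to trace.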
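For the buffer-growth check, a minimal polling sketch against the same monitoring endpoint already used in this report:

# Sample the global chunk counters every 30 seconds; fs_chunks climbing
# steadily while inputs keep producing suggests the outputs are not draining.
while true; do
  date
  curl -s http://127.0.0.1:2020/api/v1/storage | jq '.storage_layer.chunks'
  sleep 30
done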
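And for the tcp isolation test, a hedged minimal input sketch in classic config syntax (the port and tag are arbitrary choices, not from this report):

[INPUT]
    # Hypothetical test input; the tcp plugin parses JSON payloads by default.
    Name   tcp
    Listen 0.0.0.0
    Port   5170
    Tag    tcp.test

A test record can then be sent with, e.g., echo '{"message":"test"}' | nc 127.0.0.1 5170 and traced through the same filters and outputs.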
Describe the question/issue
Fluent Bit is sending data to S3; it works fine for some time, but then it gets stuck and stops sending data to S3.
Configuration
Fluent Bit Log Output
[2024/12/18 12:33:43] [ info] [input:storage_backlog:storage_backlog.8] queueing tail.0:1-1734160287.631424024.flb
[2024/12/18 12:33:43] [ info] [input:storage_backlog:storage_backlog.8] queueing tail.0:1-1734160331.698451330.flb
[2024/12/18 12:33:43] [ info] [input:storage_backlog:storage_backlog.8] queueing tail.0:1-1734160364.698626403.flb
[2024/12/18 12:33:43] [ info] [input:storage_backlog:storage_backlog.8] queueing tail.0:1-1734160430.908663992.flb
[2024/12/18 12:33:43] [ info] [input:storage_backlog:storage_backlog.8] queueing tail.0:1-1734160462.846875919.flb
[2024/12/18 12:33:43] [ info] [input:storage_backlog:storage_backlog.8] queueing tail.0:1-1734160468.165929808.flb
Chunks are pending in the locations shown at the top of this report (/var/fluent-bit/state/flb-storage/tail.0 and /var/fluent-bit/state/flb-storage/emitter.9).
Fluent Bit Version Info
Fluent Bit is running as a Daemonset
public.ecr.aws/aws-observability/aws-for-fluent-bit:2.32.2.20241008
[fluent bit] version=1.9.10, commit=eba89f4660
EKS 1.30
Application Details
Fluent Bit should concatenate stack traces and other application logs printed across multiple lines.
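The Configuration section above is empty, so the following is only a hedged sketch of a typical multiline setup rather than the actual config in use (the emitter_for_multiline.0 entry in the metrics below suggests a filter like this is already configured). The multiline filter with a built-in parser such as java handles JVM stack traces; the match tag and record key here are assumptions:

[FILTER]
    # Concatenate multi-line records (e.g. Java stack traces) on the log key.
    Name                  multiline
    Match                 kube.*
    multiline.key_content log
    multiline.parser      java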
Metrics
sh-4.2$ curl -s http://127.0.0.1:2020/api/v1/storage | jq
{
"storage_layer": {
"chunks": {
"total_chunks": 12510,
"mem_chunks": 9,
"fs_chunks": 12501,
"fs_chunks_up": 3158,
"fs_chunks_down": 9343
}
},
"input_chunks": {
"tail.0": {
"status": {
"overlimit": true,
"mem_size": "5.7G",
"mem_limit": "572.2M"
},
"chunks": {
"total": 0,
"up": 0,
"down": 0,
"busy": 0,
"busy_size": "0b"
}
},
"tail.1": {
"status": {
"overlimit": false,
"mem_size": "38.5K",
"mem_limit": "4.8M"
},
"chunks": {
"total": 1,
"up": 1,
"down": 0,
"busy": 0,
"busy_size": "0b"
}
},
"tail.2": {
"status": {
"overlimit": false,
"mem_size": "0b",
"mem_limit": "4.8M"
},
"chunks": {
"total": 0,
"up": 0,
"down": 0,
"busy": 0,
"busy_size": "0b"
}
},
"systemd.3": {
"status": {
"overlimit": false,
"mem_size": "64.6K",
"mem_limit": "0b"
},
"chunks": {
"total": 5,
"up": 5,
"down": 0,
"busy": 5,
"busy_size": "64.6K"
}
},
"tail.4": {
"status": {
"overlimit": false,
"mem_size": "0b",
"mem_limit": "47.7M"
},
"chunks": {
"total": 1,
"up": 0,
"down": 1,
"busy": 0,
"busy_size": "0b"
}
},
"tail.5": {
"status": {
"overlimit": false,
"mem_size": "0b",
"mem_limit": "4.8M"
},
"chunks": {
"total": 0,
"up": 0,
"down": 0,
"busy": 0,
"busy_size": "0b"
}
},
"tail.6": {
"status": {
"overlimit": false,
"mem_size": "111.3K",
"mem_limit": "4.8M"
},
"chunks": {
"total": 3,
"up": 3,
"down": 0,
"busy": 2,
"busy_size": "75.0K"
}
},
"tail.7": {
"status": {
"overlimit": false,
"mem_size": "0b",
"mem_limit": "4.8M"
},
"chunks": {
"total": 0,
"up": 0,
"down": 0,
"busy": 0,
"busy_size": "0b"
}
},
"storage_backlog.8": {
"status": {
"overlimit": false,
"mem_size": "0b",
"mem_limit": "0b"
},
"chunks": {
"total": 3008,
"up": 3008,
"down": 0,
"busy": 1890,
"busy_size": "3.6G"
}
},
"emitter_for_multiline.0": {
"status": {
"overlimit": false,
"mem_size": "7.9M",
"mem_limit": "9.5M"
},
"chunks": {
"total": 200,
"up": 150,
"down": 50,
"busy": 150,
"busy_size": "7.9M"
}
}
}
}
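One detail stands out in the metrics above: tail.0 reports overlimit: true with a mem_size of 5.7G against a mem_limit of 572.2M, which generally means ingestion on that input is being throttled (paused, or pushed down to filesystem chunks, depending on storage.type) and matches the "stuck" behavior described. A small jq sketch to pick out overlimit inputs from the same endpoint:

# List inputs currently over their configured memory limit (here: tail.0).
curl -s http://127.0.0.1:2020/api/v1/storage \
  | jq '.input_chunks | to_entries[] | select(.value.status.overlimit) | .key'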