pipeline/outputs/s3.md (31 additions, 30 deletions)
@@ -15,7 +15,7 @@ The plugin can upload data to S3 using the
or [`PutObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html).
Multipart is the default and is recommended. Fluent Bit will stream data in a series
of _parts_. This limits the amount of data buffered on disk at any point in time.
-By default, every time 5MiB of data have been received, a new part will be uploaded.
+By default, every time 5 MiB of data have been received, a new part will be uploaded.
The plugin can create files up to gigabytes in size from many small chunks or parts
using the multipart API. All aspects of the upload process are configurable.
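As a minimal sketch of how these sizes fit together in an output stanza (the bucket name and the values shown are illustrative assumptions, not part of this diff; the parameter names come from the configuration table below):

```
[OUTPUT]
    Name               s3
    Match              *
    # hypothetical bucket name
    bucket             my-bucket
    region             us-east-1
    # final object size, assembled from many uploaded parts
    total_file_size    1G
    # size of each multipart part (the default is roughly 5 MiB)
    upload_chunk_size  10M
```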
@@ -31,7 +31,7 @@ for details about fetching AWS credentials.
{% hint style="info" %}
The [Prometheus success/retry/error metrics values](administration/monitoring.md)
-output by Fluent Bit's built-in http server are meaningless for S3 output. S3 has
+output by the built-in http server in Fluent Bit are meaningless for S3 output. S3 has
its own buffering and retry mechanisms. The Fluent Bit AWS S3 maintainers apologize
for this feature gap; you can [track our progress fixing it on GitHub](https://github.com/fluent/fluent-bit/issues/6141).
{% endhint %}
@@ -43,7 +43,7 @@ for this feature gap; you can [track our progress fixing it on GitHub](https://g
|`region`| The AWS region of your S3 bucket. |`us-east-1`|
|`bucket`| S3 Bucket name |_none_|
|`json_date_key`| Specify the time key name in the output record. To disable the time key, set the value to `false`. |`date`|
-|`json_date_format`| Specify the format of the date. Supported formats are `double`, `epoch`, `iso8601` (2018-05-30T09:39:52.000681Z) and `_java_sql_timestamp_` (2018-05-30 09:39:52.000681) |`iso8601`|
+|`json_date_format`| Specify the format of the date. Accepted values: `double`, `epoch`, `iso8601` (2018-05-30T09:39:52.000681Z), `_java_sql_timestamp_` (2018-05-30 09:39:52.000681). |`iso8601`|
|`total_file_size`| Specify file size in S3. Minimum size is `1M`. With `use_put_object On` the maximum size is `1G`. With multipart uploads, the maximum size is `50G`. |`100M`|
|`upload_chunk_size`| The size of each part for multipart uploads. Max: 50M | 5,242,880 bytes |
|`upload_timeout`| When this amount of time elapses, Fluent Bit uploads and creates a new file in S3. Set to `60m` to upload a new file every hour. |`10m`|
@@ -55,7 +55,7 @@ for this feature gap; you can [track our progress fixing it on GitHub](https://g
|`use_put_object`| Use the S3 `PutObject` API instead of the multipart upload API. When enabled, the key extension is only available when `$UUID` is specified in `s3_key_format`. If `$UUID` isn't included, a random string is appended to the format string and the key extension can't be customized. |`false`|
|`role_arn`| ARN of an IAM role to assume (for example, for cross-account access). |_none_|
|`endpoint`| Custom endpoint for the S3 API. Endpoints can contain scheme and port. |_none_|
-|`sts_endpoint`| Custom endpoint for the STS API. |_none_|
+|`sts_endpoint`| Custom endpoint for the STS API. |_none_|
|`profile`| Option to specify an AWS Profile for credentials. |`default`|
|`canned_acl`|[Predefined Canned ACL policy](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) for S3 objects. |_none_|
|`compression`| Compression type for S3 objects. `gzip` is currently the only supported value by default. If Apache Arrow support was enabled at compile time, you can use `arrow`. For gzip compression, the Content-Encoding HTTP Header will be set to `gzip`. Gzip compression can be enabled when `use_put_object` is `on` or `off` (`PutObject` and Multipart). Arrow compression can only be enabled with `use_put_object On`. |_none_|
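As a hedged illustration of the compression constraint above (the bucket name is a made-up placeholder; only parameters from this table are used):

```
[OUTPUT]
    Name            s3
    Match           *
    # hypothetical bucket name
    bucket          my-bucket
    region          us-east-1
    # multipart upload mode; gzip is allowed with PutObject as well
    use_put_object  Off
    # uploaded objects get the Content-Encoding: gzip header
    compression     gzip
```

Arrow compression, by contrast, would additionally require `use_put_object On`.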
@@ -65,15 +65,15 @@ for this feature gap; you can [track our progress fixing it on GitHub](https://g
|`log_key`| By default, the whole log record will be sent to S3. When specifying a key name with this option, only the value of that key is sent to S3. For example, when using Docker you can specify `log_key log` and only the log message is sent to S3. |_none_|
|`preserve_data_ordering`| When an upload request fails, the last received chunk might swap with a later chunk, resulting in data shuffling. This feature prevents shuffling by using a queue logic for uploads. |`true`|
|`storage_class`| Specify the [storage class](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#AmazonS3-PutObject-request-header-StorageClass) for S3 objects. If this option isn't specified, objects are stored with the default `STANDARD` storage class. |_none_|
-|`retry_limit`| Integer value to set the maximum number of retries allowed. Requires versions 1.9.10 and 2.0.1 or higher. For previous version, the number of retries is 5 and isn;t configurable. |`1`|
+|`retry_limit`| Integer value to set the maximum number of retries allowed. Requires versions 1.9.10 and 2.0.1 or later. For previous versions, the number of retries is `5` and isn't configurable. |`1`|
|`external_id`| Specify an external ID for the STS API. Can be used with the `role_arn` parameter if your role requires an external ID. |_none_|
|`workers`| The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. |`1`|
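Taken together, a sketch that exercises several of the parameters above (the bucket name and role ARN are placeholder values, not from this diff):

```
[OUTPUT]
    Name            s3
    Match           *
    # hypothetical bucket name
    bucket          my-bucket
    region          us-east-1
    # hypothetical IAM role for cross-account access
    role_arn        arn:aws:iam::123456789012:role/s3-writer
    # roll a new object every hour
    upload_timeout  60m
    # requires Fluent Bit 1.9.10 / 2.0.1 or later to take effect
    retry_limit     3
    storage_class   STANDARD_IA
    workers         2
```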
## TLS / SSL
To skip TLS verification, set `tls.verify` to `false`. For more details about the
-properties available and general configuration, refer to
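A sketch of that TLS toggle, assuming a self-hosted S3-compatible endpoint with a self-signed certificate (the endpoint value is a hypothetical placeholder):

```
[OUTPUT]
    Name        s3
    Match       *
    # hypothetical bucket name
    bucket      my-bucket
    region      us-east-1
    # hypothetical self-hosted S3-compatible endpoint
    endpoint    https://minio.internal:9000
    # skip certificate verification for the self-signed cert
    tls.verify  false
```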