Apply suggestions from code review
Co-authored-by: Craig Norris <[email protected]>
Signed-off-by: esmerel <[email protected]>
esmerel and cnorris-cs authored Sep 13, 2024
1 parent 69943d9 commit d9de41a
Showing 1 changed file with 31 additions and 30 deletions.
61 changes: 31 additions & 30 deletions pipeline/outputs/s3.md
The plugin can upload data to S3 using the multipart upload API
or [`PutObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html).
Multipart is the default and is recommended. Fluent Bit will stream data in a series
of _parts_. This limits the amount of data buffered on disk at any point in time.
By default, every time 5&nbsp;MiB of data have been received, a new part will be uploaded.
The plugin can create files up to gigabytes in size from many small chunks or parts
using the multipart API. All aspects of the upload process are configurable.
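
As a minimal sketch of a multipart configuration (the bucket name and region are placeholders, and the values shown are the defaults rather than recommendations):

```python
[OUTPUT]
    Name              s3
    Match             *
    # Placeholder bucket and region; replace with your own.
    bucket            my-bucket
    region            us-east-1
    # A new part is uploaded roughly every 5 MiB of buffered data (the default).
    upload_chunk_size 5M
    # The object is completed once it reaches this size (the default).
    total_file_size   100M
```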


{% hint style="info" %}
The [Prometheus success/retry/error metrics values](../../administration/monitoring.md)
output by the built-in HTTP server in Fluent Bit are meaningless for S3 output. S3 has
its own buffering and retry mechanisms. The Fluent Bit AWS S3 maintainers apologize
for this feature gap; you can [track our progress fixing it on GitHub](https://github.com/fluent/fluent-bit/issues/6141).
{% endhint %}
| `region` | The AWS region of your S3 bucket. | `us-east-1` |
| `bucket` | S3 Bucket name | _none_ |
| `json_date_key` | Specify the time key name in the output record. To disable the time key, set the value to `false`. | `date` |
| `json_date_format` | Specify the format of the date. Accepted values: `double`, `epoch`, `iso8601` (2018-05-30T09:39:52.000681Z), `_java_sql_timestamp_` (2018-05-30 09:39:52.000681). | `iso8601` |
| `total_file_size` | Specify file size in S3. Minimum size is `1M`. With `use_put_object On` the maximum size is `1G`. With multipart uploads, the maximum size is `50G`. | `100M` |
| `upload_chunk_size` | The size of each part for multipart uploads. Maximum: `50M`. | 5,242,880 bytes |
| `upload_timeout` | When this amount of time elapses, Fluent Bit uploads and creates a new file in S3. Set to `60m` to upload a new file every hour. | `10m` |
| `use_put_object` | Use the S3 `PutObject` API instead of the multipart upload API. When enabled, the key extension is only available when `$UUID` is specified in `s3_key_format`. If `$UUID` isn't included, a random string is appended to the format string and the key extension can't be customized. | `false` |
| `role_arn` | ARN of an IAM role to assume (for example, for cross-account access). | _none_ |
| `endpoint` | Custom endpoint for the S3 API. Endpoints can contain scheme and port. | _none_ |
| `sts_endpoint` | Custom endpoint for the STS API. | _none_ |
| `profile` | Option to specify an AWS Profile for credentials. | `default` |
| `canned_acl` | [Predefined Canned ACL policy](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) for S3 objects. | _none_ |
| `compression` | Compression type for S3 objects. `gzip` is currently the only supported value by default. If Apache Arrow support was enabled at compile time, you can also use `arrow`. For gzip compression, the `Content-Encoding` HTTP header will be set to `gzip`. Gzip compression can be enabled when `use_put_object` is `on` or `off` (`PutObject` and multipart). Arrow compression can only be enabled with `use_put_object On`. | _none_ |
| `log_key` | By default, the whole log record will be sent to S3. When specifying a key name with this option, only the value of that key is sent to S3. For example, when using Docker you can specify `log_key log` and only the log message is sent to S3. | _none_ |
| `preserve_data_ordering` | When an upload request fails, the last received chunk might swap with a later chunk, resulting in data shuffling. This feature prevents shuffling by using a queue logic for uploads. | `true` |
| `storage_class` | Specify the [storage class](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#AmazonS3-PutObject-request-header-StorageClass) for S3 objects. If this option isn't specified, objects are stored with the default `STANDARD` storage class. | _none_ |
| `retry_limit` | Integer value to set the maximum number of retries allowed. Requires versions 1.9.10 and 2.0.1 or later. For earlier versions, the number of retries is `5` and isn't configurable. | `1` |
| `external_id` | Specify an external ID for the STS API. Can be used with the `role_arn` parameter if your role requires an external ID. | _none_ |
| `workers` | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `1` |
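
As an illustrative sketch combining several of the keys above (the bucket, region, and key format are placeholder values, not recommendations):

```python
[OUTPUT]
    Name                         s3
    Match                        *
    bucket                       my-bucket
    region                       us-west-2
    total_file_size              250M
    upload_timeout               10m
    # Time formatters, $TAG, and $UUID are expanded per upload.
    s3_key_format                /$TAG/%Y/%m/%d/%H/%M/%S/$UUID.gz
    s3_key_format_tag_delimiters .-
```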

## TLS / SSL

To skip TLS verification, set `tls.verify` to `false`. For more details about the
properties available and general configuration, refer to
[TLS/SSL](../../administration/transport-security.md).
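
For example, a sketch of pointing the plugin at a custom endpoint with verification disabled (the endpoint is an assumed placeholder, such as a local S3-compatible service):

```python
[OUTPUT]
    Name       s3
    Match      *
    bucket     my-bucket
    region     us-east-1
    # Placeholder custom endpoint; adjust to your environment.
    endpoint   https://s3.internal.example.com:9000
    # Skip TLS certificate verification (not recommended for production).
    tls.verify false
```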

## Permissions

The S3 output plugin is used to upload large files to an Amazon S3 bucket, while
most other outputs send many requests to upload data in batches of a few
megabytes or less.

When Fluent Bit receives logs, it stores them in chunks, either in memory or the
filesystem depending on your settings. Chunks are usually around 2&nbsp;MB in size.
Fluent Bit sends chunks, in order, to each output that matches their tag. Most outputs
then send the chunk immediately to their destination. A chunk is sent to the output's
`flush` callback function, which must return one of `FLB_OK`, `FLB_RETRY`, or
and success metrics available in Prometheus format through its monitoring interface.

The S3 output plugin conforms to the Fluent Bit output plugin specification.
Since S3's use case is to upload large files (over 2&nbsp;MB), its behavior is different.
S3's `flush` callback function buffers the incoming chunk to the filesystem, and
returns an `FLB_OK`. This means Prometheus metrics available from the Fluent
Bit HTTP server are meaningless for S3. In addition, the `storage.total_limit_size`
[opened an issue with a design](https://github.com/fluent/fluent-bit/issues/6141)
to allow S3 to manage its own output metrics.
- You must use `store_dir_limit_size` to limit the space on disk used by S3 buffer files.
- The original ordering of data ingested by Fluent Bit may not be preserved unless you enable
  `preserve_data_ordering On`, as sketched below.
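
As a sketch of how the last two limitations are typically addressed in configuration (the path and sizes are assumptions, not recommendations):

```python
[OUTPUT]
    Name                   s3
    Match                  *
    bucket                 my-bucket
    region                 us-east-1
    # Cap the disk space used by buffered chunks awaiting upload.
    store_dir              /var/log/flb-s3-buffer
    store_dir_limit_size   512M
    # Queue uploads so a retried chunk can't be reordered.
    preserve_data_ordering On
```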

## S3 Key Format and Tag Delimiters

In this example, the tag associated with the logs in question is `my_app_name-logs.prod`:

```python
s3_key_format_tag_delimiters .-
```

With the delimiters as `.` and `-`, the tag splits into parts as follows:

- `$TAG[0]` = `my_app_name`
- `$TAG[1]` = `logs`
- `$TAG[2]` = `prod`

The key in S3 will be `/prod/my_app_name/2020/01/01/00/00/00/bgdHN1NM.gz`.
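
A key like the one above could be produced by a format along these lines (a sketch; the exact `s3_key_format` used in the original example isn't visible in this diff):

```python
[OUTPUT]
    Name                         s3
    Match                        *
    bucket                       my-bucket
    region                       us-east-1
    # With the delimiters above, $TAG[2] = prod and $TAG[0] = my_app_name.
    s3_key_format                /$TAG[2]/$TAG[0]/%Y/%m/%d/%H/%M/%S/$UUID.gz
    s3_key_format_tag_delimiters .-
```
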
### Allowing a file extension in the S3 Key Format with $UUID

The Fluent Bit S3 output was designed to ensure that previous uploads will never be
overwritten by a subsequent upload. The `s3_key_format` supports time formatters,
`$UUID`, and `$INDEX`. `$INDEX` is special because it is saved in the `store_dir`. If
you restart Fluent Bit with the same disk, it can continue incrementing the
index from its last value in the previous run.

For files uploaded with the `PutObject` API, the S3 output requires that a unique
random string be present in the S3 key. Timestamps alone may not be unique enough
between uploads: for example, if you only
specify minute granularity timestamps in the S3 key, with a small upload size, it's
possible to have two uploads that have timestamps set in the same minute. This
requirement can be disabled with `static_file_path On`.

The `PutObject` API is used in these cases:

- When you explicitly set `use_put_object On`.
- On startup when the S3 output finds old buffer files in the `store_dir` from
a previous run and attempts to send all of them at once.
- On shutdown. To prevent data loss the S3 output attempts to send all currently
buffered data at once.

You should always specify `$UUID` somewhere in your S3 key format. Otherwise, if the
`PutObject` API is used, S3 appends a random eight-character UUID to the end of your
S3 key. This means that a file extension set at the end of an S3 key will have the
random UUID appended to it. Disable this behavior with `static_file_path On`.

For example, we attempt to set a `.gz` extension without specifying `$UUID`:

```python
[OUTPUT]
```

The S3 output appended a random string to the file extension, since this upload
on shutdown used the `PutObject` API.

There are two ways of disabling this behavior:

- Use `static_file_path`:
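
  The full example for this option isn't visible in this diff. As a sketch (bucket, region, and key are placeholders), a configuration that keeps a literal `.gz` extension might look like:

```python
[OUTPUT]
    Name             s3
    Match            *
    bucket           my-bucket
    region           us-east-1
    use_put_object   On
    # The key is used exactly as written; no random string is appended.
    s3_key_format    /logs/%Y/%m/%d/%H_%M_%S.gz
    static_file_path On
```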

shuts down. If it cannot send some data, on restart it will look in the `store_dir`
for existing data and try to send it.

Multipart uploads are ideal for most use cases because they allow the plugin to
upload data in small chunks over time. For example, a 1&nbsp;GB file can be created
from 200 5&nbsp;MB chunks. While the file size in S3 will be 1&nbsp;GB, only
5&nbsp;MB will be buffered on disk at any one point in time.
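
As a sketch of the sizing described above (values are illustrative):

```python
[OUTPUT]
    Name              s3
    Match             *
    bucket            my-bucket
    region            us-east-1
    # A 1 GB object assembled from roughly 200 parts of about 5 MiB each;
    # only one part's worth of data is buffered on disk at a time.
    total_file_size   1G
    upload_chunk_size 5M
```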

One drawback to multipart uploads is that the file and data aren't visible in S3
until the upload is completed with a

When you enable compression, S3 applies the compression algorithm at send time. The
size settings trigger uploads based on the size of buffered data, not the
final compressed size. It's possible that after compression, buffered data no longer
meets the required minimum S3
[UploadPart](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html)
size. If this occurs, you will see a log message like:
`... compression, chunk is only 1063320 bytes, the chunk was too small, using PutObject ...`

If you encounter this frequently, use the numbers in the messages to guess your
compression factor. In this example, the buffered data was reduced from
5,630,650 bytes to 1,063,320 bytes. The compressed size is one-fifth the actual data size.
Configuring `upload_chunk_size 30M` should ensure each part is large enough after
compression to be over the minimum required part size of 5,242,880 bytes.
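
A sketch of that adjustment, assuming roughly a 5x compression ratio as in the example message (the other values are placeholders):

```python
[OUTPUT]
    Name              s3
    Match             *
    bucket            my-bucket
    region            us-east-1
    compression       gzip
    # Buffer about 30 MiB before compressing so each part stays above the
    # 5,242,880-byte minimum even after roughly 5x compression.
    upload_chunk_size 30M
    total_file_size   500M
```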

```
cmake -DFLB_ARROW=On ..
cmake --build .
```

After being compiled, Fluent Bit can upload incoming data to S3 in Apache Arrow format.

For example:
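
The original example isn't visible in this diff. As an illustrative sketch (the input plugin, bucket, and region are assumptions), Arrow output requires `compression arrow` together with `use_put_object On`:

```python
[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name            s3
    Match           *
    bucket          my-bucket
    region          us-east-1
    total_file_size 1M
    use_put_object  On
    compression     arrow
```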
