The Fluent Bit data pipeline incorporates several specific concepts. Data flows through the pipeline in the following order.

```mermaid
graph LR
  accTitle: Fluent Bit data pipeline
  accDescr: A diagram of the Fluent Bit data pipeline, which includes input, a parser, a filter, a buffer, routing, and various outputs.
  A[Input] --> B[Parser]
  B --> C[Filter]
  C --> D[Buffer]
  D --> E((Routing))
  E --> F[Output 1]
  E --> G[Output 2]
  E --> H[Output 3]
```

## Inputs

[Input plugins](../pipeline/inputs.md) gather information from different sources. Some plugins collect data from log files, and others gather metrics information from the operating system. There are many plugins to suit different needs.
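
As a sketch, an input is declared as a section in the classic configuration format. The following fragment uses the `tail` plugin to read log files; the path and tag are illustrative:

```text
# Read lines from matching log files and tag the records for later routing.
[INPUT]
    Name  tail
    Path  /var/log/app/*.log
    Tag   app.logs
```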

## Parser

[Parsers](../pipeline/parsers.md) convert unstructured data to structured data. Use a parser to apply a structure to incoming data as input plugins collect it.
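
For example, a parser defined in a separate parsers file can turn JSON-formatted log lines into structured records. The parser name, file name, and time format below are illustrative:

```text
# parsers.conf (illustrative file name)
[PARSER]
    Name         json_records
    Format       json
    Time_Key     time
    Time_Format  %Y-%m-%dT%H:%M:%S %z
```

An input such as `tail` can then reference this parser by name through its `Parser` option.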

## Filter

[Filters](../pipeline/filters.md) let you alter the collected data before delivering it to a destination. In production environments you need full control over the data you're collecting, and filters give you that control before the data is processed further.
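
As a sketch, the `grep` filter can discard records that don't match a pattern; the tag and field name here are illustrative:

```text
# Keep only records whose "log" field contains the word "error".
[FILTER]
    Name   grep
    Match  app.logs
    Regex  log error
```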

## Buffer

The [buffering](./buffering.md) phase in the pipeline provides a unified and persistent mechanism to store your data, using either the default in-memory model or the filesystem-based mode.
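
A minimal sketch of enabling the filesystem-based mode, assuming classic configuration; the storage path is illustrative:

```text
[SERVICE]
    # Where filesystem buffer chunks are stored (illustrative path).
    storage.path  /var/lib/fluent-bit/storage
    storage.sync  normal

[INPUT]
    Name          tail
    Path          /var/log/app/*.log
    # Buffer this input's data on the filesystem instead of only in memory.
    storage.type  filesystem
```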

## Routing

[Routing](../pipeline/router.md) is a core feature that lets you route your data through filters, and then to one or multiple destinations. The router relies on the concept of [tags](./key-concepts.md#tag) and [matching](./key-concepts.md#match) rules.
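
For example, two inputs can be tagged differently so that their records reach different destinations. The tags, plugins, and paths below are illustrative:

```text
[INPUT]
    Name  cpu
    Tag   metrics.cpu

[INPUT]
    Name  tail
    Path  /var/log/app/*.log
    Tag   app.logs

# Wildcard match: receives every record whose tag starts with "metrics."
[OUTPUT]
    Name   stdout
    Match  metrics.*

# Exact match: receives only records tagged "app.logs".
[OUTPUT]
    Name   file
    Match  app.logs
    Path   /tmp/app-logs
```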

## Output

[Output plugins](../pipeline/outputs.md) let you define destinations for your data. Common destinations are remote services, local file systems, or other standard interfaces.
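
As a sketch, an output that forwards every record to a remote service; the host and port are illustrative:

```text
# Send all records to a remote HTTP endpoint over TLS (illustrative host).
[OUTPUT]
    Name   http
    Match  *
    Host   logs.example.com
    Port   443
    tls    On
```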