Commit

Merge branch 'dev'
dominikriemer committed Oct 23, 2023
2 parents 85e43d5 + efe4be2 commit 73fc245
Showing 15 changed files with 953 additions and 244 deletions.
49 changes: 30 additions & 19 deletions docs/02_concepts-overview.md
title: StreamPipes Concepts
sidebar_label: Overview
---

To understand how StreamPipes works, it helps to know a few core concepts, which are illustrated below.
These encompass the entire data journey within StreamPipes: Starting with data collection ([adapters](#adapter)),
through data exchange ([data streams](#data-stream)) and data processing ([data processors](#data-processor) and [pipelines](#pipeline)),
to data persistence and distribution ([data sinks](#data-sink)).

<img src="/img/02_concepts-overview/components-overview.png" alt="Overview of concepts"/>

## Adapter
An adapter connects to any external data source (e.g., OPC-UA, MQTT, S7 PLC, Modbus) and forwards the events it receives to the internal StreamPipes system.
Adapters can be created by using one of the predefined adapters for common data sources, available in the marketplace [StreamPipes Connect](./03_use-connect.md).
An overview of all available adapters can be found under the menu entry **📚 Pipeline Elements**.
When you select one of these adapters, you can connect to the data source through a guided UI dialog (see the Connect section for more details).
Alternatively, you can define your own adapter by [using the provided Software Development Kit (SDK)](./06_extend-tutorial-adapters.md).
Creating an adapter is always the first step when you want to get data into StreamPipes and process it further.
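To make this concrete, here is a minimal, hypothetical sketch of the adapter idea: parse a raw payload from an external source and forward it as a flat event. All class and field names are illustrative assumptions; the real adapter API is covered in the SDK tutorial linked above.

```java
// Hypothetical sketch, not the StreamPipes SDK: an adapter normalizes a
// source-specific payload into a flat internal event.
import java.util.Map;

public class TemperatureAdapterSketch {

    // Example raw MQTT payload (assumed format):
    // {"ts": 1698050000000, "temp": 36.4, "sensorId": "s-01"}
    public Map<String, Object> toEvent(long ts, double temp, String sensorId) {
        return Map.of(
                "timestamp", ts,       // epoch millis
                "temperature", temp,   // observation value
                "sensorId", sensorId   // metadata identifying the source
        );
    }
}
```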

## Data Stream
**Data streams** are the primary source for working with events in StreamPipes.
A stream is an ordered sequence of events, where an event typically consists of one or more observation values and additional metadata.
The structure (or *schema*, as we call it) of an event provided by a data stream is stored in StreamPipes' internal semantic schema registry.
Data streams are primarily created by adapters, but can also be created by a [StreamPipes Function](./06_extend-sdk-functions.md).
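As a rough illustration, the following sketch shows what such a schema might capture for a fictional temperature stream; the property names and semantic-type URIs are assumptions, not the registry's actual data model.

```java
// Illustrative only: a stream schema describes each event property by a
// runtime name, a runtime type, and a semantic type. Values are made up.
import java.util.List;

public class StreamSchemaSketch {

    record EventProperty(String runtimeName, String runtimeType, String semanticType) {}

    static final List<EventProperty> TEMPERATURE_STREAM_SCHEMA = List.of(
            new EventProperty("timestamp", "long", "http://schema.org/DateTime"),
            new EventProperty("temperature", "double", "http://schema.org/Number"),
            new EventProperty("sensorId", "string", "http://schema.org/identifier")
    );
}
```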

## Data Processor
**Data processors** in StreamPipes transform one or more input streams into an output stream.
Such transformations can be simple, such as filtering based on a predefined rule, or more complex, such as applying rule-based or learning-based algorithms to the data.
Data processors can be applied to any data stream that meets the input requirements of a processor.
In addition, most processors can be configured by providing custom parameters directly in the user interface.
Processing elements define stream requirements, which are a set of minimum characteristics that an incoming event stream must provide.
Data processors can maintain state or perform stateless operations.
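The following plain-Java sketch illustrates a simple stateless processor, a threshold filter that forwards an event only if a configured field exceeds a configured value. It is illustrative only and does not use the actual StreamPipes SDK; see the extension tutorials for the real API.

```java
// Hypothetical sketch of a stateless data processor: a threshold filter.
import java.util.Map;
import java.util.function.Consumer;

public class ThresholdFilterSketch {

    private final String fieldName;  // configured by the user in the UI
    private final double threshold;  // configured by the user in the UI

    public ThresholdFilterSketch(String fieldName, double threshold) {
        this.fieldName = fieldName;
        this.threshold = threshold;
    }

    // Called once per incoming event; 'out' stands in for the output stream.
    public void onEvent(Map<String, Object> event, Consumer<Map<String, Object>> out) {
        Object value = event.get(fieldName);
        if (value instanceof Number n && n.doubleValue() > threshold) {
            out.accept(event); // rule matched: forward the event unchanged
        }
        // otherwise the event is dropped; no state is kept between events
    }
}
```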

## Data Sink
**Data sinks** consume event streams similar to data processors, but do not provide an output data stream.
As such, data sinks typically perform some action or trigger a visualization as a result of a stream transformation.
Similar to data processors, sinks define input requirements that each bound data stream must meet, and they can also be customized.
StreamPipes provides several internal data sinks, for example, to generate notifications, visualize live data, or persist historical data from incoming streams.
In addition, StreamPipes provides several data sinks to forward data streams to external systems such as databases.
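For illustration, a sink reduced to its essence might look like the following sketch; note the void return type, since no output stream is produced. The class name and the printing behavior are assumptions (a real sink might write to a database or trigger a notification instead).

```java
// Hypothetical sketch of a data sink: it consumes events but returns
// nothing, performing only a side effect.
import java.util.Map;

public class ConsoleSinkSketch {

    // Invoked once per incoming event; note there is no output stream.
    public void onEvent(Map<String, Object> event) {
        System.out.println("event received: " + event);
    }
}
```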

## Pipeline
A pipeline in Apache StreamPipes describes the transformation process from a data stream to a data sink.
Typically, a pipeline consists of at least one data stream, zero or more data processors, and at least one data sink.
Pipelines are created graphically in the [Pipeline Editor](./03_use-pipeline-editor.md) and can be started and stopped at any time.
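The sketch below wires the illustrative classes from the previous sections together to show the logical shape of a pipeline (stream, processor, sink); in StreamPipes itself this composition happens graphically in the Pipeline Editor, not in code.

```java
// Illustrative wiring only: data stream -> data processor -> data sink.
// Reuses the hypothetical sketch classes defined in the sections above.
import java.util.Map;

public class PipelineSketch {

    public static void main(String[] args) {
        var filter = new ThresholdFilterSketch("temperature", 30.0);
        var sink = new ConsoleSinkSketch();

        // One event flowing through the pipeline; it reaches the sink
        // only because 36.4 exceeds the configured threshold of 30.0.
        Map<String, Object> event = new TemperatureAdapterSketch()
                .toEvent(1698050000000L, 36.4, "s-01");
        filter.onEvent(event, sink::onEvent);
    }
}
```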