Define Kafka Broker for storage of temporary data between multiple data processors #2960
-
Hi @MichaelKoch11, this is currently not possible.
-
Thank you for your answer. So it is not yet possible to run a pipeline completely on an edge node. Only with the next release will it be possible to specify, for adapters, the extensions service on which the reading microservice is started; for processors (functions) and sinks this does not yet seem to be supported. Maybe I'll have to take a closer look at it myself and customize it for my use case, starting with just a pub/sub storage.
-
A nice feature would be the ability to start a data pipeline on a specific edge device entirely from the central StreamPipes instance. How could this be achieved?
1. Set up Docker or Kubernetes on both devices (a minimal deployment sketch for the edge side follows after this list).
2. Define a pipeline that reads data via PLC4X on the local device and writes it to a separate local broker (e.g. a Kafka instance). This broker runs only on the edge device and is independent of the central one.
3. Configure a second pipeline that executes a few data transformations in several processors locally and then stores the results locally again.
4. Once all transformations are completed, attach a pipeline that transfers the transformed data to a central Kafka instance, where it can then be consumed by visualization tools, for example.
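For step 1, this is a minimal sketch of what the edge-side deployment could look like, assuming Docker Compose, a single-node Kafka in KRaft mode, and the `extensions-all-jvm` image; the service names, image tags, and `SP_KAFKA_*` variable names are my assumptions and should be checked against the StreamPipes docs:

```yaml
# Hypothetical compose file for the edge device only; names and
# variables are assumptions, not an official StreamPipes deployment.
services:
  edge-kafka:
    # Single-node Kafka in KRaft mode; the apache/kafka image ships
    # working single-node defaults, so no extra config is needed here.
    image: apache/kafka:3.7.0
    ports:
      - "9092:9092"

  edge-extensions:
    # Extensions service hosting the PLC4X adapter and the processors
    # locally on the edge device.
    image: apachestreampipes/extensions-all-jvm:0.95.1  # assumed tag
    environment:
      SP_KAFKA_HOST: edge-kafka   # assumed variable name
      SP_KAFKA_PORT: "9092"
    depends_on:
      - edge-kafka
```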
The problem now is that one would need to be able to define, per pipeline, that the intermediate storage between the processors takes place in the local Kafka instance rather than via the broker defined at container start.
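For illustration, the only option I am aware of is the global messaging configuration of the core, set once when the container starts; this is a minimal sketch, and the image tag and variable names are from memory and may be inaccurate:

```yaml
# Global (not per-pipeline) messaging configuration on the core;
# variable names are from memory and may differ in your version.
services:
  backend:
    image: apachestreampipes/backend:0.95.1   # assumed tag
    environment:
      SP_PRIORITIZED_PROTOCOL: kafka   # one protocol for all pipelines
      SP_KAFKA_HOST: kafka             # one broker for all pipelines
      SP_KAFKA_PORT: "9092"
```

What I am looking for is a way to override this broker per pipeline, for the intermediate topics only.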
Is there already a configuration option for this, in case I have overlooked it?