EventBridge Runtime
- Current State: Proposed
- Authors: shenlin
- Mailing List discussion: [email protected]
- Pull Request: #PR_NUMBER
- Released: <released_version>
- Will we add a new module? -- Yes.
- Will we add new APIs? -- Yes.
- Will we add new features? -- Yes.
Are there any problems with our current project? There are several problems in the current RocketMQ EventBridge:
- Too many dependencies: using EventBridge requires additionally deploying RocketMQ and RocketMQ Connect;
- Without its own Runtime, EventBridge cannot serve EDA scenarios well, including:
  - Tracing: compared with traditional synchronous calls, full-link trace queries are an essential capability in EDA scenarios. Tracing also needs to integrate well with the open source ecosystem (Zipkin, SkyWalking, OpenTelemetry, etc.), which the existing RocketMQ Connect cannot support.
  - Exception handling, retry, and dead letter queues: flexible configuration of exception handling, retries, and dead letter queues.
  - Monitoring & alerting: rich metrics, together with monitoring and alerting support.
  - Back pressure and flow control: better protection for downstream services.
- User-friendliness: a richer SDK and better integration with the Spring Boot ecosystem would let users adopt EventBridge at a lower cost;
What can we benefit from the proposed changes?
- Reduce dependence on external services.
- Provide better EDA integration experience.
What problem is this proposal designed to solve?
- Design the execution flow of the Runtime.
- Design the SPI of the Runtime.
- Design the management and control of the Runtime.
What problem is this proposal NOT designed to solve?
- Runtime life cycle management, including how Runtime instances are created/modified/deleted.
Are there any limits to this proposal?
Nothing specific.
- The event gateway receives the events that users send to EventBridge and delivers them to the built-in storage, for example RocketMQ.
- When a user configures a Rule to subscribe to events on an event bus, the Push Runtime listens to the BUS associated with the Rule; once an event arrives on the Bus, it pushes the event to the Target specified by the Rule.
- If filtering and transformation are configured on the Rule, the event is pre-processed before being pushed.

There are two possible designs here:
- Solution 1, independent mode: each Rule creates its own MQ subscription to watch for events on the BUS associated with that Rule; in this mode, every Rule subscription is an independent task.
- Solution 2, sharing mode: a unified component in the Push Worker listens to the BUSes associated with all Rules assigned to the current Worker; when an event arrives on a watched BUS, it is dispatched to the matching Rules for filtering and transformation and finally pushed to the Target side through the Sink Connector.

Since the EventRule scenario differs from Streaming, the BUS watched by a Rule may carry no events most of the time. To reduce the cost of listening to MQ, we recommend Solution 2: a unified component in the Push Runtime listens on behalf of all Rules assigned to the current Worker and shares the listening resources, as sketched below.
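The sketch below illustrates the sharing mode. The class and interface names (SharedBusDispatcher, RulePipeline, Event) are invented for illustration and are not actual EventBridge types.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Sharing mode: one subscription per BUS, shared by every Rule bound to it.
public class SharedBusDispatcher {

    private final Map<String, List<RulePipeline>> rulesByBus = new ConcurrentHashMap<>();

    /** Bind a Rule's filter/transform/sink pipeline to a BUS. */
    public void register(String bus, RulePipeline pipeline) {
        rulesByBus.computeIfAbsent(bus, b -> new CopyOnWriteArrayList<>()).add(pipeline);
    }

    /** Invoked by the single listener when an event arrives on a BUS. */
    public void onEvent(String bus, Event event) {
        for (RulePipeline pipeline : rulesByBus.getOrDefault(bus, List.of())) {
            if (pipeline.filter(event)) {                  // per-Rule filter
                pipeline.sink(pipeline.transform(event));  // transform, then push via the Sink Connector
            }
        }
    }

    public interface RulePipeline {
        boolean filter(Event event);
        Event transform(Event event);
        void sink(Event event);
    }

    public record Event(String subject, byte[] data) {}
}
```

One subscription per BUS amortizes the MQ listening cost across all Rules bound to that BUS, which is the saving that motivates Solution 2.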
Problem Analysis:
- If every running Push Runtime had to load all the rules, then as the number of rules grows the Push Runtime could not scale horizontally and would become a bottleneck.
Solution:
- Each node in the Push Runtime cluster only loads the Rules assigned to it and listens to the BUSes specified by those Rules. Once an event arrives on a BUS, the node filters the event according to the Rule's filter conditions and pushes it to the target endpoint (Target).
Responsibilities of the Push Runtime Manager:
- The life cycle of every Push Runtime is managed by the Push Runtime Manager.
- How Rules are assigned to Push Runtimes is managed by the Push Runtime Manager; a naive assignment strategy is sketched below.
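As a concrete starting point, the even distribution mentioned later in Task 2 could be a simple round-robin over Rule-Targets. This is a minimal sketch with illustrative names, not the proposed final strategy:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class EvenAssignmentSketch {

    /** Distribute Rule-Targets across workers round-robin, as evenly as possible. */
    public static Map<String, List<String>> assign(List<String> ruleTargets, List<String> workers) {
        Map<String, List<String>> plan = new HashMap<>();
        workers.forEach(worker -> plan.put(worker, new ArrayList<>()));
        for (int i = 0; i < ruleTargets.size(); i++) {
            // Rotate through the worker list so each worker gets a near-equal share.
            plan.get(workers.get(i % workers.size())).add(ruleTargets.get(i));
        }
        return plan;
    }
}
```

A metrics-aware strategy (CPU, memory, push delay, remaining thread pool capacity) can later replace the rotation without changing the shape of the assignment plan.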
- Method signature changes -- Nothing specific.
- Method behavior changes -- Nothing specific.
- CLI command changes -- Nothing specific.
- Log format or content changes -- Nothing specific.
Problem Analysis:
- EventBridge is an open architecture: the events received by the event gateway are not necessarily stored in RocketMQ. When designing the PushWorker we therefore need to support listening to different MQs, which means defining a set of standard SPIs and implementing a listening component for each MQ.
- After the Push Runtime receives an event from the Bus, it filters and transforms the event and finally pushes it to the destination specified by the Rule. The filtering and transformation steps also need to be defined as SPIs, both for later extension and to make it easy for ecosystem partners to contribute implementations.
Solution: the OpenMessaging Connect API already defines a complete SPI and has an established ecosystem, so we propose reusing the OpenMessaging Connect API as our standard SPI.
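To make the intended reuse concrete, the sketch below paraphrases the rough shape of a sink-side SPI. The interface and signatures are simplified assumptions for illustration, not the literal OpenMessaging Connect definitions; the actual openmessaging-connector artifact is authoritative.

```java
import java.util.List;
import java.util.Map;

// Paraphrased sink-side SPI; NOT the literal OpenMessaging Connect interfaces.
public interface EventSinkTaskSpi {

    /** Initialize the task with its Rule/Target configuration. */
    void start(Map<String, String> config);

    /** Push a batch of already filtered and transformed events to the Target. */
    void put(List<Event> events);

    /** Release resources when the Rule is unassigned or the worker shuts down. */
    void stop();

    /** Minimal event envelope used only for this sketch. */
    record Event(String subject, Map<String, String> extensions, byte[] data) {}
}
```

Different MQ listening components and Target-side sinks would then be pluggable implementations of SPIs with this kind of shape.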
This proposal does not need to modify the API specification; it only adds a new runtime component to EventBridge.
We split this proposal into several tasks:
- Task 1: Push Runtime implementation (a pipeline sketch follows this list)
  - Watch the EventRules that the current Runtime must process, and the Buses associated with those EventRules.
  - Listen to the MQ component through long polling. When a new event is detected, write it to the local Buffer immediately so the listening component can keep watching for subsequent events (event pushing does not block the listening thread).
  - The Poll component takes events from the Buffer, filters and transforms them according to the context information (the Rule), and then sends them to the SinkConnector.
  - Put can be an asynchronous or a synchronous operation.
  - Each time the Runtime commits the consume offset, it first calls PreCommit to obtain the offsets of the events that have been pushed successfully, and then commits the offset through the MQ listening component.
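Here is a rough sketch of that listen/buffer/poll/commit pipeline. All types below (MqListener, SinkConnector, EventRecord) are invented for illustration; the real implementation would use the OpenMessaging Connect types discussed in Task 3.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PushRuntimeTaskSketch {

    interface MqListener {                        // wraps the long-polling MQ client
        List<EventRecord> poll(long timeoutMillis);
        void commit(long offset);                 // commit the consume offset
    }

    interface SinkConnector {                     // pushes one batch to the Rule's Target
        void put(List<EventRecord> batch);        // may be implemented synchronously or asynchronously
    }

    record EventRecord(long offset, String bus, byte[] body) {}

    private final BlockingQueue<EventRecord> buffer = new LinkedBlockingQueue<>(10_000);
    private volatile long pushedOffset = -1;      // highest offset confirmed as pushed

    /** Listening thread: keeps long polling and never blocks on the push path. */
    void listenLoop(MqListener listener) throws InterruptedException {
        while (true) {
            for (EventRecord record : listener.poll(3_000)) {
                buffer.put(record);               // hand off to the local Buffer immediately
            }
        }
    }

    /** Poll thread: drains the Buffer, applies the Rule's filter/transform, then sinks. */
    void pollLoop(SinkConnector sink) throws InterruptedException {
        while (true) {
            EventRecord record = buffer.take();
            // The Rule-driven filter/transform step would run here before the push.
            sink.put(List.of(record));
            pushedOffset = record.offset();       // in practice updated from the push acknowledgement
        }
    }

    /** Commit step: PreCommit reads the confirmed offset, then the listener commits it to MQ. */
    void commitOnce(MqListener listener) {
        long offset = pushedOffset;               // PreCommit: offset pushed successfully so far
        if (offset >= 0) {
            listener.commit(offset);
        }
    }
}
```

Decoupling the listening thread from the push path through the Buffer is what keeps a slow Target from stalling consumption on the BUS.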
- Task 2: Push Runtime Manager implementation (a worker-side sketch follows this list)
  - Assign each EventRule to a specific Push Runtime through the existing event_target_runner table.
  - Add an event_push_worker table to manage all worker nodes.
  - The Push Manager assigns Workers to Rule-Targets according to metrics such as each Worker's load (CPU, memory, push delay, and the remaining capacity of the thread pool). In the early stage, Rule-Targets can simply be distributed evenly across Workers.
  - Workers can be created in advance and then registered in the event_push_worker table. If the underlying infrastructure can create containers automatically, the PushManager can call the infrastructure services to create Workers dynamically and register them in the event_push_worker table automatically.
  - Every created Worker has a system variable, worker-name, identifying the current worker. When a Worker starts, it uses its worker-name to obtain the Rule-Targets assigned to it.
  - Rule-Target information can also change at runtime. In that case, the PushManager notifies the affected Worker of the latest Rule-Target changes, and the Worker reloads the latest Rule-Targets after receiving the notification.
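On the worker side, startup and reload could look like the sketch below. The RuleTargetStore interface, the WORKER_NAME environment variable, and the class name are assumptions made for illustration; the proposal itself only specifies the worker-name variable and the two tables.

```java
import java.util.List;

public class PushWorkerBootstrapSketch {

    /** Lookup backed by the event_target_runner / event_push_worker tables. */
    interface RuleTargetStore {
        List<String> loadRuleTargets(String workerName);
    }

    private final RuleTargetStore store;
    private volatile List<String> ruleTargets = List.of();

    PushWorkerBootstrapSketch(RuleTargetStore store) {
        this.store = store;
    }

    /** Startup: resolve worker-name (env var name assumed) and load the assigned Rule-Targets. */
    public void start() {
        String workerName = System.getenv().getOrDefault("WORKER_NAME", "worker-0");
        this.ruleTargets = store.loadRuleTargets(workerName);
    }

    /** Invoked when the PushManager notifies this worker that its assignment changed. */
    public void onAssignmentChanged(String workerName) {
        this.ruleTargets = store.loadRuleTargets(workerName); // reload the latest Rule-Targets
    }
}
```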
- Task 3: ecosystem integration between the Push Runtime and OpenMessaging Connect.
  - When developing and designing the Push Runtime, rely on the OpenMessaging Connect API.