What should we do?
We currently poll Operate every N (configurable) seconds to update the process definition registry in the Connector runtime. This causes high data usage between Operate and Elasticsearch, because Operate has to fetch all process definitions for every polling request.
We can optimize this by keeping the pagination index in memory inside the Connector runtime and only polling for the next page, instead of fetching everything from scratch on every cycle.
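To make the "poll only the next page" idea concrete, here is a minimal sketch in Java. It assumes a search-style endpoint with searchAfter/sortValues cursors (as exposed by Operate's REST search APIs); the OperateClient interface, ProcessDefinitionPage type, and method names below are hypothetical placeholders for illustration, not the actual Connector runtime or Operate client API.

```java
import java.util.List;

// Hypothetical types for illustration; the real Connector runtime / Operate
// client classes may look different.
interface OperateClient {
  // Returns one page of process definitions, starting after the given cursor.
  ProcessDefinitionPage searchProcessDefinitions(List<Object> searchAfter, int pageSize);
}

record ProcessDefinition(long key, String bpmnProcessId, String bpmnXml) {}

record ProcessDefinitionPage(List<ProcessDefinition> items, List<Object> sortValues) {}

class ProcessDefinitionPoller {

  private final OperateClient operateClient;
  private final int pageSize;

  // Cursor of the last fetched page, kept in memory between polling cycles
  // so we never re-fetch definitions that were already imported.
  private List<Object> lastSortValues = null;

  ProcessDefinitionPoller(OperateClient operateClient, int pageSize) {
    this.operateClient = operateClient;
    this.pageSize = pageSize;
  }

  // Called every N seconds by the scheduler. Instead of fetching all
  // definitions, it only asks for pages after the stored cursor.
  void pollOnce() {
    while (true) {
      ProcessDefinitionPage page =
          operateClient.searchProcessDefinitions(lastSortValues, pageSize);
      if (page.items().isEmpty()) {
        return; // nothing new since the last cycle
      }
      page.items().forEach(this::importDefinition);
      // Advance the in-memory cursor to the end of this page.
      lastSortValues = page.sortValues();
    }
  }

  private void importDefinition(ProcessDefinition definition) {
    // Parse the BPMN XML and register any connectors found (omitted here).
  }
}
```

One caveat with this approach: the sort order behind the cursor has to be stable and monotonic (for example, sorted by process definition key), so that newly deployed definitions always appear after the stored cursor rather than in the middle of pages we have already consumed.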
In addition, we should investigate whether the Connector runtime's import and analysis of process definitions can be further optimized. Some ideas:
- Can we add a quick check on the raw BPMN XML to determine whether it contains any connectors before parsing the full process model? (See the sketch below.)
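As a sketch of the quick pre-check idea: before handing the XML to the BPMN model parser, do a cheap substring scan for markers that connector-enabled elements are expected to carry in the raw XML. The specific markers below (e.g. the `inbound.type` zeebe property) are an assumption for illustration and would need to be confirmed against the connector element templates actually in use.

```java
import java.util.List;

class ConnectorPreCheck {

  // Markers that connector-enabled BPMN elements are assumed to carry in the
  // raw XML. These values are illustrative; the real list should be derived
  // from the connector element templates in use.
  private static final List<String> CONNECTOR_MARKERS = List.of("inbound.type");

  // Cheap substring scan over the raw XML. Only if this returns true do we
  // pay the cost of fully parsing the BPMN model.
  static boolean mightContainConnectors(String bpmnXml) {
    if (bpmnXml == null || bpmnXml.isEmpty()) {
      return false;
    }
    return CONNECTOR_MARKERS.stream().anyMatch(bpmnXml::contains);
  }
}
```

If the check is negative we skip BPMN parsing entirely; a false positive only costs one unnecessary parse, so the check can err on the permissive side.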
Why should we do it?
We received customer complaints about excessive data usage between Operate and Elasticsearch caused by this polling.
From time to time, we encounter issues in clusters that have an abnormally large number of process definitions deployed, which causes delayed startup times and OOM errors.