Describe the bug
I have noticed the bug shown in the video below.
bug.mp4
In the video, we can observe that when the workflow is run for the first time, some of the forked tasks get stuck. Immediately after that, I ran the same workflow again, and it executed immediately.
In my use case, the application first calls the 'all' endpoint, which returns the number of scheduled tasks in the queue for each registered task. A task is then polled only if its count is greater than 0.
I have written a C# console program that behaves similarly.
The C# application is available here.
Every 200 ms the program calls the 'all' endpoint and prints the number of scheduled tasks for the TEST_worker task.
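For reference, here is a minimal sketch of that polling loop (not the exact linked program). It assumes a Conductor server at http://localhost:8080 and the standard task endpoints; the worker id and the trivial COMPLETED update are illustrative only:

```csharp
// Minimal sketch of the polling loop described above, not the exact linked program.
// Assumes a local Conductor server at http://localhost:8080 and the standard
// GET /api/tasks/queue/all, GET /api/tasks/poll/{taskType} and POST /api/tasks endpoints.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class QueueWatcher
{
    static readonly HttpClient Http = new HttpClient { BaseAddress = new Uri("http://localhost:8080") };

    static async Task Main()
    {
        while (true)
        {
            // The 'all' endpoint returns the scheduled-task count per registered task type.
            var json = await Http.GetStringAsync("/api/tasks/queue/all");
            var counts = JsonSerializer.Deserialize<Dictionary<string, long>>(json);

            long pending = 0;
            if (counts != null) counts.TryGetValue("TEST_worker", out pending);
            Console.WriteLine($"{DateTime.Now:HH:mm:ss.fff} TEST_worker queue size: {pending}");

            // Poll only when the reported queue size is greater than zero.
            if (pending > 0)
            {
                var taskJson = await Http.GetStringAsync("/api/tasks/poll/TEST_worker?workerid=test-worker-1");
                if (!string.IsNullOrWhiteSpace(taskJson))
                {
                    using var polled = JsonDocument.Parse(taskJson);
                    // Report the task as COMPLETED; the real worker logic is omitted here.
                    var result = new
                    {
                        workflowInstanceId = polled.RootElement.GetProperty("workflowInstanceId").GetString(),
                        taskId = polled.RootElement.GetProperty("taskId").GetString(),
                        status = "COMPLETED"
                    };
                    await Http.PostAsync("/api/tasks",
                        new StringContent(JsonSerializer.Serialize(result), Encoding.UTF8, "application/json"));
                }
            }

            await Task.Delay(200); // 200 ms polling interval, as in the video
        }
    }
}
```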
For the video above, I observed the following logs:
Notice that the queue count is incorrect in the first case: even though 12 tasks were scheduled for execution, only 2 of them are executed, and the others are stuck in the IN_PROGRESS state. Immediately after that, the workflow is run once again, and the queue count decreases from 12 to 0 as expected.
The issue happens at random; most of the time the workflow executes just fine.
Details
Conductor version: Built from the main branch.
Persistence implementation: Postgres
Queue implementation: Postgres
Lock: Not sure; I am using docker-compose-postgres.yaml to start Conductor and don't see Redis or ZooKeeper containers.
To Reproduce
Steps to reproduce the behavior:
Start Conductor using docker-compose-postgres.yaml
Start the C# application
Start workflows, and keep starting them until you observe the issue (a sketch of doing this via the REST API follows).
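For convenience, the third step can be scripted. The sketch below starts the workflow repeatedly through the REST API; the workflow name TEST_fork_workflow is a placeholder, since the actual definition is not included in this report:

```csharp
// Illustrative only: step 3 automated by starting the workflow repeatedly through the REST API.
// "TEST_fork_workflow" is a placeholder name; substitute your own workflow definition.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class WorkflowStarter
{
    static async Task Main()
    {
        using var http = new HttpClient { BaseAddress = new Uri("http://localhost:8080") };
        for (var i = 1; i <= 20; i++)
        {
            // POST /api/workflow/{name} starts a new execution and returns the workflow id.
            var response = await http.PostAsync("/api/workflow/TEST_fork_workflow",
                new StringContent("{}", Encoding.UTF8, "application/json"));
            Console.WriteLine($"Run {i}: {await response.Content.ReadAsStringAsync()}");
            await Task.Delay(2000); // give each run a moment to complete before starting the next
        }
    }
}
```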
Expected behavior
All of the forked tasks execute as expected.