Task.BACKGROUND_EXECUTOR is badly defined and prone to lockups #130
@grantland Are you still managing this project? Or can you point us to the right people who are?
I think I'm the only somewhat active maintainer. This repo hasn't seen much activity since it's been pretty stable and most of the TPL spec has already been implemented. I also seem to have missed a few PRs (probably happened while my current team at FB was on lockdown for a project), which I just addressed. If there happens to be more activity here I wouldn't mind having some help 😛

This was a while ago, but I think this specific pool configuration dates back to when I limited the number of background threads. Your solution would solve the problem you're facing, but I'm a little hesitant to increase the max pool size of BACKGROUND_EXECUTOR.

Additionally, I think we can split up your issue into two parts to find a solution:
1. deadlocks that occur when using Task.BACKGROUND_EXECUTOR directly and blocking on other tasks;
2. nested tasks that ImmediateExecutor hands off to the background executor, exhausting the pool.
In my experience, 1 rarely happens, and when it does it either uncovers some badly written code (cleaning it up fixes the deadlock) or that code genuinely needs its own dedicated executor.

I see 2 being a much more important issue, since nested tasks can quickly tie up every background thread.

With this, I think the least invasive solution would be to add another check so that work already running on a bolts background thread isn't handed back to BACKGROUND_EXECUTOR when ImmediateExecutor reaches its depth limit.

Do you think this solution would be good enough to solve the issue? Do you think there is anything I'm missing?
Thanks @grantland! So, I think we can agree on the following, correct me if I am wrong:
// BoltsExecutors
private static boolean isBackgroundThread(Thread thread) {
  return thread.getName().startsWith("bolts-background-thread-");
}
// ImmediateExecutor
@Override
public void execute(Runnable command) {
  int depth = incrementDepth();
  try {
    // Don't request a new background thread if this is already a background thread
    if (depth <= MAX_DEPTH || isBackgroundThread(Thread.currentThread())) {
      command.run();
    } else {
      BoltsExecutors.background().execute(command);
    }
  } finally {
    decrementDepth();
  }
}

This is good news, but I think the Parse SDK is still stuck at your first point: deadlocks occurring when using Task.BACKGROUND_EXECUTOR directly. You know that in the middle of each request Parse ends up blocking while waiting for other tasks. What can we do?

Another solution off the top of my head would be to leave everything as is but use a small queue (< core pool). Then some commands might be rejected, since the max pool size is small as well. Instead of canceling the task, we could schedule it for later with a RejectedExecutionHandler and Task.delay:

@Override
public void rejectedExecution(final Runnable runnable, final ThreadPoolExecutor e) {
  Task.delay(1000).continueWith(new Continuation<Void, Void>() {
    @Override
    public Void then(Task<Void> task) {
      e.execute(runnable);
      return null;
    }
  });
}
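For what it's worth, a handler like that only fires if the executor can actually reject work, which requires a bounded queue. Here is a minimal wiring sketch; SmallQueueExecutors is a made-up name, the pool sizes and queue capacity are illustrative rather than bolts' actual configuration, and the handler above would be passed in as retryHandler:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class SmallQueueExecutors {
  // Sketch only: a small bounded queue makes rejection possible, and the
  // supplied handler re-schedules rejected commands instead of dropping them.
  public static ThreadPoolExecutor newBackgroundExecutor(RejectedExecutionHandler retryHandler) {
    int cpus = Runtime.getRuntime().availableProcessors();
    return new ThreadPoolExecutor(
        cpus + 1,                              // core pool size (illustrative)
        cpus * 2 + 1,                          // max pool size (illustrative)
        1L, TimeUnit.SECONDS,                  // keep-alive for extra threads
        new LinkedBlockingQueue<Runnable>(2),  // small queue (< core pool)
        retryHandler);                         // e.g. the rejectedExecution above
  }
}

The exact queue capacity and retry delay are of course up for discussion.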
@grantland any comments? Based on what you said, my plan would be to add the isBackgroundThread() helper to BoltsExecutors and use it in ImmediateExecutor as sketched above.

These alone won't solve the Parse issues (see comment above), so I would also move to a small queue with a RejectedExecutionHandler that re-schedules rejected commands.

I can PR if we reach some kind of agreement :-)
This follows parse-community/Parse-SDK-Android#646, where we found a possible bug in the executor; I would also like to propose some changes.
Bug?
It is told in the comments that the pool can grow from the core size up to the max pool size when load increases. This is strictly true in principle but practically not with the blocking queue that is being used (a LinkedBlockingQueue with infinite capacity). If I got this correctly, with that queue tasks can always be queued, so the maximum number of threads you'll ever have is the core pool: maxPoolSize has no effect. So this should be changed.
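To make the point concrete, here is a small standalone sketch (not bolts code; the pool sizes and sleeps are arbitrary) showing that with an unbounded LinkedBlockingQueue the pool never grows past the core size, whatever maxPoolSize says:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueDemo {
  public static void main(String[] args) throws InterruptedException {
    // Core pool of 2, max pool of 8, but an unbounded queue.
    ThreadPoolExecutor executor = new ThreadPoolExecutor(
        2, 8, 1L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

    // Submit far more long-running tasks than the core pool can handle.
    for (int i = 0; i < 100; i++) {
      executor.execute(new Runnable() {
        @Override
        public void run() {
          try {
            Thread.sleep(10000);
          } catch (InterruptedException ignored) {
          }
        }
      });
    }

    Thread.sleep(500);
    // Prints 2: extra tasks were queued, the pool never grew toward 8.
    System.out.println("pool size: " + executor.getPoolSize());
    executor.shutdownNow();
  }
}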
See also the official ThreadPoolExecutor docs about queuing strategies.

Queue strategy
Using the terminology from the Oracle docs, the current strategy (despite the comments) is the unbounded queue, which fits the case of tasks completely independent of each other. You will agree this is not the bolts case. Can we move to the first strategy, direct handoffs?

I can read in the comments that the bolts executor was designed trying to emulate the AsyncTask executor. But AsyncTasks are independent by design and can afford a big queue. Bolts Tasks are dependent, chained, and nested, and the docs suggest a different strategy for this design.

We are experiencing lockups in the Parse SDK that you can read about in the linked issue (there's a huge comment explaining the internals). We can block Task.BACKGROUND_EXECUTOR forever very easily when running the same action concurrently, in situations like the one sketched below. With 2 processors, this takes 3 concurrent actions to hang the executor. I don't think this is fixable from outside bolts: you can't drop BACKGROUND_EXECUTOR and use another one, because the background executor is the fallback executor of ImmediateExecutor. You can mention a custom executor in every task call, but that's bad performance-wise because you lose the simplicity of ImmediateExecutor and don't reuse threads.
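The kind of action referred to above looks roughly like this; it is a hypothetical sketch of the shape of the problem, assuming only the public Task.call, waitForCompletion and BACKGROUND_EXECUTOR APIs. Each outer task blocks a background thread while the inner task it waits for sits behind other outer tasks in the unbounded queue.

import java.util.concurrent.Callable;

import bolts.Task;

public class LockupSketch {
  // Each call schedules an outer task on BACKGROUND_EXECUTOR that blocks
  // until an inner task, scheduled on the same executor, completes.
  static void action() {
    Task.call(new Callable<Void>() {
      @Override
      public Void call() throws Exception {
        Task<Void> inner = Task.call(new Callable<Void>() {
          @Override
          public Void call() {
            return null;
          }
        }, Task.BACKGROUND_EXECUTOR);
        inner.waitForCompletion();  // holds a pool thread while waiting
        return null;
      }
    }, Task.BACKGROUND_EXECUTOR);
  }

  public static void main(String[] args) {
    // With 2 processors the core pool is small, so a handful of concurrent
    // actions fill every core thread with a waiting outer task; the inner
    // tasks stay queued forever and the executor hangs.
    for (int i = 0; i < 3; i++) {
      action();
    }
  }
}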
I propose to move the queuing strategy towards the direct handoffs strategy:
Proposals: either use a true direct handoff (a SynchronousQueue) with a bigger max pool size, or keep a queue but make it a very small bounded one (capacity VERY_SMALL_NUMBER).
Using a big queue would have no effect on lockups: if there are 7 core threads each needing another 7 threads to complete (as in my example) and the queue can hold 128 commands, this won't be solved; we would have to wait for 128 requests to hang before having the first one resolved. Ideally, VERY_SMALL_NUMBER < CORE_POOL_SIZE, to ensure at least one request can be fulfilled.
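For concreteness, the direct-handoff variant could look something like this; it is a sketch only, with illustrative sizes and keep-alive values, not a proposed final configuration:

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class DirectHandoffExecutors {
  // Direct handoffs: a SynchronousQueue holds no tasks, so under load the
  // pool grows toward the max size instead of queueing work behind core
  // threads that are blocked waiting on other tasks.
  public static ThreadPoolExecutor newBackgroundExecutor() {
    int cpus = Runtime.getRuntime().availableProcessors();
    ThreadPoolExecutor executor = new ThreadPoolExecutor(
        cpus + 1,                           // core pool size (illustrative)
        (cpus + 1) * 4,                     // larger max pool, nothing queues
        1L, TimeUnit.SECONDS,               // let extra threads die off quickly
        new SynchronousQueue<Runnable>());  // direct handoff, no queueing
    executor.allowCoreThreadTimeOut(true);
    return executor;
  }
}

Note that with direct handoffs submissions get rejected once the max pool is saturated, so something like the delayed-retry handler sketched earlier would still be useful.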