From [email protected] on February 20, 2012 16:05:30

When job execution fails or the framework (fwk) crashes, the data directory of a job is sometimes left on the file system. It should be systematically cleaned after execution, no matter what happens.

Original issue: http://code.google.com/p/daisy-pipeline/issues/detail?id=149
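For illustration, here is a minimal sketch of the "clean no matter what" idea: run the job inside a try/finally so the data directory is removed even when execution throws. The names (`runJob`, `executeAndClean`, the directory argument) are hypothetical and not part of the Pipeline framework API; note that a try/finally only covers failed executions, not a crashed JVM, which would still need an orphan sweep at the next startup.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;

// Hypothetical sketch, not actual Pipeline code.
public class JobCleanupSketch {

    static void executeAndClean(Path jobDataDir) throws IOException {
        try {
            runJob(jobDataDir);            // may throw if the job fails
        } finally {
            deleteRecursively(jobDataDir); // always runs, success or failure
        }
        // NOTE: if the whole process crashes, finally never runs; leftover
        // directories would have to be swept up when the framework restarts.
    }

    static void runJob(Path dir) {
        // placeholder for the actual job execution
    }

    static void deleteRecursively(Path dir) throws IOException {
        if (!Files.exists(dir)) {
            return;
        }
        try (var paths = Files.walk(dir)) {
            // delete children before parents
            paths.sorted(Comparator.reverseOrder())
                 .forEach(p -> {
                     try {
                         Files.delete(p);
                     } catch (IOException e) {
                         // log and continue; best-effort cleanup
                     }
                 });
        }
    }
}
```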
If I understand correctly, when a job fails it is the client's responsibility to delete it. So if there is a bug, it must be in one of the clients, right? Do you remember why you created this issue?
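As a sketch of what that client-side responsibility might look like, the snippet below deletes a finished or failed job through the Pipeline 2 web service with an HTTP DELETE on the job resource. The endpoint URL, the job id, and the unauthenticated call are all assumptions for illustration; a server with authentication enabled would additionally require the signed request parameters.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical sketch of a client cleaning up its own job via the web service.
public class DeleteJobSketch {
    public static void main(String[] args) throws Exception {
        String wsBase = "http://localhost:8181/ws"; // assumed default endpoint
        String jobId = "job-id-goes-here";          // placeholder

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(wsBase + "/jobs/" + jobId))
                .DELETE()
                .build();

        HttpResponse<Void> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.discarding());
        System.out.println("DELETE returned status " + response.statusCode());
    }
}
```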
As for whether this is still an issue, I’m not sure. I can see in the console output that a job that finishes with a status of DONE gets cleaned up properly, but I don’t know about cleanup after a restart or crash.
We haven’t installed v1.10 yet, but I’ll make note of that configuration property.
I'm going to close the issue for now. We'll reopen it if Benetech confirms it is still an issue.