
The data directory of a volatile job is not always deleted if things go wrong #149

Closed

ghost opened this issue Apr 13, 2014 · 2 comments

ghost commented Apr 13, 2014

From [email protected] on February 20, 2012 16:05:30

When the job execution fails or the fwk crashes, the data directory of the job is sometimes left on the file system. It should be systematically cleaned up after execution, no matter what happens.

Original issue: http://code.google.com/p/daisy-pipeline/issues/detail?id=149
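
What the reporter is asking for is the classic try/finally cleanup pattern: whatever the job does, the data directory is removed afterwards, on success and on failure alike. A minimal sketch in Java (the names here are illustrative only, not the framework's actual API):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class VolatileJobCleanup {

    // Run a job and always remove its data directory afterwards,
    // whether the job completes normally or throws.
    static void runVolatileJob(Path dataDir, Runnable job) throws IOException {
        try {
            job.run();
        } finally {
            deleteRecursively(dataDir); // cleanup happens no matter what
        }
    }

    // Recursively delete a directory tree (children before parents).
    static void deleteRecursively(Path root) throws IOException {
        if (!Files.exists(root))
            return;
        try (Stream<Path> paths = Files.walk(root)) {
            paths.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }
}
```

Note that try/finally only covers the "job execution failed" half of the report; it cannot help when the whole process crashes, which is why a separate cleanup-on-startup mechanism (discussed below) is needed.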

@bertfrees (Member)

@rdeltour I saw that on 25 Apr 2016 Javi implemented an org.daisy.pipeline.ws.cleanuponstartup configuration setting (daisy/pipeline-framework@43b4adbc), I think in response to daisy/pipeline-framework#105. This also fixes the "when the fwk crashed" part of this issue, right?
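
For anyone hitting this: going by the commit referenced above, enabling the startup cleanup should amount to setting that property to true in the framework's configuration. The file name and mechanism below are an assumption, not verified against the framework's docs; check your installation:

```
# hypothetical entry in the framework's system properties,
# enabling removal of stale job data when the web service starts
org.daisy.pipeline.ws.cleanuponstartup=true
```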

If I understand correctly, when a job fails it is the client's responsibility to delete it. So if there is a bug, it must be in one of the clients, right? Do you remember why you created this issue?

@bertfrees (Member)

John Brugge says:

As for whether this is still an issue, I’m not sure. I can see in the console output that a job that finishes with a status of DONE gets cleaned up properly, but I don’t know about cleanup after a restart or crash.

We haven’t installed v1.10 yet, but I’ll make note of that configuration property.

I'm going to close the issue for now. We'll reopen it if Benetech confirms it is still an issue.
