Recovery on failure in the middle of an upload/download #62
Hi,
Apologies if this is explained in the doc and I missed it (in which case please point me to the right place), but what happens if an upload/download stops suddenly before it is finished? E.g. the server crashes, there is a power cut, or someone hits Ctrl-C by mistake.
Will mt-aws-glacier handle that well, and:
If not, is this something you would consider adding?
Failing this I would have to split my reasonably big backup files (tens of GB) to limit the risk, which is not very convenient.
Thx,
Thibault.
Hello.

When uploading: if the upload terminates at a random point, the upload is not finished, and next time it will start from scratch. I think the only additional charges will be for requests (i.e. $0.05 per 1000 requests at current pricing); the file will not be uploaded, so you won't be charged for additional storage. There can be a race condition after the upload is finished.

When downloading: same, the download will start from scratch. Downloaded data stored in a temporary file before the crash is not reused (and is left on disk if it was a crash, or removed from disk if it was Ctrl-C). I think you'll pay $0.05 per 1000 requests plus the bandwidth fee. There are no other race conditions here.

For retrieving (i.e. the expensive operation): a file is either retrieved or not. There can be a race condition (i.e. if the file was retrieved but the record did not reach the journal, due to a crash within a short few-millisecond window).

Two things can be improved here: resuming interrupted downloads, and better handling of the race conditions.
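To make that race condition concrete, here is a minimal Python sketch of the ordering that creates the window. It is an illustration only, not mt-aws-glacier's actual code (which is Perl); the boto3-style `upload_archive` call and the one-JSON-object-per-line journal format are assumptions for the example.

```python
import json

def upload_and_record(glacier, vault, path, journal_path):
    """Complete the remote operation first, then journal it locally."""
    with open(path, 'rb') as f:
        # Step 1: once this returns, the archive exists in the vault.
        resp = glacier.upload_archive(vaultName=vault, body=f)

    # A crash right here (power cut, kill -9, Ctrl-C) leaves an archive
    # in the vault that the journal knows nothing about -- the rare
    # few-millisecond race condition described above.

    # Step 2: record the upload in the local journal.
    with open(journal_path, 'a') as journal:
        journal.write(json.dumps({'file': path,
                                  'archive_id': resp['archiveId']}) + '\n')
```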
Also, you really can work around the race condition during upload yourself: wait 24h (don't upload anything). Note that when we talk about race conditions here, we assume there is no bug in the software and the crash time is really random; such race conditions are really rare.
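What that workaround would look like in practice is an assumption on my part, but presumably, after the 24h wait, you would download the vault inventory (which itself lags by roughly a day) and diff its archive IDs against the journal. A hedged Python sketch; the `ArchiveList`/`ArchiveId` keys follow Amazon's documented inventory JSON format, while the journal format is the same hypothetical one as above:

```python
import json

def orphaned_archives(inventory_json_path, journal_path):
    """Archive IDs present in the vault inventory but missing from the
    local journal: candidates created by the upload race condition.
    Only meaningful if nothing was uploaded during the waiting period,
    so the (roughly one-day-old) inventory is complete."""
    with open(inventory_json_path) as f:
        in_vault = {a['ArchiveId']
                    for a in json.load(f)['ArchiveList']}
    with open(journal_path) as f:
        in_journal = {json.loads(line)['archive_id']
                      for line in f if line.strip()}
    return in_vault - in_journal
```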
If that size is big for you (i.e. it's a high percentage of all your data), it's recommended to split the file into small parts (a sketch of such splitting follows the note below), because:
Note: the race conditions apply here only if you retrieve twice; there is no race condition if you retrieve once + download. Quote from the documentation of
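For the splitting recommendation above, a minimal sketch of cutting a big backup into fixed-size parts before upload (the 1 GiB part size and the `.partNNNN` naming are arbitrary illustrative choices, not anything mt-aws-glacier prescribes):

```python
def split_file(path, part_size=1024 ** 3, bufsize=1024 ** 2):
    """Split `path` into parts of at most `part_size` bytes, named
    path.part0000, path.part0001, ..., streaming `bufsize` at a time.

    Smaller parts bound how much work a crash can throw away: a failed
    upload restarts only the current part, not the whole tens of GB."""
    part = 0
    with open(path, 'rb') as src:
        chunk = src.read(bufsize)
        while chunk:
            with open('%s.part%04d' % (path, part), 'wb') as dst:
                written = 0
                while chunk and written < part_size:
                    dst.write(chunk)
                    written += len(chunk)
                    chunk = src.read(bufsize)
            part += 1
    return part
```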
Hi,

Thanks for the very quick feedback. If Amazon Glacier doesn't "commit" the file until it has finished uploading successfully, then the worst that can happen is that we need to start again from scratch, which takes time but doesn't cost more (no per-GB upload fee). I'm ignoring the request fee, which is not really a problem for me. And good point about the file split; it does look like splitting will be a good idea in my case anyway.

So we are left with a feature to resume downloads, which I think would be useful to avoid paying retrieval fees twice in case of crashes/failures, and better handling of the race conditions, which again sounds like a good idea to me.

Thx,
Yes, except if the race condition happens.
When you download, you don't pay the high retrieval fee, only the bandwidth and request fees. The retrieval fee is paid when you retrieve the file.
Yes, I will leave this ticket open as an enhancement; most likely I will split it into several tickets in the future. It's unlikely that everything listed here can be implemented soon (enhancements are low priority for me, bugfixes are high priority; some of these enhancements are hard to implement, and some are not really important - I don't think other software vendors ever care about such rare race conditions).
No problem at all, it's free software; I fully understand if you don't have the time or the will to implement some or all of what is discussed here.
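For reference, a sketch of what the download-resume enhancement discussed above could look like. Amazon Glacier's Get Job Output call accepts a byte range, and a completed retrieval job stays downloadable for roughly 24 hours, so a restarted download can in principle continue from wherever the temporary file left off instead of paying for a second retrieval. The boto3-style client is an assumption; this is not how mt-aws-glacier is implemented.

```python
import os

def resume_download(glacier, vault, job_id, total_size, tmp_path,
                    chunk=16 * 1024 * 1024):
    """Continue downloading a completed retrieval job into tmp_path,
    reusing whatever a previous (crashed) run already wrote."""
    done = os.path.getsize(tmp_path) if os.path.exists(tmp_path) else 0
    with open(tmp_path, 'ab') as out:
        while done < total_size:
            end = min(done + chunk, total_size) - 1
            resp = glacier.get_job_output(
                accountId='-', vaultName=vault, jobId=job_id,
                range='bytes=%d-%d' % (done, end))
            data = resp['body'].read()
            out.write(data)
            done += len(data)
```

A real implementation would also verify the SHA256 tree hash of each downloaded range before trusting resumed data.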