S3: Upload with storage class #43
Comments
I can't recall the exact reasoning at the time, but thinking aloud here: the goal was to keep the scope of this application simple. Can you tell me why, or how different it would be, to just set a lifecycle rule (or default class) on the bucket you upload to vs. having zfsbackup-go set the class for you? It also isn't necessarily a common use case to want recent backups on Glacier. My guess is most users would adopt storage classes to align with RTO objectives, and can rely on lifecycle rules accordingly (e.g. Hot -> Warm after 90 days -> Cold after 180 days -> Frozen after 365 days). I wouldn't be opposed to a PR; I'd just like to understand your use case better.
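For concreteness, the tiering schedule described above could be expressed as a bucket lifecycle configuration. This is a sketch only, not zfsbackup-go code: it assumes aws-sdk-go v1, and the bucket name is a placeholder.

```go
// Sketch: the Hot -> Warm (90d) -> Cold (180d) -> Frozen (365d) schedule
// from the comment above, applied as an S3 lifecycle configuration.
// Assumes aws-sdk-go v1; "my-backup-bucket" is a hypothetical name.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := s3.New(sess)

	_, err := svc.PutBucketLifecycleConfiguration(&s3.PutBucketLifecycleConfigurationInput{
		Bucket: aws.String("my-backup-bucket"),
		LifecycleConfiguration: &s3.BucketLifecycleConfiguration{
			Rules: []*s3.LifecycleRule{{
				ID:     aws.String("tier-backups"),
				Status: aws.String("Enabled"),
				// Empty prefix: apply the rule to every object in the bucket.
				Filter: &s3.LifecycleRuleFilter{Prefix: aws.String("")},
				Transitions: []*s3.Transition{
					{Days: aws.Int64(90), StorageClass: aws.String(s3.TransitionStorageClassStandardIa)},
					{Days: aws.Int64(180), StorageClass: aws.String(s3.TransitionStorageClassGlacier)},
					{Days: aws.Int64(365), StorageClass: aws.String(s3.TransitionStorageClassDeepArchive)},
				},
			}},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```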
Thank you for the reply. You're right to ask for my use case; I understand not wanting to bloat your software. My use case is storing personal backups offsite. This is data from my home NAS, so I don't mind the extra delay when restoring, as the contents are only for me and I don't have any hard requirements on availability. Besides, I would only access these if something goes really wrong (fire, water, too many dead disks, or whatever), because I already have enough local ZFS snapshots for when I fuck something up. As I see it there are two advantages to storing directly to Glacier:
AFAICT storing directly to Glacier isn't possible with a lifecycle rule. I don't have a lot of experience with S3, so I might have missed something.
I see - I think this should be a pretty straightforward change; I can probably whip it up later this week, but feel free to take a stab at it. The option should probably be passed in as an env var. I confused S3 and GCS a bit: the latter lets you set the default storage class at the bucket level, and the former doesn't, but you can set up a lifecycle rule with 0 days to get close to this.
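The commits referenced later in this thread name the env var AWS_S3_STORAGE_CLASS and point at the SDK's s3.StorageClass* constants for valid values. A minimal sketch of how such a variable might be wired up and validated; storageClassFromEnv is a hypothetical helper name, and it assumes a recent aws-sdk-go v1 (which generates s3.StorageClass_Values()).

```go
// Sketch: read AWS_S3_STORAGE_CLASS and validate it against the storage
// classes the SDK knows about (the s3.StorageClass* constants), falling
// back to S3's default class when the variable is unset.
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/aws/aws-sdk-go/service/s3"
)

func storageClassFromEnv() (string, error) {
	class := os.Getenv("AWS_S3_STORAGE_CLASS")
	if class == "" {
		// Nothing configured: use S3's default, STANDARD.
		return s3.StorageClassStandard, nil
	}
	// Accept only values the SDK enumerates.
	for _, valid := range s3.StorageClass_Values() {
		if class == valid {
			return class, nil
		}
	}
	return "", fmt.Errorf("invalid AWS_S3_STORAGE_CLASS %q", class)
}

func main() {
	class, err := storageClassFromEnv()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("uploading with storage class:", class)
}
```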
I've just come across your project and plan to start using it soon. An option to set the S3 storage class on upload would be useful: the minimum age before a lifecycle rule can transition an object to Standard-IA is 30 days, so every upload incurs at least one month of "full" S3 Standard cost before it can be moved to another storage class. My use case would be to upload immediately with the S3 Standard-IA storage class, which costs about half as much as S3 Standard, with a slightly higher retrieval cost (and a minimum charge of 30 days of storage). Edit: I'm also just a home user backing up my NAS, which is largely where backups of other systems in the house go, plus a few home services, databases, etc.
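To put rough numbers on that cost claim (illustrative us-east-1 list prices, assumed here: about $0.023/GB-month for S3 Standard and $0.0125/GB-month for Standard-IA): 1 TB of backups costs roughly $23.50 for its first 30 days in Standard versus roughly $12.80 in Standard-IA, so uploading straight to Standard-IA saves on the order of $10 per TB for a first month that a lifecycle transition cannot skip.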
…environment variable AWS_S3_STORAGE_CLASS. See s3.StorageClass* constants for possible values. (someone1#43)
…environment variable AWS_S3_STORAGE_CLASS. (#422) See s3.StorageClass* constants for possible values. (#43) Co-authored-by: Robert Cunius Jr <[email protected]>
I saw that in the README Glacier is said to be supported via lifecycle rules. I was wondering if there's a reason why the upload's storage class is not configurable. From a quick look at s3manager's API reference it should be quite easy: UploadInput has a StorageClass field, so AWSS3Backend.Upload could easily forward a configurable value to that. Is there a reason this hasn't been done? Would you be open to a PR to add this?
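The forwarding the issue proposes could look roughly like the sketch below. It assumes aws-sdk-go v1's s3manager package; UploadInput and its StorageClass field are real SDK names, while the upload helper, bucket, and key are hypothetical stand-ins for what AWSS3Backend.Upload would do internally.

```go
// Sketch: forward a configured storage class to s3manager's UploadInput,
// leaving it unset so S3's default (STANDARD) applies when not configured.
package main

import (
	"io"
	"log"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func upload(uploader *s3manager.Uploader, bucket, key, storageClass string, body io.Reader) error {
	in := &s3manager.UploadInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   body,
	}
	if storageClass != "" {
		in.StorageClass = aws.String(storageClass)
	}
	_, err := uploader.Upload(in)
	return err
}

func main() {
	sess := session.Must(session.NewSession())
	uploader := s3manager.NewUploader(sess)
	// Hypothetical bucket, key, and payload for illustration.
	err := upload(uploader, "my-backup-bucket", "backups/tank.zstream", "STANDARD_IA",
		strings.NewReader("example payload"))
	if err != nil {
		log.Fatal(err)
	}
}
```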