Hello,

Why is it that, when I set an S3 storage backend, it sends all my target objects to the bucket but not the metadata? This is an issue in my (probably super niche) use case. We typically spin up a fresh R docker container, clone in a git repo, execute the relevant code, and then spin down the container. This works well for projects where we can cleanly separate code and data and store them in git and S3, respectively. So a typical script would do something along the lines of the following.
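For illustration, a minimal sketch of such a setup, assuming a recent {targets} release with `repository = "aws"` support; the bucket, prefix, and example targets are placeholders rather than the actual project code:

```r
# _targets.R -- pipeline definition, kept in the git repo
library(targets)

# Store target objects in S3 rather than the local _targets/objects/ folder.
# "my-project-bucket" and "my-project" are placeholders.
tar_option_set(
  repository = "aws",
  resources = tar_resources(
    aws = tar_resources_aws(bucket = "my-project-bucket", prefix = "my-project")
  )
)

list(
  tar_target(raw_data, read.csv("data/raw.csv")),  # example target
  tar_target(model, lm(y ~ x, data = raw_data))    # example target
)
```

```r
# run.R -- executed inside the fresh container after cloning the repo
targets::tar_make()
```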
This is very close to what I want: on a fresh container, targets that are already up to date could simply be skipped instead of recomputed. I am now realizing that this skipping is only possible if one has a record of the object hashes, which are stored in `_targets/meta/meta`. I could commit `_targets/meta/meta` to the git repo.
Which is easy and fine, but leaves me wondering: is there a particular reason `_targets/meta/meta` is not also sent to the S3 bucket?
Replies: 1 comment 1 reply
The pattern I had envisioned is for users to select a small number of large data objects to push to S3 and then keep the rest of `_targets/` on Git/GitHub, including metadata. Also, in theory you could have different targets going to different buckets, and it is unclear which bucket `_targets/meta/meta` would go to in that case. If you want all of `_targets/` to live in a single bucket, including metadata, I suggest something like `aws.s3::s3sync()`.
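A minimal sketch of that suggestion, assuming the aws.s3 package is installed and AWS credentials are available through the usual environment variables; the bucket name is a placeholder, and argument names may vary slightly across aws.s3 versions:

```r
library(aws.s3)

# After tar_make() finishes, push the local _targets/ store,
# including _targets/meta/meta, to a single bucket.
s3sync(path = "_targets", bucket = "my-project-bucket", direction = "upload")

# On a fresh container, pull the store back down before running tar_make(),
# so targets that are already up to date are skipped.
s3sync(path = "_targets", bucket = "my-project-bucket", direction = "download")
```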