S3 backend can't PUT files that don't fit in a single buffer from XRootD #39
We recently found that the S3 backend can't PUT objects that don't fit into a single buffer handed to cURL by XRootD. When we test this using raw curl to PUT through the plugin, these buffers show up as ~1 MiB. When we use Pelican, they're even smaller (and more sporadically sized).

Unfortunately, the S3 protocol doesn't give us a nice solution for this, and it looks more and more like a fix will impose some limitations on the size of objects the S3 plugin can handle writing. In particular, these limitations are described in the AWS docs for the S3 "multipart upload flow", and they relate to the fact that each object can be transferred in at most 10,000 parts. Without knowing the size of an object ahead of time, we have to impose a size limit on these chunk buffers, and thus a limit on the total size of the object.
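To make the constraint concrete, here's a minimal sketch of the arithmetic (the helper name and the ~50 GiB target are illustrative, not from the plugin; the 10,000-part limit and 5 MiB minimum part size are from the S3 multipart upload docs):

```python
# S3 multipart uploads allow at most 10,000 parts, and every part except
# the last must be at least 5 MiB. If we buffer incoming XRootD writes
# into fixed-size parts without knowing the object size up front, the
# largest object we can write is max_parts * part_size.

MIB = 1024 * 1024
GIB = 1024 * MIB

S3_MAX_PARTS = 10_000
S3_MIN_PART_SIZE = 5 * MIB


def min_part_size_for(max_object_size: int) -> int:
    """Smallest fixed buffer (part) size, in bytes, that lets an object of
    max_object_size bytes fit within S3_MAX_PARTS parts."""
    # Ceiling division, then clamp to S3's minimum part size.
    needed = -(-max_object_size // S3_MAX_PARTS)
    return max(needed, S3_MIN_PART_SIZE)


# A ~50 GiB cap requires buffering a bit over 5 MiB per part:
part_size = min_part_size_for(50 * GIB)
```

The flip side is the point made in the issue: once the part buffer size is fixed, the object-size cap is fixed too, so picking the cap is really picking how much memory the plugin holds per in-flight upload.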
Our observation of transfers in the OSDF indicates a very thin tail for object reads over 50GB, so we're thinking of placing the cap somewhere around there. This isn't a perfect proxy for the distribution of written objects, but it's a starting point.