Bug Report
Describe the Bug
While trying to transfer a large file (hundreds of GB) from AmazonS3 to AzureStorage, the following error was encountered:
com.azure.storage.blob.models.BlobStorageException: Status code 400, "<?xml version="1.0" encoding="utf-8"?><Error><Code>BlockListTooLong</Code><Message>The block list may not contain more than 50,000 blocks.
Although the file size itself should not be a problem, Azure block blobs allow at most 50,000 blocks, and the default block size of 4 MB therefore caps the maximum transferable file size at roughly 200 GB.
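For reference, the cap follows directly from Azure's documented 50,000-block limit for block blobs; a quick sketch of the arithmetic:

long maxBlocks = 50_000L;                           // Azure block blob hard limit
long defaultBlockSize = 4L * 1024 * 1024;           // default 4 MB block size
long maxUploadBytes = maxBlocks * defaultBlockSize; // = 200,000 MB, i.e. ~195 GB

long biggerBlockSize = 100L * 1024 * 1024;          // e.g. 100 MB blocks
long raisedCap = maxBlocks * biggerBlockSize;       // ~4.8 TB, well past 200 GB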
Additionally, it was found that the transfer has a hardcoded timeout of 1 hour, which may not suffice for a large file, so this value should also be configurable.
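As a minimal sketch of what a configurable timeout could look like (the property name below is an assumption for illustration, not an existing setting):

import java.time.Duration;

// Assumed property name; the current hardcoded 1 hour remains the default.
long timeoutSeconds = Long.parseLong(
        System.getProperty("blob.transfer.timeout.seconds", "3600"));
Duration transferTimeout = Duration.ofSeconds(timeoutSeconds);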
Expected Behavior
There should be no hard limit on file size; if a limit must exist, its value should be configurable.
Steps to Reproduce
Transfer (upload) a large file (larger than 200 GB) to AzureStorage and confirm the failure in the logs.
Possible Implementation
There is a need to increase this block size or, at least, make it configurable so that the limitation can be lifted.
Since the issue appears to come from the fixed transfer options, the proposed change is to update the transfer properties along these lines:
import com.azure.storage.blob.models.ParallelTransferOptions;
import com.azure.storage.common.implementation.Constants;

// Block size, concurrency, and single-shot threshold read from configuration
var parallelTransferOptions = new ParallelTransferOptions()
    .setBlockSizeLong(blockSizeInMb * Constants.MB)       // bytes per staged block
    .setMaxConcurrency(maxConcurrency)                    // parallel block uploads
    .setMaxSingleUploadSizeLong(maxSingleUploadSizeInMb * Constants.MB); // single-PUT threshold
This change has already been tested in an internal distribution and confirmed to work.
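For completeness, a sketch of how these options could then feed an upload call; BlobClient.uploadFromFile in the Azure SDK for Java accepts ParallelTransferOptions and a timeout, while blobClient, the file path, and transferTimeout here are assumed for illustration:

import java.time.Duration;
import com.azure.storage.blob.BlobClient;

// Nulls skip the optional headers, metadata, access tier, and request conditions.
blobClient.uploadFromFile("/path/to/large-file.bin", parallelTransferOptions,
        null, null, null, null, transferTimeout);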