Replies: 10 comments 8 replies
-
Hi, thanks for sharing. I did a bit of research, and as I understand it this is an AWS-specific technique, not a formal HTTP standard, though Google Cloud seems to have adopted it as well. Do you know how widespread the adoption of this technique is? I am hesitant to create something that isn't widely supported, so I want to see some evidence that it has broad acceptance among commonly used upload servers.
-
Thanks for investigating… it doesn't really have much to do with HTTP, actually… it's not like a download, where the …
-
My initial understanding was that you were looking for an implementation of the S3 multipart upload protocol. After doing some research, I do see that this protocol is commonly implemented by cloud storage providers, but I also noted that each implementation differs slightly in the details (e.g. service-specific headers for authentication, pre-signed URLs that require the service-specific SDKs, etc.). Based on that, I don't think this is a viable route for the core plugin, though I could see value in someone building an extension that converts a single large upload into the protocol described, using a service-specific SDK.

Upon reading your initial comment again, I wonder if what you're asking for is only a 'byte range'-like parameter for the upload task that would upload only a portion of a larger file, and that you would implement the S3 multipart protocol yourself at the application layer, using the enhanced upload functionality of the downloader. You would be responsible for maintaining state across the multiple uploads, and for checking and sending the completion signals. Is that correct? The completion signal requires you to collect the ETag of each upload; that is not something the upload task currently passes back, so how would you deal with that?

I wonder if there is a middle ground in the form of a … It is also possible that we could create concrete versions of the abstract … Let me know what you think.
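To make the application-layer responsibility concrete, here is an illustrative Python sketch of the state such a multipart upload would maintain: the byte range for each part, and the ETag collected per part, combined into the S3-style `CompleteMultipartUpload` XML body. The function names and part size are assumptions for illustration; only the XML shape follows the public S3 format.

```python
def part_ranges(file_size: int, part_size: int) -> list:
    """Return (offset, length) pairs covering the whole file."""
    ranges = []
    offset = 0
    while offset < file_size:
        length = min(part_size, file_size - offset)
        ranges.append((offset, length))
        offset += length
    return ranges

def completion_payload(etags: list) -> str:
    """Build the S3-style CompleteMultipartUpload XML body.

    Part numbers are 1-based, in upload order.
    """
    parts = "".join(
        f"<Part><PartNumber>{i}</PartNumber><ETag>{etag}</ETag></Part>"
        for i, etag in enumerate(etags, start=1)
    )
    return f"<CompleteMultipartUpload>{parts}</CompleteMultipartUpload>"

# A 12 MB file split into 5 MB parts yields three ranges:
ranges = part_ranges(file_size=12_000_000, part_size=5_000_000)
# → [(0, 5000000), (5000000, 5000000), (10000000, 2000000)]
```

The caller would upload one range per request, keep the returned ETags in order, and send `completion_payload(etags)` as the final step.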
-
Yes, exactly: that is actually what I was asking for. But I did not realize that it's not possible to access the headers of a completed upload. I don't think it's necessary for the core package to add full support for the AWS HTTP protocol directly, but making it possible would be nice, i.e. …

I don't really see a reason why it would be required to add the …
-
Apologies, I am learning about S3 and multi-part uploads as we go here. I found the Amplify documentation for uploads, and it seems to take care of all the complexity related to authentication etc., and it actually uses multi-part uploads when the file size is > 5 MB. Similar functionality exists for GCP via Firebase (though I'm not sure if it switches to multi-part uploads). My question then is: why do you need to use background_downloader? Is the Amplify approach not working, or does it have limitations? Adding a range selector and returning the response headers (in addition to the response body) may not be a bad idea anyway (I'll look into that after I've finished another improvement I'm working on), but I am trying to understand why you would want to implement the low-level multi-part upload yourself.

I see a lot of developers struggle with S3, and multi-part upload is even more complex, so I thought perhaps this is an opportunity to streamline that by offering a standardized way to create a …
-
Sorry for the confusion. I am not actually trying to use S3; it was just an example to show that this is common practice. I'm actually implementing a more obscure video upload API that essentially works the same way, although in this case my own backend would generate those signed part URLs and send them to my Flutter app. My Flutter app would upload the video file to those signed part URLs, collect the ETags, and once all parts are complete, send them back to my backend. My backend would then forward them to the video hosting service. Right now I've implemented it with the …
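The flow described above (backend issues signed part URLs, the app uploads one byte slice per URL and collects the ETags) can be sketched like this. Everything here is hypothetical: `upload_part` stands in for whatever transport actually performs each request (e.g. one upload task per part), and the fake transport exists only so the sketch is self-contained.

```python
from typing import Callable, List

def upload_parts(
    data: bytes,
    part_urls: List[str],
    upload_part: Callable[[str, bytes], str],  # returns the part's ETag
) -> List[str]:
    """Split `data` evenly across the signed part URLs and
    return the ETags in part order, ready to report back."""
    part_size = -(-len(data) // len(part_urls))  # ceiling division
    etags = []
    for i, url in enumerate(part_urls):
        chunk = data[i * part_size : (i + 1) * part_size]
        etags.append(upload_part(url, chunk))
    return etags

# Fake transport for illustration: the "ETag" is just the chunk length.
fake = lambda url, chunk: f'"{len(chunk)}"'
etags = upload_parts(b"x" * 10, ["u1", "u2", "u3"], fake)
# → ['"4"', '"4"', '"2"']
```

The backend would receive `etags` once all parts succeed and forward them to the hosting service's completion endpoint.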
-
Got it. I've moved this to the 'Discussions' tab under 'Ideas' and plan to implement it in a future version. I'm currently focused on a few other things, so this may take some time. Thanks for your patience.
-
@hpoul I have good news and bad news. The good news is that V8.1.0 now returns …

I'm going to close this discussion, but I'm happy to elaborate further if that's helpful.
-
Has there been any progress on this discussion? I am planning to implement what you have been discussing, but I am still at the stage of reading the code. Since I am not familiar with Kotlin, I feel a bit lost at the moment.
-
Implemented in V8.6.0 using the "Range" header.
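For illustration, a Range value of the form `bytes=start-end` (with an inclusive end, per HTTP range semantics) selects a byte window of the file like this. This Python sketch is not the plugin's implementation; it only shows which bytes such a header designates.

```python
import re

def slice_for_range(data: bytes, range_value: str) -> bytes:
    """Return the slice of `data` selected by an HTTP-style
    'bytes=start-end' range value (end is inclusive)."""
    m = re.fullmatch(r"bytes=(\d+)-(\d+)", range_value)
    if not m:
        raise ValueError(f"unsupported Range value: {range_value!r}")
    start, end = int(m.group(1)), int(m.group(2))
    return data[start : end + 1]  # +1 because HTTP ranges are inclusive

slice_for_range(b"0123456789", "bytes=2-5")  # → b"2345"
```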
-
When uploading large files (e.g. videos), it could be beneficial to upload them in chunks with separate requests. Currently this could probably be done by saving the individual chunks as local files, essentially duplicating the storage requirement.

I think it would be beneficial to add something like `offset` and `length` parameters to the `UploadTask`, which would upload only a certain part of a file in that request, instead of the whole file. For example, AWS supports multi-part uploads and suggests using them for files > 100 MB.
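The core of the request above is reading only a window of the file so that just that part is sent, without duplicating the chunk on disk. A minimal sketch, with illustrative parameter names (not an actual `UploadTask` API):

```python
def read_chunk(path: str, offset: int, length: int) -> bytes:
    """Read `length` bytes starting at `offset` from the file at
    `path`, i.e. the part a partial-upload request would send."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)
```

Each part request would then send `read_chunk(path, offset, length)` as its body instead of the whole file.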