
Support for multiple buckets? #5836

Closed
ffxsam opened this issue Feb 21, 2018 · 37 comments
Labels
feature-request Request a new feature p3 storage Issues tied to the storage category

Comments

@ffxsam

ffxsam commented Feb 21, 2018

In the documentation examples:

Amplify.configure({
    Auth: {
        identityPoolId: 'XX-XXXX-X:XXXXXXXX-XXXX-1234-abcd-1234567890ab', // REQUIRED - Amazon Cognito Identity Pool ID
        region: 'XX-XXXX-X', // REQUIRED - Amazon Cognito Region
        userPoolId: 'XX-XXXX-X_abcd1234', // OPTIONAL - Amazon Cognito User Pool ID
        userPoolWebClientId: 'XX-XXXX-X_abcd1234', // OPTIONAL - Amazon Cognito Web Client ID
    },
    Storage: {
        bucket: '', // REQUIRED - Amazon S3 bucket
        region: 'XX-XXXX-X', // OPTIONAL - Amazon service region
    }
});

What if the web app needs to interact with more than one bucket? It would be nice to have a system where we could specify several and interact with them via their names.

    Storage: {
      bucketOne: {
        bucket: '', // REQUIRED - Amazon S3 bucket
        region: 'XX-XXXX-X', // OPTIONAL - Amazon service region
      },
      bucketTwo: { ... }
    }
@tim-thompson

tim-thompson commented Jul 5, 2018

I don't know what progress has been made on this, but since it is still open I thought I would comment.
I ran into this same restriction today, and after some digging found that it is possible to pass a bucket option into various calls as follows:

Storage.vault.get(key, {bucket: 'alternative-bucket-name'});

Using this, I've managed to successfully use multiple buckets in the same app. If the bucket is not specified, it defaults back to the bucket in the global Amplify configuration.

@jnreynoso

Hi @tim-thompson, what is the value of Storage.vault?

@annjawn

annjawn commented Nov 8, 2018

Is there any update on this? Support for multiple buckets is a really desirable feature.

@tim-thompson

@annjawn I posted a solution further up this page that works for all my scenarios. If you need more info then I've written about it on my blog in more detail - http://tim-thompson.co.uk/aws-amplify-multiple-buckets.

@annjawn

annjawn commented Nov 9, 2018

@tim-thompson I have tried the Storage.vault method but it did not work for me for some reason. Also, it looks like only get works with Storage.vault, although the code suggests otherwise. I've found a workaround, by the way: I call Storage.configure() before each operation, setting the appropriate bucket name. It's less than efficient, but it gets the job done.
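A minimal sketch of this reconfigure-before-each-operation pattern (the helper name `storageConfigFor` and the default bucket/region values are placeholders of mine, not Amplify APIs):

```javascript
// Sketch of the "Storage.configure() before each operation" workaround.
// DEFAULT_BUCKET / DEFAULT_REGION are hypothetical placeholder values.
const DEFAULT_BUCKET = 'my-default-bucket';
const DEFAULT_REGION = 'us-east-1';

// Builds the configuration object to pass to Storage.configure()
// before each call that targets a non-default bucket.
function storageConfigFor(bucket, region) {
  return {
    AWSS3: {
      bucket: bucket || DEFAULT_BUCKET,
      region: region || DEFAULT_REGION,
    },
  };
}

// Usage (assumes aws-amplify's Storage is imported):
// Storage.configure(storageConfigFor('reports-bucket'));
// const result = await Storage.get(key);
```

Each call to `Storage.configure` changes the global Storage state, so this needs care if operations on different buckets can interleave.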

@rizerzero

@annjawn Hi, do you have a blog post on your method? Thanks in advance 👍

@10ky

10ky commented Dec 2, 2018

If you are able to get content from a bucket using this statement:

Storage.vault.get(key, {bucket: 'alternative-bucket-name'});

it would be a security issue, unless you allow it in an IAM role attached to the user. I believe Amplify uses the role "s3_amplify_...". This role should be modified automatically according to your aws-exports.js file when you do amplify push. I don't see how the above statement would affect amplify push.

@10ky

10ky commented Dec 2, 2018

@mlabieniec was this feature request removed from the aws-amplify milestone on Jul 19? I thought this was a good feature to have. I have a use case where all my resized photos in S3 could live in a separate bucket. Right now, the S3Image and album components resize photos on the client side. If my photo files are very large, that is not desirable. And if the resized file is put in the same directory as the private user directory, a Lambda trigger would not work, because S3 triggers do not support regular-expression prefix matching.

@ngocketit

It would be very convenient to have this supported. Currently, I have to call Amplify.configure() with the new bucket every time I want to do something with a non-default bucket.

@hoang-innomize

We are also looking for this feature. We are building an app that requires access to multiple buckets, so it would be better if we didn't have to specify the bucket when configuring Amplify (or could just use a default bucket). Some APIs also need to let us specify the bucket, such as getting a pre-signed URL.

@stale

stale bot commented Jun 15, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@dylan-westbury

Something similar exists when configuring the API category, where you can specify an array of endpoints. An array of buckets could be nice.

@DmitriWolf

I would also like to see a feature added to Amplify to support the use of multiple buckets.
Great work. Thank you to all the contributors.

@Off2Race

I used @tim-thompson's suggestion and it worked for me as well. The documentation for Storage.get probably needs to be updated but the following works fine:

Storage.get(key1, {bucket: 'bucket-1'});  
Storage.get(key2, {bucket: 'bucket-2'});  

I've only tried it for "public" access (any authenticated user of my app), but looking at the code I don't see a reason why it wouldn't work in other scenarios too. In effect, the bucket you specify during Amplify.configure appears to be a default that can be overridden.
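As a sketch of this per-call override (the `withBucket` helper is hypothetical; only the `bucket` and `level` option keys come from the comments above):

```javascript
// Hypothetical helper that builds the per-call options object.
// When no bucket is passed, the key is omitted entirely, so Storage
// falls back to the bucket from the global Amplify configuration.
function withBucket(level = 'public', bucket) {
  const options = { level };
  if (bucket) {
    options.bucket = bucket; // per-call override of the default bucket
  }
  return options;
}

// Usage (assumes aws-amplify's Storage is in scope):
// const url1 = await Storage.get(key1, withBucket('public', 'bucket-1'));
// const url2 = await Storage.get(key2, withBucket('public')); // default bucket
```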

@jimji1005

The above only works if the buckets are in the same region, unfortunately. 🤦‍♂

@Ramesh-Chathuranga

If you want to use more than one S3 bucket in your project, reconfigure Storage when uploading the file.
Here is example code for multiple S3 buckets:

const uploadFile = (fileName, file, isUser = false) => {
  if (isUser) {
    Storage.configure({
      AWSS3: {
        bucket: 'bucketA',
        region: 'us-exxxx',
      },
    });
  } else {
    Storage.configure({
      AWSS3: {
        bucket: 'bucketB',
        region: 'us-exxxx',
      },
    });
  }
  return Storage.put(fileName, file, {
    contentType: file.type,
  });
};

@dtelaroli

I usually use one bucket and event trigger per environment/account.
Native CLI support for that would be great.

@aelbokl

aelbokl commented Aug 4, 2020

I am just commenting to keep the bot from killing this thread. This feature is much needed and has many use cases.

@KesleyDavid

Also needing this feature

@sammartinez sammartinez self-assigned this Oct 9, 2020
@cody1024d

Adding my need for this functionality too

@PatrykMilewski
Contributor

Would be nice to have that

@harrysolovay harrysolovay transferred this issue from aws-amplify/amplify-js Nov 11, 2020
@sammartinez
Contributor

cc @renebrandel: we need the Amplify CLI to do the implementation first, prior to us doing anything in Amplify JS.

@nikhname nikhname added the feature-request Request a new feature label Nov 11, 2020
@nikhname nikhname added the storage Issues tied to the storage category label Nov 11, 2020
@r0zar
Contributor

r0zar commented Nov 13, 2020

Does #3977 solve this use case? I imagine it would.

@stleon

stleon commented Dec 20, 2020

amplify import storage
Scanning for plugins...
Plugin scan successful
? Please select from one of the below mentioned services: S3 bucket - Content (Images, audio, video, etc.)
Amazon S3 storage was already added to your project.

It would be very useful if there were support for multiple storage buckets.

@arealmaas

Ping 🙋

@santhedan

This is a must-have feature, similar to how you allow multiple APIs to be invoked. A workaround exists for web, but not for iOS and Android.

@nathanagez

Any status on this feature? :)

@sammartinez sammartinez removed their assignment Sep 2, 2021
@johnrusch

Yeah, I'm currently unable to list objects from another bucket using this workaround. Would love support for this feature.

@majirosstefan

It would be nice if the Amplify CLI supported importing multiple buckets (even from different regions), and maybe it could also ask which bucket should be "the default one" (i.e. which bucket is used when calling Storage methods without the "bucket" param).

I also think it would be really nice if there were official blog posts, Twitch streams, or YouTube videos (whatever) about how to implement this ourselves (e.g. via patching), or about how to create pull requests for the AWS Amplify CLI in general (some kind of walkthrough of the Amplify CLI).

Looking at https://github.com/aws-amplify/amplify-cli/tree/master/packages/amplify-cli, it's not straightforward to me (a person outside the Amplify team / a mobile dev) how the CLI actually works, and I am not sure I want to spend time on this.

@rmjwilbur

Agreed. It makes sense that one doesn't want to put all their eggs in one bucket. I would use this feature. I'll explore workarounds for now. Thanks.

@dragosiordachioaia

How is this still not solved 4 years after the issue was created? So many real-life projects need more than one S3 bucket...

@macsinte

macsinte commented Mar 19, 2023

+1 to this. I'm starting to think that Amplify is very, very limited when it comes to some real-world scenarios, and I truly don't understand why these issues don't get surfaced sooner. We like to brag about how scalable Amplify can be and how it could be used by large companies, but the more I use it, the more I realize that other than the authentication mechanism (which is a pain in the ass to do yourself), it is not scalable and is more suited to startup companies.

@macsinte

macsinte commented Mar 19, 2023

It is also baffling how, just as @dragosiordachioaia mentioned above, the issue has still not been addressed 4 years after it was created. I go back to Amazon's leadership principles of "Customer Obsession" and "Bias for Action", which are clearly not taken seriously here. :)

@annjawn

annjawn commented Mar 20, 2023

I think it's a relatively straightforward implementation. In my React projects I create a reusable configuration class, which I initialize every time I need to use Storage with the bucket and storage level I want. Something like this:

//StorageService.js
import { Storage } from 'aws-amplify';
// Amplify configure JSON for each environment (see note below)
import configProdData from './amplify.prod.json';
import configDevData from './amplify.dev.json';

export default class StorageService {
    constructor(props) {
        const bucket = (props && props.bucket) ? props.bucket : '<default_bucket>';
        this.prefix = (props && props.prefix) ? props.prefix : '';
        const level = (props && props.level) ? props.level : 'public';
        const config = (process.env.NODE_ENV === 'production') ? configProdData : configDevData;
        Storage.configure({
            bucket,
            level, // this can be overridden at the list, put, get level
            region: config.Auth.region,
            identityPoolId: config.Auth.identityPoolId,
        });
    }

    // Other class methods for storage list, put, get, remove, etc.
    // You don't have to define these storage action methods here, since
    // Storage.configure overrides the configuration globally, but I like
    // to keep everything pertaining to Storage together.
    async list(key) {
        // You can also override the protected and private prefixes. Keeping all
        // three as '' means your app will have access to the bucket's root and
        // ALL prefixes, not just public/, private/, protected/.
        // NOTE: make sure your auth and unauth IAM policies are set properly.
        return Storage.list(key, {
            customPrefix: { public: this.prefix },
            pageSize: 'ALL',
        });
    }

    async upload(/* ... */) { /* ... */ }
    // ...
}

In the above, configProdData and configDevData are basically the Amplify configure JSON (typically amplify.prod.json and amplify.dev.json, which you import in StorageService.js). This is also where I define all my actions pertaining to Amplify Storage (list, get, put, etc.).

Now, whenever I want to use the storage service in a component or a custom hook, all I do is:

import StorageService from 'src/services/storage_services';

//initializes storage with <default_bucket>
const storage = new StorageService(); 
const listKeys = await storage.list(prefix); //gets the list of objects from <default_bucket> at public level or potentially all prefixes not just public/

//initializes storage with <default_bucket> at the 'protected' level
const storage = new StorageService({ level: 'protected' }); 
const listKeys = await storage.list(prefix); //gets the list of objects from <default_bucket> at the protected level


//initializes storage with <some_other_bucket>
const storage = new StorageService({ bucket: '<some_other_bucket>' });
const listKeys = await storage.list(prefix); //gets the list of objects from <some_other_bucket> at public level or potentially all prefixes not just public/

//Or
const storage = new StorageService({ bucket: '<some_other_bucket>', prefix: '<my_custom_prefix>' });
const listKeys = await storage.list(prefix); //gets the list of objects from <some_other_bucket> under <my_custom_prefix>

and so on...

Another important thing to note is that all the buckets you want your app to access must be included in the IAM policies of the Auth_Role and Unauth_Role for the Cognito Identity Pool, otherwise it won't work (as defined here). Also, if you plan for your app to access the bucket at other prefixes, or even at the bucket's root level as defined by customPrefix, your IAM policy should be set appropriately: not just with public/, private/, protected/ as the documentation shows, but with all the prefixes your app expects to access, or with no prefix at all if you want your app to be able to access ALL prefixes under a bucket (although that is not recommended, for security reasons).
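For illustration only, the authenticated role's policy covering two buckets might look like the following sketch (bucket names and prefixes are placeholders; the actual policy Amplify generates will differ):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::default-bucket/public/*",
        "arn:aws:s3:::some-other-bucket/my-custom-prefix/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::default-bucket",
        "arn:aws:s3:::some-other-bucket"
      ],
      "Condition": {
        "StringLike": { "s3:prefix": ["public/*", "my-custom-prefix/*"] }
      }
    }
  ]
}
```

Note that s3:ListBucket applies to the bucket ARN itself, while object actions apply to the object ARNs (with the /* suffix).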

Overall, I like this level of control, rather than Amplify having an opinionated way of handling multiple buckets; I'm sure people will have different opinions about a native implementation. This method keeps the implementation flexibility with me, so I can manage the Storage state at any given point simply by passing the bucket name and level. Some may disagree, but it has worked for me. I use this storage service configuration class in pretty much all my React projects, and while it has evolved over time, its implementation and usage are pretty consistent across my projects.

@majirosstefan

majirosstefan commented Mar 20, 2023

One year after my first post, I got it working for mobile projects with React Native ("aws-amplify": "^4.3.11", "aws-amplify-react-native": "^6.0.2").

It took me a few minutes to figure out that I needed to edit the roles, but it's working for the public/ prefix (in this case I did not need to call the Storage.configure method).

const RESOURCE_STORAGE_CONFIG = {
  level: "public" as StorageAccessLevel,
  bucket: RESOURCE_BUCKET_NAME,
  region: "us-east-2",
  expires: 60 * RESOURCE_EXPIRES_MINUTES,
};

const resourceUrl = await Storage.get(resourceS3Url, RESOURCE_STORAGE_CONFIG);

@charlieforward9

charlieforward9 commented Jun 14, 2023

Just read through this thread start to finish. I see there are some good workarounds, but it would definitely be nice to have more up-to-date and flexible documentation, rather than this daunting quote:

Amplify projects are limited to exactly one S3 bucket.


This issue is now closed. Comments on closed issues are hard for our team to see.
If you need more assistance, please open a new issue that references this one.
