Replies: 10 comments 36 replies
-
Interesting idea! I already use part of this in my current codebase (I create a uuid for each file and store it with the item it belongs to), so I'm definitely open to it! I'll think about it more in the coming days and will try to write up my thoughts over the weekend. :)
-
You could also track the user id in the `storage.files` table. It is available in the JWT in the authorization header. The column can be nullable for files uploaded with the admin secret.
-
I'll just add to the SELECT permissions: what we can do is allow all rows. That means users are only allowed to read the metadata of the file in `storage.files`.
-
Regarding INSERT and UPDATE permissions: we can set those directly in Hasura too. This is an example of permissions on a table:
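The snippet that went with this comment didn't survive here. As a hedged illustration only (the table, role, and column names are hypothetical), an INSERT permission in Hasura's metadata might look like:

```yaml
# Hypothetical sketch of an insert permission in Hasura metadata.
# Table/role/column names are assumptions, not from the discussion.
- table:
    schema: public
    name: product_images
  insert_permissions:
    - role: user
      permission:
        # Only allow linking files to products the user owns;
        # X-Hasura-User-Id comes from the JWT session variables.
        check:
          product:
            user_id:
              _eq: X-Hasura-User-Id
        columns:
          - product_id
          - file_id
```

UPDATE permissions follow the same shape under `update_permissions`, with an additional `filter` for which rows may be updated.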
-
Not sure if this will be of interest/inspiration, but I created a filesystem (file and folder downloads/uploads) based on Hasura here: https://github.com/SkinyMonkey/hasura-fs. It's aiming at some features that are not needed by HBP, but just in case it gives you some ideas. I did exactly the model you described (events to delete files on S3, etc.).
-
Our proposed solution above uses Hasura permissions: if a user can read the row in `storage.files`, they can also get the file. Do we want to complement this approach with a "signed URL" approach, where a user can get a signed URL that is only valid for x amount of time? Do we allow one or the other (access key vs signed URLs)? Or do we skip signed URLs for now? Personally, I've never used signed URLs when developing an app, so from personal experience I think the access key approach is enough.
-
vs
I see these 2 versions throughout the discussion; not sure if it is intended, but I personally think we should go with the second version. Users should be able to define permissions for uploading the file itself (not everyone should be able to upload a file). When storage receives an upload request, it tries to insert the file's metadata into `storage.files`.
I would avoid having references across different schemas that need to be tracked (migrations, metadata, etc.), simply because this might prove hard if we split the repos. Ideally, schemas are a way of scoping specific features/services.
-
Are you aiming for small or big files?
-
I think it's a very interesting idea; it solves a lot of the limitations that the current storage has! I do have a few small questions/remarks:
-
Maybe I just read over it, but how will the Storage console change with this new API?
-
In this post I'm outlining Storage version 2. We've discussed this internally, to better support storage with Hasura, and we think you'll like it.
What we'd like is feedback on things we have missed and suggestions for improvements. Or, if you like the approach, please tell us that too. Any feedback is appreciated.
TLDR
We'll create a new `files` table in a new `storage` schema. This would be the new approach for uploading a file:

1. Upload the file: `nhost.storage.upload(file)`.
2. The Storage service stores the file and inserts its metadata into `storage.files`.
3. A `file_id` is returned, referencing the newly inserted record in `storage.files`.
4. Update the `file_id` column in the database with the `file_id`.

This approach uses only Hasura permissions. If you can read the file in the `storage.files` table (using Hasura permissions), you can also construct the URL to read the file.

Full version
Store all file metadata in a new `storage` schema. This means that all file uploads must go via the Storage API, so the file metadata and the actual files stay in sync.
File storage
Files are stored on an S3 server, in a single predefined bucket; one bucket per Storage service.
Restrictions
All file uploads and downloads go via the Storage server, never directly via S3.
Schema
Schema: `storage`

Tables:

- `files`
- `migrations` (for future schema changes)

Files table
The `files` table in the `storage` schema will have the following columns:

How to store a file
Let's say you have two tables: `products` and `product_images`. The way you'd add a file to a product is via the `product_images` table. After uploading a file and getting a `file_id` back, you need to insert or update the `product_images` table with the file id.

Permissions
Anyone can upload a file.
All other permissions are handled by Hasura permissions using GraphQL:

- create: inserting a new row with the `file_id`
- read: reading the file's metadata (to construct the URL to read the actual file)
- update: updating an existing row with a new `file_id`
- delete: deleting the file from `storage.files`

Access Token
We use the `access_token` saved in `storage.files` to construct our URL like this: `/storage/file/${file.accessToken}/${file.name}`.

This means that if you can read the metadata in the `storage.files` table, you're also allowed to get the actual file.

This is secure because the `accessToken` is a non-guessable UUID.

CDN
Since the `accessToken` is part of the URL, there is no need to do "look-up requests" on incoming file requests. Instead, the file can be served directly, which means a CDN can easily be put in front of Storage.

Endpoints and Functions
- `/storage/upload` - `POST` - to upload a new file
- `/storage/file` - `GET` - to get a file

Code example
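As a hedged sketch of the upload-then-link flow described above (the client shape `nhost.storage.upload`, the `graphql.request` helper, and the mutation name are assumptions, not a final API):

```typescript
// Hypothetical sketch: upload a file, get a file_id back, then link it
// to a product via the product_images table. The client API shape and
// mutation name are assumptions based on the proposal, not a final API.

interface NhostLike {
  storage: { upload(file: unknown): Promise<{ file_id: string }> };
  graphql: { request(query: string, variables?: object): Promise<unknown> };
}

async function attachImageToProduct(
  nhost: NhostLike,
  productId: string,
  file: unknown
): Promise<string> {
  // 1. Upload: Storage stores the file and inserts its metadata
  //    into storage.files, returning the new file_id.
  const { file_id } = await nhost.storage.upload(file);

  // 2. Link the uploaded file to the product. Hasura's insert/update
  //    permissions on product_images apply here.
  await nhost.graphql.request(
    `mutation ($productId: uuid!, $fileId: uuid!) {
       insert_product_images_one(object: { product_id: $productId, file_id: $fileId }) {
         id
       }
     }`,
    { productId, fileId: file_id }
  );

  return file_id;
}
```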
Get products and the products' images:
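The original query snippet wasn't preserved here; this is a hedged sketch of what it might look like, assuming the `products`/`product_images` tables from the example above (field names are assumptions):

```typescript
// Hypothetical GraphQL query fetching products together with the
// metadata needed to build each image URL (accessToken + name).
const GET_PRODUCTS_WITH_IMAGES = `
  query {
    products {
      id
      name
      product_images {
        file {
          accessToken
          name
        }
      }
    }
  }
`;
```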
Display images:
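The display snippet also didn't survive; as a hedged sketch, rows shaped like the query result can be turned into `<img>` tags using the `/storage/file/${file.accessToken}/${file.name}` URL pattern from the Access Token section (the row shape is an assumption):

```typescript
// Hypothetical sketch: build <img> tags from query results using the
// /storage/file/:accessToken/:name URL pattern described above.
interface FileRow {
  accessToken: string; // non-guessable UUID stored in storage.files
  name: string;
}

function renderProductImages(images: { file: FileRow }[]): string {
  return images
    .map(
      ({ file }) =>
        `<img src="/storage/file/${file.accessToken}/${file.name}" alt="${file.name}" />`
    )
    .join("\n");
}
```

Because the access token is already in the row you read via Hasura, no extra request is needed to build the URL.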
Overview