Users only have a certain amount of storage space in Permanent, but the current SFTP implementation does not enforce that limit until the point of upload to S3 (which doesn't happen until the file has been fully uploaded to the SFTP service).
This means someone with 4 GB of storage on their account could attempt to upload a 100 GB file, and would not be informed of the failure until all 100 GB had been "transferred" (rather than the moment they exceeded 4 GB in a given upload).
Besides being a poor user experience, this is also a vulnerability: anybody can upload a file of any size, and that file takes up temporary storage space (which has a cost, and is currently a limited resource as described in #145).
We should have the SFTP service track how much data is "in flight" for a given user, and enforce restrictions. This becomes a much more complex problem in a world with multiple instances of the SFTP service, since the instances would need to be aware of one another's activities, and we currently use in-memory data stores rather than a centralized datastore such as Redis.
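For a single instance, the tracking could look something like the sketch below: a per-user counter of bytes buffered locally but not yet in S3, checked on every incoming write so an over-quota upload fails at the first excess chunk instead of after the full transfer. All names here (`InFlightTracker`, `QuotaExceededError`, the `remainingQuota` callback) are hypothetical, not from the actual service code, and this single-process approach is exactly what breaks down with multiple instances as noted above.

```typescript
// Hypothetical sketch, not the real sftp-service implementation.
// Tracks bytes "in flight" per user on THIS instance only.

class QuotaExceededError extends Error {
  constructor(public readonly userId: string) {
    super(`upload would exceed remaining storage for user ${userId}`);
  }
}

class InFlightTracker {
  // userId -> bytes currently buffered locally (accepted but not yet in S3)
  private inFlight = new Map<string, number>();

  constructor(
    // Supplies the user's remaining quota in bytes; in practice this would
    // query the Permanent API rather than a local callback.
    private remainingQuota: (userId: string) => number,
  ) {}

  // Call for every incoming write chunk; throws before accepting data
  // that would push the user past their remaining quota.
  reserve(userId: string, bytes: number): void {
    const current = this.inFlight.get(userId) ?? 0;
    if (current + bytes > this.remainingQuota(userId)) {
      throw new QuotaExceededError(userId);
    }
    this.inFlight.set(userId, current + bytes);
  }

  // Call once the file reaches S3, or when an upload is aborted,
  // so the reserved bytes are returned to the user's budget.
  release(userId: string, bytes: number): void {
    const current = this.inFlight.get(userId) ?? 0;
    this.inFlight.set(userId, Math.max(0, current - bytes));
  }
}
```

With a 4 GB quota, a first 3 GB reservation succeeds and a further 2 GB reservation is rejected immediately. Moving the `Map` into a shared store like Redis (with atomic increment-and-check) is what the multi-instance case would require.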