
Cleanup for storage-users uploads #9797

Open
JonnyBDev opened this issue Aug 13, 2024 · 3 comments
Labels
Category:Enhancement Add new functionality

Comments

@JonnyBDev

Is your feature request related to a problem? Please describe.

Our users upload large files, and sometimes their uploads fail because of a bad internet connection or because they close their notebook. The partial data is kept in the folder /storage/users/uploads. We had a full disk last weekend because one user uploaded some large videos and the upload was interrupted. We use external storage (S3) for spaces, and because of that storage engine we did not equip the VM with much hardware, such as large disks. Since multiple uploads were interrupted, our disk filled to 100% and we couldn't do anything, not even run the clean command in the container, because the disk was full. Looking at our monitoring graphs, we saw a steady linear increase over the last two months. This could have been prevented if we had some mechanism to automatically clean non-processing and expired uploads.
See for more information on that topic

Describe the solution you'd like

One maintainer gave an example of a fix for this: a goroutine, like those already used for other periodic tasks, that regularly runs the clean command for non-processing and expired files. Being able to configure the job would be the cherry on top.
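A minimal sketch of what such a periodic job could look like, assuming it simply shells out to the CLI command mentioned later in this thread. The hourly interval and the runCleanup helper are assumptions; inside ocis the loop would run as a goroutine in the storage-users service and call the cleanup logic directly, with the interval read from configuration:

```go
// Sketch of a periodic upload-session cleanup loop (not the actual
// ocis implementation).
package main

import (
	"context"
	"log"
	"os/exec"
	"time"
)

// runCleanup invokes the documented CLI command to remove
// non-processing and expired upload sessions.
func runCleanup(ctx context.Context) error {
	cmd := exec.CommandContext(ctx, "ocis", "storage-users", "uploads", "sessions", "--clean")
	out, err := cmd.CombinedOutput()
	log.Printf("cleanup output: %s", out)
	return err
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	interval := time.Hour // assumption: would come from service config

	// In a real service this loop would be spawned with `go func() { ... }()`
	// alongside the service's other work.
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if err := runCleanup(ctx); err != nil {
				log.Printf("upload session cleanup failed: %v", err)
			}
		}
	}
}
```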

Describe alternatives you've considered

One alternative would be a cronjob on the host system that execs into the container and runs the command.
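A hedged sketch of that alternative, assuming a Docker deployment; the container name ocis and the schedule are assumptions:

```
# Clean non-processing and expired upload sessions every night at 03:00
0 3 * * * docker exec ocis ocis storage-users uploads sessions --clean
```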

Additional context

See for more information on that topic

@kobergj kobergj moved this from Qualification to Feature Requests in Infinite Scale Team Board Aug 16, 2024
@kobergj kobergj added the Category:Enhancement Add new functionality label Aug 16, 2024
@mmattel
Contributor

mmattel commented Sep 2, 2024

Also see: #9962 (Patch Release 5.0.7)

@MichaelSasser

I had the same issue, though I had alerts in place that warned me about it early, and I was able to clean the local upload storage manually by running the command: ocis storage-users uploads sessions --clean. You can take a look at the sessions in question first by omitting --clean.
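For example, listing first is a safe way to review what would be removed before actually cleaning:

```
# Inspect upload sessions without deleting anything
ocis storage-users uploads sessions

# Remove the stale sessions
ocis storage-users uploads sessions --clean
```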

Having some kind of retention mechanism that kicks in when files pass a configured expiry time, or when the upload storage grows beyond a configured size limit, would really be appreciated.

One tip I got from a retired sysadmin some time ago: if you create a dummy file (e.g., with dd), just 10 GB in size or so, you can delete it when you run out of storage and deal with your problem instead of being overwhelmed by everything you can no longer do with a full disk. The same works with a swapfile on the same partition (just remember to re-create it).
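For example (the file path and size are arbitrary):

```
# Pre-allocate ~10 GB of deletable emergency headroom
dd if=/dev/zero of=/ballast bs=1M count=10240
# When the disk fills up: rm /ballast, fix the problem, then re-create it
```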

@mmattel
Contributor

mmattel commented Nov 4, 2024

With the next release you can use the ocis postprocessing restart command as described in the admin docs; a backport is planned.

It states: # Restarts all uploads where postprocessing is finished, but upload is not finished.
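Based on that quoted help text, an invocation along these lines should restart those uploads (the -s/--step flag and the "finished" step name are my assumptions from the admin docs example; verify against your release):

```
ocis postprocessing restart -s "finished"
```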
