
Is there a way to set up a 'Resume' feature when the program or computer crashes? #1256

Closed
avalbrec opened this issue Dec 14, 2024 · 16 comments
Labels
state:Done This issue has been resolved/dismissed type:Enhancement New feature or request

Comments

@avalbrec

I know this is probably asking too much, but considering a movie on Real-ESRGAN Plus takes like 5 DAYS to complete, SURE WOULD BE NICE to set up something in the Qt version that lets you resume at certain points. There is just a LOT of chance for a program this heavy to go wrong during the process. Just a thought. The animated stuff is easy, but for live-action movies, the burden goes up at least 10x.

@github-actions github-actions bot added the state:Backlog This issue will be worked on in the future label Dec 14, 2024
@k4yt3x k4yt3x added the type:Enhancement New feature or request label Dec 14, 2024
@k4yt3x
Owner

k4yt3x commented Dec 15, 2024

Yeah, I know that's a feature someone's gonna ask for at some point. The issue is that if your computer crashes or something, and you're encoding into, let's say, MP4, then FFmpeg wouldn't get a chance to finish writing the video trailer and you'll have a broken file. I don't think I'll want to write something to fix that.

Then let's say you use a container that stays usable even if no trailer is written, like MKV or FLV. Then you only need to continue processing and concatenate the two videos together. This can be done manually relatively easily, and chances are you will need to manually review the file to make sure you're combining them correctly anyway. I can provide an option for you to set an offset of where to start processing, to skip the first n frames, but I think that's the most that can be done on my end?
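
The manual concatenation described above can be sketched with FFmpeg's concat demuxer. This is only a sketch: the file names are hypothetical, and it assumes both parts were encoded with identical codec settings (otherwise stream copy will fail or glitch at the seam):

```shell
# part1.mkv: the salvaged portion of the interrupted run.
# part2.mkv: the re-processed remainder. Both names are hypothetical.
# Build the list file that the concat demuxer reads:
printf "file '%s'\n" part1.mkv part2.mkv > concat_list.txt

# Stream-copy both parts into one output without re-encoding
# (commented out here since the input files don't exist):
# ffmpeg -f concat -safe 0 -i concat_list.txt -c copy combined.mkv

cat concat_list.txt
```

Reviewing the seam afterwards is still advisable, since a few frames near the crash point may be corrupt.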

@avalbrec
Author

If you did MKV based on M2TS like on a Blu-ray, they stack up great. All we really need is a recipe script that can create a render map (breaking the movie up into fourths or fifths), with each part rendered separately. They could then be concatenated, as you say, in a shell/terminal. To make it even easier, you could probably disable audio and subtitles, since these can easily be added back into the recombined, rendered file later with FFmpeg. Starting at a specific frame could get weird, as there is a chance of corruption in the last few frames before a crash - probably easier to just remake that section of the movie (from said recipe text file). Just a thought. Your app is amazing, but with live action, the 'all or nothing' has led to nothing on nearly 10 tries so far, and my PC is pretty stable most of the time.

Thanks for anything you can do.

@Iss-in

Iss-in commented Dec 21, 2024

This is really necessary. Conversions often take hours, and there's a very good chance they will be interrupted if you don't have a consistent power supply (came here after it happened to me :( ).

I know it's impossible to resume after a crash, but can we have an option to pause and resume, if feasible?

@k4yt3x
Owner

k4yt3x commented Dec 23, 2024

> but can we have an option to pause and resume, if feasible?

We already have that. I assume what you want is to pause and save the state, then be able to close the program and resume from where you left off the next time you open it?

The issue with this is that I can't save the encoder state (if there's a way, I'm not aware of it), which means we'll have to start encoding a new video and then concatenate the two together. I have concerns this will produce artifacts where the two videos connect, and it will also add a lot of complex code to implement the concatenation.

I'll see if there's an intermediate file format that will allow me to close and reopen at a later time to keep appending frames.

@Iss-in

Iss-in commented Dec 23, 2024

Yeah, with backup solutions like a UPS, you barely have enough time to pause and hibernate/shut down with such intensive tasks.

@k4yt3x
Owner

k4yt3x commented Jan 18, 2025

I looked into the containers and video codecs, and while it seems possible to keep appending frames to the video, there are a few issues:

  • This feature only works for a selection of container formats and codecs. If we were to support recovering from a crash/power loss, we'd need to distinguish between them to see whether recovery is possible for the current container/codec pair.
  • Since the processing isn't terminated gracefully, the output video file will be left in an unknown state. The encoder will not have had a chance to write the trailer, and the filesystem might not have flushed the cached data to disk. I'm not sure files will be recoverable in all cases, even in a format that's supposed to work.
  • How the recovery mechanism should be implemented is an open question. Should we count the number of frames in the output video and skip those frames when continuing processing? Should we store the processing information in a file? Should we write every x seconds of video into a separate file and then concatenate them when done?
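
The first of those options (counting frames already written and skipping them) can be roughed out with ffprobe. A sketch only: the file name is hypothetical, so the actual ffprobe call is shown commented out with a hard-coded example value standing in for it:

```shell
# Count the decodable frames in the interrupted output file
# (output.mkv is a hypothetical name). -count_frames makes ffprobe
# decode the stream and fill in nb_read_frames:
# frames=$(ffprobe -v error -count_frames -select_streams v:0 \
#   -show_entries stream=nb_read_frames \
#   -of default=nokey=1:noprint_wrappers=1 output.mkv)
frames=4213        # example value standing in for the ffprobe call

# Back off a little in case the tail of the file is corrupt, then
# resume processing the source from this frame index:
safety_margin=48
resume_from=$((frames - safety_margin))
echo "$resume_from"
```

The safety margin is an arbitrary illustration; how many trailing frames are actually damaged depends on the codec and on what the filesystem managed to flush.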

Even if we can answer all these questions, I think it will add a lot of complexity to the software and require a lot of time to implement. While being able to recover from a power loss is nice to have, I am not inclined to implement it right now; it's not a critical feature, and I'd rather not work on this when there are more critical issues to address.

If power losses or brownouts occur very frequently in your area and you don't have a big enough UPS to absorb that risk, perhaps consider renting hardware in the cloud?

I'm still open to ideas, so feel free to keep adding comments to this thread. I'm closing this issue since I'm not planning to implement it, at least for now.

@k4yt3x k4yt3x closed this as completed Jan 18, 2025
@github-actions github-actions bot added state:Done This issue has been resolved/dismissed and removed state:Backlog This issue will be worked on in the future labels Jan 18, 2025
@Pete4K

Pete4K commented Jan 18, 2025

I think the better idea would be a feature where the queue is saved after every finished file, so that you can reopen it after a crash. That doesn't sound too complex, I think - you'd just have a "Continue with the last configuration" option. HandBrake has such a feature, too.
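
The save-after-each-file idea can be sketched in a few lines of shell (file names are hypothetical): the queue logs each completed file immediately, so a restarted run knows what to skip.

```shell
queue="a.mkv b.mkv c.mkv"   # hypothetical batch of inputs
: > queue.done              # completed-file log, truncated at start

for f in $queue; do
    # ... process "$f" here ...
    echo "$f" >> queue.done   # record completion immediately,
done                          # so a crash loses at most one file

# After a crash, a restarted run would skip entries found in
# queue.done and re-process only the rest of the queue.
cat queue.done
```

A real implementation would also persist the processing settings alongside the file list, which is what the reply below gets at.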

@k4yt3x
Owner

k4yt3x commented Jan 18, 2025

That will require serializing the processing configs and saving them to a file, which will have to be implemented anyway if we want to implement presets. It will be significantly easier to implement than trying to continue processing a half-finished video.

@Pete4K

Pete4K commented Jan 18, 2025

OK, then in the end it is not important for me, because I can re-add every file very simply, and when a job breaks I can see from the file size that the last file didn't fully finish. So it is not one of the most important things. If I had to do this in Topaz it would be a very important feature, because it takes so much time to set preferences for every file.

@avalbrec
Author

The main reason for this is to help with individual conversions that take a week or more to finish. Saving the queue is silly, because unless you are doing BUNCHES of files, you can pretty easily tell what didn't complete (it won't have a timecode in your file browser). Anyway, I've run into an easy fix on my end. I am using FFmpeg to segment each film into 5 or more parts and processing each in turn (a failure then only damages the one that was being processed, which can be restarted). Then I place all of these back together in Kdenlive, recombine them, and scale them down to 1080p. There is likely no good way to 'pick up where you left off', but you can break the job up into smaller chunks.
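
The splitting step of this workaround can be sketched with FFmpeg's segment muxer. The names and runtime below are hypothetical, and the actual ffmpeg call is commented out since it needs a real input file:

```shell
total=6000   # hypothetical source runtime in seconds (100 min)
parts=5
seg=$((total / parts))   # target length of each chunk, in seconds
echo "$seg"

# Stream-copy the source into roughly $parts chunks, dropping audio
# (-an) and subtitles (-sn) so they can be muxed back in from the
# original after upscaling. With stream copy, cuts land on keyframes,
# so chunk lengths are approximate:
# ffmpeg -i movie.mkv -an -sn -c copy -f segment \
#   -segment_time "$seg" -reset_timestamps 1 part_%03d.mkv
```

Each `part_NNN.mkv` can then be upscaled independently and reassembled, so a crash costs only one chunk.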

@k4yt3x
Owner

k4yt3x commented Jan 19, 2025

@avalbrec To be able to somewhat reliably recover progress on a single video, something similar would be required if it were to be included in v2x. The difference being that all the logic of splitting, managing, and merging files would have to be implemented in C++. That's quite a bit of code straying from the core functionality. Perhaps I, or someone else, can make some scripts for it later.

@k4yt3x
Owner

k4yt3x commented Jan 19, 2025

I do have plans to add pre- and post-processing hooks to execute commands. Maybe those could later be expanded to automate some of this.

@avalbrec
Author

> @avalbrec To be able to somewhat reliably recover progress on a single video, something similar would be required if it were to be included in v2x. The difference being that all the logic of splitting, managing, and merging files would have to be implemented in C++. That's quite a bit of code straying from the core functionality. Perhaps I, or someone else, can make some scripts for it later.

I think FFmpeg can help if cutting between keyframes. Another option is for users to just use Kdenlive to extract chunks of the video with the 'Render Selection' feature. The biggest improvements need to be in how Vulkan is used to speed the overall thing up. I have a decent card (an NVIDIA RTX 2070 Super) and it wants 2 DAYS to render 10 minutes with Real-ESRGAN Plus. Why can't some of my CPU cores lend a hand?
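
Extracting a single chunk on keyframe boundaries can be sketched like this (timestamps and file names are hypothetical, and the ffmpeg call is commented out since it needs a real input). With stream copy, ffmpeg snaps the cut to a keyframe at or before the seek point, so chunks start cleanly without re-encoding; exact `-ss`/`-to` behavior varies a little between FFmpeg versions:

```shell
start=00:20:00   # hypothetical chunk boundaries
end=00:40:00

# Input-side seek plus stream copy; the cut begins at the nearest
# keyframe at or before $start:
# ffmpeg -ss "$start" -to "$end" -i movie.mkv -c copy chunk2.mkv

echo "chunk2.mkv covers $start to $end"
```

Re-cutting on exact (non-keyframe) timestamps would require re-encoding that chunk, which defeats the purpose here.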

@k4yt3x
Owner

k4yt3x commented Jan 19, 2025

I don't think your CPU cores will help much anyway. They're terrible at processing ML workloads like this. You can try processing with the CPU alone and it'll go at something like 0.02 frames/s. You'll also need to install vulkan-swrast to do that, which most systems don't come with, and I have no clue how to install it on Windows.

The above is for pure software inferencing. If you have an Intel CPU with integrated graphics or something, it might be a bit faster, but not by much. It pales in comparison to a dGPU.

It's normal that realesrgan-plus is slow: 1) it's for real-life footage, and 2) it's a 4x model. Your 2070S is also 6 years old, with only 8 GB of VRAM. Still, two days sounds a bit much. Can you check that you have the right device selected (your 2070, not your Intel iGPU)?

@avalbrec
Author

> I don't think your CPU cores will help much anyway. They're terrible at processing ML workloads like this. You can try processing with the CPU alone and it'll go at something like 0.02 frames/s. You'll also need to install vulkan-swrast to do that, which most systems don't come with, and I have no clue how to install it on Windows.
>
> The above is for pure software inferencing. If you have an Intel CPU with integrated graphics or something, it might be a bit faster, but not by much. It pales in comparison to a dGPU.
>
> It's normal that realesrgan-plus is slow: 1) it's for real-life footage, and 2) it's a 4x model. Your 2070S is also 6 years old, with only 8 GB of VRAM. Still, two days sounds a bit much. Can you check that you have the right device selected (your 2070, not your Intel iGPU)?

It's seeing the right one. For anime stuff, 5 FPS is great and works pretty well (similar to using NLMeans denoising), but for anything live-action, AI still sucks, IMO. For really old films that will never get a Blu-ray release, it can really help, but between the weird artifacting, the painful slowness, and the 1-in-5 chance a job will actually complete, it's just not ready for real use yet.

@k4yt3x
Owner

k4yt3x commented Jan 19, 2025

How well the models work is a completely different topic. I don't train the models, so there's not much I can help with on that front.
