added kwarg to fwp to allow for constant tensor output e.g., in the case of precip=0 #184
Conversation
Haha. Nice new wrench in the annoying saga of constant output. Can we just return a memory error if the output tensor exceeds 2GB instead of doing the constant output check?
No haha, we've talked about this. The problem is that the tensors mid-network are memory hogs thanks to their filter channels, and we don't inspect this mid-network. I guess we could inspect this mid-network, but it seems like a pain.
Oh right. I guess the easiest way to inspect mid-network would be to add a memory check to the inference loop through the layers. We could also just check the model layers for the max number of filter channels and use this multiplier, plus the array input size, to check max memory?
Yeah, the arithmetic in the second option sounds complicated and you would have to know information about the s/t enhancement. Adding a memory check in the inference loop sounds like an option but I don't loveeee it... Hard to explain why. I guess we just don't really understand exactly what's going on here, and checking bad outputs seems like the most sure-fire way of doing this?
I think if we know for sure it's a 2GB cap then the second option wouldn't be too bad. We could just do
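For illustration, the "second option" being discussed (estimate the worst-case mid-network tensor from the input shape, the model's widest filter count, and the s/t enhancement factors, then compare against a 2GB cap) might look roughly like the sketch below. All names here are hypothetical, not sup3r's actual API, and the worst-case formula is the simplification the thread itself flags as incomplete for custom layers.

```python
# Hypothetical sketch of a pre-flight memory check. Assumes a 5D input
# of shape (n_obs, height, width, n_time, n_features) and that the
# largest intermediate tensor is bounded by the fully enhanced spatial/
# temporal dims times the widest filter channel dimension.
TWO_GB = 2 * 1024**3


def estimate_max_tensor_bytes(input_shape, max_filters, s_enhance,
                              t_enhance, dtype_bytes=4):
    """Upper-bound estimate of the largest mid-network tensor in bytes."""
    n_obs, height, width, n_time, _ = input_shape
    max_elems = (n_obs * height * s_enhance * width * s_enhance
                 * n_time * t_enhance * max_filters)
    return max_elems * dtype_bytes


def check_memory(input_shape, max_filters, s_enhance, t_enhance):
    """Raise MemoryError if the estimated worst case exceeds the 2GB cap."""
    est = estimate_max_tensor_bytes(input_shape, max_filters,
                                    s_enhance, t_enhance)
    if est > TWO_GB:
        raise MemoryError(f'Estimated max tensor size {est / 1024**3:.1f} '
                          'GB exceeds the 2GB cap')
```

As the rest of the thread points out, this only bounds convolutional layers; concatenation-style custom layers that add filter channels would break the estimate.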
but then you have to go scrape the model for the enhancement factors and the biggest combination of enhancement * filter count... And this is only considering the convolutional layers; what if we introduce new custom layers with weirder things going on? The concatenation layers, for example, will add a new filter channel, and it's hard to parse that out.
Well, the models are already scraped for enhancement factors, so we're partially there (see sup3r/sup3r/models/abstract.py, line 110 at 92ac031).
Yeah yeah, but the rest is challenging: you need to check every layer for what size tensor it would output. You could have a bigger tensor prior to s/t enhancement because of a ton of filters, or something else going on because of funky custom layers like the high-res concat layers. I think the current approach is fine.
Yeah that's fair. |
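The approach the thread settles on is checking for bad (constant) outputs directly rather than estimating memory. A minimal, hypothetical version of such a check is sketched below; the function name and the kwarg are illustrative of the pattern, not sup3r's exact implementation.

```python
import numpy as np


def is_constant_output(arr):
    """Return True if every element of the output array is identical."""
    flat = np.asarray(arr).ravel()
    return flat.size > 0 and bool(np.all(flat == flat[0]))


def validate_output(arr, allow_constant_output=False):
    """Raise on a constant output unless explicitly allowed.

    A kwarg like this lets legitimately constant fields (e.g. precip=0
    everywhere) pass through, which is the use case this PR targets.
    """
    if is_constant_output(arr) and not allow_constant_output:
        raise RuntimeError('Forward pass produced a constant output tensor')
```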
added kwarg to fwp to allow for constant tensor output e.g., in the case of precip=0