
Is it possible to create standalone program for that? #271

Open
Veshurik opened this issue Mar 6, 2019 · 13 comments

Comments

Veshurik commented Mar 6, 2019

I'm just curious why no one has made a dedicated program for this yet.
I mean, when you don't need any server connection... just drag the art in and get the upscaled version back.
Or is it really that difficult somehow?..

SekiBetu commented Mar 6, 2019

I don't know what you mean by "that", but waifu2x does have software like this:
https://github.com/lltcggie/waifu2x-caffe/releases

Veshurik (Author) commented Mar 6, 2019

It doesn't work for me, because I have an AMD GPU.

And I mean a program with a graphical interface.

SekiBetu commented Mar 6, 2019

How about this one?
https://github.com/DeadSix27/waifu2x-converter-cpp/releases
It says: "This build should support both CUDA for nVidia and OpenCL for AMD, nVidia & Intel HD"

nagadomi (Owner) commented Mar 6, 2019

Note that waifu2x-converter-cpp supports only the old models.
Most deep learning frameworks support only CUDA (NVIDIA GPUs).
That is one reason I provide the web interface.

Veshurik (Author) commented Mar 9, 2019

> How about this one?
> https://github.com/DeadSix27/waifu2x-converter-cpp/releases
> It says: "This build should support both CUDA for nVidia and OpenCL for AMD, nVidia & Intel HD"

Yeah, thanks, it works!
Does it use the same system as the site bigjpg.com? Because I think those images come out better through bigjpg.com...

But of course, it takes a lot of time to process many images... I think you would spend about the same time processing them on the web...

And, by the way, if you drag and drop many files (I dropped 121, for example), the last image won't be processed and a black screen shows.

SekiBetu commented

> And, by the way, if you drag and drop many files (I dropped 121, for example), the last image won't be processed and a black screen shows.

You can open an issue there: https://github.com/DeadSix27/waifu2x-converter-cpp/issues
because it's not made by nagadomi.

unit2x commented Mar 24, 2019

> Note that waifu2x-converter-cpp supports only the old models.
> Most deep learning frameworks support only CUDA (NVIDIA GPUs).
> That is one reason I provide the web interface.

I'm curious about the GPU you're using for http://waifu2x.udp.jp/
RTX 2080 ? 💃

nagadomi (Owner) commented

@unit2x
waifu2x.udp.jp is hosted on EC2 GPU instances (Tesla M60).

nihui commented Apr 14, 2019

You can try waifu2x-ncnn-vulkan; it works on almost all GPUs.
https://github.com/nihui/waifu2x-ncnn-vulkan

brlin-tw (Contributor) commented Apr 22, 2019

I would like to ask whether a GPU is really needed if one just wants to convert images using the existing in-repo models (i.e. without training), and in that case, what dependencies are required at runtime?

nagadomi (Owner) commented

The code in this repo does not support CPU processing; it would be unbelievably slow.
waifu2x-caffe and other third-party implementations do support CPU processing.

gladkikhartem commented Aug 24, 2019

@nagadomi
Yes, CPU processing is much slower, but it's easy to deploy, and for infrequent access it will be much cheaper to host.
For example, if you use Google Cloud Run, converting a 2560x1600 image using 1 vCPU takes 250-400 seconds and has an on-demand cost of 0.05 cents (0.016 cents is the CPU/RAM cost and the remaining 0.034 cents is the data transfer cost).
https://cloud.google.com/products/calculator/#id=50da2413-8b35-4c8e-8d3d-72bf71216e07

If you are running the API on an AWS G3 instance ($0.75/h), to beat this price you would have to process at least ~5000 conversions/hour to achieve the same CPU/RAM cost per conversion.
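That break-even figure can be sanity-checked with a quick back-of-the-envelope calculation (a sketch using the prices quoted in this comment; they are illustrative, not current cloud quotes):

```python
# Break-even check: how many conversions per hour make a dedicated
# GPU instance match Cloud Run's per-conversion CPU/RAM cost?
# Figures are the ones cited in the comment above.
g3_hourly_usd = 0.75              # AWS G3 on-demand price
cloud_run_per_conv_usd = 0.00016  # 0.016 cents CPU/RAM cost per conversion

break_even = g3_hourly_usd / cloud_run_per_conv_usd
print(break_even)  # ~4687.5 conversions/hour, i.e. roughly the ~5000 cited
```

Below that rate, per-request CPU billing wins; above it, the always-on GPU instance is cheaper per conversion (ignoring data transfer, which is the same either way).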

Of course user-experience matters a lot and fast conversions are essential.

P.S.
If you want to deploy a CPU version of the API, you can use the deploy button in this repo: https://github.com/gladkikhartem/waifurun

nagadomi (Owner) commented

The number of requests to waifu2x.udp.jp over the past 3 days is as follows (counted via reCAPTCHA requests):

2019/08/24: 54890
2019/08/23: 59016
2019/08/22: 62710

That is about 2500 req/hour, so the server should process requests in an average of 1.5 seconds.
Also, half of the current cost is the data transfer cost, which I think cannot be reduced.
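Those rates follow directly from the daily counts above (a quick check, using only the three figures quoted):

```python
# Average request rate and per-request time budget from the daily counts.
daily = [54890, 59016, 62710]  # requests per day, 2019/08/22-24

per_hour = sum(daily) / len(daily) / 24
print(round(per_hour))            # ≈ 2453 req/hour, i.e. "about 2500"
print(round(3600 / per_hour, 2))  # ≈ 1.47 s average budget per request
```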
