Python or C++ predict scripts? #74
Hi, are there Python or C++ prediction scripts available?

Comments
With Python, you can use this script.
Thanks @ThienAnh. That was helpful.
@YaoXinatTHU you are welcome.
@ThienAnh Hi ThienAnh, do you happen to have the prediction code for C++? :-) I will appreciate your response. Thank you so much!! Anyone who has a copy of C++ prediction code can also respond and help me.
@pewpewpeww I will share the C++ code tonight. (I am out of the office right now.)
@pewpewpeww Here is a sample prediction program in C++.
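The original attachment is not reproduced here. Below is only a minimal sketch of what such a Caffe C++ prediction program for the 713 deploy model typically looks like; the file names, the BGR mean values, and the output handling are assumptions, not taken from the uploaded code.

```cpp
#include <caffe/caffe.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cstdio>

using namespace caffe;

int main(int argc, char** argv) {
  if (argc < 2) { std::printf("usage: %s image\n", argv[0]); return 1; }
  Caffe::set_mode(Caffe::GPU);

  // Load the deploy definition and trained weights (file names are placeholders).
  Net<float> net("pspnet101_cityscapes_713.prototxt", TEST);
  net.CopyTrainedLayersFrom("pspnet101_cityscapes.caffemodel");

  Blob<float>* input = net.input_blobs()[0];
  const int H = input->height(), W = input->width();

  // Read the image, resize it to the network crop, and convert to float BGR.
  cv::Mat img = cv::imread(argv[1]);
  cv::Mat resized, fimg;
  cv::resize(img, resized, cv::Size(W, H));
  resized.convertTo(fimg, CV_32FC3);

  // Fill the input blob in CHW order, subtracting an assumed BGR mean.
  const float mean[3] = {103.939f, 116.779f, 123.68f};
  float* data = input->mutable_cpu_data();
  for (int c = 0; c < 3; ++c)
    for (int h = 0; h < H; ++h)
      for (int w = 0; w < W; ++w)
        data[(c * H + h) * W + w] = fimg.at<cv::Vec3f>(h, w)[c] - mean[c];

  net.Forward();

  // The output blob holds per-class scores; take the per-pixel argmax.
  Blob<float>* out = net.output_blobs()[0];
  const float* scores = out->cpu_data();
  const int C = out->channels(), OH = out->height(), OW = out->width();
  cv::Mat labels(OH, OW, CV_8UC1);
  for (int h = 0; h < OH; ++h) {
    for (int w = 0; w < OW; ++w) {
      int best = 0;
      for (int c = 1; c < C; ++c)
        if (scores[(c * OH + h) * OW + w] > scores[(best * OH + h) * OW + w])
          best = c;
      labels.at<uchar>(h, w) = static_cast<uchar>(best);
    }
  }
  cv::imwrite("prediction.png", labels);
  return 0;
}
```

Built against Caffe and OpenCV, this writes a grayscale label map; colouring it with the Cityscapes palette would be a separate step.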
@ThienAnh Thank you very much for your help!! :-) I referred to the code and realised that it requires a Utility.h file to run. Do you have it too?
@ThienAnh How is the C++ inference speed on CPU?
@pewpewpeww Utility.h only contains common functions for reading images and converting std::string to char*. You can ignore it. @jinfagang I haven't tested on CPU. (I expect it would be very slow.)
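For what it's worth, here is a sketch of the kind of helpers Utility.h probably contains, based only on the description above; the names ReadImage and StringToChar are made up for illustration.

```cpp
// Utility.h -- guessed helpers; only what the comment above describes.
#pragma once
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cstring>
#include <string>

// Read an image from disk; returns an empty Mat if the file cannot be read.
inline cv::Mat ReadImage(const std::string& path) {
  return cv::imread(path);
}

// Copy a std::string into a newly allocated char buffer (caller must delete[]).
inline char* StringToChar(const std::string& s) {
  char* out = new char[s.size() + 1];
  std::strcpy(out, s.c_str());
  return out;
}
```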
@ThienAnh Thank you so much. The C++ prediction sample works. When I call the function named "predict", I get the error "Check failed: error == cudaSuccess (2 vs. 0) out of memory". I know this means my GPU memory may be too small (about 11 GB), so I was wondering whether there is a keyword like "batch_size" that I could change. By the way, I use the test prototxt file "pspnet101_cityscapes_713.prototxt", but there is no "batch_size" or anything like that in it. Do you have any idea? Anyway, thank you, thank you, thank you.
@cobbwho The default batch_size is 1, because with batch_size > 1 it would run out of memory.
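For context on where the batch size lives: in a deploy-style prototxt the first input dimension is the batch size, and at inference time the input blob can simply be reshaped before calling Forward(). A sketch, reusing the Net<float> object from the sample above (the 713 crop value is the only assumption):

```cpp
// Assumes "net" is the caffe::Net<float> loaded from the deploy prototxt.
caffe::Blob<float>* input = net.input_blobs()[0];
// First dimension is the batch size; the last two are the crop size,
// so the same call can also shrink the crop (e.g. 713 -> 473) to save memory.
input->Reshape(1, input->channels(), 713, 713);
net.Reshape();  // propagate the new shape through every layer
```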
@jinfagang I tested an image of about 473*473. The GPU version takes about 0.4 s; the CPU version takes about 6.5 s. It's too slow.
@ThienAnh Thank you. Emmm, I'm sorry, the "out of memory" problem I mentioned was a mistake: "batch_size" only applies to the training stage, while my problem happened during prediction. I changed the input size from 713 to 473 and now it works. Anyway, thank you. I noticed another issue, #75, about the prediction giving a wrong result, and I have the same problem. I read the prediction code you uploaded, "PredictSample.zip", and I think it is correct, but the result is still wrong. At least the data is correct before the forward pass; after the forward pass the output data in output_layer is all 0. Seriously, I don't know why. Did you get the right result?
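One generic way to locate where the data turns into zeros is to dump a statistic for every blob after the forward pass. This is only a debugging sketch; DumpBlobStats is a hypothetical helper, not part of PredictSample.zip.

```cpp
#include <caffe/caffe.hpp>
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <string>
#include <vector>

// Print the mean absolute activation of every blob after Forward(),
// to find the first layer whose output collapses to zero.
void DumpBlobStats(const caffe::Net<float>& net) {
  const std::vector<std::string>& names = net.blob_names();
  const std::vector<boost::shared_ptr<caffe::Blob<float> > >& blobs = net.blobs();
  for (size_t i = 0; i < blobs.size(); ++i) {
    const float* data = blobs[i]->cpu_data();
    double sum = 0.0;
    for (int j = 0; j < blobs[i]->count(); ++j) sum += std::fabs(data[j]);
    std::printf("%-40s mean|x| = %g\n", names[i].c_str(),
                sum / std::max(blobs[i]->count(), 1));
  }
}
```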