Unable to train raptor and dog character #17
How much memory does your computer have? How many threads are you running? That label size is very wrong. Did you recompile the caffe code in the external library? If not, try that.
Hi Neo-X,
Your computer has plenty of memory. I have run the code successfully on Ubuntu 14.04 and 16.04. The code doesn't really use CUDA. I have seen this issue before, and it always has to do with a mismatch between the network dimensions and the controller's action/state output/input sizes. Double-check that your new biped controller's action dimensions match the network description file.
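To illustrate the kind of mismatch being described, here is a minimal, hypothetical C++ sketch of a sanity check comparing the controller's sizes to the network's layer dimensions. The function names and parameters are assumptions for illustration, not the project's actual API:

```cpp
// Hypothetical sketch: verify that the controller's state/action sizes match
// the dimensions declared in the network description file. A mismatch here is
// the kind of problem that shows up later as bogus label/data sizes.
#include <cstdio>
#include <cstdlib>

void CheckNetControllerDims(int state_size, int action_size,
                            int net_input_dim, int net_output_dim) {
  if (state_size != net_input_dim) {
    std::fprintf(stderr, "State size %d != net input dim %d\n",
                 state_size, net_input_dim);
    std::abort();
  }
  if (action_size != net_output_dim) {
    std::fprintf(stderr, "Action size %d != net output dim %d\n",
                 action_size, net_output_dim);
    std::abort();
  }
}
```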
Hi Neo-X, thanks for the reply, but the problem is that I can't even train the original raptor and dog characters using the original code. I downloaded it from git and compiled it straight away, with no changes. Could there be a mismatch between the dog and raptor network descriptions and their controllers?
It's not immediately obvious why the label size would be such a large value. Maybe you can step into the code and see what is causing the label size to be so large. It is probably the reason for the large memory consumption as well, since the code allocates memory to store that many labels.
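For reference, here is a rough, simplified reconstruction (not the exact Caffe source) of the kind of check that fails at memory_data_layer.cpp:93 in the error reported in this thread. If label_size_ holds a garbage value, labels.size() / label_size_ becomes an enormous bogus count, the check aborts, and the same bad size drives the oversized allocations that exhaust memory:

```cpp
// Simplified approximation of the failing check in memory_data_layer.cpp.
// In Caffe this is a glog CHECK that aborts with a stack trace, as in the log.
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <vector>

void AddDataCheck(const std::vector<float>& labels, std::size_t label_size,
                  std::size_t num) {
  if (labels.size() / label_size != num) {
    std::fprintf(stderr,
                 "Check failed: labels.size() / label_size_ == num (%zu vs. %zu) "
                 "Number of labels must be the same as data.\n",
                 labels.size() / label_size, num);
    std::abort();
  }
}
```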
You might need to rebuild the protoc files in caffe and then recompile caffe.
@Neo-X Do you mean regenerating caffe.pb.cc and caffe.pb.h?
Yes.
@xbpeng I think I have solved the label size issue; it was something to do with AddData() in memory_data_layer.cpp. But I still have memory issues. I am confused about the relationship between trainer_replay_memory_size and the number of tuples. What is the upper limit on the total number of tuples? It doesn't seem to be set by trainer_replay_memory_size, because from what is shown in the terminal, "Num Tuples:" will exceed trainer_replay_memory_size.
trainer_replay_memory_size is the size of the replay memory, i.e. the number of most recent tuples to store. We don't currently set any limit on the total number of tuples collected. You can set the maximum number of training iterations with trainer_max_iter.
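A minimal sketch (not the project's actual implementation) of why the "Num Tuples:" counter can exceed trainer_replay_memory_size: the counter tracks every tuple ever collected, while storage is a fixed-size ring buffer that overwrites the oldest entries. The class and member names below are illustrative assumptions:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Tuple { std::vector<double> state, action; double reward; };

class ReplayMemory {
 public:
  explicit ReplayMemory(std::size_t capacity)
      : buffer_(capacity), total_added_(0) {}

  void Add(const Tuple& t) {
    buffer_[total_added_ % buffer_.size()] = t;  // overwrite the oldest tuple
    ++total_added_;                              // grows without bound
  }

  // Capped by the replay memory size (trainer_replay_memory_size).
  std::size_t NumStored() const {
    return std::min(total_added_, buffer_.size());
  }

  // Unbounded; this is what a "Num Tuples:" style counter would report.
  std::size_t TotalCollected() const { return total_added_; }

 private:
  std::vector<Tuple> buffer_;
  std::size_t total_added_;
};
```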
This looks like integer wraparound. I think I fixed this before by rebuilding the protoc files for caffe.
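As an illustration of the wraparound suggestion (whether this is the exact mechanism in this issue is an assumption), a negative or corrupted intermediate value converted to an unsigned size type wraps to an enormous number, which then shows up both in the failed check and in the memory allocation:

```cpp
#include <cstddef>
#include <cstdio>

int main() {
  // Hypothetical corrupted value, e.g. read through stale generated headers.
  int bogus = -12345;
  // Converting a negative int to size_t is well-defined and wraps modulo 2^64,
  // producing an absurdly large count like the one in the log below.
  std::size_t count = static_cast<std::size_t>(bogus);
  std::printf("count = %zu\n", count);
  return 0;
}
```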
I've recompiled caffe, but there seem to be other problems in the source code. I found something wrong in the step() function during training: after sample initialization, the network doesn't update and the program stops. I have no idea where to modify the code.
If you can provide the output and some more details, I may be able to help.
Is this compiled in Debug mode? There should be more output. Or maybe the number of training updates is set to 0 in the arg_file?
I remember I ran into some problems when uncompressing the external files, but the issue still exists.
Hi, I have tried training both the raptor and the dog using the original code, and neither was successful due to large memory usage. Ubuntu keeps killing the program after it runs out of memory and swap.
I tried reducing memory usage by changing trainer_replay_mem_size and trainer_num_init_samples, and I get the error:
Actor Iter 0
Update Net 0:
F0302 12:29:19.209728 6773 memory_data_layer.cpp:93] Check failed: labels.size() / label_size_ == num (210847752626046076 vs. 32) Number of labels must be the same as data.
*** Check failure stack trace: ***
Aborted (core dumped)
I can run the simulation but can't train it. I didn't change any of the code. Can you tell me what went wrong? Is this caused by the version of caffe?
Thanks a lot!