Did you solve it? I'm hitting the same problem. At first it told me:
"mSources.size() != mModels.size()".
So I removed one model, but then it said:
"String (COCO_17_17) does not correspond to any model (COCO_18, DOME_18, ...)"
Removing the other model instead gives the same error.
So I checked openpose_caffe_train/src/caffe/openpose/poseModel.cpp
and found there is no "COCO_17" or "COCO_17_17" defined there.
I also tried "COCO_23_17", but it doesn't work either.
Should I switch to a different release of openpose_caffe_train?
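The first error above comes from a consistency check between two of the prototxt fields. A minimal sketch of that check (assuming, as the error message suggests, that `models` and `sources` are semicolon-separated lists that must have the same length):

```python
# Sketch of the mSources.size() != mModels.size() check reported above:
# "models" and "sources" in op_transform_param are semicolon-separated
# lists, and the data layer expects one LMDB source per listed model.

def split_param(value: str):
    """Split a semicolon-separated prototxt string param into entries."""
    return [v for v in value.split(";") if v]

models = split_param("COCO_17;COCO_17_17")   # two models...
sources = split_param("/home/christian/Desktop/openpose_train/dataset/lmdb_coco")  # ...one source

if len(models) != len(sources):
    print(f"mismatch: {len(models)} models vs {len(sources)} sources")
```

So removing one model silences this first check, but the remaining model string still has to be one of the names compiled into poseModel.cpp, which is why the second error appears next.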
Issue Summary
Hello, I'm trying to train the OpenPose model on a custom dataset. I created the LMDB file following the tutorial at https://github.com/CMU-Perceptual-Computing-Lab/openpose_train/tree/master/training with the normal COCO option (1->b)->COCO). In d_setLayers.py I set sAddFoot, sAddMpii, sAddFace, sAddHands, and sAddDome to 0 so that training uses only the 17 keypoints of the COCO images. The generated pose_training.prototxt then contains models: "COCO_17;COCO_17_17", but poseModel.cpp has no model with these indices. When I manually set the models, e.g. models: "COCO_25_17" in the .prototxt, training fails with the error below. Do you have any suggestions on where the error comes from? Is it required to resize input images to a specific size (e.g. divisible by 3, etc.)?
Besides: is training with 17 keypoints on a custom dataset actually supported by OpenPose training?
```
F0727 17:19:08.886023 31787 eltwise_layer.cpp:34] Check failed: bottom[0]->shape() == bottom[i]->shape() bottom[0]: 10 18 46 46 (380880), bottom[1]: 10 66 46 46 (1396560)
*** Check failure stack trace: ***
    @ 0x7fccb2e6d0cd google::LogMessage::Fail()
    @ 0x7fccb2e6ef33 google::LogMessage::SendToLog()
    @ 0x7fccb2e6cc28 google::LogMessage::Flush()
I0727 17:19:08.897316 31790 metaData.cpp:175] datasetString: COCO; imageSize: [640 x 428]; metaData.annotationListIndex: 62257; metaData.writeNumber: 0; metaData.totalWriteNumber: 121251; metaData.epoch: 0
    @ 0x7fccb2e6f999 google::LogMessageFatal::~LogMessageFatal()
    @ 0x7fccb33a76a7 caffe::EltwiseLayer<>::Reshape()
    @ 0x7fccb32c89d3 caffe::Net<>::Init()
    @ 0x7fccb32cad6e caffe::Net<>::Net()
    @ 0x7fccb326d5ac caffe::Solver<>::InitTrainNet()
    @ 0x7fccb326db83 caffe::Solver<>::Init()
    @ 0x7fccb326de9f caffe::Solver<>::Solver()
    @ 0x7fccb3262ce1 caffe::Creator_AdamSolver<>()
    @ 0x55e1879a99f8 (unknown)
    @ 0x55e1879a5f31 (unknown)
    @ 0x7fccb1c6db97 __libc_start_main
    @ 0x55e1879a6aba (unknown)
```
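The failing check in EltwiseLayer compares the shapes of two blobs being summed. A short sketch of the arithmetic in the log: both blobs agree on batch size (10) and spatial size (46 = 368 crop / stride 8), but they disagree on channels (18 vs 66), which is what happens when the label layers and the network layers were generated for different model definitions. The "17 keypoints + 1 background = 18 heatmaps" reading of the 18 channels is an assumption on my part:

```python
# Reproduce the two element counts printed in the Eltwise error above.
crop_size, stride = 368, 8
spatial = crop_size // stride          # 46, matching "46 46" in the log

batch = 10
heatmap_channels = 18                  # assumed: 17 keypoints + 1 background
other_channels = 66                    # channel count of the second blob

def blob_count(n, c, h, w):
    """Total number of elements in an N x C x H x W Caffe blob."""
    return n * c * h * w

print(blob_count(batch, heatmap_channels, spatial, spatial))  # 380880
print(blob_count(batch, other_channels, spatial, spatial))    # 1396560
```

Since only the channel dimension differs, the mismatch points at inconsistent model strings (and hence inconsistent channel counts) rather than at the input image size, which the data layer crops to 368x368 anyway.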
```
op_transform_param {
  stride: 8
  max_degree_rotations: "45"
  crop_size_x: 368
  crop_size_y: 368
  center_perterb_max: 40.0
  center_swap_prob: 0.0
  scale_prob: 1.0
  scale_mins: "0.333333333333"
  scale_maxs: "1.5"
  target_dist: 0.600000023842
  number_max_occlusions: "2"
  sigmas: "7.0"
  models: "COCO_17;COCO_17_17"
  sources: "/home/christian/Desktop/openpose_train/dataset/lmdb_coco"
  probabilities: "1.0"
  source_background: "/home/christian/Desktop/openpose_train/dataset/lmdb_background"
  prob_only_background: 0.0
  media_directory: ""
  normalization: 0
  add_distance: false
}
```
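One thing worth checking in the block above: several fields are quoted strings rather than numbers, which suggests (an assumption about the format, not something confirmed in this thread) that they are semicolon-separated per-model lists. Under that reading, listing two models would require two entries in each of those fields as well:

```python
# Hedged sketch: count entries in the quoted (assumed per-model) fields of
# the op_transform_param above. With models: "COCO_17;COCO_17_17", every
# such field would need two semicolon-separated entries to line up.
quoted_fields = {
    "models": "COCO_17;COCO_17_17",
    "sources": "/home/christian/Desktop/openpose_train/dataset/lmdb_coco",
    "probabilities": "1.0",
    "sigmas": "7.0",
    "max_degree_rotations": "45",
    "number_max_occlusions": "2",
}

lengths = {k: len(v.split(";")) for k, v in quoted_fields.items()}
print(lengths)  # only "models" has 2 entries; every other field has 1
```

If that reading is right, the generated prototxt is internally inconsistent even before the unknown "COCO_17"/"COCO_17_17" names come into play.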