
Bad/Wrong poses after custom training #64

Open
hannes56a opened this issue Sep 17, 2021 · 3 comments

Comments


hannes56a commented Sep 17, 2021

Hi all, I am trying to train CosyPose on my own dataset.

After playing with the TLESS dataset, I tried with my own data (a single object):

  1. Created synthetic data with "start_dataset_recording.py"
  2. Trained the detector on this synthetic dataset
  3. Trained the coarse and refiner pose models on this synthetic dataset
  4. Ran inference with "webcam" (RealSense D435)
  5. Was happy that the pose estimation looked good on this first try (the RealSense was pointed at my computer screen, which showed the trained object in an open 3D viewer).
    [image: pose estimate overlaid on the object]

As my next step, I tried to render better data with BlenderProc. I adjusted the "bop_object_physics_positioning" example to render the data.
As before, I trained the detector and the coarse and refiner pose models. But the estimated pose is now wrong. And I think it is not just imprecise, I think it is wrong: after visualizing the 6D estimation result, one corner of the object always sits at the center of the object detection's bounding box. Here is an example image:
[image: wrong pose estimate, with one object corner at the center of the bounding box]

What do you think? Is it just a very imprecise estimate, or is something wrong? Perhaps I made an error when rendering the data? Perhaps something is wrong with my model, or with the camera parameters I use? Does anyone have an idea?
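One quick sanity check for the camera-parameter question (a minimal numpy sketch with hypothetical RealSense-like intrinsics, not CosyPose code): project the object-frame origin through the estimated pose and the intrinsics, and see where it lands in the image. If the intrinsics used at inference do not match those used for rendering, the reprojected points will drift away from the rendered overlay.

```python
import numpy as np

def project_points(K, T_co, pts_3d):
    """Project 3D points (in the object frame) into the image.

    K     : 3x3 camera intrinsics matrix
    T_co  : 4x4 object-to-camera pose (rotation + translation)
    pts_3d: (N, 3) object-frame points, same unit as the translation
    """
    pts_h = np.hstack([pts_3d, np.ones((len(pts_3d), 1))])  # homogeneous coords
    cam = (T_co @ pts_h.T).T[:, :3]                         # points in camera frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]                           # perspective divide

# Hypothetical intrinsics and a pose 0.5 m straight in front of the camera
K = np.array([[615.0,   0.0, 320.0],
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]])
T_co = np.eye(4)
T_co[2, 3] = 0.5

uv = project_points(K, T_co, np.array([[0.0, 0.0, 0.0]]))
print(uv)  # the object origin straight ahead projects to the principal point (320, 240)
```

Projecting the model's bounding-box corners this way and drawing them over the detection makes it easy to see whether the error comes from the pose itself or from a mismatch in K.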

Currently I am rendering new data with one model (27) of the TLESS dataset, and I will then try to train on that data.
--> Done. I used the *.ply and *.obj (URDF) files of object 27 from the TLESS dataset and did everything else myself: rendering the data, training the detector, training the coarse/refiner pose models. After that, pose estimation works (a bit imprecise, but OK).
So my issue must be somewhere in my 3D model files.
The last difference I see is that my 3D model only has positive coordinate values (x/y/z). The TLESS model is centered around 0/0/0, so it also has negative values... But surely that cannot be a problem?? Or can it??
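The model origin can in fact matter: the estimated pose is anchored to the model's coordinate frame, so if all vertex coordinates are positive, the frame origin sits at a corner of the object, which is consistent with the symptom of one corner always landing at the center of the detection. A minimal numpy sketch (with hypothetical vertex values, not the actual model files) of recentering a mesh so its bounding-box center lies at the origin, matching the convention of the TLESS models:

```python
import numpy as np

# Hypothetical vertex array of a mesh whose coordinates are all positive,
# e.g. a 40 mm cube occupying x/y/z in [100, 140] instead of sitting around the origin.
vertices = np.array(np.meshgrid([100.0, 140.0],
                                [100.0, 140.0],
                                [100.0, 140.0])).reshape(3, -1).T

# Shift the model so its bounding-box center is at (0, 0, 0).
center = (vertices.min(axis=0) + vertices.max(axis=0)) / 2.0
vertices_centered = vertices - center

print(vertices_centered.min(axis=0), vertices_centered.max(axis=0))
# bounds are now symmetric: [-20, -20, -20] .. [20, 20, 20]
```

If the mesh is recentered this way, the same offset has to be applied consistently everywhere the model is used (rendering, URDF, and evaluation), otherwise the poses will be shifted by exactly that offset.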

Please help!

Here are the files I used for rendering the data / training:
urdfs-versuch03.zip
models.zip

@azad96 Perhaps you have an idea?

azad96 (Contributor) commented Sep 22, 2021

@hannes56a, I am not sure, man. I'd look into it, but I no longer have access to my codebase. What is the size of the training set? Do you have enough variety in it? Did you check the evaluation metrics with paper_training_logs.ipynb?

hannes56a (Author) commented:

Thank you for your answer. I do not think it is an issue with the training or the size of the training set, because I tried the same setup (same training-set size, training parameters, etc.) with another 3D object model, and with that other model it works...

lxwkobe24 commented:

Hello, could you tell me how to train the 2D detector? Thanks!
(Sorry, my English is bad.)
