
Problems when projecting 3d bounding box during training #1034

Open
UmangBDhanani opened this issue Oct 27, 2023 · 1 comment

Comments

@UmangBDhanani

Hello @xingyizhou. Thank you for your wonderful work; I am following it for my fusion project. However, I am stuck on an issue. I was trying to train a network and used debug=2 for an initial visualization of the data, just to check that the data was being fed in properly, since I am using my own custom KITTI-format data. However, there is an error in the 3D projection, as seen in the figure below.
[Screenshot from 2023-10-27 23-12-55: misaligned 3D bounding box projections in the training debug visualization]

I also tried the debug mode in the convert_kitti_to_coco.py file, and there it ran perfectly, with well-aligned boxes in the image. I also noticed that in convert_kitti_to_coco.py the projection of the 3D bounding box uses the location, dimensions, and rotation from the annotation directly in the calculation. In the training code, however, the ground-truth data (gt_det) saved in the metadata is post-processed before being projected into the image. How is this organized, i.e. why is the ground-truth data post-processed, given that the ground-truth projection during training is faulty, as seen in the image above? Thank you, and I hope I have made clear what I am trying to convey.
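For reference, this is a minimal sketch of the direct projection I am comparing against, using standard KITTI conventions (location = bottom-center of the box in camera coordinates, dim = (h, w, l), rotation_y = yaw around the camera Y axis, P2 = the 3x4 projection matrix from the calib file). The helper names are my own, not the repo's:

```python
import numpy as np

def compute_box_3d(dim, location, rotation_y):
    """Return the 8 corners of the 3D box in camera coordinates, shape (8, 3)."""
    h, w, l = dim
    c, s = np.cos(rotation_y), np.sin(rotation_y)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]], dtype=np.float32)
    # Corners relative to the bottom-center of the box (KITTI convention).
    x_corners = [ l / 2,  l / 2, -l / 2, -l / 2,  l / 2,  l / 2, -l / 2, -l / 2]
    y_corners = [ 0,      0,      0,      0,     -h,     -h,     -h,     -h    ]
    z_corners = [ w / 2, -w / 2, -w / 2,  w / 2,  w / 2, -w / 2, -w / 2,  w / 2]
    corners = np.array([x_corners, y_corners, z_corners], dtype=np.float32)
    return (R @ corners).T + np.array(location, dtype=np.float32)

def project_to_image(pts_3d, P2):
    """Project (N, 3) camera-frame points to (N, 2) pixel coordinates."""
    pts_hom = np.hstack([pts_3d, np.ones((pts_3d.shape[0], 1), dtype=np.float32)])
    pts_2d = pts_hom @ P2.T  # (N, 3) homogeneous image points
    return pts_2d[:, :2] / pts_2d[:, 2:3]

# Example usage with made-up numbers (replace P2 with your calibration matrix):
# corners_2d = project_to_image(compute_box_3d((1.5, 1.6, 3.9), (1.0, 1.5, 20.0), 0.1), P2)
```

When I project the raw annotations this way, the boxes line up with the image, which is why I suspect the post-processing step in the training pipeline.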

@UmangBDhanani
Author

Hello @xingyizhou. I found that the 3D ground-truth bounding box label estimated during training differs slightly in location and rotation from the ground-truth label itself, and this is the reason for the wrong projection of the bounding box in the image. Why does it give a slightly wrong estimate of the location and rotation? Could you point out the potential issue? Thank you.
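For what it's worth, one place where a small rotation discrepancy can creep in is the alpha/rotation_y conversion (standard KITTI relation: alpha = rotation_y - arctan2(x, z), with (x, y, z) the object location in camera coordinates). A minimal round-trip check on the labels, with hypothetical helper names of my own, might look like this:

```python
import numpy as np

def rot_y_to_alpha(rot_y, x, z):
    """Observation angle from global yaw, wrapped to [-pi, pi)."""
    alpha = rot_y - np.arctan2(x, z)
    return (alpha + np.pi) % (2 * np.pi) - np.pi

def alpha_to_rot_y(alpha, x, z):
    """Global yaw recovered from observation angle, wrapped to [-pi, pi)."""
    rot_y = alpha + np.arctan2(x, z)
    return (rot_y + np.pi) % (2 * np.pi) - np.pi

# Example with a made-up label; substitute values from your own annotation file.
x, y, z = 2.3, 1.6, 15.0
rot_y_label = 0.8
alpha = rot_y_to_alpha(rot_y_label, x, z)
rot_y_back = alpha_to_rot_y(alpha, x, z)
print(rot_y_label, rot_y_back)  # should agree up to floating-point error
```

If the round trip already disagrees with the label for some objects, the issue is likely in how the rotation is encoded/decoded rather than in the projection itself.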
