Real-time processing for driller pose estimation #25
Hi @JRvilanova,
The depth image is saved as a PNG in uint16, in millimeter scale. To verify, you can increase the debug arg to 3; it will then save some PLY files in the debug_folder, and you can check whether those point clouds make sense.
@wenbowen123 the depth image from the RealSense camera is already uint16 and in millimeter scale.
I tried what you suggested (including adding the mask manually) and the results are not good, neither in the .ply file nor in the pose estimation. I attach photos of the image with the pose (the one with a little point) and the mask. I am currently using the .obj from the dataset. Could this be an issue?
@unnamed333user @JRvilanova if you upload the debug folder, I can take a look.
@JRvilanova The usual issue I have observed is that the scale of the CAD object is larger by a factor of 1000. As @wenbowen123 suggested, it's easier to diagnose if you load the .ply from the debug folder alongside your CAD object and see whether the scales match up. Please make sure the scale is the same and give it a try; that should resolve it.
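The scale check above can be reduced to comparing bounding-box diagonals of the two point sets. A minimal numpy sketch (the vertex arrays here are toy stand-ins for the CAD mesh vertices and the debug .ply points, which you would load with a mesh library in practice):

```python
import numpy as np

def bbox_diagonal(points):
    """Length of the axis-aligned bounding-box diagonal of an (N, 3) point set."""
    extents = points.max(axis=0) - points.min(axis=0)
    return float(np.linalg.norm(extents))

# Toy stand-ins: a CAD model exported in millimeters vs. the same
# geometry expressed in meters (as the debug point cloud would be).
cad_vertices_mm = np.array([[0.0, 0.0, 0.0], [100.0, 50.0, 200.0]])
scene_points_m = cad_vertices_mm / 1000.0

ratio = bbox_diagonal(cad_vertices_mm) / bbox_diagonal(scene_points_m)
print(round(ratio))  # a ratio near 1000 means the CAD model needs rescaling
```

A ratio close to 1 means the units already agree; a ratio near 1000 is the millimeters-vs-meters mismatch described in the comment above.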
@wenbowen123 @abhishekmonogram for me everything is normal.
@unnamed333user I don't have permission to access the Google Drive file.
Let's try that link again.
@unnamed333user I'm in contact with Mona over email (this seems to be the same data); please coordinate with her.
Is your issue solved, @JRvilanova?
No, I am still getting the same error; I haven't solved it. I see there is a huge difference between the depth.png I get with the demo data and the one I get with my own data. Here is a link to my debug folder: https://drive.google.com/drive/folders/1313FyxSBlh0fu4PPcB68oow0In_uWiIr?usp=sharing Moreover, this is the code I am using to convert the RealSense depth data (which, when saved to .png, has 3 channels): `depth = reader.get_depth(i)` Another possible source of the problem could be the K matrix. Should I use the one from the depth camera or the color camera?
Hi @JRvilanova, try my code to grab the RGB-D images from the RealSense. You don't have to convert anything here, because the depth image from the RealSense camera is already uint16 and in millimeters (the same format as the depth image from the Kinect). The snippet lost its formatting when pasted; the recoverable fragments were:

```python
import cv2
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start(config)  # `config` was set up earlier in the original snippet (elided)
i = 0
try:
    ...  # capture loop (elided in the original comment)
finally:
    ...  # cleanup (elided in the original comment)
```

My intrinsic setting is:
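On the K-matrix question above: since the depth is typically aligned to the color stream, the color-stream intrinsics are the ones to use. A small sketch of assembling K; the pyrealsense2 calls are shown only as comments because they need a live camera, and the numeric values below are made up for illustration:

```python
import numpy as np

def k_from_intrinsics(fx, fy, ppx, ppy):
    """Build a 3x3 pinhole intrinsic matrix from focal lengths and principal point."""
    return np.array([[fx,  0.0, ppx],
                     [0.0, fy,  ppy],
                     [0.0, 0.0, 1.0]])

# With a live camera the numbers come from pyrealsense2, e.g.:
#   profile = pipeline.start(config)
#   intr = (profile.get_stream(rs.stream.color)
#                  .as_video_stream_profile().get_intrinsics())
#   K = k_from_intrinsics(intr.fx, intr.fy, intr.ppx, intr.ppy)
# Placeholder values for a D455-like color stream (illustrative only):
K = k_from_intrinsics(615.0, 615.0, 320.0, 240.0)
print(K)
```

If the depth frames are instead used unaligned, the depth-stream intrinsics would apply; mixing the two is a common source of bad poses.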
I tried your code and I get better results, but it struggles in the same way as yours: the starting poses are way better than the ones 10 seconds later. I uploaded the new debug files to the shared folder. Thanks a lot @unnamed333user for the code. I was using it to try real-time processing; however, when I try to do the pose estimation with these images, the program asks me to use 'uint8', as 'uint16' is not supported. Do you have any hint why that could be happening?
Hi @JRvilanova, I didn't encounter the issue you mentioned :O Glad that your result is better than before, but I think we are still missing something here, because we are testing an object that was in the training set, so there's no way the result should be that bad.
@JRvilanova you did not use the correct driller model. You need to get a 3D model of your test driller (e.g. with BundleSDF), or you can try the model-free setup.
@trungpham2606 none of the testing objects are from the training set. All the failures so far are because of something being set up wrongly.
@unnamed333user are you a colleague of Mona? As I mentioned, she emailed me the same data.
In vis_score.png, the first column is the rendered view of your model. The second column is the test image. If the two objects are different, chances are that the wrong model is used.
No. In addition, I'm testing with the potted meat can while Mona is testing the driller. I am using the model from the YCB webpage.
Adding a .obj file of the object allowed me to detect it correctly under very specific conditions. That solved what I asked in this issue, so I will close it and open a new one about the poor performance after the object leaves the scene or after fast movements. Thanks so much for your help, guys!
Hi @JRvilanova, please tell me how you generate the mask.
There are multiple ways to go about it; you could use SAM or any other segmentation network.
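Whichever segmentation tool generates the mask, the saved file should usually be a single-channel binary image matching the RGB resolution (an assumption about the expected input format, not confirmed in this thread). A small numpy sanity check along those lines:

```python
import numpy as np

def check_mask(mask, rgb_shape):
    """Sanity-check a segmentation mask before feeding it to pose estimation."""
    assert mask.ndim == 2, "mask should be single-channel"
    assert mask.shape == rgb_shape[:2], "mask must match the RGB resolution"
    assert mask.dtype == np.uint8, "mask should be uint8"
    values = set(np.unique(mask).tolist())
    assert values <= {0, 255}, "expected a binary 0/255 mask"
    return True

# Toy mask: pretend a segmentation network marked the object in this region.
toy_mask = np.zeros((480, 640), dtype=np.uint8)
toy_mask[100:200, 150:300] = 255
print(check_mask(toy_mask, (480, 640, 3)))
```

Masks saved as 3-channel or 0/1-valued images are an easy thing to miss and fail in confusing ways downstream.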
Your link does not work.
@wenbowen123 I think I've run into the same problem. I'm sure the camera K matrix is correct, the CAD model is in mm units, and the depth images correspond to the RGB images. But the pose estimates are wrong.
@AdventurerDXC your depth image is in the wrong format. You can visualize its generated point cloud, scene_raw.ply, in the debug folder.
Thanks for such an amazing job!
I am currently trying to use a RealSense D455 camera in a real-time application; however, I get no pose estimation with my own Kinect driller. For this purpose I am using the mask and the .obj from the kinect_driller_seq that you suggest in the repo. I also updated the K matrix to the one corresponding to my camera. I have looked at the depth values provided in the dataset and they are completely different from what I get from the RealSense (the ones from the dataset look completely black). Could this have any impact?
What could be going wrong, or how can I change the way I am approaching this application?
Thanks a lot in advance!