How to convert a 3D point cloud (extracted from sparse 3D reconstruction) from pixels to millimeters?

I have found a 3D point cloud using sparse 3D reconstruction (like this example: http://www.mathworks.com/help/vision/ug/sparse-3-d-reconstruction-from-multiple-views.html ).
Now I am wondering how to convert the (X, Y, Z) coordinates in this point cloud to actual real-world measurements in millimeters.

Answers (1)

Dima Lisin on 25 Aug 2014
Hi Kijo,
In this example the (X, Y, Z) world coordinates are already in millimeters.
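The units come from the calibration target: if the checkerboard's square size was specified in millimeters during calibration, the reconstructed coordinates are in millimeters too. A minimal sketch (the square size and board size values below are placeholders, not from the linked example):

```matlab
% The reconstruction inherits its world units from the calibration target:
% specifying the checkerboard square size in millimeters makes the
% triangulated (X, Y, Z) values come out in millimeters as well.
squareSize  = 29;        % square side length in mm (example value)
boardSize   = [7, 10];   % checkerboard size in squares (example value)
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
```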
  7 Comments
Dima Lisin on 25 Oct 2014
Unfortunately, no. The sparse reconstruction example uses a single calibrated camera with a checkerboard in the scene, while the example that uses a calibrated stereo pair of cameras does dense reconstruction. However, you should be able to combine these two examples and implement sparse reconstruction using a calibrated stereo pair. The steps are as follows:
  1. Calibrate your stereo cameras. If you have R2014b, use the Stereo Camera Calibrator app.
  2. Take a pair of stereo images.
  3. Undistort each image. You do not need to rectify them.
  4. Detect, extract, and match point features.
  5. Use the triangulate function to get the 3D coordinates of the matched points. You need to pass the stereoParameters object into triangulate, and the resulting 3D coordinates will be relative to the optical center of camera 1.
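The steps above can be sketched as follows (a minimal sketch: stereoParams is assumed to have been exported from the Stereo Camera Calibrator app, and the image file names are placeholders):

```matlab
% Sparse 3D reconstruction from a calibrated stereo pair (sketch).
% Assumes stereoParams is a stereoParameters object from the
% Stereo Camera Calibrator app; file names are placeholders.
I1 = imread('left.png');    % image from camera 1
I2 = imread('right.png');   % image from camera 2

% Step 3: undistort each image (no rectification needed).
I1 = undistortImage(I1, stereoParams.CameraParameters1);
I2 = undistortImage(I2, stereoParams.CameraParameters2);

% Step 4: detect, extract, and match point features.
pts1 = detectSURFFeatures(rgb2gray(I1));
pts2 = detectSURFFeatures(rgb2gray(I2));
[f1, vpts1] = extractFeatures(rgb2gray(I1), pts1);
[f2, vpts2] = extractFeatures(rgb2gray(I2), pts2);
pairs    = matchFeatures(f1, f2);
matched1 = vpts1(pairs(:, 1));
matched2 = vpts2(pairs(:, 2));

% Step 5: triangulate. The coordinates are in the calibration's world
% units (mm if the square size was given in mm), relative to the
% optical center of camera 1.
worldPoints = triangulate(matched1, matched2, stereoParams);
```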
Luca on 29 Oct 2014
Hi Dima, thanks for the step-by-step guide. I followed it and I am happy with the results.
The only problem is that the Z values in the point cloud are negative!
Could that be because I am using a different checkerboard pattern? I am getting this warning: "Warning: The checkerboard must be asymmetric: one side should be even, and the other should be odd. Otherwise, the orientation of the board may be detected incorrectly." I am using Bouguet's pattern from his Caltech toolbox.
I also realized that if I change the order of the images when reading and extracting features, I get positive Z values in the point cloud, but the results are way off and don't match the scene. If I read the right images first and then the left images when I stereo-calibrate the cameras, then I have to read them in the same order when extracting features, right?


