How to compute the rotation and translation between 2 cameras?

I am aware of the chessboard camera calibration technique, and have implemented it.

If I have 2 cameras viewing the same scene, and I calibrate both simultaneously with the chessboard technique, can I compute the rotation matrix and translation vector between them? How?


If you have the 3D camera-frame coordinates of the corresponding points, you can compute the optimal rotation matrix and translation vector by solving for the rigid body transformation between the two point sets (the classic closed-form solution is the SVD-based Kabsch algorithm).
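A minimal sketch of that SVD-based fit in Python with NumPy, assuming `A` and `B` are 3×N arrays of corresponding 3D points expressed in each camera's coordinate frame:

```python
import numpy as np

def rigid_transform_3d(A, B):
    """Find R, t minimizing ||R @ A + t - B|| for two (3, N) point sets."""
    centroid_A = A.mean(axis=1, keepdims=True)
    centroid_B = B.mean(axis=1, keepdims=True)
    # Cross-covariance of the centered point sets
    H = (A - centroid_A) @ (B - centroid_B).T
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    # Guard against a reflection (det(R) = -1)
    if np.linalg.det(R) < 0:
        Vt[2, :] *= -1
        R = Vt.T @ U.T
    t = centroid_B - R @ centroid_A
    return R, t
```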


If you are already using OpenCV, why not use cv::stereoCalibrate?

It returns the rotation matrix and translation vector between the two cameras. The only thing you have to do is make sure that the calibration chessboard is seen by both cameras.

The exact procedure is shown in the .cpp samples provided with the OpenCV library (I have version 2.2, and the samples were installed by default in /usr/local/share/opencv/samples).

The relevant example is called stereo_calib.cpp. Although it doesn't explain clearly what it is doing (for that, you may want to look at "Learning OpenCV"), it is something you can build on.
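A rough sketch of the call itself, using the Python bindings; the input names here (`objpoints`, `imgpoints1`, `imgpoints2`, `K1`, `d1`, `K2`, `d2`, `image_size`) are placeholders for data you collect from your own chessboard detections and per-camera calibrations:

```python
import cv2

# objpoints: list of (N, 3) chessboard corner coordinates in the board frame
# imgpoints1, imgpoints2: lists of the corresponding detected corners per view
# K1, d1, K2, d2: intrinsics/distortion from prior per-camera calibration
# image_size: (width, height) of the calibration images
retval, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    objpoints, imgpoints1, imgpoints2,
    K1, d1, K2, d2, image_size,
    flags=cv2.CALIB_FIX_INTRINSIC,  # keep the already-estimated intrinsics fixed
)
# R and T map points from the first camera's frame into the second's
```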


If I understood you correctly, you have two calibrated cameras observing a common scene, and you wish to recover their relative spatial arrangement. This is possible (provided you find enough image correspondences), but only up to an unknown scale factor on the translation. That is, we can recover the rotation (3 degrees of freedom, DOF) but only the direction of the translation (2 DOF). This is because we have no way to tell whether the projected scene is large and the cameras far apart, or the scene is small and the cameras close. In the literature, the 5-DOF arrangement is termed relative pose or relative orientation (Google is your friend).

If your measurements are accurate and in general position, six point correspondences may be enough to recover a unique solution. A relatively recent algorithm does exactly that:

Nistér, D., "An efficient solution to the five-point relative pose problem," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 6, pp. 756–770, June 2004. doi: 10.1109/TPAMI.2004.17
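OpenCV ships an implementation of this five-point solver; a minimal sketch, assuming `pts1` and `pts2` are N×2 arrays of matched pixel coordinates and `K` is a 3×3 intrinsic matrix shared by both cameras:

```python
import cv2

# pts1, pts2: (N, 2) float arrays of matched pixel coordinates
# K: 3x3 intrinsic matrix (assumed the same for both cameras here)
E, inlier_mask = cv2.findEssentialMat(
    pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0
)
# Decomposes E and resolves the fourfold ambiguity via cheirality checks;
# t is returned as a unit vector (the scale is unrecoverable, as noted above)
n_inliers, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=inlier_mask)
```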


Update:

Use a structure-from-motion / bundle-adjustment package such as Bundler to solve simultaneously for the 3D structure of the scene and the relative camera parameters.

Any such package requires several inputs:

  1. The camera calibrations that you already have.
  2. 2D pixel locations of points of interest in each camera (use an interest point detector such as Harris or DoG, the first stage of SIFT).
  3. Correspondences between the points of interest from each camera (use a descriptor such as SIFT, SURF, or SSD to do the matching; see the sketch after this list).
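A minimal sketch of steps 2 and 3 with OpenCV (SIFT lives in the main package from OpenCV 4.4 on; the filenames are placeholders):

```python
import cv2

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

# Detect interest points and compute descriptors (SIFT here)
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep only distinctive matches (Lowe's ratio test)
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Pixel coordinates of the correspondences, ready to feed a package like Bundler
pts1 = [kp1[m.queryIdx].pt for m in good]
pts2 = [kp2[m.trainIdx].pt for m in good]
```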

Note that the solution is only determined up to an overall scale ambiguity. You'll thus need to supply one real distance measurement, either between the cameras or between a pair of objects in the scene.

Original answer (applies primarily to uncalibrated cameras, as the comments kindly point out):

This camera calibration toolbox from Caltech can solve for and visualize both the intrinsics (lens parameters, etc.) and the extrinsics (where each camera was positioned when each photo was taken). The latter is what you're interested in.

The Hartley and Zisserman blue book is also a great reference. In particular, you may want to look at the chapter on epipolar geometry and the fundamental matrix, which is available free online at the link.
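For the uncalibrated case, the fundamental matrix can be estimated directly from point matches; a minimal sketch, reusing `pts1` and `pts2` as (N, 2) arrays of matched pixel coordinates:

```python
import cv2
import numpy as np

# pts1, pts2: (N, 2) arrays of matched pixel coordinates; no intrinsics needed
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)

# For a point x1 in image 1, l2 = F @ [x, y, 1] is its epipolar line in
# image 2; corresponding points satisfy x2^T F x1 = 0
x1 = np.append(np.asarray(pts1)[0], 1.0)
l2 = F @ x1
```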
