If you have the full relative-3D values of two images looking at the same scene (relative x, y, z), along with the extrinsic/intrinsic parameters between them, how do you project the points from one scene into the other scene, in OpenCV?
You can't do that in general. There is an infinite number of 3D points (a line in 3D) that get mapped to one point in image space; in the other image that line does not map to a single point but to a line (see the Wikipedia article on epipolar geometry). You can compute the line that the point has to lie on using the fundamental matrix.
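For illustration, here is a rough sketch of computing that epipolar line with OpenCV's computeCorrespondEpilines, assuming you already have a fundamental matrix F and a pixel location in image 1 (both are placeholder values here):

    import numpy as np
    import cv2

    # Placeholder inputs: fundamental matrix relating image 1 to image 2,
    # and a pixel location (u, v) in image 1.
    F = np.eye(3)                                             # replace with your F
    point1 = np.array([[[100.0, 200.0]]], dtype=np.float32)   # shape (1, 1, 2)

    # Epipolar line in image 2 for that point: coefficients (a, b, c)
    # such that a*x + b*y + c = 0 for every candidate match.
    a, b, c = cv2.computeCorrespondEpilines(point1, 1, F).reshape(3)
    print("epipolar line in image 2: %.3f x + %.3f y + %.3f = 0" % (a, b, c))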
If you do have a depth map, reproject the point into 3D using the equations at the top of the OpenCV page on camera calibration, especially this one (it's the only one you need):

s * [u v 1]^T = K * [R|t] * [X Y Z 1]^T
u and v are your pixel coordinates, the first matrix is your camera matrix (for the image you are looking at currently), the second one is the matrix containing the extrinsic parameters, and Z you know from your depth map; X and Y are the unknowns. Solve for them, and then use the same equation to project the point into your other camera. You can probably use the PerspectiveTransform function from OpenCV to do the work for you, but I can't tell you off the top of my head how to build the projection matrix.
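A minimal sketch of that two-step procedure in NumPy, assuming camera 1 is taken as the reference frame ([I|0]) so its extrinsics drop out; the camera matrices, R, t, and the depth below are placeholder values you would replace with your own calibration data:

    import numpy as np

    # Placeholder calibration data
    K1 = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
    K2 = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
    R = np.eye(3)                   # rotation from camera 1 to camera 2
    t = np.array([0.1, 0.0, 0.0])   # translation from camera 1 to camera 2

    def reproject(u, v, Z):
        """Back-project pixel (u, v) with depth Z from camera 1, project into camera 2."""
        # Solve the projection equation for X and Y: [X, Y, Z] = Z * inv(K1) * [u, v, 1]
        ray = np.linalg.solve(K1, np.array([u, v, 1.0]))
        P_cam1 = Z * ray
        # Transform into camera 2's frame and project with its camera matrix
        p2 = K2 @ (R @ P_cam1 + t)
        return p2[0] / p2[2], p2[1] / p2[2]

    print(reproject(320.0, 240.0, 2.0))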
Let the extrinsic parameters be R and t such that camera 1 is [I|0] and camera 2 is [R|t]. So all you have to do is rotate and then translate point cloud 1 with R and t to have it in the same coordinate system as point cloud 2.
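In NumPy terms, and assuming the cloud is stored as an N x 3 array (R and t below are placeholders), that transform is just:

    import numpy as np

    # Placeholder extrinsics: camera 1 is [I|0], camera 2 is [R|t]
    R = np.eye(3)
    t = np.array([0.1, 0.0, 0.0])

    cloud1 = np.random.rand(100, 3)      # N x 3 points in camera 1's frame

    # Apply X2 = R * X1 + t to every point
    cloud1_in_cam2 = cloud1 @ R.T + t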
Let the two cameras have projection matrices
P1 = K1 [ I | 0]
P2 = K2 [ R | t]
and let the depth of a given point x1 (in homogeneous pixel coordinates) in the first camera be Z. The mapping to the second camera is then
x2 = K2*R*inverse(K1)*x1 + K2*t/Z
There is no OpenCV function to do this. If the relative motion of the cameras is purely rotational, the mapping becomes a homography so you can use the PerspectiveTransform function.
( Ki = [fxi 0 cxi; 0 fyi cyi; 0 0 1] )
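As a rough NumPy sketch of that formula (K1, K2, R, t, and the depth are placeholder values; note that x2 comes out in homogeneous coordinates, so it still has to be divided by its third component):

    import numpy as np

    # Placeholder matrices in the P1 = K1[I|0], P2 = K2[R|t] convention
    K1 = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
    K2 = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
    R = np.eye(3)
    t = np.array([0.1, 0.0, 0.0])

    x1 = np.array([320.0, 240.0, 1.0])   # homogeneous pixel in image 1
    Z = 2.0                              # depth of that pixel in camera 1

    # x2 = K2*R*inverse(K1)*x1 + K2*t/Z  (homogeneous, defined up to scale)
    x2 = K2 @ R @ np.linalg.inv(K1) @ x1 + K2 @ t / Z
    u2, v2 = x2[0] / x2[2], x2[1] / x2[2]
    print(u2, v2)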