I created a simple test application to perform translation (T) and rotation (R) estimation from the essential matrix:
- Generate 50 random points.
- Calculate the first projection, pointSet1.
- Transform the points via the matrix (R|T).
- Calculate the new projection, pointSet2.
- Calculate the fundamental matrix F.
- Extract the essential matrix as E = K2^T F K1 (K1, K2 are the internal camera matrices).
- Use SVD to factor E = U D V^T.
Then I calculate restoredR1 = U W V^T and restoredR2 = U W^T V^T, and see that one of them equals the initial R.
But when I calculate the translation vector via [restoredT]_x = U Z U^T, I get a normalized T:

restoredT * max(T.x, T.y, T.z) = T
How can I restore the correct translation vector?
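The pipeline described above can be sketched in NumPy. This is a minimal synthetic version (all variable names and the Rodrigues helper are my own, not from the original post): build E directly from a known (R, T), decompose it with SVD, and observe that the rotation is recovered exactly while the translation comes back only as a direction.

```python
import numpy as np

def rotation(axis, angle):
    """Rodrigues' formula: rotation matrix about `axis` by `angle` radians."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

# Ground-truth motion between the two views.
R = rotation(np.array([0.2, 1.0, 0.1]), 0.3)
T = np.array([0.5, -0.2, 1.0])

# Essential matrix from the motion: E = [T]_x R.
Tx = np.array([[0.0, -T[2], T[1]],
               [T[2], 0.0, -T[0]],
               [-T[1], T[0], 0.0]])
E = Tx @ R

# Decompose E = U D V^T; D should be diag(s, s, 0).
U, D, Vt = np.linalg.svd(E)

W = np.array([[0.0, -1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

# Two rotation candidates; force det = +1 (E is only defined up to sign).
R1 = U @ W @ Vt
R2 = U @ W.T @ Vt
if np.linalg.det(R1) < 0:
    R1, R2 = -R1, -R2

# Translation direction: left null vector of E, i.e. third column of U,
# known only up to sign and scale.
t_hat = U[:, 2]
```

One of R1/R2 matches R exactly, while t_hat is only parallel to T: the scale of T is unobservable from E alone, which is exactly the normalization the question describes.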
I understand! I don't need a real length estimate at this step. When I get the first image, I must set the metric transformation (scale factor) or estimate it from calibration against a known object. Then, when I receive the second frame, I calculate the normalized T and use the known 3D coordinates from the first frame to solve the equation (s*x2, s*y2, 1) = K(R|lambda*T)(X, Y, Z, 1) for lambda; then lambda*T will be the correct metric translation.
I checked it, and it works. So... maybe someone knows a simpler solution?
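The scale-recovery equation above is linear in the two unknowns (lambda and the projective depth s), so it can be solved with one least-squares call. A minimal sketch under assumed values of K, R, and one known metric 3D point (all numbers here are illustrative, not from the post):

```python
import numpy as np

# Assumed calibration and motion; t_hat is the unit-norm translation
# direction recovered from the essential matrix.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                              # identity rotation for the demo
T_true = np.array([0.3, -0.1, 0.5])        # metric translation (unknown to us)
t_hat = T_true / np.linalg.norm(T_true)    # what the E-decomposition gives

# A 3D point with known metric coordinates from the first frame.
X = np.array([0.2, 0.4, 3.0])

# Synthesize its observation in the second image: s * p2 = K (R X + T_true).
proj = K @ (R @ X + T_true)
p2 = proj / proj[2]                        # homogeneous pixel coordinates

# s * p2 = K R X + lambda * K t_hat
# => [K t_hat | -p2] [lambda, s]^T = -K R X   (linear in lambda and s)
A = np.column_stack([K @ t_hat, -p2])
b = -(K @ R @ X)
(lam, s), *_ = np.linalg.lstsq(A, b, rcond=None)

T_metric = lam * t_hat                     # recovered metric translation
```

With more than one known 3D point, the same system can be stacked and solved in one least-squares step, which averages out noise in the observations.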