Using OpenCV descriptor matches with findFundamentalMat

I posted earlier with a problem regarding the same program but received no answers. I've since corrected the issue I was experiencing at that point, only to face a new problem.

Basically I am auto-correcting stereo image pairs for rotation and translation using an uncalibrated approach. I use a feature detection algorithm such as SURF to find points in the two images of a left/right stereo pair, and then match those points between the two images using their SURF descriptors. I then need to use these matched points to find the fundamental matrix, which I can use to correct the images.
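For context, a minimal sketch of such a detect/describe stage might look like this (the image names, Hessian threshold, and matcher choice are placeholders, not my exact code; in OpenCV 2.4+ SURF lives in the nonfree module rather than features2d):

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>

using namespace cv;
using namespace std;

void detectAndDescribe( const Mat& leftImg, const Mat& rightImg,
                        vector<KeyPoint>& keypoints1, vector<KeyPoint>& keypoints2,
                        Mat& descriptors1, Mat& descriptors2 )
{
    SurfFeatureDetector detector( 400 );            // Hessian threshold is a placeholder
    detector.detect( leftImg,  keypoints1 );
    detector.detect( rightImg, keypoints2 );

    SurfDescriptorExtractor extractor;
    extractor.compute( leftImg,  keypoints1, descriptors1 );
    extractor.compute( rightImg, keypoints2, descriptors2 );
}

// A matcher for the descriptors (SURF descriptors are float, so FLANN works):
// Ptr<DescriptorMatcher> descriptorMatcher = DescriptorMatcher::create( "FlannBased" );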

My issue is this. My matching points are stored in a single vector of descriptor matches, which is then filtered for outliers. findFundamentalMat takes as input two separate arrays of matching points. I don't know how to convert from my vector to my two separate arrays.
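For reference, here is my understanding of the relevant pieces of the API (just a sketch of the call I am trying to reach, not working code):

// Each DMatch links one descriptor (and hence one keypoint) in the query image
// to one in the train image:
DMatch m = filteredMatches[0];
int leftIdx  = m.queryIdx;    // index into keypoints1 (left image)
int rightIdx = m.trainIdx;    // index into keypoints2 (right image)

// findFundamentalMat wants two equal-length point arrays, where the i-th entries
// of each form one correspondence, e.g.:
//   Mat F = findFundamentalMat( Mat(points1), Mat(points2), CV_FM_RANSAC, 3.0, 0.99 );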

cout << "< Matching descriptors..." << endl;
vector<DMatch> filteredMatches;
crossCheckMatching( descriptorMatcher, descriptors1, descriptors2, filteredMatches, 1 );
cout << filteredMatches.size() << " matches" << endl << ">" << endl;

The vector of matches is created by crossCheckMatching, which is defined as follows:

void crossCheckMatching( Ptr<DescriptorMatcher>& descriptorMatcher,
                         const Mat& descriptors1, const Mat& descriptors2,
                         vector<DMatch>& filteredMatches12, int knn=1 )
{
    filteredMatches12.clear();
    vector<vector<DMatch> > matches12, matches21;
    // Match in both directions: image 1 -> image 2 and image 2 -> image 1.
    descriptorMatcher->knnMatch( descriptors1, descriptors2, matches12, knn );
    descriptorMatcher->knnMatch( descriptors2, descriptors1, matches21, knn );
    for( size_t m = 0; m < matches12.size(); m++ )
    {
        bool findCrossCheck = false;
        for( size_t fk = 0; fk < matches12[m].size(); fk++ )
        {
            DMatch forward = matches12[m][fk];

            // Keep the forward match only if the backward match points back to it.
            for( size_t bk = 0; bk < matches21[forward.trainIdx].size(); bk++ )
            {
                DMatch backward = matches21[forward.trainIdx][bk];
                if( backward.trainIdx == forward.queryIdx )
                {
                    filteredMatches12.push_back(forward);
                    findCrossCheck = true;
                    break;
                }
            }
            if( findCrossCheck ) break;
        }
    }
}

The matches are cross-checked and stored in filteredMatches.
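(As an aside, OpenCV 2.4 and later can do this symmetric cross-check directly inside BFMatcher; a minimal sketch, assuming such a version is available:)

// crossCheck = true keeps only mutual best matches, similar to crossCheckMatching above.
BFMatcher matcher( NORM_L2, true );
vector<DMatch> crossCheckedMatches;
matcher.match( descriptors1, descriptors2, crossCheckedMatches );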

cout << "< Computing homography (RANSAC)..." << endl;
vector<Point2f> points1; KeyPoint::convert(keypoints1, points1, queryIdxs);
vector<Point2f> points2; KeyPoint::convert(keypoints2, points2, trainIdxs);
H12 = findHomography( Mat(points1), Mat(points2), CV_RANSAC, ransacReprojThreshold );
cout << ">" << endl;

The homography is found using a threshold that is set at run time on the command line.
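A minimal sketch of reading that threshold at run time (the argument position is hypothetical; the real program's argument layout is not shown here):

#include <cstdlib>

int main( int argc, char** argv )
{
    // Hypothetical argument position; adapt to the actual argument layout.
    double ransacReprojThreshold = ( argc > 3 ) ? std::atof( argv[3] ) : 3.0;  // pixels
    // ... detection, matching, then:
    // H12 = findHomography( Mat(points1), Mat(points2), CV_RANSAC, ransacReprojThreshold );
    return 0;
}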

//Mat drawImg;
if( !H12.empty() ) // filter outliers
{
    vector<char> matchesMask( filteredMatches.size(), 0 );
    vector<Point2f> points1; KeyPoint::convert(keypoints1, points1, queryIdxs);
    vector<Point2f> points2; KeyPoint::convert(keypoints2, points2, trainIdxs);
    Mat points1t; perspectiveTransform(Mat(points1), points1t, H12);
    for( size_t i1 = 0; i1 < points1.size(); i1++ )
    {
        if( norm(points2[i1] - points1t.at<Point2f>((int)i1,0)) < 4 ) // inlier
            matchesMask[i1] = 1;
    }
    /* draw inliers
    drawMatches( leftImg, keypoints1, rightImg, keypoints2, filteredMatches, drawImg, CV_RGB(0, 255, 0), CV_RGB(0, 0, 255), matchesMask, 2 ); */
}

The matches are further filtered to remove outliers.

...and then what? How do I split what's left into two Mats of matching points to use in findFundamentalMat?

EDIT

I have now used my mask to build a finalMatches vector as follows (this replaces the final filtering procedure above):

Mat drawImg;
if( !H12.empty() ) // filter outliers
{
    size_t i1;
    vector<char> matchesMask( filteredMatches.size(), 0 );
    vector<Point2f> points1; KeyPoint::convert(keypoints1, points1, queryIdxs);
    vector<Point2f> points2; KeyPoint::convert(keypoints2, points2, trainIdxs);
    Mat points1t; perspectiveTransform(Mat(points1), points1t, H12);
    for( i1 = 0; i1 < points1.size(); i1++ )
    {
        if( norm(points2[i1] - points1t.at<Point2f>((int)i1,0)) < 4 ) // inlier
            matchesMask[i1] = 1;
    }
    for( i1 = 0; i1 < filteredMatches.size(); i1++ )
    {
        if ( matchesMask[i1] == 1 )
            finalMatches.push_back(filteredMatches[i1]);
    }
    namedWindow("matches", 1);
    // draw inliers
    drawMatches( leftImg, keypoints1, rightImg, keypoints2, filteredMatches, drawImg, CV_RGB(0, 255, 0), CV_RGB(0, 0, 255), matchesMask, 2 );
    imshow("matches", drawImg);
}

However I still do not know how to split my finalMatches DMatch vector into the Mat arrays that I need to feed into findFundamentalMat. Please help!

EDIT

Working (sort of) solution:

Mat drawImg;
vector<Point2f> finalPoints1;
vector<Point2f> finalPoints2;
if( !H12.empty() ) // filter outliers
{
    size_t i, idx;
    vector<char> matchesMask( filteredMatches.size(), 0 );
    vector<Point2f> points1; KeyPoint::convert(keypoints1, points1, queryIdxs);
    vector<Point2f> points2; KeyPoint::convert(keypoints2, points2, trainIdxs);
    Mat points1t; perspectiveTransform(Mat(points1), points1t, H12);

    for( i = 0; i < points1.size(); i++ )
    {
        if( norm(points2[i] - points1t.at<Point2f>((int)i,0)) < 4 ) // inlier
            matchesMask[i] = 1;
    }

    for ( idx = 0; idx < filteredMatches.size(); idx++)
    {
        if ( matchesMask[idx] == 1 ) {
            finalPoints1.push_back(keypoints1[filteredMatches[idx].queryIdx].pt);
            finalPoints2.push_back(keypoints2[filteredMatches[idx].trainIdx].pt);
        }
    }    

    namedWindow("matches", 0);
    // draw inliers
    drawMatches( leftImg, keypoints1, rightImg, keypoints2, filteredMatches, drawImg, CV_RGB(0, 255, 0), CV_RGB(0, 0, 255), matchesMask, 2 );
    imshow("matches", drawImg);
}

And then I feed finalPoints1 and finalPoints2 into findFundamentalMat as Mats. Now my only problem is that the output is not remotely what I expected; the images are all screwed up :-/


ANSWER

Your match array contains offsets (indices) into the descriptor arrays. Since each descriptor has a corresponding keypoint, you can simply iterate over the matches and build two arrays of points from those indices. Those point arrays can then be fed into findFundamentalMat.
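In the C++ API that could look roughly like this (a minimal sketch; keypoints1, keypoints2 and filteredMatches are the variables from your snippets, and the RANSAC parameters are just reasonable defaults, not tuned values):

// Build the two point arrays directly from the matches: pts1[i] and pts2[i]
// form one correspondence.
vector<Point2f> pts1, pts2;
for( size_t i = 0; i < filteredMatches.size(); i++ )
{
    pts1.push_back( keypoints1[filteredMatches[i].queryIdx].pt );
    pts2.push_back( keypoints2[filteredMatches[i].trainIdx].pt );
}

Mat F = findFundamentalMat( Mat(pts1), Mat(pts2), CV_FM_RANSAC, 3.0, 0.99 );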

Edit:

I believe your mistake is in generating finalMatches, where you are losing information. The vector filteredMatches is doing double duty: the positions where matchesMask is 1 line up with the indices into keypoints1, while the matches themselves carry the indices into keypoints2. By compacting everything down into finalMatches, you are in effect losing the first set of indices.

Try the following:

Have a loop that counts how many actual matches there are:

int num_matches = 0;
for( int idx = 0; idx < matchesMask.size(); idx++ )
{
    if ( matchesMask[idx] == 1 )
        num_matches++;
}

Now declare CvMats of the correct size:

CvMat* matched_points1 = cvCreateMat(2, num_matches, CV_32F);
CvMat* matched_points2 = cvCreateMat(2, num_matches, CV_32F);

Now iterate over filteredMatches and insert the points (exact syntax may differ, but you get the idea):

int offset = 0;
for (int idx = 0; idx < matchesMask.size(); idx++)
{
    if ( matchesMask[idx] == 1 ) {
        // The match itself holds the keypoint indices for each image.
        CV_MAT_ELEM( *matched_points1, float, 0, offset ) = keypoints1[filteredMatches[idx].queryIdx].pt.x;
        CV_MAT_ELEM( *matched_points1, float, 1, offset ) = keypoints1[filteredMatches[idx].queryIdx].pt.y;
        CV_MAT_ELEM( *matched_points2, float, 0, offset ) = keypoints2[filteredMatches[idx].trainIdx].pt.x;
        CV_MAT_ELEM( *matched_points2, float, 1, offset ) = keypoints2[filteredMatches[idx].trainIdx].pt.y;
        offset++;
    }
}
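
Once the two CvMats are filled they can go straight into the C API's cvFindFundamentalMat; a minimal sketch (the RANSAC parameters here are the documented defaults, not tuned values):

CvMat* fundamental = cvCreateMat( 3, 3, CV_32F );
int found = cvFindFundamentalMat( matched_points1, matched_points2, fundamental,
                                  CV_FM_RANSAC, 3.0, 0.99, NULL );
if( found == 0 )
    printf( "No fundamental matrix found\n" );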