
OpenCV - Image Stitching

I am using the following code to stitch two input images. For an unknown reason the output result is crap! It seems that the homography matrix is wrong (or is applied wrongly), because the transformed image looks like an "exploded star"! I have commented the part that I guess is the source of the problem, but I cannot figure it out. Any help or pointer is appreciated!

Have a nice day, Ali

void Stitch2Image(IplImage *mImage1, IplImage *mImage2) 
{ 

    // Convert input images to gray 
    IplImage* gray1 = cvCreateImage(cvSize(mImage1->width, mImage1->height), 8, 1); 

    cvCvtColor(mImage1, gray1, CV_BGR2GRAY); 
    IplImage* gray2 = cvCreateImage(cvSize(mImage2->width, mImage2->height), 8, 1); 

    cvCvtColor(mImage2, gray2, CV_BGR2GRAY); 
    // Convert gray images to Mat 
    Mat img1(gray1); 
    Mat img2(gray2); 
    // Detect FAST keypoints and BRIEF features in the first image 
    FastFeatureDetector detector(50); 
    BriefDescriptorExtractor descriptorExtractor; 
    BruteForceMatcher<L1<uchar> > descriptorMatcher; 
    vector<KeyPoint> keypoints1; 
    detector.detect( img1, keypoints1 ); 
    Mat descriptors1; 
    descriptorExtractor.compute( img1, keypoints1, descriptors1 );

/* Detect FAST keypoints and BRIEF features in the second image*/


    vector<KeyPoint> keypoints2; 
    detector.detect( img2, keypoints2 ); 
    Mat descriptors2; 
    descriptorExtractor.compute( img2, keypoints2, descriptors2 ); 
    vector<DMatch> matches; 
    descriptorMatcher.match(descriptors1, descriptors2, matches); 
    if (matches.size()==0) 
            return; 
    vector<Point2f> points1, points2; 
    for(size_t q = 0; q < matches.size(); q++) 
    { 
            points1.push_back(keypoints1[matches[q].queryIdx].pt); 
            points2.push_back(keypoints2[matches[q].trainIdx].pt); 
    } 
    // Create the result image ('result' is assumed to be a member or global IplImage* holding the stitched output) 
    result = cvCreateImage(cvSize(mImage2->width * 2, mImage2->height), 8, 3); 
    cvZero(result); 

   // Copy the second image in the result image 

    cvSetImageROI(result, cvRect(mImage2->width, 0, mImage2->width, mImage2->height)); 
    cvCopy(mImage2, result); 
    cvResetImageROI(result); 

  // Create warp image 
    IplImage* warpImage = cvCloneImage(result); 
    cvZero(warpImage); 

  /************************** Is there anything wrong here!? *******************/ 
   // Find homography matrix 
    Mat H = findHomography(Mat(points1), Mat(points2), 8, 3.0); 
    CvMat HH = H; // Is this line converted correctly? 
   // Transform warp image 
    cvWarpPerspective(mImage1, warpImage, &HH); 
  // Blend 
    blend(result, warpImage);
  /*******************************************************************************/ 

    cvReleaseImage(&gray1); 
    cvReleaseImage(&gray2); 
    cvReleaseImage(&warpImage); 
}


This is what I would suggest you try, in this order:

1) Use the CV_RANSAC option for the homography (see the sketch below). Refer to http://opencv.willowgarage.com/documentation/cpp/calib3d_camera_calibration_and_3d_reconstruction.html

2) Try other descriptors, particularly SIFT or SURF which ship with OpenCV. For some images FAST or BRIEF descriptors are not discriminating enough. EDIT (Aug '12): The ORB descriptors, which are based on BRIEF, are quite good and fast!

3) Try to look at the Homography matrix (step through in debug mode or print it) and see if it is consistent.

4) If the above does not give you a clue, look at the matches that are formed. Is one point in one image being matched to several points in the other image? If so, the problem again is likely with the descriptors or the detector.

My hunch is that it is the descriptors (so 1) or 2) should fix it).
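
A minimal sketch of 1), 3) and 4) together, assuming the points1/points2 vectors, keypoints, matches and gray images from the question are available (the helper function name is made up for illustration):

// Sketch only: estimate the homography with RANSAC, print it for inspection,
// and visualize the matches. Everything passed in comes from the question's code.
#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <vector>

cv::Mat FindHomographyRansacAndDebug(const std::vector<cv::Point2f>& points1,
                                     const std::vector<cv::Point2f>& points2,
                                     const cv::Mat& img1, const std::vector<cv::KeyPoint>& keypoints1,
                                     const cv::Mat& img2, const std::vector<cv::KeyPoint>& keypoints2,
                                     const std::vector<cv::DMatch>& matches)
{
    // 1) CV_RANSAC rejects outlier matches; 3.0 is the reprojection threshold in pixels.
    cv::Mat H = cv::findHomography(cv::Mat(points1), cv::Mat(points2), CV_RANSAC, 3.0);

    // 3) Print the matrix: for two overlapping photos the perspective terms
    // H(2,0) and H(2,1) should be small and H(2,2) should be close to 1.
    std::cout << "H = " << std::endl << H << std::endl;

    // 4) Draw the matches: many lines fanning out from a single point usually
    // means the descriptors/detector are not discriminating enough.
    cv::Mat matchImage;
    cv::drawMatches(img1, keypoints1, img2, keypoints2, matches, matchImage);
    cv::imshow("matches", matchImage);
    cv::waitKey(0);

    return H;
}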


Also switch to Hamming distance instead of L1 distance in BruteForceMatcher. BRIEF descriptors are supposed to be compared using Hamming distance.
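
With the OpenCV 2.x API used in the question, that change is roughly the following sketch (only the matcher declaration differs):

// Sketch: BRIEF produces binary descriptors, so match them with Hamming distance.
// This replaces BruteForceMatcher<L1<uchar> > from the question; newer OpenCV
// versions would use cv::BFMatcher(cv::NORM_HAMMING) instead.
#include <opencv2/features2d/features2d.hpp>

cv::BruteForceMatcher<cv::Hamming> descriptorMatcher;
// ... detect keypoints and compute descriptors as before ...
// descriptorMatcher.match(descriptors1, descriptors2, matches);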


Your homography might be computed from wrong matches and thus represent a bad alignment. I suggest passing the matrix through an additional check for linear dependency between its rows.

You can use the following code:

bool cvExtCheckTransformValid(const Mat& T){

    // Check the shape of the matrix
    if (T.empty())
       return false;
    if (T.rows != 3)
       return false;
    if (T.cols != 3)
       return false;

    // Check for linear dependency: divide row 0 by row 1 element-wise.
    // If the two rows are (nearly) proportional, the ratio is (nearly) constant,
    // so its relative spread (stddev/mean) is small and the homography is degenerate.
    Mat tmp;
    T.row(0).copyTo(tmp);
    tmp /= T.row(1);
    Scalar mean;
    Scalar stddev;
    meanStdDev(tmp, mean, stddev);
    double X = fabs(stddev[0] / mean[0]);
    printf("std of H: %g\n", X);
    if (X < 0.8)   // rows are close to linearly dependent
       return false;

    return true;
}
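
For example, the check could be wired in right after the homography is estimated in the question's code (a sketch, reusing the variables from the question):

    // Sketch: only warp and blend if the estimated homography looks sane.
    Mat H = findHomography(Mat(points1), Mat(points2), CV_RANSAC, 3.0);
    if (!cvExtCheckTransformValid(H))
        return;                      // matches were probably bad; skip the stitching step
    CvMat HH = H;                    // convert to the C API type for cvWarpPerspective
    cvWarpPerspective(mImage1, warpImage, &HH);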