I'm trying to do template matching in Java, using a straightforward algorithm to find the match. Here is the code:
double minSAD = Double.MAX_VALUE;   // best (lowest) sum of absolute differences so far

// loop through every possible template position in the search image
for (int x = 0; x <= S_rows - T_rows; x++) {
    for (int y = 0; y <= S_cols - T_cols; y++) {
        double SAD = 0.0;

        // loop through the template image and accumulate the difference
        for (int i = 0; i < T_rows; i++) {
            for (int j = 0; j < T_cols; j++) {
                pixel p_SearchIMG   = S[x + i][y + j];
                pixel p_TemplateIMG = T[i][j];
                SAD += Math.abs(p_SearchIMG.Grey - p_TemplateIMG.Grey);
            }
        }

        // save the best found position so far
        if (minSAD > SAD) {
            minSAD = SAD;
            position.bestRow = x;
            position.bestCol = y;
            position.bestSAD = SAD;
        }
    }
}
But this is a very slow approach. I tested it with a 768 x 1280 image and a 384 x 640 sub-image, and it takes ages. Does OpenCV's ready-made cvMatchTemplate() perform template matching much faster?
You will find OpenCV's cvMatchTemplate() is much, much quicker than the method you have implemented. What you have created is a statistical template matching method. It is the most common and the easiest to implement, but it is extremely slow on large images. Let's look at the basic maths. You have an image that is 768 x 1280, and you loop through each of its pixels minus the edge, since the template limits the range: (768 - 384) x (1280 - 640) = 384 x 640 = 245,760 positions. At each of these you loop through every pixel of your template (another 245,760 operations), so before you add any maths inside the loop you already have 245,760 x 245,760 = 60,397,977,600 operations. That is over 60 billion operations just to loop through your image; it is more surprising how quickly machines can do this.
Remember, however, that it is really 245,760 x (245,760 x maths operations), so the true count is even higher.
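A quick back-of-the-envelope check of those numbers (the figures are the ones from the question, not measurements):

public class OperationCount {
    public static void main(String[] args) {
        long positions = (768 - 384) * (1280 - 640);  // 384 * 640 = 245,760 candidate window positions
        long perWindow = 384L * 640L;                 // 245,760 template pixels compared per position
        System.out.println(positions * perWindow);    // 60397977600 pixel comparisons
    }
}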
Now, cvMatchTemplate() actually uses a Fourier-analysis-based template matching operation. This works by applying a Fast Fourier Transform (FFT) to the image, in which the signals that make up the pixel intensity changes are decomposed into their corresponding waveforms. The method is hard to explain well, but in essence the image is transformed into a signal representation made of complex numbers. If you wish to understand more, search Google for the Fast Fourier Transform. The same operation is then performed on the template, and the signals that form the template are used to filter out any other signals from your image.
Put simply, it suppresses all features within the image that do not have the same characteristics as your template. The image is then converted back using an inverse FFT to produce an image where high values mean a match and low values mean the opposite. This image is often normalised so that 1s represent a match and values around 0 mean the object is nowhere near.
Be warned, though: if the object is not in the image and the result is normalised, false detections will occur, because the highest value calculated will still be treated as a match. I could go on for ages about how the method works and its benefits or the problems that can occur, but...
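To make that pipeline concrete, here is a minimal sketch of raw frequency-domain cross-correlation using the OpenCV Java bindings (class names and flags assume the 3.x/4.x Java API, the file names are placeholders, and this is the non-normalised form, so it suffers exactly from the false-positive problem just mentioned):

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.imgcodecs.Imgcodecs;

public class FftCorrelationSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Placeholder file names - use your own search image and template.
        Mat image = Imgcodecs.imread("search.png", Imgcodecs.IMREAD_GRAYSCALE);
        Mat templ = Imgcodecs.imread("template.png", Imgcodecs.IMREAD_GRAYSCALE);

        Mat img32 = new Mat(), tmpl32 = new Mat();
        image.convertTo(img32, CvType.CV_32F);
        templ.convertTo(tmpl32, CvType.CV_32F);

        // Zero-pad the template to the image size so both spectra have the same shape.
        Mat padded = Mat.zeros(img32.size(), CvType.CV_32F);
        tmpl32.copyTo(padded.submat(new Rect(0, 0, tmpl32.cols(), tmpl32.rows())));

        // Forward FFTs of the image and the padded template.
        Mat imgDft = new Mat(), tmplDft = new Mat();
        Core.dft(img32, imgDft);
        Core.dft(padded, tmplDft);

        // Multiply the image spectrum by the conjugate of the template spectrum
        // (conjB = true): this is correlation in the spatial domain.
        Mat spectrum = new Mat();
        Core.mulSpectrums(imgDft, tmplDft, spectrum, 0, true);

        // Back to the spatial domain: a correlation surface where high values mean a match.
        Mat corr = new Mat();
        Core.idft(spectrum, corr, Core.DFT_SCALE | Core.DFT_REAL_OUTPUT, 0);

        // The peak of the (non-normalised) surface is the best match position.
        Core.MinMaxLocResult mm = Core.minMaxLoc(corr);
        System.out.println("Peak at " + mm.maxLoc + ", value " + mm.maxVal);
    }
}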
The reason this method is so fast is:
1) OpenCV is highly optimised C++ code.
2) The FFT is easy for your processor to handle, as most have the ability to perform this operation in hardware. GPU graphics cards are designed to perform millions of FFT operations every second, because these calculations are just as important in high-performance gaming graphics and video encoding.
3) The number of operations required is far lower.
In summary, the statistical template matching method is slow and takes ages, whereas the OpenCV FFT-based cvMatchTemplate() is quick and highly optimised.
Statistical template matching will not produce errors if an object is not there, whereas the OpenCV FFT approach can, unless care is taken in its application.
I hope this gives you a basic understanding and answers your question.
Cheers
Chris
[EDIT]
To further answer your questions:
Hi,
cvMatchTemplate() can work with CCOEFF_NORMED, CCORR_NORMED and SQDIFF_NORMED, as well as the non-normalised versions of these methods. This page shows the kind of results you can expect and gives you code to play with:
http://dasl.mem.drexel.edu/~noahKuntz/openCVTut6.html#Step%202
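Since the question is in Java, here is a minimal sketch of the equivalent call through the OpenCV Java bindings (Imgproc.matchTemplate and Core.minMaxLoc; the class and constant names assume the 3.x/4.x Java API, and the file names are placeholders):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class TemplateMatchDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Placeholder paths - replace with your own 768x1280 image and 384x640 template.
        Mat image = Imgcodecs.imread("search.png", Imgcodecs.IMREAD_GRAYSCALE);
        Mat templ = Imgcodecs.imread("template.png", Imgcodecs.IMREAD_GRAYSCALE);

        // The result is (W - w + 1) x (H - h + 1): one score per candidate position.
        Mat result = new Mat();
        Imgproc.matchTemplate(image, templ, result, Imgproc.TM_CCOEFF_NORMED);

        // For the *_NORMED correlation methods the best match is the maximum;
        // for the SQDIFF methods it is the minimum instead.
        Core.MinMaxLocResult mm = Core.minMaxLoc(result);
        Point bestTopLeft = mm.maxLoc;
        System.out.println("Best match at " + bestTopLeft + ", score = " + mm.maxVal);
    }
}

Checking mm.maxVal against a threshold (say 0.8, a value you would tune for your own data) before accepting the match is a simple way to reduce the false positives described earlier.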
The three methods are well cited and many papers are available through Google Scholar; I have provided a few below (the general form of the two main scores is written out just after the list). Each one simply uses a different equation to find the correlation between the FFT signals that form the template and the FFT signals present within the image. In my experience the correlation coefficient tends to yield better results and is easier to find references for; the sum of squared differences is another method that can be used with comparable results. I hope some of these help:
Du-Ming Tsai; Chien-Ta Lin. "Fast normalized cross correlation for defect detection." Pattern Recognition Letters, Volume 24, Issue 15, November 2003, Pages 2625-2631.
Kai Briechle; Uwe D. Hanebeck. "Template Matching using Fast Normalized Cross Correlation."
B. H. Friemel; L. N. Bohs; G. E. Trahey. "Relative performance of two-dimensional speckle-tracking techniques: normalized correlation, non-normalized correlation and sum-absolute-difference." Proceedings of the 1995 IEEE Ultrasonics Symposium.
Daniel I. Barnea; Harvey F. Silverman. "A Class of Algorithms for Fast Digital Image Registration." IEEE Transactions on Computers, February 1972.
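For reference (these are the standard textbook forms rather than anything quoted from the papers above), the two scores have the general shapes

$$ R_{\mathrm{ccoeff}}(x,y) = \frac{\sum_{x',y'}\bigl(T(x',y')-\bar T\bigr)\bigl(I(x+x',y+y')-\bar I_{x,y}\bigr)}{\sqrt{\sum_{x',y'}\bigl(T(x',y')-\bar T\bigr)^{2}\,\sum_{x',y'}\bigl(I(x+x',y+y')-\bar I_{x,y}\bigr)^{2}}} $$

$$ R_{\mathrm{sqdiff}}(x,y) = \sum_{x',y'}\bigl(T(x',y') - I(x+x',y+y')\bigr)^{2} $$

where $T$ is the template, $I$ the search image, $\bar T$ the template mean, and $\bar I_{x,y}$ the mean of the image patch under the template at offset $(x,y)$. A normalised correlation near 1 indicates a match, while for the squared-difference score values near 0 do.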
It is often favoured to use the normalised version of these methods, as anything equal to 1 is a match; however, if no object is present you can get false positives. The method works fast simply due to the way it is implemented in the computer language. The operations involved are ideal for the processor architecture, which means each one completes in a few clock cycles rather than shifting memory and information around over many clock cycles. Processors have been solving FFT problems for many years now and, like I said, there is built-in hardware to do so. Hardware-based is always faster than software, and the statistical method of template matching is basically software-based. Good reading on the hardware can be found here:
Digital signal processor - although a Wiki page, the references are worth a look; effectively this is the hardware that performs FFT calculations.
Shousheng He; Mats Torkelson. "A New Approach to Pipeline FFT Processor." A favourite of mine, as it shows what is happening inside the processor.
Liang Yang; Kewei Zhang; Hongxia Liu; Jin Huang; Shitan Huang. "An Efficient Locally Pipelined FFT Processor."
These papers really show how complex the FFT is when implemented; however, the pipelining of the process is what allows the operation to be performed in a few clock cycles. This is the reason real-time vision systems use FPGAs (processors you design yourself to implement a set task), as their architecture can be made extremely parallel and pipelining is easier to implement.
Although I must mention that, for the FFT of an image, you are actually using the 2-D FFT (FFT2), which is the FFT along the horizontal axis combined with the FFT along the vertical axis, just so there is no confusion when you find references to it. I cannot claim expert knowledge of how the equations and the FFT are implemented; I have tried to find good guides, yet finding one is very hard, so much so that I haven't yet found one (not one I can understand, at least). One day I may understand them, but for now I have a good understanding of how they work and the kind of results that can be expected.
Other than this I can't really help you much more. If you want to implement your own version or understand how it works, it's time to hit the library, but I warn you the OpenCV code is so well optimised that you will struggle to improve its performance. However, who knows, you may figure out a way to get better results. All the best and good luck,
Chris