I am trying to track the motion of a toy car. I have recorded a few videos and am now trying to calculate its rotation.
My problem is that extracting features from the object's surface is quite challenging due to motion blur. The image below shows a cropped region from a video frame. The distortion appears as horizontal lines and only occurs while the object is moving; when the object is stationary there is no distortion.
The image shows the distorted car as it moves forward along a diagonal path across the frame.
I tried a Wiener filter based on the local median and variance, but it didn't improve things much. It only gave me a smoothed image, as if a Gaussian blur had been applied.
What type of enhancements should I do to get a better image?
video - 720 x 576 frames - 25fps
From the picture provided, it looks like you need to de-interlace the video rather than just filter what's there. I remember doing this by taking every other scan line and then resizing the result to restore the aspect ratio.
I found a pretty good site that talks about deinterlacing, in case you'd like to see other possibilities:
http://www.100fps.com/
(I haven't inspected the image very closely, so it's possible that some interlacing scheme other than simply every other line is in use, in which case my first answer wouldn't work properly. It does mean you lose some vertical resolution, but that's just the nature of interlaced video.)
Given that your camera outputs interlaced video, you are better off using a single field of each frame: keep only the even lines of the image, or only the odd lines. The image will be vertically squashed, but you won't be mixing two temporally distinct images together.
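To make the field-extraction idea concrete, here is a minimal sketch that pulls one field out of a raw grayscale buffer in plain C++ (no OpenCV; the function name and the row-major buffer layout are my own choices, not from this answer):

```cpp
#include <vector>

// Extract one field (even or odd scan lines) from an interlaced frame.
// The frame is a rows x cols grayscale buffer in row-major order;
// the returned field has rows/2 lines and the same width.
std::vector<unsigned char> extractField(const std::vector<unsigned char>& frame,
                                        int rows, int cols, bool evenField) {
    std::vector<unsigned char> field;
    field.reserve((rows / 2) * cols);
    // Step through every other scan line, starting at 0 or 1.
    for (int r = evenField ? 0 : 1; r < rows; r += 2)
        field.insert(field.end(),
                     frame.begin() + r * cols,
                     frame.begin() + (r + 1) * cols);
    return field;
}
```

For the 720x576 PAL frames mentioned in the question, each field would be 720x288; you can either work on the squashed field directly or stretch it back vertically afterwards.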
Yep, that image needs to be de-interlaced. Correcting the blur due to linear movement is a different problem: it calls for directional (linear motion) deblurring, which depends on the speed of the vehicle, its distance to the camera, and the shutter speed. You first have to compute the impulse response for a given set of conditions (the ones above, which determine how far a given point travels between the beginning and the end of the exposure), and then apply inverse filtering. You may need an image-processing toolkit for this; in Matlab it's going to be easy.
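As a rough sketch of the first step described above, the code below estimates the blur extent and builds a uniform 1-D impulse response (PSF) for horizontal linear motion. The image-plane speed and shutter time are illustrative assumptions, not values measured from the video:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Blur extent in pixels: how far the object moves across the sensor
// while the shutter is open. Both inputs are assumptions here.
int blurLengthPixels(double speedPxPerSec, double shutterSec) {
    return static_cast<int>(std::round(speedPxPerSec * shutterSec));
}

// Uniform linear-motion PSF: each of the L taps gets weight 1/L, so
// convolving with it averages L neighbouring pixels along the motion track.
std::vector<double> motionPSF(int length) {
    int n = std::max(length, 1);
    return std::vector<double>(n, 1.0 / n);
}
```

For example, a car crossing the image at 360 px/s filmed with a 1/50 s shutter smears each point over about 7 pixels; inverting that 7-tap kernel (e.g. via Wiener deconvolution) is then the deblurring step.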
Did you try deconvblind?
Follow the deconvblind example on the MathWorks site; it might work well on your example image. See also their Image Restoration example.
The following is a very simple de-interlacing method:
cv::Mat input = cv::imread("img.jpg");
// Reinterpret the buffer so each row of tmp holds two original scan lines.
cv::Mat tmp(input.rows / 2, input.cols * 2, input.type(), input.data);
// Keep the left half of each double-width row, i.e. every other original line.
tmp = tmp.colRange(0, input.cols);
cv::Mat output;
// Stretch vertically by a factor of 2 to restore the original aspect ratio.
cv::resize(tmp, output, cv::Size(), 1, 2);