I load a binary image in OpenCV using cvLoadImage as follows:
IplImage* myImg = cvLoadImage(<ImagePath>, -1); // -1 so that the image is read as is
When I checked myImg->width and myImg->widthStep, I was surprised to find that the two values were slightly different. I then went back to look at other images in the dataset and found that for most of them the two values were equal, but for a sizeable number of images they differed, mostly by 1 or 2.
I thought the two values differed only for color images with more than one channel, and were equal otherwise. Am I wrong? Has anyone noticed this behavior before?
Thanks!
Apparently, if the row size in bytes (width times the number of channels times the element size) is not a multiple of 4, it gets padded up to the next multiple of 4, for performance and memory-alignment reasons. So if width is e.g. 255 for a single-channel 8-bit image, widthStep would be 256.
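A quick way to see this for yourself is to create a single-channel image whose width is not a multiple of 4 and print both fields. This is a minimal sketch using the legacy C API; the exact include path for the C headers varies between OpenCV versions:

#include <stdio.h>
#include <opencv/cv.h>   /* legacy C API header; path may differ by OpenCV version */

int main(void)
{
    /* 255 bytes per row is not a multiple of 4, so rows get padded to 256 bytes */
    IplImage *img = cvCreateImage(cvSize(255, 100), IPL_DEPTH_8U, 1);
    printf("width = %d, widthStep = %d\n", img->width, img->widthStep); /* expected: 255, 256 */
    cvReleaseImage(&img);
    return 0;
}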
The widthStep of an image can be computed as follows:
IplImage *img = cvCreateImage( cvSize(318, 200), IPL_DEPTH_8U, 1);
int width = img->width;
int nchannels = img->nChannels;
int row_bytes = width * sizeof(unsigned char) * nchannels;              // raw row size in bytes
int widthstep = (row_bytes % 4 != 0) ? ((row_bytes / 4) * 4 + 4) : row_bytes; // round up to a multiple of 4
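As a quick sanity check (using the img created above and assuming 4-byte row alignment, which is the default in the legacy C API), the computed value can be compared with the field OpenCV itself fills in:

printf("computed = %d, actual = %d\n", widthstep, img->widthStep);  /* both should print 320 for a 318-wide 8U C1 image */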
The next pseudocode is a small snippet based on @Nizar FAKHFAKH's answer. It's basically the same thing, but a little clearer (at least to me). Here it goes:
int size_row_raw = width * n_channels;
int rem = size_row_raw % 4;
int width_step = (rem == 0) ? size_row_raw : size_row_raw + (4 - rem);
Here I assume that sizeof(unsigned char) = 1, which also holds in C# because sizeof(byte) = 1. If we'd like to take the element size into account in the previous pseudocode, the change is simple: just replace the first line with something like:
int size_row_raw = width * sizeof(datatype_of_interest) * n_channels;
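As a side note (a generic bit trick, not anything OpenCV-specific), rounding the raw row size up to the next multiple of 4 can also be written without the conditional:

int width_step = (size_row_raw + 3) & ~3;   // round up to the next multiple of 4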