I've read that the signed char and unsigned char types are not guaranteed to be 8 bits on every platform; on some they may have more than 8 bits. If so, when using OpenCV, how can we be sure that CV_8U is always 8-bit? I've written a short function that takes an 8-bit Mat and, if needed, converts CV_8SC1 elements into uchars and CV_8UC1 elements into schars. Now I'm afraid it is not platform independent and I should fix the code in some way (but I don't know how).
P.S.: Similarly, how can CV_32S always be int, even on machines with no 32-bit ints?
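For illustration, here is a minimal sketch of the kind of conversion I mean (this is not my actual function; the name shiftSign and the add/subtract-128 mapping are just placeholders), together with a compile-time guard on the character width:

#include <climits>
#include <opencv2/core.hpp>

static_assert(CHAR_BIT == 8, "this sketch assumes 8-bit chars");

// Hypothetical helper: map CV_8SC1 values ([-128,127]) onto CV_8UC1 ([0,255])
// by adding 128, or do the reverse for a CV_8UC1 input.
cv::Mat shiftSign(const cv::Mat& src)
{
    CV_Assert(src.type() == CV_8SC1 || src.type() == CV_8UC1);
    cv::Mat dst;
    if (src.type() == CV_8SC1)
        src.convertTo(dst, CV_8UC1, 1.0, 128.0);   // schar -> uchar
    else
        src.convertTo(dst, CV_8SC1, 1.0, -128.0);  // uchar -> schar
    return dst;
}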
Can you give a reference for this (I've never heard of that)? You probably mean the padding that may be added at the end of a row in a cv::Mat. That is not a problem, since the padding is not used as element data, and it is especially no problem if you use the interface functions, e.g. the iterators (cf. below). If you posted some code, we could see whether your implementation actually has such problems.
// template methods for iteration over matrix elements.
// the iterators take care of skipping gaps in the end of rows (if any)
template<typename _Tp> MatIterator_<_Tp> begin();
template<typename _Tp> MatIterator_<_Tp> end();
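As a usage sketch (assuming a single-channel CV_8UC1 Mat; the function name sumPixels is made up), iterating with these templated iterators touches only the element data and skips any padding at the end of the rows:

#include <opencv2/core.hpp>

// Sum all pixels of a CV_8UC1 Mat via the templated iterators; gaps at the
// end of each row (if any) are skipped automatically.
int sumPixels(const cv::Mat& img)
{
    CV_Assert(img.type() == CV_8UC1);
    int sum = 0;
    for (cv::MatConstIterator_<uchar> it = img.begin<uchar>(); it != img.end<uchar>(); ++it)
        sum += *it;
    return sum;
}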
CV_32S will always be a 32-bit integer because OpenCV uses fixed-width types like those defined in inttypes.h (e.g. int32_t, uint32_t) and not the platform-specific int, long, or whatever.
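As a small sanity-check sketch (my own addition, not from the OpenCV headers), you can document this assumption at compile time:

#include <cstdint>
#include <iostream>
#include <opencv2/core.hpp>

// CV_32S elements are accessed as 32-bit signed integers; assert that this
// matches the fixed-width std::int32_t on the current platform.
static_assert(sizeof(int) == sizeof(std::int32_t),
              "CV_32S access assumes a 32-bit int");

int main()
{
    cv::Mat m(2, 2, CV_32SC1, cv::Scalar(42));
    std::cout << m.at<int>(0, 0) << std::endl;  // prints 42
    return 0;
}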