The numeric extension for boost::gil contains algorithms like this:
template <typename Channel1, typename Channel2, typename ChannelR>
struct channel_plus_t : public std::binary_function<Channel1, Channel2, ChannelR> {
    ChannelR operator()(typename channel_traits<Channel1>::const_reference ch1,
                        typename channel_traits<Channel2>::const_reference ch2) const {
        return ChannelR(ch1) + ChannelR(ch2);
    }
};
When called with two uint8 channel values, an overflow can occur if ChannelR is also uint8.
I think the calculation should:
- use a different type for the processing (how do I derive this from the templated channel types?)
- clip the result to the range of ChannelR to get a saturated result (using boost::gil::channel_traits<ChannelR>::min_value() / ...max_value()?)
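Both points together could look roughly like the following sketch: compute the sum in a wider intermediate type, then clamp it to ChannelR's range. The channel_traits here is a simplified stand-in for boost::gil::channel_traits (which provides the min_value()/max_value() mentioned above), and channel_saturating_plus_t is a hypothetical name, not part of GIL:

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>

// Simplified stand-in for boost::gil::channel_traits, just enough for
// this sketch; the real trait also provides reference typedefs etc.
template <typename T>
struct channel_traits {
    static constexpr T min_value() { return std::numeric_limits<T>::min(); }
    static constexpr T max_value() { return std::numeric_limits<T>::max(); }
};

template <typename Channel1, typename Channel2, typename ChannelR>
struct channel_saturating_plus_t {
    ChannelR operator()(Channel1 ch1, Channel2 ch2) const {
        // Promote to a wide type so the intermediate sum cannot overflow.
        using wide = long long;
        wide sum = static_cast<wide>(ch1) + static_cast<wide>(ch2);
        // Clamp to the destination channel's range.
        wide lo = static_cast<wide>(channel_traits<ChannelR>::min_value());
        wide hi = static_cast<wide>(channel_traits<ChannelR>::max_value());
        return static_cast<ChannelR>(std::min(std::max(sum, lo), hi));
    }
};
```

Using long long as the working type is the blunt "biggest possible type" option; a per-channel promotion trait (see below in the question) would be the finer-grained alternative.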
How can this be done in a way that still allows for performance-optimized results?
- Convert to the biggest possible type? Sounds counterproductive...
- Provide an arsenal of template specializations? Any better ideas?
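The "arsenal of template specializations" idea could be sketched as a promotion trait that maps each channel type to a wider working type; the names promote_channel and working_type here are hypothetical, not GIL APIs:

```cpp
#include <cstdint>
#include <type_traits>

// Hypothetical promotion trait: one specialization per channel type,
// each naming a type wide enough to hold a sum without overflow.
template <typename T> struct promote_channel;  // primary left undefined
template <> struct promote_channel<std::uint8_t>  { using type = std::uint16_t; };
template <> struct promote_channel<std::uint16_t> { using type = std::uint32_t; };
template <> struct promote_channel<float>         { using type = float; };

// For mixed operand types, take the wider of the two promoted types.
template <typename C1, typename C2>
using working_type = typename std::common_type<
    typename promote_channel<C1>::type,
    typename promote_channel<C2>::type>::type;
```

This keeps the intermediate type as small as possible (uint16 for uint8 inputs, rather than jumping straight to a 64-bit type), which matters if the compiler is to vectorize the per-pixel loop.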
I don't see what the problem is here... my reaction is "so don't set ChannelR to uint8 if that's going to break"
You seem to be doing the equivalent of arguing that code like
uint8 a=128;
uint8 b=128;
uint8 c=a+b; // Uh-Oh...
should do something clever (e.g. saturating arithmetic).
I'd suggest the solution is to use more precision, or define your own channel_saturating_plus_t
with the behaviour you require, much as I'd suggest the solution to the above is
uint16 c=uint16(a)+uint16(b);
or
uint8 c=saturating_add(a,b);
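For completeness, saturating_add is not a standard function; a minimal version for uint8 might look like this (widen first so the sum cannot wrap, then clamp):

```cpp
#include <cstdint>

// Hypothetical helper: saturating addition for 8-bit unsigned channels.
inline std::uint8_t saturating_add(std::uint8_t a, std::uint8_t b) {
    unsigned sum = unsigned(a) + unsigned(b);  // widen to avoid wraparound
    return sum > 255 ? 255 : static_cast<std::uint8_t>(sum);
}
```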
And be thankful that the creators of GIL even thought to expose the result type as a separate type parameter; there's plenty of libs out there which wouldn't!