
bounding the reciprocal of a number

I have come across several cases where people compute the reciprocal of a number with a very small absolute value. They say the result should be upper bounded, since the reciprocal can be very large.

(1) What is the reason for this?

For example, on page 18 of this paper, http://www-stat.stanford.edu/~tibs/ftp/boost.ps, in the first paragraph, the reciprocal of a probability is computed. The author says "Since this number can get large if p is small, threshold this ratio at zmax" and that an upper bound in [2,4] would be fine. I wonder whether the reason is loss of precision when the reciprocal is huge, but thresholding at a value in [2,4] keeps the value far from huge, so that does not seem to be the whole story.
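For illustration only, a minimal sketch of that kind of thresholding; this is not the paper's exact quantity, just a reciprocal of a probability p capped at zmax, with the names p and zmax taken from the quoted sentence:

```python
import numpy as np

def bounded_reciprocal(p, zmax=4.0):
    # Reciprocal of a probability, capped at zmax so small p cannot blow it up.
    return np.minimum(1.0 / p, zmax)

print(bounded_reciprocal(np.array([0.5, 0.1, 1e-6])))  # [2. 4. 4.]
```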

Another example is in my previous post about inverse distance weighting interpolation: do we have to lower bound the distance before taking its reciprocal, or only handle the case where the distance is exactly 0?
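For concreteness, here is a minimal sketch of the two options being asked about, assuming plain inverse-distance weights with a power parameter; the eps value and the sample data are made up for illustration.

```python
import numpy as np

def idw_clamped(dists, values, power=2, eps=1e-12):
    """Option A: lower-bound every distance at eps before taking reciprocals."""
    w = 1.0 / np.maximum(dists, eps) ** power
    return np.sum(w * values) / np.sum(w)

def idw_exact_hit_only(dists, values, power=2):
    """Option B: only special-case an exact zero distance."""
    hit = dists == 0
    if np.any(hit):
        # The query point coincides with a sample: return that sample's value.
        return values[np.argmax(hit)]
    w = 1.0 / dists ** power
    return np.sum(w * values) / np.sum(w)

dists = np.array([0.5, 1.0, 1e-15])   # one distance is tiny but not exactly zero
values = np.array([3.0, 5.0, 7.0])
print(idw_clamped(dists, values))         # ~7.0: the tiny distance dominates, but weights stay bounded
print(idw_exact_hit_only(dists, values))  # ~7.0 here too, but the weight 1/d**power is ~1e30
```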

(2) If the number has a very large absolute value, so that its reciprocal is very close to 0, do we have to lower bound the reciprocal?

(3) If we do have to upper bound the reciprocal of a number, which way is better: lower bounding the number, or upper bounding its reciprocal?

Thanks and regards!


For (3), if you use a hard cutoff, the two approaches are the same.
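A quick sketch of why, assuming the number is positive and a hard cutoff eps is used: lower bounding the number at eps and then inverting gives exactly the same result as inverting first and upper bounding the reciprocal at 1/eps.

```python
eps = 1e-6          # hypothetical hard cutoff

def bound_then_invert(x):
    # lower bound the number, then take the reciprocal
    return 1.0 / max(x, eps)

def invert_then_bound(x):
    # take the reciprocal, then upper bound it at 1/eps
    return min(1.0 / x, 1.0 / eps)

for x in (1e-9, 1e-6, 0.5, 3.0):
    assert bound_then_invert(x) == invert_then_bound(x)
```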

With regard to (2), that depends entirely on how you're using it. Usually this sort of thing comes up when you're computing some kind of weight that you don't want to diverge to infinity, because that would break your algorithm. A weight of zero sometimes matters much less, and sometimes it matters just as much; it depends on how the weight is used in your algorithm.

There are two possible answers to your first question (both are correct, but in different circumstances).

The first possibility is that the weight function the algorithm should really be using isn't a reciprocal at all, but rather something more like a Gaussian: a rounded hump with long tails. In some circumstances, a thresholded reciprocal is a good enough cheap approximation.
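A minimal sketch of that contrast; the bandwidth sigma and the cap wmax are arbitrary values chosen only for illustration.

```python
import numpy as np

def gaussian_weight(d, sigma=1.0):
    # Rounded hump with long tails: finite at d = 0 and decays smoothly.
    return np.exp(-0.5 * (d / sigma) ** 2)

def thresholded_reciprocal_weight(d, wmax=100.0):
    # Cheap stand-in: 1/d capped at wmax so it cannot blow up near d = 0.
    with np.errstate(divide="ignore"):
        return np.minimum(1.0 / d, wmax)

d = np.array([0.0, 0.01, 0.1, 1.0, 10.0])
print(gaussian_weight(d))                 # [1.0  ~1.0  ~0.995  ~0.607  ~0.0]
print(thresholded_reciprocal_weight(d))   # [100. 100.  10.     1.      0.1]
```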

The second possibility is that the term whose reciprocal is being taken can never be exactly zero in the situation being modeled, but may end up being zero in the algorithm due to floating-point approximation error. This is especially likely when that term is the difference of two well-scaled values. In this situation, it makes sense to threshold at the expected approximation error, to keep over-large (or infinite) reciprocals from throwing off the algorithm.
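A sketch of that idea, assuming the quantity is the difference of two well-scaled values a and b; estimating the error as a few ulps at the scale of the operands is an illustrative assumption, not a universal rule.

```python
import numpy as np

def safe_reciprocal_of_difference(a, b, ulps=4.0):
    """Reciprocal of (a - b), thresholded at the expected rounding error."""
    diff = a - b
    # Hypothetical error estimate: a few ulps at the scale of the operands.
    err = ulps * np.finfo(float).eps * max(abs(a), abs(b), 1.0)
    # Keep the sign, but never divide by anything smaller than err.
    sign = 1.0 if diff >= 0 else -1.0
    return sign / max(abs(diff), err)

# The modeled difference is exactly zero, but the computed one is not.
a = 0.1 + 0.2
b = 0.3
print(a - b)                                # ~5.55e-17 rather than 0
print(safe_reciprocal_of_difference(a, b))  # ~1.1e15, instead of ~1.8e16 unthresholded
```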

