
Computing generalized mean for extreme values of p

https://www.devze.com | 2023-02-13 01:01 | Source: the web

How do I compute the generalized mean for extreme values of p (very close to 0, or very large) with reasonable computational error?


As per your link, the limit for p going to 0 is the geometric mean, for which bounds are derived.

The limit for p going to infinity is the maximum.
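A minimal numerical check of these two limits (the function and sample values here are illustrative, not from the question):

```python
import numpy as np

def generalized_mean(x, p):
    """Naive power mean ((1/n) * sum(x**p))**(1/p); breaks down for extreme p."""
    x = np.asarray(x, dtype=float)
    return np.mean(x ** p) ** (1.0 / p)

x = np.array([0.5, 1.0, 2.0, 4.0])

# p -> 0 limit: the geometric mean, computed stably via logs
geo = np.exp(np.mean(np.log(x)))

print(generalized_mean(x, 1e-6))   # close to the geometric mean
print(geo)
print(generalized_mean(x, 50.0))   # close to max(x) = 4.0
print(np.max(x))
```

For p much closer to 0 or much larger than this, the naive formula starts to lose accuracy or overflow, which is what the answers below address.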


I have been struggling with the same problem. Here is how I handled it: let gmean_p(x1,...,xn) be the generalized mean, where p is real but nonzero and x1,...,xn are nonnegative. For any M > 0 we have gmean_p(x1,...,xn) = M * gmean_p(x1/M,...,xn/M), and the latter form can be exploited to reduce the computational error. For large p I use M = max(x1,...,xn), and for p close to 0 I use M = mean(x1,...,xn). In case M = 0, just add a small positive constant to it. This did the job for me.
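A sketch of this scaling trick (the cutoff |p| >= 1 for switching between the two choices of M, and the fallback constant, are arbitrary choices of mine, not from the answer):

```python
import numpy as np

def gmean_p(x, p, M=None):
    """Power mean via the identity gmean_p(x) = M * gmean_p(x / M).

    M defaults to max(x) for large |p| and mean(x) for p near 0,
    as the answer suggests; the split at |p| = 1 is arbitrary.
    """
    x = np.asarray(x, dtype=float)
    if M is None:
        M = np.max(x) if abs(p) >= 1 else np.mean(x)
    if M == 0:
        M = 1e-300  # small positive constant, per the answer
    return M * np.mean((x / M) ** p) ** (1.0 / p)

# huge inputs that would overflow the unscaled formula at large p
x = np.array([1e200, 2e200, 3e200])
print(gmean_p(x, 500.0))  # finite, close to 3e200
```

After dividing by M = max(x), every ratio is at most 1, so raising to a large p can underflow harmlessly but never overflow.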


I suspect if you're interested in very large or small values of p, it may be best to do some form of algebraic manipulation of the generalized-mean formula before putting in numerical values.

For example, in the small-p limit, one can show that the generalized mean tends to the n-th root of the product x_1*x_2*...*x_n. The higher-order terms in p involve sums and products of log(x_i), which should also be relatively numerically stable to compute. In fact, I believe the first-order expansion in p has a simple relationship to the variance of log(x_i):

gmean_p(x_1, ..., x_n) ≈ exp( mean(log x_i) + (p/2) * var(log x_i) )

If one applies this formula to a set of 100 random numbers drawn uniformly from the range [0.2, 2], one gets a trend like this:

[Plot: asymptotic (first-order) formula vs. the simple formula as a function of p]

which here shows the asymptotic formula becoming pretty accurate for p less than about 0.3, and the simple formula only failing when p is less than about 1e-10.
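A sketch of that comparison, assuming the first-order expansion above (geometric mean times a correction from the variance of log(x_i)); the sample here is freshly generated, so the exact crossover points will differ from the answer's plot:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.2, 2.0, 100)

def power_mean_simple(x, p):
    """Direct evaluation of ((1/n) * sum(x**p))**(1/p)."""
    return np.mean(x ** p) ** (1.0 / p)

def power_mean_small_p(x, p):
    """First-order expansion around p = 0:
    exp(mean(log x) + (p/2) * var(log x))."""
    logx = np.log(x)
    return np.exp(np.mean(logx) + 0.5 * p * np.var(logx))

for p in [0.3, 0.01, 1e-8]:
    print(p, power_mean_simple(x, p), power_mean_small_p(x, p))
```

The two agree to O(p^2), so the expansion takes over smoothly well before the direct formula loses precision.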

The case of large p is dominated by whichever x_i has the largest magnitude (let's call its index i_max). One can rearrange the generalized-mean formula into the following form, which has less pathological behaviour for large p:

gmean_p(x_1, ..., x_n) = x_{i_max} * ( (1/n) * Σ_i (x_i / x_{i_max})^p )^{1/p}
                       = x_{i_max} * exp( ( log1p( Σ_{i≠i_max} (x_i / x_{i_max})^p ) − log n ) / p )

If this is applied (using standard numpy routines including numpy.log1p) to another 100 uniformly distributed samples over [0.2, 2.0], one finds that the rearranged formula agrees essentially exactly with the simple formula, but remains valid for much larger values of p for which the simple formula overflows when computing powers of x_i.

[Plot: simple formula vs. rearranged formula as a function of p]

(Note that the left-hand plot has the blue curve for the simple formula shifted up by 0.1 so that one can see where it ends due to overflows. For p less than about 1000, the two curves would otherwise be indistinguishable.)
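A sketch of the rearranged large-p formula (a fresh sample again, so the overflow threshold will not match the plot exactly; the numpy routine names are standard, but the function name is mine):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.2, 2.0, 100)

def power_mean_large_p(x, p):
    """Power mean with the largest element factored out, so every
    remaining ratio is <= 1 and cannot overflow when raised to p."""
    x = np.asarray(x, dtype=float)
    xmax = np.max(x)
    r = x / xmax                  # all ratios in (0, 1], one of them exactly 1
    s = np.sum(r ** p) - 1.0      # contribution of everything except the max
    # log M = log xmax + (log1p(s) - log n) / p
    return xmax * np.exp((np.log1p(s) - np.log(x.size)) / p)

p = 5000.0
print(power_mean_large_p(x, p))   # finite even though x**p would overflow
```

The rearrangement is exact algebraically, so at moderate p it matches the direct formula to rounding error, while at large p the small ratios merely underflow to zero instead of overflowing.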


I think the answer here is to use a recursive solution. In the same way that mean(1,2,3,4) = mean(mean(1,2), mean(3,4)), you can apply the same kind of recursion to generalized means. What this buys you is that you do fewer sums of really large numbers, which decreases the likelihood of overflow. The other danger when working with floating-point numbers is adding numbers of very different magnitudes (or subtracting numbers of very similar magnitudes), so to avoid those rounding errors it might help to sort your data before you calculate the generalized mean.
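A sketch of this recursion. The identity M_p(x) = M_p(M_p(left half), M_p(right half)) holds exactly when the two halves have equal size, so this version falls back to the direct formula for odd-length chunks (that fallback, and the function name, are my choices, not from the answer):

```python
import numpy as np

def pairwise_power_mean(x, p):
    """Recursive (pairwise) power mean: split the data in half, take the
    power mean of each half, then combine the two results."""
    x = np.asarray(x, dtype=float)
    n = x.size
    if n == 1:
        return x[0]
    if n % 2 == 1:
        # odd length: the equal-split identity fails, use the direct formula
        return np.mean(x ** p) ** (1.0 / p)
    half = n // 2
    left = pairwise_power_mean(x[:half], p)
    right = pairwise_power_mean(x[half:], p)
    return (0.5 * (left ** p + right ** p)) ** (1.0 / p)

x = np.sort(np.array([4.0, 1.0, 16.0, 64.0]))  # sorting first, per the answer
print(pairwise_power_mean(x, 2.0))
```

Each recursive combine only ever adds two p-th powers of comparable intermediate means, rather than summing n terms of wildly different magnitude at once.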


Here's a hunch:

First convert all your numbers into a representation in base p. Now to raise to a power of 1/p or p, you just have to shift them --- so you can very easily do all powers without losing precision.

Work out your mean in base p, then convert the result back to base two.


If that doesn't work, an even less practical hunch:

Try working out the discrete Fourier transform, and relating that to the discrete Fourier transform of the input vector.

