Analysis of algorithms

I am reading an algorithm analysis topic. Here is a text snippet from the book:

When n doubles, the running time goes up by a factor of 2 for linear programs, 4 for quadratic programs, and 8 for cubic programs. Programs that run in logarithmic time take only an additive constant longer when n doubles, and programs that run in O(n log n) take slightly more than twice as long to run under the same circumstances.

These increases can be hard to spot if the lower-order terms have relatively large coefficients and n is not large enough.

My question is: what does the author mean by "lower-order terms have relatively large coefficients"? Can anyone explain with an example?

Thanks!


Suppose your algorithm actually executes n^2 + 1000 n computations when run on n elements. Now for n = 1 you need 1001 computations, and for n = 2 you need 2004. The difference from linear growth is tiny, and you can hardly spot the quadratic contribution!

Asymptotically, however, your algorithm takes O(n^2) steps, so asymptotically (as n gets large) doubling the input size quadruples your runtime. But for our small value, doubling from 1 to 2 did not quadruple the runtime! The lower-order term is n, and its coefficient (1000) is large compared to the coefficient of the leading-order term n^2 (which is 1).
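
A minimal sketch, using the hypothetical cost function n^2 + 1000n from above, makes this visible: the doubling factor creeps from about 2 toward 4 only once n dwarfs the coefficient 1000.

```python
def cost(n):
    # Hypothetical operation count: quadratic leading term (coefficient 1),
    # linear lower-order term with a relatively large coefficient (1000).
    return n**2 + 1000 * n

# How much does the cost grow when n doubles?
for n in [1, 10, 100, 1_000, 10_000, 100_000]:
    print(f"n = {n:>7}: cost(2n) / cost(n) = {cost(2 * n) / cost(n):.2f}")
```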

This shows how the asymptotic complexity does not say anything about particular, especially small values. It's merely a limiting statement about the behaviour as n gets large.


When using O notation, you specify the largest term of the function that bounds your performance. For example, if the performance were always bounded by f = c3*n^3 + c2*n^2 + c1*n + c0, you would say it is O(n^3). The author is saying that when n is small, the coefficients may have a larger impact on performance than n does. For example, if c2 were very large and c3 very small, and you only went by the relative performance for specific small instances of n, the performance might appear to be O(n^2) until n grows large enough to dominate the coefficients.
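
As a concrete (made-up) instance, take c3 = 0.001 and c2 = 1000 with the other coefficients zero. The doubling factor sits near the quadratic value of 4 for small n and only approaches the cubic value of 8 once n is very large:

```python
def f(n):
    # Hypothetical bound with a tiny cubic coefficient and a huge quadratic one.
    c3, c2 = 0.001, 1000.0
    return c3 * n**3 + c2 * n**2

# Doubling factor: ~4 while the quadratic term dominates, ->8 as the cubic wins.
for n in [10, 1_000, 1_000_000, 100_000_000]:
    print(f"n = {n:>11}: f(2n) / f(n) = {f(2 * n) / f(n):.2f}")
```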


Asymptotic notation refers to the bounds on the runtime as n → infinity. So, a function that is O(n log n) may have an actual runtime of 0.1*n log n + 100000*n.

In this case, the 100000*n term is the "lower-order term". As n → infinity, this term is overpowered by the 0.1*n log n term.

However, as you can see, for small n, the 100000*n term will dominate the runtime.
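
A rough illustration, keeping this answer's made-up coefficients: the share of the total runtime contributed by the n log n term stays tiny at every practical size, since with these particular numbers the linear term wins until log2(n) exceeds a million, i.e. until n > 2^1000000.

```python
import math

def runtime(n):
    # Hypothetical runtime from above: 0.1 * n log n + 100000 * n.
    return 0.1 * n * math.log2(n) + 100_000 * n

for n in [10, 10**6, 10**12]:
    share = (0.1 * n * math.log2(n)) / runtime(n)
    print(f"n = 10^{round(math.log10(n)):>2}: n log n term is {share:.4%} of the total")
```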


For instance, if you have an O(n) algorithm, at lower scales you could have T(n) = 490239*n + (insert ridiculous constant here), which means the performance would look bad, but as the scale increases you see that the growth is always linear.

A real-world example is merge sort, which is O(n log n). The problem is that recursion has a computational cost, or overhead, that is a factor of n; since that is a smaller order than n log n, it gets discarded in the Big-O. But that factor can get quite large as well and affect performance.
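
A hedged sketch of this: instrument a textbook merge sort (not any particular library's implementation) to count recursive calls, the roughly 2n lower-order overhead, separately from comparisons, the n log n term.

```python
import random

def merge_sort(a, stats):
    stats["calls"] += 1  # per-call overhead: about 2n - 1 calls in total, an O(n) term
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid], stats)
    right = merge_sort(a[mid:], stats)
    # Merge the two sorted halves, counting element comparisons.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        stats["comparisons"] += 1  # roughly n log2 n comparisons in total
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

for n in [1_000, 2_000, 4_000]:
    stats = {"calls": 0, "comparisons": 0}
    merge_sort([random.random() for _ in range(n)], stats)
    print(f"n = {n}: {stats['calls']} calls, {stats['comparisons']} comparisons")
```

If each call is expensive (function-call cost, slicing, allocation), that O(n) term can visibly affect measured running time even though Big-O discards it.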
