When I analyzed the complexity of the code segment below, I found that it is O(n/2). But while searching the internet I discovered that it is probably O(n). I'd like to know who's correct.
void function(int n) {
    int i = 1, k = 100;
    while (i < n) {   /* i takes the values 1, 3, 5, ... while below n  */
        k++;          /* constant-time work on each pass                */
        i += 2;       /* step of 2, so the loop runs about n/2 times    */
    }
}
What is the point of the variable k in the above method? Regardless, big-O notation describes behavior in the limit (as the value of n approaches infinity). As such, big-O notation is agnostic to BOTH scaling factors and constants. Which is to say, for any constant "c" and scaling factor "s":
O(f(n)) is equivalent to O(s*f(n) + c)
In your case f(n) = n, s = 1/2, and c = 0. So...
O(n) = O(n/2)
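To make that equivalence precise, here is a sketch, assuming s > 0, c >= 0, and f(n) >= 1 for all sufficiently large n:

s*f(n) + c <= (s + c) * f(n) whenever f(n) >= 1

so s*f(n) + c is O(f(n)). Conversely, f(n) <= (1/s) * (s*f(n) + c), so f(n) is O(s*f(n) + c), and the two classes coincide.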
O(n) is the same as O(n/2)
The idea of big-O notation is to understand how fast an algorithm will run as you give it larger and larger input. For example, if you double the size of your input, will the program take twice as long, or will it take four times as long?
Both n and n/2 behave identically as you vary n: if you increase n by a factor of 10, both n and n/2 grow by that same factor of 10. That is why they describe the same growth rate.
O(n/2) = O(0.5n) = O(n). See Wikipedia for more on this.
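To see this concretely, here is a minimal sketch in C (the helper count_steps is hypothetical, added only to instrument the question's loop with a counter):

#include <stdio.h>

/* Same loop as in the question, but it returns how many
 * times the body executes instead of bumping k. */
static long count_steps(long n) {
    long steps = 0;
    for (long i = 1; i < n; i += 2)   /* mirrors: while (i < n) { ...; i += 2; } */
        steps++;
    return steps;
}

int main(void) {
    long sizes[] = {1000, 2000, 4000, 8000};
    for (int j = 0; j < 4; j++)
        printf("n = %4ld  ->  steps = %4ld\n", sizes[j], count_steps(sizes[j]));
    return 0;
}

Each time n doubles, the reported step count doubles as well (it is always about n/2). That proportional growth is exactly what O(n) expresses.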
If f is O(g), then there exist constants c > 0 and n0 such that for all x > n0, |f(x)| <= c * |g(x)|. That is, from input n0 onwards, c * g(x) dominates f(x).
It follows that O(n/2) = O(n), because:
- If f(x) = x/2 and g(x) = x, then we can set c = 0.5 and n0 = 0.
- If f(x) = x and g(x) = x/2, then we can set c = 2 and n0 = 0.
Note that there are infinitely many values for c and n0 that you can use to prove this. (In the above I minimized them, but that is not necessary.)
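One way to see that last remark: if |f(x)| <= c * |g(x)| holds for all x > n0, then for any c' >= c and n0' >= n0 we also have

|f(x)| <= c * |g(x)| <= c' * |g(x)| for all x > n0'

so every larger pair (c', n0') is a valid witness too; bigger constants only make the bound looser.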