I want to know how it is possible to determine the run time of an algorithm written in pseudocode so that I can familiarize myself with run time. So for example, how do you know what the run time of an algorithm that will compare 2 arrays to determine if they are not the same?
Array 1 = [1, 5, 3, 2, 10, 12] Array 2 = [3, 2, 1, 5, 10, 12] So these two arrays are not the same since they are ordered differently.
My pseudocode is:
1) set current pointer to first number in first array
2) set second pointer to first number in second array
3) while (current pointer != "") compare with same position element in other array
4) if (current pointer == second pointer) move current pointer to next number, move second pointer to next number
5) else output that arrays are not the same, end loop
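For reference, here is one way the pseudocode above might look in Python (the function name `arrays_equal` is just for illustration):

```python
def arrays_equal(a, b):
    """Element-wise comparison; stops at the first mismatch."""
    if len(a) != len(b):
        return False
    i = 0                 # the "current pointer" into both arrays
    while i < len(a):     # the loop runs at most n times
        if a[i] != b[i]:  # first mismatch: the arrays are not the same
            return False
        i += 1
    return True

print(arrays_equal([1, 5, 3, 2, 10, 12], [3, 2, 1, 5, 10, 12]))  # False
```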
So I am assuming first off that my code is correct. I know step 5 executes at most once, since it only takes one mismatch to display that the arrays are not the same, so step 5 takes only constant time (1). I know steps 1 and 2 also execute only once.
So far I know the run time is 3 + ? (? being the run time of the loop itself).
Now I am lost on the loop part. Does the loop run n times (n being the number of elements in the array), since in the worst case every single number gets compared? Am I thinking of run time in the right way?
If someone can help with this, I'd appreciate it.
Thanks!
What you are asking about is called the time complexity of your algorithm. We talk about the time complexity of algorithms using so-called Big-O notation.
Big-O notation is a way of describing the approximate number of steps an algorithm takes, relative to the size of its input, in the worst possible case for an input of that size.
Your algorithm runs in O(n) time (pronounced "big-oh of n" or "order n"; sometimes we just say "linear time").
You already know that steps 1, 2, and 4 each take a constant number of steps per execution, regardless of the size of the array. We say that those steps run in O(1) time ("constant time").
So let's consider step 3:
If there are n elements in the array, then step 3 needs to do n comparisons in the worst case. So we say that step 3 takes O(n) time.
Since the algorithm takes O(n) time on step 3, and all the other steps are faster, we say that the total time complexity of your algorithm is O(n).
When we write O(f), where f is some function, we mean that the algorithm runs within some constant factor of f for large input values.
Take your algorithm for example. For large values of n (say n = 1000), the algorithm doesn't take exactly n steps. Suppose that a comparison takes 5 instructions to complete in your algorithm, on your machine of choice. (It could be any constant number, I'm just choosing 5 for example.) And suppose that steps 1, 2, 4 all take some constant number of steps each, totalling 10 instructions for all three of those steps.
Then for n = 1000 your algorithm would take:
Steps 1 + 2 + 4 = 10 instructions. Step 3 = 5*1000 = 5000 instructions.
This is a total of 5010 instructions. That is about 5*n instructions, which is within a constant factor of n, which is why we say the algorithm is O(n).
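As a sketch, that hypothetical instruction count can be written out as a function of n, and you can watch the ratio to n settle at the constant factor (the costs 5 and 10 are the assumed values from above, not measured ones):

```python
def instruction_count(n):
    # assumed costs: 5 instructions per comparison, 10 total for steps 1, 2, 4
    return 5 * n + 10

for n in [10, 1000, 1_000_000]:
    # the ratio instruction_count(n) / n approaches the constant factor 5
    print(n, instruction_count(n), instruction_count(n) / n)
```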
For very large n, the 10 in f = 5*n + 10 becomes more and more insignificant, as does the factor of 5. For this reason, we simply reduce this to "f is within a constant factor of n for large n" by saying that f is in O(n).
In this way it's easy to describe the idea that a quadratic function like f1 = n^2 + 2 is always larger than any linear function like f2 = 10000*n + 50000 when n is large enough, by simply writing f1 as O(n^2) and f2 as O(n).
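A quick numeric check of that idea, using the two example functions above:

```python
def f1(n):  # quadratic: n^2 + 2
    return n**2 + 2

def f2(n):  # linear, but with large constants: 10000*n + 50000
    return 10000 * n + 50000

# For small n the linear function is bigger...
print(f1(10) < f2(10))           # True
# ...but once n is large enough, the quadratic always wins.
print(f1(100000) > f2(100000))   # True
```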
You are correct. The running time is O(n) where n is the number of elements in the arrays. Each time you add 1 element to the arrays, you would have to execute the loop 1 more time in the worst case.
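One way to see this concretely is to count the loop iterations directly. Identical arrays are the worst case, since every element must be compared; each added element adds exactly one iteration. (A sketch, with a hypothetical helper `comparisons_needed`.)

```python
def comparisons_needed(a, b):
    """Count how many loop iterations the comparison performs."""
    count = 0
    for x, y in zip(a, b):
        count += 1
        if x != y:   # first mismatch ends the loop early
            break
    return count

# Worst case: identical arrays of size n need exactly n comparisons.
for n in [5, 6, 7]:
    arr = list(range(n))
    print(n, comparisons_needed(arr, arr))  # n grows by 1, count grows by 1
```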