How can I calculate the Big O complexity of my program?

I have a Big O notation question. Say I have a Java program that does the following things:

  1. Read an Array of Integers into a HashMap that keeps track of how many occurrences of each Integer exist in the array. [1,2,3,1] would be [1->2, 2->1, 3->1].

  2. Then I grab the Keys from the HashMap and place them in an Array:

    Set<Integer> keys = dictionary.keySet();
    Integer[] keysToSort = new Integer[keys.size()];
    keys.toArray(keysToSort);
    
  3. Sort the keyArray using Arrays.sort.

  4. Then iterate through the sorted keyArray, grabbing the corresponding value from the HashMap, in order to display or format the results. (A rough end-to-end sketch of all four steps follows below.)
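
Putting the four steps together, a minimal sketch of the whole thing (the class name FrequencyCount and the hard-coded sample input are placeholders of mine, not from the question):

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    public class FrequencyCount {
        public static void main(String[] args) {
            int[] input = {1, 2, 3, 1};

            // Step 1: count occurrences - O(n) if each HashMap insert is O(1)
            Map<Integer, Integer> dictionary = new HashMap<Integer, Integer>();
            for (int value : input) {
                Integer count = dictionary.get(value);
                dictionary.put(value, count == null ? 1 : count + 1);
            }

            // Step 2: copy the keys into an array - O(k) for k distinct values
            Set<Integer> keys = dictionary.keySet();
            Integer[] keysToSort = new Integer[keys.size()];
            keys.toArray(keysToSort);

            // Step 3: sort the keys - O(k log k)
            Arrays.sort(keysToSort);

            // Step 4: walk the sorted keys, looking up each count - k lookups
            for (Integer key : keysToSort) {
                System.out.println(key + " -> " + dictionary.get(key));
            }
        }
    }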

I think I know the following:

  • Step 1 is O(n)
  • Step 3 is O(n log n) if I'm to believe the Java API
  • Step 4 is O(n)

  • Step 2: When doing this type of calculation I should know how Java implements the Set toArray method. I would assume that it iterates through the HashMap retrieving the keys. If that's the case I'll assume it's O(n).

If sequential operations mean I simply add each part, then the final calculation would be O(n + n·log n + n + n) = O(3n + n·log n).

Skip the constants and you have O(n + n log n). Can this be reduced any further or am I just completely wrong?


I believe O(n + n log n) can be simplified further to just O(n log n). This is because the n term becomes asymptotically insignificant compared to n log n: they are different orders of growth, and n log n is the higher one. You can verify this on the Wikipedia page for Big O notation by scrolling down to the "Orders of common functions" section.


When using complex data structures like hash maps, you do need to know how they retrieve objects; not all data structures have the same retrieval process or take the same time to retrieve elements.

This might help you with finding the Big O of the complex data types in Java: http://www.coderfriendly.com/wp-content/uploads/2009/05/java_collections_v2.pdf


  • Step 2 takes O(capacity of the map).

  • Steps 1 and 4 can get bad if you have many keys with the same hash code (i.e. O(number of those keys) for a single lookup or change, multiplied by the number of those lookups/changes); see the sketch after this list.

  • O(n + n·log n) = O(n·log n)
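
To make the collision point concrete, here is a hypothetical key type (not from the question) whose hashCode sends every instance to the same bucket, so HashMap operations degrade towards a linear scan of the colliding keys. Newer JDKs soften this by treeifying large buckets, particularly when the keys are Comparable, but the general point stands.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical worst-case key: every instance returns the same hash code,
    // so all entries land in a single HashMap bucket.
    final class BadKey {
        private final int id;

        BadKey(int id) {
            this.id = id;
        }

        @Override
        public int hashCode() {
            return 42; // deliberately awful: every key collides
        }

        @Override
        public boolean equals(Object other) {
            return other instanceof BadKey && ((BadKey) other).id == id;
        }
    }

    class CollisionDemo {
        public static void main(String[] args) {
            Map<BadKey, Integer> map = new HashMap<BadKey, Integer>();
            for (int i = 0; i < 10_000; i++) {
                map.put(new BadKey(i), i); // each put has to walk the one long bucket
            }
            System.out.println(map.get(new BadKey(9_999))); // lookup is likewise linear in the bucket size
        }
    }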


You are correct to worry a little about step 2. As far as I can tell the Java API does not specify running times for these operations.

As for O(n + n log n), Treebranch is right: you can reduce it to O(n log n). The reason is that for every constant c > 0 there is a base value n0 such that n log n > c·n for all n > n0. This clearly holds, since no matter what number you choose for c, you can take n0 = 2^c + 1, at which point log n > c.
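
Written out as a bound (my own restatement of the same argument, using a base-2 log):

    \[
      \forall c > 0:\quad n > 2^{c} \;\Longrightarrow\; \log_2 n > c \;\Longrightarrow\; n \log_2 n > c\,n .
    \]
    \[
      \text{In particular } n \le n\log_2 n \text{ for } n \ge 2,\ \text{so } n + n\log n \le 2\,n\log n = O(n\log n).
    \]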


First,

Step 1 is only O(n) if inserting integers into a HashMap is O(1). In Perl, the worst case for inserting N items into a hash is O(N) in total (i.e. amortised O(1) per insert), and that's if you discount the length of the key (which is acceptable here). HashMap could be less efficient depending on how it addresses certain issues.

Second,

O(N) is O(N log N), so O(N + N log N) is O(N log N).


One thing big O doesn't tell you is how big the scaling factor is. It also assumes you have an ideal machine. The reason this is important is that reading from a file is likely to be far more expensive than everything else you do.

If you actually time this you will get something like startup cost + read time. The startup cost is likely to be the largest component even for one million records. The read time will be proportional to the number of bytes read (i.e. the length of the numbers can matter). If you have 100 million records, the read time is likely to be more important. If you have one billion records, a lot will depend on the number of unique entries rather than the total number of entries. The number of unique entries is limited to ~2 billion.

BTW: To perform the counting more efficiently, try TIntIntHashMap, which can minimise object creation, making it several times faster.
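
For illustration only, a counting loop with TIntIntHashMap might look like the sketch below. I am assuming GNU Trove 3.x here (package gnu.trove.map.hash and the adjustOrPutValue method); check the version you actually have, since older releases use a different package.

    // Assumes GNU Trove 3.x on the classpath; the package name differs in older versions.
    import gnu.trove.map.hash.TIntIntHashMap;

    public class PrimitiveCount {
        public static void main(String[] args) {
            int[] input = {1, 2, 3, 1};

            TIntIntHashMap counts = new TIntIntHashMap();
            for (int value : input) {
                // Add 1 if the key is present, otherwise insert it with the value 1.
                // Works on primitive ints, so there is no Integer boxing at all.
                counts.adjustOrPutValue(value, 1, 1);
            }

            for (int key : counts.keys()) {
                System.out.println(key + " -> " + counts.get(key));
            }
        }
    }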

Of course I am only talking about real machines which big O doesn't consider ;)

The point I am making is that you can do a big O calculation but it will not be informative as to how a real application will behave.
