What makes table lookups so cheap?

A while back, I learned a little bit about big O notation and the efficiency of different algorithms.

For example, looping through each item in an array to do something with it

foreach(item in array)
    doSomethingWith(item)

is an O(n) algorithm, because the number of cycles the program performs is directly proportional to the size of the array.

What amazed me, though, was that table lookup is O(1). That is, looking up a key in a hash table or dictionary

value = hashTable[key]

takes the same number of cycles regardless of whether the table has one key, ten keys, a hundred keys, or a gigabrajillion keys.

This is really cool, and I'm very happy that it's true, but it's unintuitive to me and I don't understand why it's true.

I can understand the first O(n) algorithm, because I can compare it to a real-life example: if I have sheets of paper that I want to stamp, I can go through each paper one-by-one and stamp each one. It makes a lot of sense to me that if I have 2,000 sheets of paper, it will take twice as long to stamp using this method as it would if I had 1,000 sheets of paper.

But I can't understand why table lookup is O(1). I'm thinking that if I have a dictionary, and I want to find the definition of polymorphism, it will take me O(log n) time to find it: I'll open some page in the dictionary and see if it's alphabetically before or after polymorphism. If, say, it was after the P section, I can eliminate all the contents of the dictionary after the page I opened and repeat the process with the remainder of the dictionary until I find the word polymorphism.

This is not an O(1) process: it will usually take me longer to find words in a thousand-page dictionary than in a two-page dictionary. I'm having a hard time imagining a process that takes the same amount of time regardless of the size of the dictionary.
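
In code, that page-halving strategy is just binary search; a minimal C sketch over a sorted word list:

#include <stdio.h>
#include <string.h>

/* Binary search: the page-halving dictionary strategy, O(log n).
   The word list must be sorted, just like a dictionary. */
int find(const char *words[], int n, const char *target)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        int cmp = strcmp(target, words[mid]);
        if (cmp == 0) return mid;       /* found it */
        if (cmp < 0)  hi = mid - 1;     /* target comes earlier alphabetically */
        else          lo = mid + 1;     /* target comes later alphabetically */
    }
    return -1;  /* not in the dictionary */
}

int main(void)
{
    const char *dict[] = { "array", "big-o", "hash", "polymorphism", "table" };
    printf("found at index %d\n", find(dict, 5, "polymorphism"));  /* prints 3 */
    return 0;
}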

tl;dr: Can you explain to me how it's possible to do a table lookup with O(1) complexity?

(If you show me how to replicate the amazing O(1) lookup algorithm, I'm definitely going to get a big fat dictionary so I can show off to all of my friends my ninja-dictionary-looking-up skills)

EDIT: Most of the answers seem to be contingent on this assumption:

You have the ability to access any page of a dictionary given its page number in constant time

If this is true, it's easy for me to see. But I don't know why this underlying assumption is true: I would use the same process to look up a page by number as I would by word.

Same thing with memory addresses: what algorithm is used to load from a memory address? What makes it so cheap to find a piece of memory given its address? In other words, why is memory access O(1)?


You should read the Wikipedia article.

But the essence is that you first apply a hash function to your key, which converts it to an integer index (this is O(1)). This is then used to index into an array, which is also O(1). If the hash function has been well designed, there should be only one item (or a few) stored at each location in the array, so the lookup is complete.

So in massively-simplified pseudocode:

ValueType array[ARRAY_SIZE];   /* one slot per possible index */

void insert(KeyType k, ValueType v)
{
    int index = hash(k) % ARRAY_SIZE;  /* map the key into the array's range */
    array[index] = v;
}

ValueType lookup(KeyType k)
{
    int index = hash(k) % ARRAY_SIZE;  /* same key, same index, every time */
    return array[index];
}

Obviously, this doesn't handle collisions, but you can read the article to learn how that's handled.

Update

To address the edited question, indexing into an array is O(1) because under the hood, the CPU is doing this:

    ADD index, array_base_address -> pointer
    LOAD pointer -> some_cpu_register

where LOAD loads data stored in memory at the specified address.
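
In C terms that's the whole story: indexing is one address computation plus one load, so the cost doesn't depend on the value of the index. A tiny sketch (the sizes here are arbitrary):

#include <assert.h>

int main(void)
{
    int array[100] = {0};
    int i = 42;
    /* array[i] is computed as base_address + i * sizeof(int), then one LOAD;
       the work is identical whether i is 0 or 99. */
    assert(array + i == &array[i]);
    assert(array[i] == *(array + i));
    return 0;
}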

Update 2

And the reason a load from memory is O(1) is really just because this is an axiom we usually specify when we talk about computational complexity (see http://en.wikipedia.org/wiki/RAM_model). If we ignore cache hierarchies and data-access patterns, then this is a reasonable assumption. As we scale the size of the machine, this may not be true (a machine with 100TB of storage may not take the same amount of time as a machine with 100kB). But usually, we assume that the storage capacity of our machine is constant, and much, much bigger than any problem size we're likely to look at. So for all intents and purposes, it's a constant-time operation.


I'll address the question from a different perspective from everyone else's. Hopefully this will shed light on why accessing x[45] and accessing x[5454563] take the same amount of time.

RAM is laid out in a grid (i.e. rows and columns) of capacitors. RAM addresses a particular cell of memory by activating a particular column and row on the grid. So say you have 16 bytes of RAM laid out in a 4x4 grid (insanely small for a modern computer, but sufficient for illustrative purposes), and you're trying to access memory address 13 (binary 1101): you first split the address into a row and a column, i.e. row 3 (11) and column 1 (01).

Let's suppose a 0 means taking the left intersection and a 1 means taking the right intersection. So when you want to activate row 3, you send an army of electrons into the row starting gate; the row-army electrons go right, then right again, to reach the row 3 activation gate. Next you send another army of electrons into the column starting gate; the column-army electrons go left, then right, to reach the column 1 activation gate. A memory cell can only be read/written if both its row and its column are activated, so this allows the marked cell to be read/written.
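
In code, picking the row and column from an address is just a shift and a mask; a sketch for the hypothetical 4x4 grid above:

#include <stdio.h>

int main(void)
{
    /* 16-byte RAM as a 4x4 grid: the top two address bits select the
       row, the bottom two select the column. */
    unsigned address = 13;          /* binary 1101 */
    unsigned row = address >> 2;    /* 11 -> row 3 */
    unsigned col = address & 0x3;   /* 01 -> column 1 */
    printf("address %u -> row %u, column %u\n", address, row, col);
    return 0;
}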


The effect of all this gibberish is that the access time of a memory address depends on the address length, not on the particular address itself; if an architecture uses a 32-bit address space (i.e. 32 intersections), then addressing memory address 45 and addressing memory address 5454563 both still have to pass through all 32 intersections (actually, 16 intersections for the row electrons and 16 for the column electrons).

Note that in reality memory addressing takes very little time compared to charging and discharging the capacitors, so even if we start having a 512-bit address space (enough for ~1.4*10^130 yottabytes of RAM, i.e. enough to keep everything under the sun in your RAM), which means the electrons would have to go through 512 intersections, it wouldn't really add much to the actual memory access time.

Note that this is a gross oversimplification of modern RAM. In modern DRAM, if you want to access subsequent memory addresses you change only the column, without spending time changing the row, so accessing sequential memory is much faster than accessing totally random addresses. Also, this description completely ignores the effect of the CPU cache (although the CPU cache uses a similar grid addressing scheme, it is built from much faster transistor-based cells, so the decoding penalty of a large address space matters much more there). However, the point still holds: if you're jumping around memory, accessing any one address takes the same amount of time.


You're right, it's surprisingly difficult to find a real-world example of this. The idea, of course, is that you're looking for something by address and not by value.

The dictionary example fails because you don't immediately know the location of, say, page 278. You still have to look that up the same way you would a word, because the page locations are not in your memory.

But say I marked a number on each of your fingers and then told you to wiggle the one with 15 written on it. You'd have to look at each of them (assuming they're unsorted), and if it's not 15, you check the next one. O(n).

If I told you to wiggle your right pinky, though, you don't have to look anything up. You know where it is because I just told you where it is. The value I just passed to you is its address in your "memory."

It's kind of like that with databases, but on a much larger scale than just 10 fingers.


Because the work is done up front -- the value is put in a bucket that is easily accessible given the hash code of the key. It would be like wanting to look up a word in the dictionary when you had already marked the exact page the word was on.


Imagine you had a dictionary where everything starting with letter A was on page 1, letter B on page 2...etc. So if you wanted to look up "balloon" you would know exactly what page to go to. This is the concept behind O(1) lookups.

Arbitrary data input => maps to a specific memory address

The trade-off, of course, is that you need to allocate memory for all the potential addresses, many of which may never be used.
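
A toy C sketch of that first-letter scheme (assuming lowercase words; the page numbering is made up for illustration):

#include <stdio.h>

/* Hypothetical one-page-per-letter dictionary: 'a' words on page 1,
   'b' words on page 2, and so on. Computing the page is O(1) no
   matter how many words the dictionary holds. */
int page_for(const char *word)
{
    return word[0] - 'a' + 1;
}

int main(void)
{
    printf("\"balloon\" is on page %d\n", page_for("balloon"));  /* page 2 */
    return 0;
}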


If you have an array with 999999999 locations, how long does it take to find a record by social security number?

Assuming you don't have that much memory, allocate about 30% more array locations than the number of records you intend to store, and then write a hash function to look records up instead.

A very simple (and probably bad) hash function would be social % numElementsInArray.

The problem is collisions -- you can't guarantee that every location holds only one element. But that's OK: instead of storing the record at the array location, you can store a linked list of records. Then you scan linearly for the element you want once you hash to get the right array location.

Worst case this is O(n) -- everything goes to the same bucket. Average case is O(1), because in general, if you allocate enough buckets and your hash function is good, records don't collide very often.
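
A minimal C sketch of this bucket-and-chain scheme (the record layout and bucket count are invented for illustration; the hash is the same social % numBuckets idea from above):

#include <stdlib.h>

#define NUM_BUCKETS 1337  /* ~30% more buckets than the records we expect */

struct record {
    long social;           /* the key */
    const char *name;      /* the payload */
    struct record *next;   /* next record in the same bucket */
};

struct record *buckets[NUM_BUCKETS];

/* O(1) on average: hash straight to a bucket, then scan its short chain. */
struct record *find(long social)
{
    struct record *r = buckets[social % NUM_BUCKETS];
    while (r != NULL && r->social != social)
        r = r->next;       /* linear scan, but chains stay short */
    return r;              /* NULL if not found */
}

void insert(struct record *r)
{
    struct record **head = &buckets[r->social % NUM_BUCKETS];
    r->next = *head;       /* push onto the front of the chain */
    *head = r;
}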


OK, hash tables in a nutshell:

You take a regular array (O(1) access), and instead of using a regular int value to access it, you use MATH.

What you do is take the key value (let's say a string), compute a number from it (some function of the characters), and then use a well-known mathematical formula that gives you a relatively good distribution over the array's range.

So in that case you are just doing 4-5 calculations (O(1)) to get an object from that array, using a key which isn't an int.
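
For example, one well-known "function on the characters" is the classic djb2 string hash (a sketch; real hash table implementations vary):

/* djb2: a classic string hash. It visits each character once, so for
   keys of bounded length the whole computation is a handful of
   operations -- effectively O(1). */
unsigned long hash(const char *s)
{
    unsigned long h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h;   /* then take h % array_size to get an index */
}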

Now, avoiding collisions and finding the right mathematical formula for a good distribution is the hard part. That's what is explained pretty well in the Wikipedia article: en.wikipedia.org/wiki/Hash_table


Lookup tables know exactly how to access the given item in the table beforehand. The complete opposite of, say, finding an item by its value in a sorted array, where you have to access items to check whether each is the one you want.


In theory, a hashtable is a series of buckets (addresses in memory) and a function that maps objects from a domain into those buckets.

Say your domain is 3-letter words: you'd block out 26^3 = 17,576 addresses for all the possible 3-letter words and create a function that maps every 3-letter word to one of those addresses, e.g., aaa=0, aab=1, etc. Now when you have a word you'd like to look up, say, "and", you know immediately from your O(1) function that it is address number 341.
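
In C, that mapping is just reading the word as a base-26 number; a small sketch:

#include <stdio.h>

/* Map a lowercase 3-letter word to an index in [0, 26^3):
   aaa=0, aab=1, ..., zzz=17575. One formula, O(1). */
int index_of(const char *w)
{
    return (w[0] - 'a') * 26 * 26 + (w[1] - 'a') * 26 + (w[2] - 'a');
}

int main(void)
{
    printf("\"and\" -> address %d\n", index_of("and"));  /* prints 341 */
    return 0;
}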
