Suppose you have a set of two-letter words and their definitions. You want to be able to look up the definition of any word, very quickly. The two-letter word is the key that addresses the definition.

Since there are 26 English letters, there are 26 * 26 = 676 possible two-letter words. To implement a dictionary, we declare an array of 676 references, all initially set to null. To insert a Definition into the dictionary, we define a function hashCode() that maps each two-letter word (key) to a unique integer between 0 and 675. We use this integer as an index into the array, and make the corresponding bucket (array position) point to the Definition object.

public class Word {
  public static final int LETTERS = 26, WORDS = LETTERS * LETTERS;
  public String word;

  public int hashCode() {                    // Map a two-letter Word to 0...675.
    return LETTERS * (word.charAt(0) - 'a') + (word.charAt(1) - 'a');
  }
}

public class WordDictionary {
  private Definition[] defTable = new Definition[Word.WORDS];

  public void insert(Word w, Definition d) {
    defTable[w.hashCode()] = d;              // Insert (w, d) into Dictionary.
  }

  Definition find(Word w) {
    return defTable[w.hashCode()];           // Return the Definition of w.
  }
}
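
As a quick usage sketch: the Definition stand-in class below is an assumption made only so the example compiles; the notes never define Definition.

class Definition {
  public String text;

  public Definition(String text) {
    this.text = text;
  }
}

class WordDictionaryDemo {
  public static void main(String[] args) {
    Word ox = new Word();
    ox.word = "ox";                          // hashCode() maps "ox" to 26 * 14 + 23 = 387.

    WordDictionary dict = new WordDictionary();
    dict.insert(ox, new Definition("a domesticated bovine animal"));
    System.out.println(dict.find(ox).text);  // Prints the definition just stored.
  }
}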

What if we want to store every English word, not just the two-letter words? The table defTable must be long enough to accommodate pneumonoultramicroscopicsilicovolcanoconiosis, 45 letters long. Unfortunately, declaring an array of length 26^45 is out of the question. English has fewer than one million words, so we should be able to do better.

In the Java programming language, every class implicitly or explicitly provides a hashCode() method, which digests the data stored in an instance of the class into a single hash value (a 32-bit signed integer). This hash is used by other code when storing or manipulating the instance; the values are intended to be evenly distributed for varied inputs, for use in hash tables.

Hash Tables (the most common implementation of dictionaries)

Suppose n is the number of keys (words) whose definitions we want to store, and suppose we use a table of N buckets, where N is perhaps a bit larger than n, but much smaller than the number of possible keys. A hash table maps a huge set of possible keys into N buckets by applying a compression function to each hash code. The obvious compression function is

h(hashCode) = hashCode mod N.

Hash codes are often negative, so remember that mod is not the same as Java’s remainder operator "%". If you compute hashCode % N, check if the result is negative, and add N if it is.
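
For instance, a compression method along these lines might look like the following sketch (the method and parameter names are made up for illustration):

// A minimal sketch: compress a (possibly negative) hash code into a bucket index.
private static int compress(int hashCode, int N) {
  int h = hashCode % N;            // Java's % can return a negative result here.
  if (h < 0) {
    h += N;                        // Shift negative results into the range [0, N - 1].
  }
  return h;
}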

With this compression function, no matter how long and variegated the keys are, we can map them into a table whose size is not much greater than the actual number of entries we want to store. However, we’ve created a new problem: several keys are hashed to the same bucket in the table if h(hashCode1) = h(hashCode2). This circumstance is called a collision.

How do we handle collisions without losing entries? We use a simple idea called chaining. Instead of having each bucket in the table reference one entry, we have it reference a linked list of entries, called a chain. If several keys are mapped to the same bucket, their definitions all reside in that bucket’s linked list.

Chaining creates a second problem: how do we know which definition corresponds to which word? The answer is that we must store each key in the table with its definition. The easiest way to do this is to have each listnode store an entry that has references to both a key (the word) and an associated value (its definition).

[Figure: chaining. Each bucket of the hash table references a linked list of the entries whose keys hash to that bucket.]

Hash tables usually support at least three operations. An Entry object references a key and its associated value.

public Entry insert(key, value)
// Compute the key’s hash code and compress it to determine the entry’s bucket.
// Insert the entry (key and value together) into that bucket’s list.
public Entry find(key)
// Hash the key to determine its bucket. Search the list for an entry with the given key. If found, return the entry; otherwise, return null.
public Entry remove(key)
// Hash the key to determine its bucket. Search the list for an entry with the given key. Remove it from the list if found. Return the entry or null.
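
Here is a minimal sketch of a chained hash table along these lines. The Entry class, the use of java.util.LinkedList for the chains, and the compression arithmetic are assumptions made for illustration, not the only way to implement chaining.

import java.util.Iterator;
import java.util.LinkedList;

class Entry {
  public Object key;
  public Object value;

  public Entry(Object key, Object value) {
    this.key = key;
    this.value = value;
  }
}

public class ChainedHashTable {
  private LinkedList<Entry>[] buckets;
  private int N;                                      // Number of buckets.

  @SuppressWarnings("unchecked")
  public ChainedHashTable(int N) {
    this.N = N;
    buckets = new LinkedList[N];
    for (int i = 0; i < N; i++) {
      buckets[i] = new LinkedList<Entry>();
    }
  }

  private int compress(int hashCode) {                // Map any hash code to [0, N - 1].
    int h = hashCode % N;
    if (h < 0) {
      h += N;
    }
    return h;
  }

  public Entry insert(Object key, Object value) {
    Entry e = new Entry(key, value);
    buckets[compress(key.hashCode())].add(e);         // Add to the bucket's chain.
    return e;
  }

  public Entry find(Object key) {
    for (Entry e : buckets[compress(key.hashCode())]) {
      if (e.key.equals(key)) {
        return e;                                     // First entry with a matching key.
      }
    }
    return null;                                      // Not found.
  }

  public Entry remove(Object key) {
    Iterator<Entry> it = buckets[compress(key.hashCode())].iterator();
    while (it.hasNext()) {
      Entry e = it.next();
      if (e.key.equals(key)) {
        it.remove();                                  // Unlink the entry from the chain.
        return e;
      }
    }
    return null;
  }
}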

What if two entries with the same key are inserted? There are two approaches.

  1. Following Goodrich and Tamassia, we can insert both, and have find() or remove() arbitrarily return/remove one. Goodrich and Tamassia also propose a method findAll() that returns all the entries with a given key.
  2. Replace the old value with the new one, so only one entry with a given key exists in the table.

Which approach is best? It depends on the application.

WARNING: When an object is stored as a key in a hash table, an application should never change the object in a way that will change its hash code. If you do so, the object will thenceforth be in the wrong bucket.

The load factor of a hash table is n/N, where n is the number of keys in the table and N is the number of buckets. If the load factor stays below one (or a small constant), and the hash code and compression function are "good," and there are no duplicate keys, then the linked lists are all short, and each operation takes O(1) time. However, if the load factor grows too large (n >> N), performance is dominated by linked list operations and degenerates to O(n) time (albeit with a much smaller constant factor than if you replaced the hash table with one singly-linked list). A proper analysis requires a little probability theory, so we’ll put it off until near the end of the semester.

Hash Codes and Compression Functions

key --> hash code --> [0, N-1]

Hash codes and compression functions are a bit of a black art. The ideal hash code and compression function would map each key to a uniformly distributed random bucket from zero to N - 1. By "random", I don’t mean that the function is different each time; a given key always hashes to the same bucket. I mean that two different keys, however similar, will hash to independently chosen integers, so the probability they’ll collide is 1/N. This ideal is tricky to obtain.

In practice, it’s easy to mess up and create far more collisions than necessary. Let’s consider bad compression functions first. Suppose the keys are integers, and each integer’s hash code is itself, so hashCode(i) = i.

Suppose we use the compression function h(hashCode) = hashCode mod N, and the number N of buckets is 10,000. Suppose for some reason that our application only ever generates keys that are divisible by 4. A number divisible by 4, reduced mod 10,000, is still divisible by 4, so three quarters of the buckets are never used! Thus the average bucket has about four times as many entries as it ought to.

The same compression function is much better if N is prime. With N prime, even if the hash codes are always divisible by 4, numbers larger than N often hash to buckets not divisible by 4, so all the buckets can be used.
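
As a quick check of this effect (the 40,000 keys and the prime 10,007 below are illustrative assumptions), we can count how many distinct buckets a stream of keys divisible by 4 actually reaches:

import java.util.HashSet;

public class CompressionDemo {
  // Count how many distinct buckets are hit when every key is divisible by 4.
  private static int bucketsUsed(int N, int numKeys) {
    HashSet<Integer> used = new HashSet<Integer>();
    for (int key = 0; key < 4 * numKeys; key += 4) {   // Keys 0, 4, 8, ...
      used.add(key % N);                               // hashCode(i) = i, then compress.
    }
    return used.size();
  }

  public static void main(String[] args) {
    System.out.println(bucketsUsed(10000, 40000));     // 2500: only a quarter of the buckets.
    System.out.println(bucketsUsed(10007, 40000));     // 10007: every bucket gets used.
  }
}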

For reasons I won’t explain (see Goodrich and Tamassia Section 9.2.4 if you’re interested),

h(hashCode) = ((a * hashCode + b) mod p) mod N

is a yet better compression function. Here, a, b, and p are positive integers, p is a large prime, and p >> N. Now, the number N of buckets doesn’t need to be prime.
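
A sketch of this compression function in Java might look like the following; the particular constants a, b, and p are illustrative choices, not prescribed values.

// A sketch of h(hashCode) = ((a * hashCode + b) mod p) mod N.  The constants a, b,
// and p are assumptions; p = 2^31 - 1 is a large prime, and long arithmetic avoids
// int overflow.
private static int compress(int hashCode, int N) {
  final long a = 127, b = 9973, p = 2147483647L;
  long h = ((a * (long) hashCode + b) % p) % N;
  if (h < 0) {
    h += N;                        // Fix a negative remainder.
  }
  return (int) h;
}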

I recommend always using a known good compression function like the two above. Unfortunately, it’s still possible to mess up by inventing a hash code that creates lots of conflicts even before the compression function is used. We’ll discuss hash codes next lecture.

Hash Codes

Since hash codes often need to be designed specially for each new object, you’re left to your own wits. Here is an example of a good hash code for Strings.

private static int hashCode(String key) {
  int hashVal = 0;
  for (int i = 0; i < key.length(); i++) {
    hashVal = (127 * hashVal + key.charAt(i)) % 16908799;   // A sort of base-127 number; "%" mixes up the bits.
  }
  return hashVal;
}

By multiplying the hash code by 127 before adding in each new character, we make sure that each character has a different effect on the final result. The "%" operator with a prime number tends to "mix up the bits" of the hash code. The prime is chosen to be large, but not so large that 127 * hashVal + key.charAt(i) will ever exceed the maximum possible value of an int.

The best way to understand good hash codes is to understand why bad hash codes are bad. Here are some examples of bad hash codes on Words.

  1. Sum up the ASCII values of the characters. Unfortunately, the sum will rarely exceed 500 or so, and most of the entries will be bunched up in a few hundred buckets. Moreover, anagrams like "pat," "tap," and "apt" will collide, as the sketch after this list shows.
  2. Use the first three letters of a word, in a table with 26^3 buckets. Unfortunately, words beginning with "pre" are much more common than words beginning with "xzq", and the former will be bunched up in one long list. This does not approach our uniformly distributed ideal.
  3. Consider the "good" hashCode() function written out above. Suppose the prime modulus is 127 instead of 16908799. Then the return value is just the last character of the word, because (127 * hashVal) % 127 = 0. That’s why 127 and 16908799 were chosen to have no common factors.
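
As a quick check of the first bad hash code, summing the characters sends anagrams to the same bucket and keeps the sums in a narrow range; charSum is a made-up name for this sketch.

// Bad hash code #1: sum the characters.  Anagrams collide, and the sums stay small.
private static int charSum(String key) {
  int sum = 0;
  for (int i = 0; i < key.length(); i++) {
    sum += key.charAt(i);
  }
  return sum;
}

// charSum("pat"), charSum("tap"), and charSum("apt") all return 325,
// so all three words land in the same bucket.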

Why is the hashCode() function presented above good? Because we can find no obvious flaws, and it seems to work well in practice. (A black art indeed.)

Resizing Hash Tables

Sometimes we can’t predict in advance how many entries we’ll need to store. If the load factor n/N (entries per bucket) gets too large, we are in danger of losing constant-time performance.

One option is to enlarge the hash table when the load factor becomes too large (typically larger than 0.75). Allocate a new array (typically at least twice as long as the old), then walk through all the entries in the old array and rehash them into the new.

Take note: you CANNOT just copy the linked lists to the same buckets in the new array, because the compression functions of the two arrays will certainly be incompatible. You have to rehash each entry individually.
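
Building on the chained-table sketch above (so buckets, N, Entry, and compress are the same assumed names), the resizing step might look like this:

// Grow the table and rehash every entry into the new, larger array.
@SuppressWarnings("unchecked")
private void resize(int newN) {
  LinkedList<Entry>[] oldBuckets = buckets;
  N = newN;                                        // compress() now uses the new table size.
  buckets = new LinkedList[newN];
  for (int i = 0; i < newN; i++) {
    buckets[i] = new LinkedList<Entry>();
  }
  for (LinkedList<Entry> chain : oldBuckets) {
    for (Entry e : chain) {
      buckets[compress(e.key.hashCode())].add(e);  // Rehash each entry individually.
    }
  }
}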

You can also shrink hash tables (e.g., when n/N < 0.25) to free memory, if you think the memory will benefit something else. (In practice, it’s only sometimes worth the effort.)

Obviously, an operation that causes a hash table to resize itself takes more than O(1) time; nevertheless, the average over the long run is still O(1) time per operation.

Transposition Tables: Using a Dictionary to Speed Game Trees

An inefficiency of unadorned game tree search is that some grids can be reached through many different sequences of moves, and so the same grid might be evaluated many times. To reduce this expense, maintain a hash table that records previously encountered grids. This dictionary is called a transposition table. Each time you compute a grid’s score, insert into the dictionary an entry whose key is the grid and whose value is the grid’s score. Each time the minimax algorithm considers a grid, it should first check whether the grid is in the transposition table; if so, its score is returned immediately. Otherwise, its score is evaluated recursively and stored in the transposition table.
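
A minimal sketch of a transposition table built on java.util.HashMap follows; representing each grid as a String key is an assumption made for illustration (any unique, immutable encoding of the grid would do).

import java.util.HashMap;

public class TranspositionTable {
  // Maps an encoding of a grid (here, a String) to its previously computed score.
  private HashMap<String, Integer> table = new HashMap<String, Integer>();

  public Integer lookup(String gridKey) {
    return table.get(gridKey);                     // null if this grid hasn't been scored yet.
  }

  public void store(String gridKey, int score) {
    table.put(gridKey, score);
  }

  public void clear() {
    table.clear();                                 // Empty the table after each move is taken.
  }
}

Inside minimax, check lookup() before evaluating a grid recursively, call store() after computing a grid’s score, and call clear() after each move is taken.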

Transposition tables will only help you with your project if you can search to a depth of at least three ply (within the five second time limit). It takes three ply to reach the same grid two different ways.

After each move is taken, the transposition table should be emptied, because you will want to search grids to a greater depth than you did during the previous move.