Funding for this research was provided by two Maseeh Graduate Fellowships and by the National Science Foundation under Grant No. CNS-0719851. Thanks to Dr. Fariborz Maseeh and the National Science Foundation for their support.
Proceedings of the USENIX Annual Technical Conference (USENIX ATC'11)
Computer algorithms, Data structures (Computer science), Hashing (Computer science)
We present algorithms for shrinking and expanding a hash table while allowing concurrent, wait-free, linearly scalable lookups. These resize algorithms allow the hash table to maintain constant-time performance as the number of entries grows, and reclaim memory as the number of entries decreases, without delaying or disrupting readers.
We implemented our algorithms in the Linux kernel to test their performance and scalability. Benchmarks show lookup scalability improved 125x over reader-writer locking, and 56% over the current state of the art for Linux, with no performance degradation for lookups during a resize.
To achieve this performance, this hash table implementation uses a new concurrent programming methodology known as relativistic programming. In particular, we make use of an existing synchronization primitive that waits for all current readers to finish, with little to no reader overhead; careful use of this primitive allows ordering of updates without read-side synchronization or memory barriers.
"Resizable, Scalable, Concurrent Hash Tables" Josh Triplett, Paul E. McKenney, and Jonathan Walpole, in Proceedings of the USENIX Annual Technical Conference (USENIX ATC'11), Portland, Oregon, June 2011. Also available as PSU Computer Science Department Technical Report 11-01.