Portland State University. Department of Computer Science
Computer algorithms, Data structures (Computer science), Hashing (Computer science), Electronic data processing -- Distributed processing
Existing approaches to concurrent programming often fail to account for synchronization costs on modern shared-memory multiprocessor architectures. A new approach to concurrent programming, known as relativistic programming, can reduce or in some cases eliminate synchronization overhead on such architectures. This approach avoids the costs of inter-processor communication and memory access by permitting processors to operate from a relativistic view of memory provided by their own caches, rather than from an absolute reference frame of memory as seen by all processors. This research shows how relativistic programming techniques can provide the perceived advantages of optimistic synchronization without the wasted work caused by rollbacks and retries.
Descriptions of several implementations of a concurrent hash table illustrate the differences between traditional and relativistic approaches to concurrent programming. An analysis of the fundamental performance bottlenecks in existing concurrent programming techniques, both optimistic and pessimistic, directly motivates the key ideas of relativistic programming. Benchmarks of concurrent hash tables implemented in the Linux kernel demonstrate that relativistic techniques provide performance and scalability advantages over traditional synchronization. The benchmarked relativistic hash table incorporates a novel relativistic move operation. The paper concludes with a discussion of how the specific techniques used in the concurrent hash table implementation generalize to other data structures and algorithms.
Triplett, Josh, "Scalable Concurrent Hash Tables via Relativistic Programming" (2008). Computer Science Faculty Publications and Presentations. 223.