Document Type

Technical Report

Publication Date

2008

Subjects

Computer algorithms, Data structures (Computer science), Hashing (Computer science), Electronic data processing -- Distributed processing

Abstract

Existing approaches to concurrent programming often fail to account for synchronization costs on modern shared-memory multiprocessor architectures. A new approach to concurrent programming, known as relativistic programming, can reduce or in some cases eliminate synchronization overhead on such architectures. This approach avoids the costs of inter-processor communication and memory access by permitting processors to operate from a relativistic view of memory provided by their own caches, rather than from an absolute reference frame of memory as seen by all processors. This research shows how relativistic programming techniques can provide the perceived advantages of optimistic synchronization without the useless parallelism caused by rollbacks and retries.
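Relativistic programming is closely associated with the Linux kernel's read-copy update (RCU) mechanism, whose primitives let readers proceed from their own cached view of memory. As a minimal illustrative sketch (not code from the report), the following shows a reader traversing a hash bucket with no locks and no writes to shared memory, and a writer publishing a new entry with a single pointer update. The table layout, the HASH_BUCKETS constant, and the function names lookup and insert are assumptions for illustration; writers are presumed to serialize among themselves by some means omitted here.

#include <linux/errno.h>
#include <linux/rcupdate.h>
#include <linux/rculist.h>
#include <linux/slab.h>

struct item {
	int key;
	int value;
	struct hlist_node node;
};

/* Hypothetical bucket array; HASH_BUCKETS is an assumed constant. */
#define HASH_BUCKETS 1024
static struct hlist_head table[HASH_BUCKETS];

/* Reader: traverses its bucket without locks and without writing to
 * shared memory, operating on whatever (possibly slightly stale) view
 * of the list its own cache provides. */
static int lookup(int key, int *value)
{
	struct item *p;
	int found = 0;

	rcu_read_lock();
	hlist_for_each_entry_rcu(p, &table[key % HASH_BUCKETS], node) {
		if (p->key == key) {
			*value = p->value;
			found = 1;
			break;
		}
	}
	rcu_read_unlock();
	return found;
}

/* Writer: publishes a new item with a single pointer update; a
 * concurrent reader sees the bucket either before or after the
 * insertion, both of which are valid states. */
static int insert(int key, int value)
{
	struct item *p = kmalloc(sizeof(*p), GFP_KERNEL);

	if (!p)
		return -ENOMEM;
	p->key = key;
	p->value = value;
	hlist_add_head_rcu(&p->node, &table[key % HASH_BUCKETS]);
	return 0;
}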

Descriptions of several implementations of a concurrent hash table illustrate the differences between traditional and relativistic approaches to concurrent programming. An analysis of the fundamental performance bottlenecks in existing concurrent programming techniques, both optimistic and pessimistic, directly motivates the key ideas of relativistic programming. Benchmarks of concurrent hash tables implemented in the Linux kernel demonstrate the performance and scalability advantages of relativistic techniques over traditional synchronization. The relativistic hash table demonstrated here makes use of an original relativistic move operation. The report concludes with a discussion of how the specific techniques used in the concurrent hash table implementation generalize to other data structures and algorithms.
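The report's original move operation is not reproduced here. For contrast, the following hypothetical sketch (reusing the assumed table and struct item from the sketch above, with naive_move as an invented name) shows why moving an entry between buckets is the hard case: a naive remove-then-reinsert opens a window during which concurrent readers find the key in neither the old bucket nor the new one, even though the entry logically exists throughout, and it forces the writer to wait for all pre-existing readers.

/* A naive move between buckets, for contrast only: this is NOT the
 * report's relativistic move operation. The entry is unreachable from
 * either bucket for the duration of the grace period, so a concurrent
 * lookup can spuriously fail. The report's original move operation is
 * designed to close exactly this window. */
static void naive_move(struct item *p, int new_key)
{
	hlist_del_rcu(&p->node);   /* entry vanishes from the old bucket */
	synchronize_rcu();         /* wait for readers still traversing it */
	p->key = new_key;          /* safe to modify: no readers hold p now */
	hlist_add_head_rcu(&p->node, &table[new_key % HASH_BUCKETS]);
}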

Description

Portland State University Computer Science Department Technical Report #08-06, 2008.

Persistent Identifier

https://archives.pdx.edu/ds/psu/30757
