Functional programming (Computer science), Parallel processing (Electronic computers), Electronic data processing -- Distributed processing
We present relativistic programming, a concurrent programming model based on shared addressing, which supports efficient, scalable operation on either uniform shared-memory or distributed shared-memory systems. Relativistic programming provides a strong causal ordering property, allowing a series of read operations to appear as an atomic transaction that occurs entirely between two ordered write operations. This preserves the simple immutable-memory programming model available via mutual exclusion or transactional memory. Furthermore, relativistic programming provides joint-access parallelism, allowing readers to run concurrently with a writer on the same data. We demonstrate a generalized construction technique for concurrent data structures based on relativistic programming, taking into account the natural orderings provided by reader traversals and writer program order. Our construction technique specifies the precise placement of memory barriers and synchronization operations. To demonstrate our generalized approach, we reconstruct the algorithms for existing relativistic data structures, replacing algorithm-specific reasoning with our systematic and rigorous construction. Benchmarks of the resulting relativistic read algorithms show high performance and linear scalability: relativistic resizable hash tables achieve 56% better lookup performance than the current state of the art in Linux, and relativistic red-black trees achieve 2.5x better lookup performance than transactional memory.
Triplett, J., Howard, P. W., McKenney, P. E., & Walpole, J. (2011). Generalized construction of scalable concurrent data structures via relativistic programming. Tech. Rep. 14, Portland State University.