Document Type
Article
Publication Date
3-1994
Subjects
Parallel programming (Computer science) -- Evaluation, Computer multitasking, Parallel computers -- Programming
Abstract
Parallel environments consisting of a network of heterogeneous workstations introduce an inherently dynamic environment that differs from multicomputers. Workstations are usually considered "shared" resources, while multicomputers provide dedicated processing power. The number of workstations available for use is continually changing; the parallel machine presented by the network is, in effect, continually reconfiguring itself. Application programs must adapt to the changing number of processing nodes while maintaining computational efficiency. This paper examines methods for adapting to this dynamic environment within the framework of explicit message passing under the data parallel programming model. We present four requirements that we feel a method must satisfy. Several potential methods are examined within this framework and evaluated according to how well they address the defined requirements. An application-level technique called Application Data Movement (ADM) is described. Although this technique puts much of the responsibility for adaptation on the application programmer, it has the advantage of running on heterogeneous workstations. Related work, such as Dataparallel C and Piranha, is also examined and compared to ADM. The application of the ADM methodology to a real application, a neural-network classifier based on conjugate-gradient optimization, is outlined and discussed. Preliminary results are presented and analyzed. The computation has been shown to achieve in excess of 70 MFLOPS under quiet conditions on a network of nine heterogeneous machines (two HP 9000/720s, two DEC Alphas, and five Sun SPARCstation 10s) while maintaining an efficiency of nearly 80%.
Persistent Identifier
http://archives.pdx.edu/ds/psu/10388
Citation Details
"Adaptive Execution of Data Parallel Computations on Networks of Heterogeneous Workstations," Robert Prouty, Steve Otto, Jonathan Walpole, OGI Technical Report CSE-94-012, March 1994.
Description
Opt was parallelized for the Sequent Symmetry shared-memory multiprocessor by Mark Fanty. Steve Neighorn rewrote the shared-memory version to run on a network of workstations using PVM.