First Advisor

Karen L. Karavanic

Date of Publication

12-2002

Document Type

Thesis

Degree Name

Master of Science (M.S.) in Computer Science

Department

Computer Science

Language

English

Subjects

Electronic data processing -- Distributed processing, Parallel processing (Electronic computers), Beowulf clusters (Computer systems)

DOI

10.15760/etd.2660

Physical Description

1 online resource (vii, 138 pages)

Abstract

Many universities and research laboratories have developed low-cost clusters built from Commodity-Off-The-Shelf (COTS) components and running mostly free software. Research has shown that systems of this type are well suited to many problems requiring parallel processing. The primary components of a cluster are its hardware, networking, and system software. An important system software consideration for clusters is the choice of message passing library.

MPI (Message Passing Interface) has arguably become the most widely used message passing library on clusters and other parallel architectures, due in part to its status as a standard. Because MPI is a standard, anyone is free to implement it, as long as the rules of the standard are followed. For this reason, a number of proprietary and freely available implementations have been developed.
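To make the idea of a shared standard concrete, the following is a minimal, illustrative MPI program in C (a sketch, not taken from the thesis). Because LAM and MPICH each implement the same standard interface, a program like this builds and runs unchanged with either library's compiler wrapper; only MPI-1 calls are used.

/* Minimal MPI example: rank 0 sends a short message to rank 1.
 * Restricted to MPI-1 calls so it builds against LAM or MPICH alike. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank;
    char msg[32];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        strcpy(msg, "hello from rank 0");
        MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(msg, (int)sizeof(msg), MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received: %s\n", msg);
    }

    MPI_Finalize();
    return 0;
}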

Of the freely available implementations, two have become increasingly popular: LAM (Local Area Multicomputer) and MPICH (MPI Chameleon). This thesis compares the performance of LAM and MPICH in order to provide the cluster computing community with performance data and analysis of the current release of each. Specifically, the accomplishments of this thesis are: comparative testing of the High Performance Linpack benchmark (HPL); comparative testing of su3_rmd, an MPI application used in physics research; and a series of bandwidth comparisons involving eight MPI point-to-point communication constructs. All research was performed on a partition of the Wyeast SMP Cluster in the High Performance Computing Laboratory at Portland State University.
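As an illustration of the kind of point-to-point bandwidth measurement described above, the sketch below times a simple ping-pong exchange between two ranks using blocking MPI_Send/MPI_Recv, just one of the several point-to-point constructs MPI provides. It is not the thesis's benchmark code, and the message size and repetition count are arbitrary choices rather than the thesis's actual test parameters.

/* Illustrative ping-pong bandwidth sketch (not the thesis benchmark code).
 * Two ranks bounce a buffer back and forth with blocking MPI_Send/MPI_Recv,
 * and rank 0 reports an approximate bandwidth figure. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_BYTES (1 << 20)   /* 1 MiB per message, chosen for illustration */
#define REPS      100         /* number of round trips, also arbitrary */

int main(int argc, char **argv)
{
    int rank, i;
    char *buf;
    MPI_Status status;
    double t0, elapsed;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = malloc(MSG_BYTES);

    t0 = MPI_Wtime();
    for (i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    elapsed = MPI_Wtime() - t0;

    if (rank == 0) {
        /* Each round trip moves MSG_BYTES in each direction. */
        double megabytes = 2.0 * REPS * MSG_BYTES / 1.0e6;
        printf("approximate bandwidth: %.1f MB/s\n", megabytes / elapsed);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}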

We present a large body of performance data and show that LAM and MPICH perform similarly in many of the experiments, with LAM outperforming MPICH in the bandwidth tests and on a large problem size for su3_rmd. These results, together with those of other research comparing the two libraries, suggest that LAM performs better than MPICH in the cluster environment. This conclusion may seem surprising, since MPICH has received more attention from MPI researchers than LAM has. However, the two implementations have very different architectures. LAM was originally designed for the cluster and networked workstation environments, while MPICH was designed to be portable across many different types of parallel architectures.

Rights

In Copyright. URI: http://rightsstatements.org/vocab/InC/1.0/
This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).

Comments

If you are the rightful copyright holder of this dissertation or thesis and wish to have it removed from the Open Access Collection, please submit a request to pdxscholar@pdx.edu and include clear identification of the work, preferably with a URL.

Persistent Identifier

http://archives.pdx.edu/ds/psu/16526
