Distributed memory computers provide bandwidth, processing, and memory scaling capabilities beyond what can be achieved via coherent shared memory. An important consideration in using distributed memory computers effectively is to keep communication costs low, since processing speeds are outpacing communication rates.
Two important models for programming distributed memory are message passing and RMA (Remote Memory Access). RMA comes in many forms and benefits from global address space communication, which is generally supported by modern network hardware. RMA is employed in PGAS (Partitioned Global Address Space) models, which add global pointers and, optionally, remote procedure call. These two capabilities play an important role in reducing communication costs, especially for fine-grained and irregular communication patterns.
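To make those two capabilities concrete, here is a toy Python model of a PGAS. This is a sketch only, not UPC++: all class and method names (`GlobalPtr`, `ToyPGAS`, `rput`, `rget`, `rpc`) are invented for illustration, and the "network" is just in-process lists. A global pointer is modeled as a (rank, offset) pair, and a remote procedure call runs a function at the owner's partition in one logical round trip instead of many fine-grained gets and puts.

```python
# Toy model of a Partitioned Global Address Space (illustrative only;
# real PGAS libraries such as UPC++ use network RDMA, not Python lists).

class GlobalPtr:
    """A global pointer: names a slot in some rank's partition."""
    def __init__(self, rank, offset):
        self.rank = rank
        self.offset = offset

class ToyPGAS:
    def __init__(self, nranks, slots_per_rank):
        # Each rank owns a private partition of the global address space.
        self.heaps = [[0] * slots_per_rank for _ in range(nranks)]

    def rput(self, gptr, value):
        # One-sided put: the owning rank takes no explicit action.
        self.heaps[gptr.rank][gptr.offset] = value

    def rget(self, gptr):
        # One-sided get.
        return self.heaps[gptr.rank][gptr.offset]

    def rpc(self, rank, fn, *args):
        # Remote procedure call: run fn "at" the owner, touching only its
        # local partition -- one message instead of many fine-grained accesses.
        return fn(self.heaps[rank], *args)

pgas = ToyPGAS(nranks=4, slots_per_rank=8)
p = GlobalPtr(rank=2, offset=5)
pgas.rput(p, 42)
assert pgas.rget(p) == 42
# Increment remotely in one logical round trip, rather than a get
# followed by a put.
pgas.rpc(2, lambda heap: heap.__setitem__(5, heap[5] + 1))
assert pgas.rget(p) == 43
```

For an irregular access pattern, the RPC form matters: shipping the operation to the data avoids a read-modify-write round trip per element, which is where fine-grained communication costs accumulate.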
The lectures will cover message passing and PGAS programming via two libraries: MPI and UPC++, respectively. The goal of the lectures is to build a solid grounding in distributed memory programming and in the performance trade-offs of efficient implementation. Algorithmic studies will be presented. Hybrid hierarchical models, which compose distributed memory programming with programming at the node (e.g. multithreading), will also be discussed. The emphasis will be on maintaining low communication costs, as opposed to optimizing computational performance, which is another topic for study.
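As a point of reference for the message-passing side, the following toy Python sketch shows two-sided communication, in which sender and receiver both participate in every transfer. This is illustrative only, not MPI: real MPI ranks are separate processes exchanging messages over a network (e.g. via `MPI_Send`/`MPI_Recv`), whereas here "ranks" are threads and the invented `ToyComm` class routes messages through in-process queues.

```python
# Toy two-sided message passing (illustrative; not MPI).
import threading
import queue

class ToyComm:
    def __init__(self, nranks):
        # One inbox per rank stands in for the network.
        self.inbox = [queue.Queue() for _ in range(nranks)]

    def send(self, dest, msg):
        self.inbox[dest].put(msg)

    def recv(self, rank):
        # Blocks until a matching send arrives: both sides participate,
        # in contrast with one-sided RMA.
        return self.inbox[rank].get()

comm = ToyComm(nranks=2)
result = []

def rank0():
    comm.send(1, "ping")
    result.append(comm.recv(0))

def rank1():
    msg = comm.recv(1)
    comm.send(0, msg + "-pong")

t0 = threading.Thread(target=rank0)
t1 = threading.Thread(target=rank1)
t0.start(); t1.start()
t0.join(); t1.join()
assert result == ["ping-pong"]
```

The pairing of every send with a receive is what makes the model two-sided: each transfer costs both ranks attention, which is one reason one-sided RMA can be cheaper for fine-grained patterns.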
Scott B. Baden is Group Lead of the Computer Languages and System Software Group in the Computational Research Division at Lawrence Berkeley National Laboratory, and Adjunct Professor of Computer Science and Engineering at the University of California, San Diego, where he was a faculty member for 27 years. He earned his Ph.D. from the University of California, Berkeley in 1987. His research interests are in high-performance and scientific computation: domain-specific translation, abstraction mechanisms, programming models, runtimes, and irregular problems.