Computer Science Department Colloquium
Parallel Programming with MPI: Successes, Current Challenges, Future
Speaker:
Anthony Skjellum, PhD, Professor of Computer Science and Director of the SimCenter, University of Tennessee at Chattanooga
When: 11:00–11:50 AM, Monday, December 5, 2022
Where: CSB 130
Abstract: The rich history and success of explicit parallel programming with message passing (plus extensions) have supported the growth of multicomputers, clusters, and supercomputers over the past forty-plus years, starting with the Caltech Cosmic Cube in 1981. The Message Passing Interface (MPI), a community standard defined 30 years ago, has proven an effective and widely used programming notation (abstractions, syntax, semantics, etc.) for scale-out computing. With a history spanning pre-Terascale through Exascale, MPI has delivered on much of its original promise: portable parallel programs, good performance, and the ability to achieve a degree of performance portability. In complement, research on portable implementations of MPI has continued throughout, yielding widely used open-source products such as MPICH and Open MPI. MPI's standard is notable for the latitude it grants implementations in achieving compliance; it is not a protocol-based standard like TCP/IP.
The Terascale-to-Exascale transformation of scalable architectures (1996-2022) has strained the ability to deliver acceptable performance portability in many cases. This strain derives from the new complexity of heterogeneous architectures, more so at present than from the absolute number of 'MPI processes' in a scalable execution or from the massive changes in the performance of processors, memories, and networks over this span of time. This talk concentrates on explaining the original abstractions, emerging abstractions, and the challenges applications and MPI designers face in achieving performance portability given accelerator-based systems, changes to node organization, and limitations of the abstractions originally posed in MPI. New, revised, and higher-level abstractions, together with support for modern programming languages, are discussed as ingredients of a solution strategy for MPI as it enters its fourth decade.
Bio: Dr. Anthony (Tony) Skjellum studied at Caltech (BS, MS, PhD). His PhD work emphasized portable, parallel software for large-scale dynamic simulation, with a specific emphasis on message-passing systems, parallel nonlinear and linear solvers, and massive parallelism. From 1990 to 1993, he was a computer scientist at LLNL focusing on performance-portable message passing and portable parallel math libraries. From 1993 to 2003, he was on the faculty in Computer Science at Mississippi State University, where his group co-invented the MPICH implementation of the Message Passing Interface (MPI) together with colleagues at Argonne National Laboratory. He has also been active in the MPI Forum since its inception in 1992. From 2003 to 2013, he was professor and chair of the Dept. of Computer and Information Sciences at the University of Alabama at Birmingham. In 2014, he joined Auburn University as Lead Cyber Scientist and led R&D in cyber and high-performance computing for over three years. In Summer 2017, he joined the University of Tennessee at Chattanooga as Professor of Computer Science, Chair of Excellence, and Director of the SimCenter, where he continues work in HPC (emphasizing MPI, scalable libraries, and heterogeneous computing). He is a co-PI of the DOE/NNSA PSAAP III Center "Center for Understandable, Performant Exascale Communication Systems" led by the University of New Mexico. He is a senior member of ACM, IEEE, ASEE, and AIChE, and an Associate Member of the American Academy of Forensic Sciences (AAFS), Digital & Multimedia Sciences Division.