Name: Ken Raffenetti
Pronouns: he/him/his
Biography:
Ken Raffenetti is a principal software development specialist in the Mathematics and Computer Science Division at Argonne National Laboratory. He received his B.S. in computer science from the University of Illinois at Urbana-Champaign. He joined Argonne in 2006 and worked for seven years as a systems administrator. In 2013, Ken shifted his focus to software development, joining the Programming Models and Runtime Systems group, which develops systems software for high-performance computing applications. Ken’s research interests include parallel programming models and low-level communication libraries. In particular, Ken is involved in the definition of the Message Passing Interface (MPI) standard and is a key maintainer of MPICH, the leading implementation of MPI. He is a member of the PMIx Administrative Steering Committee, which defines the Process Management Interface used on many of today’s large-scale supercomputers. He also participates in two industry working groups for low-level communication libraries: OpenFabrics Interfaces (OFI) and Unified Communications X (UCX).
Institution/Lab: Argonne National Laboratory
Website: https://github.com/raffenet
SRP Collaboration Topic/Title: MPICH: A High Performance and Widely Portable Implementation of the Message Passing Interface (MPI) Standard
Field or research area: Parallel Programming Models and Runtime Systems
Please select all the topical areas that apply to your project:
Computer Science (i.e., architectures, compilers/languages, networks, workflow/edge, experiment automation, containers, neuromorphic computing, programming models, operating systems, sustainable software); High-Performance Computing; Machine Learning and AI
Brief Abstract:
MPICH is a widely used, open-source implementation of the MPI message passing standard. It has been ported to many platforms and is used by several vendors and research groups as the basis for their own MPI implementations. We are looking for collaboration at all levels of the software stack, including, but not limited to, low-level networking libraries, shared memory, collective algorithms, performance testing and tuning, CI/CD, documentation, and more!
Desired relevant skills, background, or interests:
C; Python; Shell; Networking; Linux/Unix
Other comments:
Do any special requirements apply? In-Person Only
Other, specify:
Keywords:
MPICH;MPI;HPC;networking;libfabric;ucx;GPU;multithreading;programming models;runtime systems
Lightning Talk Title: Enabling Exascale with MPICH