Name: Xingfu Wu
Pronouns: he/him/his
Biography:
Dr. Wu joined the Mathematics and Computer Science Division at Argonne National Laboratory as a staff scientist in July 2017 and holds a joint appointment at the University of Chicago as a CASE Senior Scientist. He was a faculty member in the Department of Computer Science & Engineering at Texas A&M University from 2003 to 2017 and a postdoctoral researcher in the Department of Electrical & Computer Engineering at Northwestern University from 1999 to 2003. He is the author of the monograph Performance Evaluation, Prediction and Visualization of Parallel Systems (Kluwer, 1999). His research interests are performance modeling and analysis, autotuning, high-performance computing, energy-efficient computing, and power modeling and analysis.
Institution/Lab: Argonne National Laboratory
Website:
SRP Collaboration Topic/Title: Autotuning Scientific Applications at Scale
Field or research area: HPC, ML
Please select all the topical areas that apply to your project:
Computer Science (i.e., architectures, compilers/languages, networks, workflow/edge, experiment automation, containers, neuromorphic computing, programming models, operating systems, sustainable software); High-Performance Computing; Machine Learning and AI
Brief Abstract:
As we enter the exascale computing era, efficiently utilizing power and optimizing the performance of scientific software under power and energy constraints are challenging. Scientific software developers often run their applications on HPC systems with the default configurations set up by the vendors; however, these defaults rarely execute their applications efficiently. Because of the complexity of HPC ecosystems, the number of tunable parameters that HPC users can configure at the system and software levels has grown significantly, resulting in a dramatically larger parameter space, and evaluating many (or all) possible parameter combinations becomes very time-consuming. With our publicly available autotuning software package ytopt (https://github.com/ytopt-team/ytopt), scientific application developers can autotune their applications on target HPC systems to identify the best configuration of software and system parameters and then use that configuration to run their applications efficiently. This approach not only optimizes scientific software for performance and energy efficiency but also saves considerable energy on exascale supercomputers at DOE leadership computing facilities.
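The core loop the abstract describes (sample a configuration from the tunable-parameter space, run and measure the application, keep the best) can be sketched with a toy random search. This is purely illustrative: ytopt itself uses Bayesian optimization over a ConfigSpace-defined search space, and the parameter names and synthetic cost function below are invented for the sketch, not taken from ytopt or any real application.

```python
import random

# Hypothetical tunable parameters for an HPC kernel (illustrative only).
param_space = {
    "block_size": [16, 32, 64, 128],
    "num_threads": [1, 2, 4, 8],
    "unroll": [1, 2, 4],
}

def evaluate(config):
    """Stand-in for compiling/running the application and timing it.
    A real autotuner would execute the code on the target system and
    measure runtime or energy; here we use a synthetic cost surface
    whose minimum is at block_size=64, num_threads=8, unroll=2."""
    return (abs(config["block_size"] - 64) / 16
            + (8 - config["num_threads"])
            + abs(config["unroll"] - 2))

def random_search(space, n_trials=20, seed=0):
    """Sample n_trials configurations and return the best one found."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        cost = evaluate(cfg)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

if __name__ == "__main__":
    cfg, cost = random_search(param_space, n_trials=50)
    print(f"best configuration: {cfg}, cost: {cost}")
```

A Bayesian optimizer such as ytopt's replaces the uniform sampling with a surrogate model that proposes promising configurations, so far fewer expensive application runs are needed to find a near-optimal configuration.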
Desired relevant skills, background, or interests:
Desired relevant skills: Python and at least one other programming language (C, C++, or Fortran). Bonus: experience with a scientific application that needs fine-tuning on HPC systems
Other comments:
Do any special requirements apply? Minimum GPA (specify what GPA in comments below); In-Person Only
Other, specify:
Keywords:
Autotuning, performance optimization, scalability, Bayesian optimization, machine learning
Lightning Talk Title: Autotuning Scientific Applications at Scale