Kathryn Maupin

Name: Kathryn Maupin
Pronouns: she/her/hers

Biography:
Kathryn Maupin is a Principal Member of the Technical Staff at Sandia National Laboratories. Her research focuses on model form error quantification and novel surrogate modeling techniques. Broader research interests include Bayesian methods, model validation, sensitivity analysis, and uncertainty quantification. Kathryn joined Sandia as a postdoc in 2016 and converted to a staff position in 2018. She received her Ph.D. in Computational Sciences, Engineering, and Mathematics and her M.S. in Computational and Applied Mathematics from The University of Texas at Austin after completing her B.A. in Applied Mathematics at the University of California, San Diego. When she is not working, Kathryn spends her time playing with her three girls and three dogs.

Institution/Lab: Sandia National Laboratories
Website:

SRP Collaboration Topic/Title: Enhancing model predictivity through model form error quantification

Field or research area: Computational modeling, uncertainty quantification

Please select all the topical areas that apply to your project:
Computational Science Applications (i.e., bioscience, cosmology, chemistry, environmental science, nanotechnology, climate, etc.); Machine Learning and AI

Brief Abstract:
Despite continuing advances in statistical inversion and modeling, model inadequacy due to model form error remains a concern in all areas of mathematical modeling. The Bayesian paradigm naturally integrates uncertainties arising from both experimental data and model formulation, including initial or boundary conditions, model form, parameter values, and numerical approximation. Although model improvement is continually enabled by the availability of cost-effective high-performance computing infrastructure, model error is unavoidable in many situations. It stems from an incomplete understanding of the underlying physics, often compounded by large and poorly characterized uncertainties in calibration and validation data. Introducing a model discrepancy term into the Bayesian framework can improve the predictive power of a given model and, arguably, the transferability of its physical parameters. Much like physical models, calibrating a discrepancy model requires careful consideration of its formulation, parameter estimation, and uncertainty quantification, each of which is often problem-specific. This project seeks to explore methods for model form error quantification that generalize easily across applications. Of particular interest are methods that may enable physics discovery and enhance extrapolation.
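The additive-discrepancy idea above can be illustrated with a minimal numerical sketch. This is not the project's actual formulation: it uses a toy linear "physics" model, a synthetic true process, and plain least squares in place of full Bayesian inference, with the discrepancy delta(x) represented by a low-order polynomial basis chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "reality": a process the misspecified model cannot fully capture.
def reality(x):
    return 1.2 * x + 0.5 * np.sin(3 * x)

# Misspecified physics model: linear in x with a single parameter theta.
def model(x, theta):
    return theta * x

x = np.linspace(0.0, 3.0, 40)
y = reality(x) + rng.normal(scale=0.05, size=x.size)

# (1) Calibrate theta alone: least-squares projection onto the model.
theta_hat = np.sum(x * y) / np.sum(x * x)
rmse_plain = np.sqrt(np.mean((y - model(x, theta_hat)) ** 2))

# (2) Calibrate theta jointly with an additive discrepancy delta(x),
#     here a cubic polynomial. Note the linear column is shared with the
#     model, so theta and the linear part of delta are confounded -- a
#     simple instance of the identifiability issues mentioned above.
basis = np.column_stack([x, np.ones_like(x), x**2, x**3])
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
rmse_disc = np.sqrt(np.mean((y - basis @ coef) ** 2))

print(f"RMSE, model only:       {rmse_plain:.4f}")
print(f"RMSE, model + delta(x): {rmse_disc:.4f}")
```

Because the discrepancy basis nests the model, the joint fit can only reduce the residual; the interesting (and problem-specific) questions flagged in the abstract are how to formulate delta(x), keep the physical parameters identifiable, and quantify the resulting uncertainty.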

Desired relevant skills, background, or interests:
Desired interests: Surrogate modeling and computational modeling
Desired skills: Proficiency in at least one programming language (preferably C, R, or Python)
Relevant skills: Machine learning or surrogate/reduced-order modeling methods

Other comments:

Do any special requirements apply? other
Other, specify: N/A

Keywords:
Bayesian methods; machine learning; model selection; model validation; model discrepancy; model form error; surrogate modeling

Lightning Talk Title: Enhancing model predictivity through model form error quantification