From 4241b9769fa588303f79f080a2e2cf7694ef2d39 Mon Sep 17 00:00:00 2001 From: Yimin Zhong Date: Sun, 21 Jul 2024 06:37:54 +0000 Subject: [PATCH] Update Awards --- Analysis/Awards-Analysis-2024.csv | 10 ++++++---- .../Awards-Computational-Mathematics-2024.csv | 20 ++++++++++--------- Statistics/Awards-Statistics-2024.csv | 13 +++++++----- 3 files changed, 25 insertions(+), 18 deletions(-) diff --git a/Analysis/Awards-Analysis-2024.csv b/Analysis/Awards-Analysis-2024.csv index 7a9c547..7b7a372 100644 --- a/Analysis/Awards-Analysis-2024.csv +++ b/Analysis/Awards-Analysis-2024.csv @@ -1,12 +1,14 @@ "AwardNumber","Title","NSFOrganization","Program(s)","StartDate","LastAmendmentDate","PrincipalInvestigator","State","Organization","AwardInstrument","ProgramManager","EndDate","AwardedAmountToDate","Co-PIName(s)","PIEmailAddress","OrganizationStreet","OrganizationCity","OrganizationState","OrganizationZip","OrganizationPhone","NSFDirectorate","ProgramElementCode(s)","ProgramReferenceCode(s)","ARRAAmount","Abstract" "2348715","Differentiability in Carnot Groups and Metric Measure Spaces","DMS","ANALYSIS PROGRAM","09/01/2024","07/17/2024","Gareth Speight","OH","University of Cincinnati Main Campus","Standard Grant","Jeremy Tyson","08/31/2027","$263,078.00","","gareth.speight@uc.edu","2600 CLIFTON AVE","CINCINNATI","OH","452202872","5135564358","MPS","128100","5918, 5920, 5935, 5946, 5952","$0.00","A function is considered to be smooth or differentiable if at every point it has a derivative, or in other words, a well-defined rate of change. Many familiar functions are smooth, and smoothness properties are convenient and prevalent in scientific applications. However, non-smooth functions also frequently arise in mathematics and its applications, such as optimization. This project concerns differentiability phenomena in non-smooth environments. Specifically, it seeks to understand when non-smooth objects possess hidden smoothness structures.
While non-smooth objects are more difficult to understand, they are often equipped with additional structure that is not initially visible. For instance, Lipschitz functions (i.e., those functions which expand distances by at most a multiplicative factor) are differentiable at most points of their domain. The project investigates these and related phenomena: it seeks to describe when a partially defined function can be extended to a smooth function, and it explores when a function can be approximated by a smooth function. The project will promote research collaboration and will generate research training opportunities for both graduate and undergraduate students.

The project centers on two broad topics of research. First, the PI seeks a deeper understanding of the Whitney extension and Lusin approximation questions for mappings between Carnot groups. A significant complication, not present in the Euclidean case, is that the maps to be constructed must satisfy nonlinear constraints reflecting the underlying geometry of these non-Euclidean environments. A second line of study investigates the differentiability properties of Lipschitz functions in Euclidean spaces, Carnot groups, and metric or Banach spaces. A fundamental theorem due to Rademacher states that every Lipschitz function defined in a Euclidean domain is differentiable almost everywhere. However, in many situations one in fact finds differentiability points inside measure zero sets. This observation led to the modern study of sets of universal differentiability. The project seeks to test the limits of Rademacher's theorem through an improved understanding of universal differentiability sets, via the use of maximal directional derivatives and other methods.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2400246","Free Probability, Stochastic Differential Equations, and the Large-N Limit","DMS","PROBABILITY, ANALYSIS PROGRAM","08/01/2024","07/18/2024","Todd Kemp","CA","University of California-San Diego","Standard Grant","Elizabeth Wilmer","07/31/2027","$295,000.00","","tkemp@math.ucsd.edu","9500 GILMAN DR","LA JOLLA","CA","920930021","8585344896","MPS","126300, 128100","079Z","$0.00","In much of physical science, two competing factors determine the behavior of systems: deterministic laws of nature, and random ""noise"". Physical laws are usually described mathematically by differential equations. Over the last half century, a comprehensive theory of differential equations with random noise, called stochastic differential equations, has been developed and is very well-understood in many regimes. One area where foundational work is still needed is understanding how the behavior of systems described by stochastic differential equations scales as the dimension, i.e. the number of features in the system, grows. This project aims to provide a broad theoretical framework and a general scaling limit theory for high-dimensional stochastic differential equations. This theory will have significant applications to research fields as diverse as deep learning and neural networks, neurobiology (understanding learning structures in the brains of insects and other animals), the design of broadband wireless networks, and theoretical physics (quantum field theory). The award will also support the training of graduate student researchers and the dissemination of the research at conferences and workshops around the US and the world.

The principal research goals of this award are to study noncommutative stochastic calculus, developing a broad analytic foundation for the subject, and to prove general scaling limit theorems about the solutions of matrix stochastic differential equations (SDEs) as the matrix size grows. Noncommutative stochastic calculus has been developed in several quarters since the 1980s, but key analytic features of the classical theory have been missed owing to the noncommutativity: often, the methods are combinatorial, and function classes are restricted to polynomials or analytic functions. Current work has developed a new approach to noncommutative stochastic calculus, using noncommutative function theory which mirrors the classical martingale theoretic approach. This yields a general theory of noncommutative quadratic variation and an Ito formula which extends all previously known Ito formulations in free probability. This project will use these tools to study the large-N limits of NxN matrix SDEs, proving a general scaling limit for their solutions as described by noncommutative SDEs in free probability. The outline of this approach for self-adjoint processes is now clear, and the technical difficulties should be approachable with methods described above. A further goal is to extend such scaling limits to the non-self-adjoint setting using Brown measure.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2421914","Collaborative Research: Conference: Prairie Analysis Seminar 2024-2025","DMS","ANALYSIS PROGRAM","09/15/2024","07/19/2024","Virginia Naibo","KS","Kansas State University","Standard Grant","Jeremy Tyson","08/31/2026","$25,855.00","Diego Maldonado","vnaibo@ksu.edu","1601 VATTIER STREET","MANHATTAN","KS","665062504","7855326804","MPS","128100","7556, 9150","$0.00","This award supports participants at the 2024 and 2025 editions of the Prairie Analysis Seminar. The Fall 2024 event will be held in October 2024 at the University of Kansas; the Fall 2025 event will be hosted by Kansas State University (date to be determined). The Prairie Analysis Seminar is an ongoing collaboration between the mathematics departments at Kansas State University and the University of Kansas. Since its inception in 2001, the conference has showcased the research of a diverse group of mathematicians working in analysis and partial differential equations. The event provides participants in early career stages with the opportunity to present their work via contributed talks, to get advice from experts, and to expand their professional networks. In addition, the event promotes the participation of underrepresented and underserved groups in mathematics, in particular, researchers from smaller colleges and universities in geographical proximity to the host institutions.

Invited speakers at the Prairie Analysis Seminar are leading scholars well known for their contributions to active areas of research within analysis and partial differential equations and for their ability to communicate with a broad mathematical audience. Each event features an invited principal speaker, who gives two one-hour lectures, accompanied by two invited speakers who each give a one-hour lecture. An important component of the seminar is the time reserved for short talks by early career participants, including advanced Ph.D. students and postdocs. The conference also includes a session for discussion of open problems suggested by conference participants.

https://www.math.ksu.edu/research/centers-groups/group/analysis/prairie_seminar.html

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2350067","Polynomial approximation in spaces of analytic functions","DMS","ANALYSIS PROGRAM","08/01/2024","07/17/2024","Dominique Guillot","DE","University of Delaware","Standard Grant","Wing Suet Li","07/31/2027","$150,000.00","","dguillot@udel.edu","220 HULLIHEN HALL","NEWARK","DE","197160099","3028312136","MPS","128100","9150","$0.00","Constructing approximations of complicated functions using simpler ones is a problem of fundamental importance in many fields of sciences and engineering. Such approximations are used, for example, to simplify calculations and make computer algorithms faster and more efficient. This is particularly important for real-time systems (flight control systems, medical devices, smartphones, sensors, etc.) where numerical calculations need to be performed as quickly as possible. Polynomials play a crucial role in constructing such approximations. They are used across many fields due to their versatility in simplifying complex functions and providing accurate estimations. In many instances, however, building explicit polynomial approximation schemes remains an open problem. In this project, the investigator and his colleagues study new methods to construct polynomial approximations for functions belonging to well-known spaces of functions. The research advances our knowledge of how to efficiently build such approximations and also connects the problem to other fields in mathematics such as matrix analysis and operator theory. The project also supports education by training one PhD student to become a new expert in the field. The PI will also engage in undergraduate and high school student mentoring and outreach activities.


Function approximation constitutes a significant branch of analysis, involving the approximation of general functions by various families of simpler ones. This concept holds far-reaching applications across diverse mathematical disciplines and scientific domains. This project tackles challenging questions in complex analysis, focusing on polynomial approximation in spaces of analytic functions. While its history is extensive, a comprehensive understanding of polynomial approximation in many function spaces has been attained only recently, while others remain very active areas of research. The first part of the project explores constructive polynomial approximation schemes in weighted Dirichlet and in de Branges--Rovnyak spaces by re-framing the approximation problem as concrete matrix and operator theory problems. The second part of the project focuses on determining general conditions guaranteeing well-known approximation schemes (Cesaro, Abel, etc.) converge in general Banach holomorphic function spaces. The classical approximation property (AP) of Banach spaces plays a crucial role in this problem. Of particular interest is how strengthened versions of the AP can be used to characterize the existence of specific approximation schemes. Properties of the kernel of reproducing kernel Hilbert spaces of functions also have the potential to reveal when specific approximation schemes are valid. The techniques developed in the project are applicable to several fundamental spaces of analytic functions and therefore contribute to the broad problem of understanding constructive polynomial approximation in function spaces.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2403698","Conference: Geometry of Measures and Free Boundaries","DMS","ANALYSIS PROGRAM","02/15/2024","02/14/2024","Bobby Wilson","WA","University of Washington","Standard Grant","Jeremy Tyson","01/31/2025","$50,000.00","Matthew Badger, Stefan Steinerberger, Mariana Smit Vega Garcia","blwilson@uw.edu","4333 BROOKLYN AVE NE","SEATTLE","WA","981951016","2065434043","MPS","128100","7556","$0.00","This award provides support for an international research conference on the geometry of measures and free boundaries, to take place July 22-26, 2024 at the University of Washington, Seattle. The conference lies at the intersection of calculus of variations, geometric measure theory, harmonic analysis, and partial differential equations. The event will include pre-conference introductory minicourses for PhD students, to be held July 20-21, 2024. Funding from this award will support travel expenses for non-local speakers and other participants in the conference and the minicourses, with priority for participant support given to PhD students, postdocs, and researchers without access to other sources of funding.

The subject of Geometric Measure Theory encompasses a range of analytical tools used to describe the size and structure of sets with a geometric flavor, and the theory of Free Boundary Problems addresses the challenge of characterizing unknown interfaces that adhere to specified constraints, often described by a partial differential equation. Free Boundary Problems arise in several disciplines outside of pure mathematics, including physics, finance, and biology. Contemporary geometric measure theory in metric spaces is parallel to applied research on the problem of identifying manifold structure in large data sets. The two main topics of the conference are linked through a spectrum of notions of lower-order and higher-order regularity of sets. By convening current practitioners in geometric measure theory, free boundary problems, and related areas of geometric and harmonic analysis to share their perspectives and report on the latest advances, the conference seeks to strengthen the connections between the subjects, to shed light on shared principles, and to facilitate novel solutions to longstanding problems. The conference website is https://sites.google.com/view/gmfbseattle2024/

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2421913","Collaborative Research: Conference: Prairie Analysis Seminar 2024-2025","DMS","ANALYSIS PROGRAM","09/15/2024","07/19/2024","Dionyssios Mantzavinos","KS","University of Kansas Center for Research Inc","Standard Grant","Jeremy Tyson","08/31/2026","$25,376.00","Shuanglin Shao","mantzavinos@ku.edu","2385 IRVING HILL RD","LAWRENCE","KS","660457563","7858643441","MPS","128100","7556, 9150","$0.00","This award supports participants at the 2024 and 2025 editions of the Prairie Analysis Seminar. The Fall 2024 event will be held in October 2024 at the University of Kansas; the Fall 2025 event will be hosted by Kansas State University (date to be determined). The Prairie Analysis Seminar is an ongoing collaboration between the mathematics departments at Kansas State University and the University of Kansas. Since its inception in 2001, the conference has showcased the research of a diverse group of mathematicians working in analysis and partial differential equations. The event provides participants in early career stages with the opportunity to present their work via contributed talks, to get advice from experts, and to expand their professional networks. In addition, the event promotes the participation of underrepresented and underserved groups in mathematics, in particular, researchers from smaller colleges and universities in geographical proximity to the host institutions.

Invited speakers at the Prairie Analysis Seminar are leading scholars well known for their contributions to active areas of research within analysis and partial differential equations and for their ability to communicate with a broad mathematical audience. Each event features an invited principal speaker, who gives two one-hour lectures, accompanied by two invited speakers who each give a one-hour lecture. An important component of the seminar is the time reserved for short talks by early career participants, including advanced Ph.D. students and postdocs. The conference also includes a session for discussion of open problems suggested by conference participants.

https://www.math.ksu.edu/research/centers-groups/group/analysis/prairie_seminar.html

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2350290","Fully Nonlinear Equations and Minimal Submanifolds in Lagrangian Geometry","DMS","GEOMETRIC ANALYSIS, ANALYSIS PROGRAM","08/15/2024","07/17/2024","Arunima Bhattacharya","NC","University of North Carolina at Chapel Hill","Standard Grant","Jeremy Tyson","07/31/2027","$234,951.00","","arunimab@unc.edu","104 AIRPORT DR STE 2200","CHAPEL HILL","NC","275995023","9199663411","MPS","126500, 128100","","$0.00","This project lies at the intersection of differential geometry and nonlinear partial differential equations. The differential equations under consideration have geometric motivation and content, and model natural problems such as the reconstruction of the shape of a geometric object based on knowledge of how that object curves in its ambient environment, or the motion of a geometric object which deforms in a manner determined by its extrinsic curvature. Other equations of interest describe the structure of geometric objects which locally minimize a suitable notion of area subject to predetermined constraints. Such objects arise classically in mathematical models for soap films. Prescribed curvature equations and the equations describing time-dependent curvature flow also find applications in mathematical physics, e.g., to the geometry of spacetime and to the deformation theory of elastic bodies. The project will generate research opportunities for graduate students and will facilitate the mentoring of graduate students and postdocs through interactive research seminars. In addition, the principal investigator will prepare publicly accessible educational materials through the writing of survey articles on the subject.

Two important types of nonlinear geometric partial differential equations feature heavily in this project: Lagrangian mean curvature equations (and the associated flows), and the Hamiltonian stationary equation (along with other fourth order equations of a similar type). Lagrangian mean curvature equations feature in the existence theory for special Lagrangian submanifolds of Calabi-Yau manifolds, a central issue in mirror symmetry. The Hamiltonian stationary equation identifies critical points for the volume functional on Lagrangian submanifolds under Hamiltonian variations. A priori estimates are crucial for solving certain fully nonlinear equations and for determining fundamental properties of their solutions. Building on prior work in the complex Euclidean setting, the PI will investigate regularity and well-posedness of the variable Lagrangian phase function. In the Lagrangian context, variational problems for the volume functional lead to nonlinear equations of fourth order. A relevant challenge is to identify submanifolds that are minimal within a specific Hamiltonian isotopy class. In contrast with general minimal surfaces, the underlying constraints in this setting permit the existence of compact minimal submanifolds. From an analytic viewpoint, the maximum principle is no longer applicable. This project will develop new strategies for the existence theory for Hamiltonian stationary submanifolds.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." -"2350067","Polynomial approximation in spaces of analytic functions","DMS","ANALYSIS PROGRAM","08/01/2024","07/17/2024","Dominique Guillot","DE","University of Delaware","Standard Grant","Wing Suet Li","07/31/2027","$150,000.00","","dguillot@udel.edu","220 HULLIHEN HALL","NEWARK","DE","197160099","3028312136","MPS","128100","9150","$0.00","Constructing approximations of complicated functions using simpler ones is a problem of fundamental importance in many fields of sciences and engineering. Such approximations are used, for example, to simplify calculations and make computer algorithms faster and more efficient. This is particularly important for real-time systems (flight control systems, medical devices, smartphones, sensors, etc.) where numerical calculations need to be performed as quickly as possible. Polynomials play a crucial role in constructing such approximations. They are used across many fields due to their versatility in simplifying complex functions and providing accurate estimations. In many instances, however, building explicit polynomial approximation schemes remains an open problem. In this project, the investigator and his colleagues study new methods to construct polynomial approximations for functions belonging to well-known spaces of functions. The research advances our knowledge of how to efficiently build such approximations and also connects the problem to other fields in mathematics such as matrix analysis and operator theory. The project also supports education by training one PhD student to become a new expert in the field. The PI will also engage in undergraduate and high school student mentoring and outreach activities.


Function approximation constitutes a significant branch of analysis, involving the approximation of general functions by various families of simpler ones. This concept holds far-reaching applications across diverse mathematical disciplines and scientific domains. This project tackles challenging questions in complex analysis, focusing on polynomial approximation in spaces of analytic functions. While its history is extensive, a comprehensive understanding of polynomial approximation in many function spaces has been attained only recently, while others remain very active areas of research. The first part of the project explores constructive polynomial approximation schemes in weighted Dirichlet and in de Branges--Rovnyak spaces by re-framing the approximation problem as concrete matrix and operator theory problems. The second part of the project focuses on determining general conditions guaranteeing well-known approximation schemes (Cesaro, Abel, etc.) converge in general Banach holomorphic function spaces. The classical approximation property (AP) of Banach spaces plays a crucial role in this problem. Of particular interest is how strengthened versions of the AP can be used to characterize the existence of specific approximation schemes. Properties of the kernel of reproducing kernel Hilbert spaces of functions also have the potential to reveal when specific approximation schemes are valid. The techniques developed in the project are applicable to several fundamental spaces of analytic functions and therefore contribute to the broad problem of understanding constructive polynomial approximation in function spaces.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2350128","Conference: PDE in Moab: Advances in Theory and Application","DMS","ANALYSIS PROGRAM","04/15/2024","04/05/2024","Mark Allen","UT","Brigham Young University","Standard Grant","Jan Cameron","03/31/2025","$35,875.00","Blair Davey, Mariana Smit Vega Garcia","mkallen2@gmail.com","A-153 ASB","PROVO","UT","846021128","8014223360","MPS","128100","7556","$0.00","The purpose of this award is to fund a research conference on Partial Differential Equations (PDE) to take place on June 3-7, 2024, at the Utah State University (USU) building located in Moab, Utah. The conference, called ""PDE in Moab: Advances in Theory and Application"" will feature 14 invited talks, along with 9 contributed talks from early career mathematicians, with a total of approximately 40 participants. Funding attached to this grant will be used to support travel and lodging expenses for participants in the conference, with priority for junior participants who do not have access to other sources of travel funding. The conference website is https://pdemoab.byu.edu

This conference aims to explore the tools and methods of partial differential equations (PDE), and their applications in related fields such as geometric measure theory (GMT), harmonic analysis, and free boundary problems. Historically, these areas of mathematics have benefited from many fruitful interconnections. Indeed, pioneering advancements in free boundary problems adapted techniques from regularity theory in both PDE and GMT. Moreover, recent advances in both nonlinear and nonlocal PDE have enlarged the intersection of the aforementioned fields, thereby increasing interactions, collaborations, and the overall advancement of these areas. This conference will bring together experts from the areas of PDE, GMT, harmonic analysis, and free boundary problems to explore and build on recent progress. The list of speakers is comprised of a dynamic group of mathematicians specializing in complementary fields, many of whom already have intersecting interests. It is expected that by bringing these researchers together, there will be further interaction between research areas, leading to the cross-pollination of techniques and novel research results.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." -"2348806","RUI: PDE and Geometry in non-smooth spaces","DMS","ANALYSIS PROGRAM","07/15/2024","07/15/2024","Luca Capogna","MA","Smith College","Standard Grant","Wing Suet Li","06/30/2027","$275,939.00","","lcapogna@smith.edu","10 ELM ST","NORTHAMPTON","MA","010636304","4135842700","MPS","128100","9229","$0.00","This award supports a project which investigates topics in the theory of partial differential equations in the setting of non-smooth spaces. Partial differential equations provide a powerful mathematical tool to gain insights about equilibrium states of complex physical systems which arise as solutions of certain equations. The properties of the solutions to these equations depend on a ""background geometry"" that models physical features such as the non-homogeneity of materials or the presence of constraints (such as the constraints inherent in the motion of a robotic arm). In many important physical applications, one encounters non-smooth geometries (for instance, fractals) which differ fundamentally from the familiar geometry of Euclidean space, so that standard notions from calculus must be reformulated from a broader perspective. One of the most ubiquitous instances of such ""background geometry"" is known as sub-Riemannian geometry, which models spaces in which motion is possible only along a given set of directions. This non-smooth geometry is widely useful in modeling physical phenomena, for example, in robotics, quantum mechanics, and neuroscience. This project will also provide opportunities for undergraduate and graduate students to work on research projects arising from the proposed work.

The PI will study sub-Riemannian analogues of the curve shortening flow; the regularity of solutions of certain degenerate elliptic parabolic PDE and non-local PDE in the general setting of certain metric spaces endowed with a doubling measure. The common thread between these investigations is the interplay between the non-smooth structure of the space and the behavior of solutions of equations describing critical points of interesting energy functionals. Some of the proposed research will provide a theoretical basis for implementing numerical simulations of real-world systems.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2349942","Dynamics of Polynomials","DMS","OFFICE OF MULTIDISCIPLINARY AC, ANALYSIS PROGRAM","09/01/2024","07/17/2024","Alexander Blokh","AL","University of Alabama at Birmingham","Standard Grant","Jeremy Tyson","08/31/2027","$205,606.00","","ablokh@math.uab.edu","701 S 20TH STREET","BIRMINGHAM","AL","352940001","2059345266","MPS","125300, 128100","5905, 9150","$0.00","This project analyzes the structure and dynamical properties of families of complex polynomials of degree three. Nonlinear mappings arise in mathematical models across a host of scientific and applied fields, and a key issue is to understand how the behavior of such mappings changes as the underlying parameters vary. Among the simplest nonlinear mappings are complex polynomials. The structure and dynamical properties of the space of complex quadratic polynomials has been intensively studied since the early 1980s, culminating in a detailed understanding of the celebrated Mandelbrot set. Analyzing the structure of spaces of complex cubic polynomials is at the heart of this project. The project also provides research opportunities for graduate students and contributes to the training and mentoring of undergraduate students. In addition, the principal investigator continues to serve as director of an outreach program aimed at Alabama high school students.

The project develops the dynamical and structural theory of moduli spaces of complex polynomials of degree three from several perspectives. A first line of inquiry concerns the construction of locally connected models of the cubic connectedness locus. By analogy with classical combinatorial models for the Mandelbrot set, the project also studies a combinatorial model in the cubic case based upon critical portraits. This work relies on recent laminational results recently developed by the PI and collaborators. A further approach to be investigated involves analytic tools. Estimates for the moduli of annuli will be used to show that Julia sets of polynomials of degree three are generated by rational cuts and admit a description in terms of rational laminations. If successful, this line of inquiry will validate conjectural laminational models of such Julia sets as well as certain subsets of the cubic connectedness locus.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2348806","RUI: PDE and Geometry in non-smooth spaces","DMS","ANALYSIS PROGRAM","07/15/2024","07/15/2024","Luca Capogna","MA","Smith College","Standard Grant","Wing Suet Li","06/30/2027","$275,939.00","","lcapogna@smith.edu","10 ELM ST","NORTHAMPTON","MA","010636304","4135842700","MPS","128100","9229","$0.00","This award supports a project which investigates topics in the theory of partial differential equations in the setting of non-smooth spaces. Partial differential equations provide a powerful mathematical tool to gain insights about equilibrium states of complex physical systems which arise as solutions of certain equations. The properties of the solutions to these equations depend on a ""background geometry"" that models physical features such as the non-homogeneity of materials or the presence of constraints (such as the constraints inherent in the motion of a robotic arm). In many important physical applications, one encounters non-smooth geometries (for instance, fractals) which differ fundamentally from the familiar geometry of Euclidean space, so that standard notions from calculus must be reformulated from a broader perspective. One of the most ubiquitous instances of such ""background geometry"" is known as sub-Riemannian geometry, which models spaces in which motion is possible only along a given set of directions. This non-smooth geometry is widely useful in modeling physical phenomena, for example, in robotics, quantum mechanics, and neuroscience. This project will also provide opportunities for undergraduate and graduate students to work on research projects arising from the proposed work.

The PI will study sub-Riemannian analogues of the curve shortening flow, as well as the regularity of solutions of certain degenerate elliptic and parabolic PDE and non-local PDE in the general setting of metric spaces endowed with a doubling measure. The common thread between these investigations is the interplay between the non-smooth structure of the space and the behavior of solutions of equations describing critical points of interesting energy functionals. Some of the proposed research will provide a theoretical basis for implementing numerical simulations of real-world systems.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2348739","RUI: Topics in Free Boundary Problems","DMS","ANALYSIS PROGRAM","09/01/2024","07/10/2024","Mariana Smit Vega Garcia","WA","Western Washington University","Standard Grant","Marian Bocea","08/31/2027","$208,780.00","","mariana.smitvegagarcia@wwu.edu","516 HIGH ST","BELLINGHAM","WA","982255996","3606502884","MPS","128100","9229, 9251","$0.00","Partial Differential Equations (PDE) describe many physical phenomena, including heat or wave propagation, and electromagnetism. The scientific part of this project focuses on families of PDE that model stochastic control, image processing, chemical diffusion, and combustion. The investigator develops new tools which allow her to better understand questions at the interface of mathematics and other sciences, leading to a deeper understanding of the problems being modeled. Furthermore, the investigator organizes a week-long mathematics workshop focused on first-generation freshmen and sophomore students, addressing a large group that is severely underserved. The students participate in minicourses, attend research talks, and have informal conversations with mathematicians who work in different sectors. The workshop is designed to maximize the chance of success of these students, promoting the progress of science and contributing to the development of a mathematically well-versed and diverse workforce. Finally, the investigator organizes a yearly event to help advanced undergraduates and master's students prepare their applications for graduate school in mathematics.

This project focuses on questions arising in free boundary problems and geometric measure theory. Free boundaries often appear in the applied sciences, in situations where the solution to a problem consists of a pair: a function (often satisfying a PDE), and a set related to this function. The main questions investigated by this project are related to the regularity of the function and the geometry of the associated set. The investigator answers these questions for problems modeled by nonlocal equations, almost minimizers with free boundaries, and minimizers for anisotropic energies. The first class of problems involves PDE which have fundamental importance for mathematical modeling. In particular, numerous applied phenomena give rise to nonlocal equations, such as nonlocal image processing and liquid crystals. In this part of the project, the investigator develops a technique to obtain results for parabolic, nonlocal equations, from their elliptic counterparts. Secondly, the study of almost minimizers with free boundaries has outstanding potential to treat a new group of physically motivated problems, as the almost minimizing property can be understood as a minimizing problem with noise. Finally, minimizers for anisotropic energies lead to non-uniformly elliptic PDE, generating new, challenging questions in geometric PDE.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2400111","Collaborative Research: Conference: Brazos Analysis Seminar","DMS","ANALYSIS PROGRAM","04/01/2024","03/25/2024","Mehrdad Kalantar","TX","University of Houston","Standard Grant","Wing Suet Li","03/31/2027","$16,000.00","","kalantar@math.uh.edu","4300 MARTIN LUTHER KING BLVD","HOUSTON","TX","772043067","7137435773","MPS","128100","7556","$0.00","This award provides three years of funding to help defray the expenses of participants in the semi-annual conference series ""Brazos Analysis Seminar"" 2024-2026, the first meeting of which will be held in Spring 2024 at Texas Christian University. Subsequent meetings will rotate among the University of Texas at Austin, University of Houston, Texas A&M University, and Baylor University. The Brazos Analysis Seminar will bring together analysts at academic institutions within the South-Central region of the United States on a regular basis to communicate their research, with a particular emphasis on providing an opportunity for young researchers and graduate students to meet, collaborate and disseminate their work on a regular basis during the academic year. The format for the seminar provides ample opportunity for graduate students, postdocs, and junior investigators to present their work, start new collaborations, learn about the latest developments in modern analysis, and to advance their careers.

The scientific topics of this conference series will focus on the analytic theory of operator algebras and operator spaces and their connections to harmonic analysis, ergodic theory, dynamical systems, and quantum information theory. These include free probability methods in the study of quantum groups, Fourier multiplier theory on noncommutative Lp spaces, dynamical systems, and K-theory of C*-algebras and von Neumann algebras. In each meeting, there will be 3 plenary talks given by prominent experts and 6 contributed talks, presented by 3 experts from the region and 3 postdoctoral or upper-level PhD students. The goal is to keep both junior and senior researchers at south-central institutions exposed to and informed of the latest major mathematical developments in noncommutative analysis, and to enhance and advance research on related topics. Additional information is available on the seminar website https://sites.google.com/site/brazosanalysisseminar.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2349550","Weights in Harmonic Analysis and PDEs","DMS","OFFICE OF MULTIDISCIPLINARY AC, ANALYSIS PROGRAM","07/15/2024","07/11/2024","David Cruz-Uribe","AL","University of Alabama Tuscaloosa","Standard Grant","Wing Suet Li","06/30/2027","$249,454.00","","dcruzuribe@ua.edu","801 UNIVERSITY BLVD","TUSCALOOSA","AL","354012029","2053485152","MPS","125300, 128100","9150","$0.00","This project concerns two areas within the field of mathematical analysis, namely harmonic analysis and partial differential equations. Both have proved to be very effective in understanding a variety of physical phenomena and have wide applications in engineering and the natural sciences. Partial differential equations are a natural way to model dynamic processes (that is, processes that evolve or change in some way). Harmonic analysis provides both a firm theoretical foundation on which to construct these models and effective tools for analyzing their behavior. One of the main goals of this research is to expand our knowledge of harmonic analysis and its applications to the study of partial differential equations. Significant parts of this project include education and mentoring of graduate students, particularly women and under-represented minorities, and the development of new international research collaborations.

The principal investigator (PI) is working on two projects in harmonic analysis and partial differential equations. In the first, the PI is studying matrix weighted estimates for singular and fractional integrals. He is proving generalizations of the Rubio de Francia extrapolation theorem in this setting and developing a theory of matrix weighted Hardy spaces and matrix weighted variable Lebesgue spaces. These results generalize the extensive literature on scalar weighted inequalities and highlight the differences between scalar and matrix weights. New techniques involving convex-set valued functions are used to overcome various technical obstacles that arise in the passage from scalar to matrix weights. In the second project, the PI is studying the existence, uniqueness, and regularity properties of solutions of second order, degenerate elliptic equations with lower order terms. The goal is to construct a theory on as general an equation as possible with the fewest assumptions on the coefficients and the region. These assumptions are expressed in terms of the existence of matrix weighted Sobolev and Poincaré inequalities. This approach unites and extends a number of results that are already in the literature. The PI is also studying the existence of such Sobolev and Poincaré inequalities by applying the theory of matrix weighted norm inequalities.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." @@ -42,10 +44,10 @@ "2340465","CAREER: Gauge-theoretic Floer invariants, C* algebras, and applications of analysis to topology","DMS","TOPOLOGY, ANALYSIS PROGRAM","09/01/2024","02/02/2024","Sherry Gong","TX","Texas A&M University","Continuing Grant","Qun Li","08/31/2029","$89,003.00","","sgongli@tamu.edu","400 HARVEY MITCHELL PKY S STE 30","COLLEGE STATION","TX","778454375","9798626777","MPS","126700, 128100","1045","$0.00","The main research goal of this project is to apply analytic tools coming from physics, such as gauge theory and operator algebras, to topology, which is the study of geometric shapes. This research is divided into two themes: low dimensional topology and operator K-theory. In both fields, the aforementioned analytic tools are used to build invariants to study the geometric structure of manifolds, which are spaces modelled on Euclidean spaces, like the 3-dimensional space we live in. In both low dimensional topology and operator K-theory, the PI will use analytic tools to study questions about these spaces, such as how they are curved or how objects can be embedded inside them. These questions have a wide range of applications in biology and physics. The educational and outreach goals of this project involve math and general STEM enrichment programs at the middle and high school levels, with a focus on programs aimed at students from underserved communities and underrepresented groups, as well as mentorship in research at the high school, undergraduate and graduate levels.

In low dimensional topology, this project focuses on furthering our understanding of instanton and monopole Floer homologies and their relation to Khovanov homology, and using this to study existence questions of families of metrics with positive scalar curvature on manifolds, as well as questions about knot concordance. Separately this project also involves computationally studying knot concordance, both by a computer search for concordances and by computationally studying certain local equivalence and almost local equivalence groups that receive homomorphisms from the knot concordance groups. In operator algebras, this project focuses on studying their K-theory and its applications to invariants in geometry and topology. The K-theory groups of operator algebras are the targets of index maps of elliptic operators and have important applications to the geometry and topology of manifolds. This project involves studying the K-theory of certain C*-algebras and using them to study infinite dimensional spaces; studying the noncommutative geometry of groups that act on these infinite dimensional spaces and, in particular, the strong Novikov conjecture for these groups; and studying the coarse Baum-Connes conjecture for high dimensional expanders.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2404675","Conference: Young Mathematicians in C*-Algebras 2024","DMS","ANALYSIS PROGRAM","04/15/2024","04/09/2024","Adam Fuller","OH","Ohio University","Standard Grant","Jan Cameron","03/31/2025","$49,665.00","Priyanga Ganesan","fullera@ohio.edu","1 OHIO UNIVERSITY","ATHENS","OH","457012979","7405932857","MPS","128100","7556","$0.00","This award provides funding for U.S.-based participants, including members of underrepresented groups in the mathematical sciences, to participate in the conference Young Mathematicians in C*-Algebras (YMC*A), to be held August 5 -9, 2024 at The University of Glasgow, United Kingdom. This meeting is organized for and by graduate students and postdoctoral researchers in operator algebras and related areas, with the goal of fostering scientific and social interaction among early-career researchers. In each of its previous seven editions, YMC*A has provided an excellent opportunity for over one hundred early-career operator algebraists from around the world to attend mini-courses on current research topics in operator algebras. This grant significantly boosts the participation of U.S.-based researchers and their institutions at this conference, exemplifying U.S. research and furnishing opportunities for researchers to expand their professional networks.

The conference focuses on recent developments in operator algebras, noncommutative geometry, and related areas of mathematical analysis, with a particular emphasis on the interplay between operator algebras and group theory, dynamical systems and quantum information theory. The conference features three mini-courses by established researchers alongside many contributed talks by participants, and mentoring activities designed to increase retention of underrepresented groups in operator algebras. More information about the conference is available at: https://ymcstara.org.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2336118","CAREER: Gravitational and Electromagnetic Waves on Black Holes","DMS","APPLIED MATHEMATICS, ANALYSIS PROGRAM","07/01/2024","01/22/2024","Elena Giorgi","NY","Columbia University","Continuing Grant","Dmitry Golovaty","06/30/2029","$67,323.00","","elena.giorgi@columbia.edu","615 W 131ST ST","NEW YORK","NY","100277922","2128546851","MPS","126600, 128100","1045","$0.00","The field of Mathematical General Relativity has played a fundamental role in the analysis of solutions to the Einstein equation, such as black holes - arguably one of the most fundamental objects in our understanding of the universe. Understanding stability of black holes has been central to the mathematical endeavor of confirming their relevance as realistic physical objects. If stable, black holes perturbed with gravitational or other forms of radiation, after a temporary change, would eventually return to their initial state. The investigator aims to advance the current knowledge of perturbation dynamics by including the interaction of electromagnetic radiation with gravitational waves. This interaction is significant as astrophysical black holes are thought to be surrounded by an accretion disk of matter which, in particular, contains electromagnetic waves. The results of this work are shared with the mathematical and physical communities through peer-reviewed publications and seminars and disseminated to the general public through media articles, public lectures and outreach events in schools. The research of the investigator is integrated with educational activities that increase representation of women in mathematics and promote engagement in mathematics among students. Graduate students and postdocs are also to be involved in this research.

The project is focused on building a comprehensive approach to analyze interactions between gravitational waves and electromagnetic radiation on rotating and charged black holes. The investigator incorporates new techniques to obtain precise decay for the gravitational and electromagnetic waves on charged black holes by developing a universal method involving a combined energy-momentum tensor for coupled systems of wave equations. The goal of the project is to prove the non-linear stability of the most general charged and rotating black hole family and extend the investigator's collaborative work on the resolution for the Kerr family. In addition, the investigator aims to obtain conservation laws for charged black holes in connection with their canonical energy while allowing control of the gravitational and electromagnetic energy radiated at infinity.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2400112","Collaborative Research: Conference: Brazos Analysis Seminar","DMS","ANALYSIS PROGRAM","04/01/2024","03/25/2024","Zhizhang Xie","TX","Texas A&M University","Standard Grant","Wing Suet Li","03/31/2027","$16,400.00","","xie@math.tamu.edu","400 HARVEY MITCHELL PKY S STE 30","COLLEGE STATION","TX","778454375","9798626777","MPS","128100","7556","$0.00","This award provides three years of funding to help defray the expenses of participants in the semi-annual conference series ""Brazos Analysis Seminar"" 2024-2026, the first meeting of which will be held in Spring 2024 at Texas Christian University. Subsequent meetings will rotate among the University of Texas at Austin, University of Houston, Texas A&M University, and Baylor University. The Brazos Analysis Seminar will bring together analysts at academic institutions within the South-Central region of the United States on a regular basis to communicate their research, with a particular emphasis on providing an opportunity for young researchers and graduate students to meet, collaborate and disseminate their work on a regular basis during the academic year. The format for the seminar provides ample opportunity for graduate students, postdocs, and junior investigators to present their work, start new collaborations, learn about the latest developments in modern analysis, and to advance their careers.

The scientific topics of this conference series will focus on the analytic theory of operator algebras and operator spaces and their connections to harmonic analysis, ergodic theory, dynamical systems, and quantum information theory. These include free probability methods in the study of quantum groups, Fourier multiplier theory on noncommutative Lp spaces, dynamical systems, and K-theory of C*-algebras and von Neumann algebras. In each meeting, there will be 3 plenary talks given by prominent experts and 6 contributed talks, presented by 3 experts from the region and 3 postdoctoral or upper-level PhD students. The goal is to keep both junior and senior researchers at south-central institutions exposed to and informed of the latest major mathematical developments in noncommutative analysis, and to enhance and advance research on related topics. Additional information is available on the seminar website https://sites.google.com/site/brazosanalysisseminar.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2401258","Collaborative Research: Conference: Great Lakes Mathematical Physics Meetings 2024-2025","DMS","ANALYSIS PROGRAM","04/15/2024","04/12/2024","Jeffrey Schenker","MI","Michigan State University","Standard Grant","Jan Cameron","03/31/2026","$25,000.00","Ilya Kachkovskiy","jeffrey@math.msu.edu","426 AUDITORIUM RD RM 2","EAST LANSING","MI","488242600","5173555040","MPS","128100","7556, 9150","$0.00","This award will support participants in the Great Lakes Mathematical Physics Meetings (GLaMP) in 2024 at Michigan State University and in 2025 at the University of Kentucky. The GLaMP meetings are typically held over 3 days in June, with an attendance of 45-50 researchers. The annual conference series, which began in 2016 at Michigan State, focuses on early-career mathematicians working in mathematical physics. Each meeting features invited talks by experts in the field, a minicourse on a topic in mathematical physics, contributed talks by participants, and an interactive career development panel. The main goals of the GLaMP series are: 1) to provide a forum for early-career researchers in mathematical physics, including advanced undergraduates, graduate students, and early-career postdoctoral scholars, to present their research and enhance their career development; 2) to maintain communication and collaboration among scientists working in mathematical physics throughout the United States and, in particular, in the greater Midwest; 3) to encourage participation by women and underrepresented minorities in the field of mathematical physics; and 4) to raise the research profile of mathematical physics within the mathematical and scientific community of the United States.
All details about the 2024 meeting and links to web pages of previous GLaMP meetings can be found at https://sites.google.com/msu.edu/glamp/home.


Mathematical Physics is one of the oldest scientific disciplines and is a very active field worldwide, with researchers working in both mathematics and physics departments. The roots of the field can be traced to the classical mathematics of Newton, Euler, and Gauss. In the twentieth century, there were many developments at the boundary between mathematics and physics, for example, in scattering theory, non-relativistic quantum mechanics, constructive quantum field theory, the foundations of statistical mechanics, and applications of geometry and topology to high energy physics. The field is supported by the International Association of Mathematical Physics, which organizes an international congress every three years. Although there are many mathematical physicists working in the United States, there are few regular conferences representing the field in the US. The GLaMP meetings have evolved to be the main annual meetings focused on mathematical physics in the US. Minicourse topics have included non-equilibrium quantum statistical mechanics, disordered quantum spin chains and many-body localization, non-self-adjoint operators and quantum resonances, the mathematics of aperiodic order, random matrix theory and supersymmetry techniques, quantum trajectories, and mathematical general relativity. Besides the location, we believe that the distinguishing feature of the GLaMP meeting is its emphasis on early-career researchers. The majority of contributed talks are given by early-career faculty, postdocs, and advanced graduate students. In addition to providing a forum that showcases the work of young researchers, the GLaMP meeting also offers career development opportunities, specifically through a three-hour mini-course on an active area of research given by a world-class expert and a career round table with panelists representing different career paths in mathematical physics, both in academia and in industry.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2400332","NSF-BSF: C*-algebras and Dynamics Beyond the Elliott Program","DMS","ANALYSIS PROGRAM","08/01/2024","04/08/2024","Norman Phillips","OR","University of Oregon Eugene","Standard Grant","Jan Cameron","07/31/2027","$343,286.00","","ncp@uoregon.edu","1776 E 13TH AVE","EUGENE","OR","974031905","5413465131","MPS","128100","","$0.00","A C*-algebra is a kind of mathematical object which, for example, appears in quantum mechanics. Simple C*-algebras are those that cannot be broken apart into smaller (""simpler"") C*-algebras. The largest part of this project is about when a simple C*-algebra is isomorphic to its opposite algebra, that is, mathematically the same as what might be thought of as its mirror image. For an example from everyday life, an ordinary sock is the same as its mirror image, since a sock which fits on a right foot will fit equally well on the left foot. A glove isn't like that: whatever one does, a right glove will not fit on a left hand. A nonsimple C*-algebra can be made of very elementary parts, but put together in a tricky way, so as to not be isomorphic to its opposite. Simple C*-algebras which are not separable or not nuclear (""too large"", but in different senses) can also fail to be isomorphic to their opposites. On the other hand, simple C*-algebras covered by the Elliott classification program are isomorphic to their opposites. A long-term goal of the project is to exhibit a simple separable nuclear C*-algebra which is not isomorphic to its opposite. Such an algebra could not be covered even by any proposed expansion of the Elliott program. The project will also contribute to US workforce development through the training of graduate and undergraduate students.

The intended example is a simple unital AH algebra with fast dimension growth. The intended proof that it is not isomorphic to its opposite depends on nonexistence theorems for certain homomorphisms from one matrix algebra over the algebra of continuous functions on a compact space to a different matrix algebra over the continuous functions on a different compact space. When the second matrix size is large enough, all homomorphisms not ruled out for fairly obvious reasons actually exist. When it is small, known obstructions rule out most homomorphisms. The application requires information about an intermediate range. Here, even the simplest case, asked by Blackadar over 30 years ago, remains open; understanding this case is a necessary preliminary step. This case can almost certainly be settled by computations in rational homotopy theory, a new use of algebraic topology in C*-algebras.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2400114","Collaborative Research: Conference: Brazos Analysis Seminar","DMS","ANALYSIS PROGRAM","04/01/2024","03/25/2024","Jose Carrion Muniz","TX","Texas Christian University","Standard Grant","Wing Suet Li","03/31/2027","$17,250.00","Travis Russell","j.carrion@tcu.edu","3101 BELLAIRE DRIVE NORTH","FORT WORTH","TX","761290001","8172577516","MPS","128100","7556","$0.00","This award provides three years of funding to help defray the expenses of participants in the semi-annual conference series ""Brazos Analysis Seminar"" 2024-2026, the first meeting of which will be held in Spring 2024 at Texas Christian University. Subsequent meetings will rotate among the University of Texas at Austin, University of Houston, Texas A&M University, and Baylor University. The Brazos Analysis Seminar will bring together analysts at academic institutions within the South-Central region of the United States on a regular basis to communicate their research, with a particular emphasis on providing an opportunity for young researchers and graduate students to meet, collaborate and disseminate their work on a regular basis during the academic year. The format for the seminar provides ample opportunity for graduate students, postdocs, and junior investigators to present their work, start new collaborations, learn about the latest developments in modern analysis, and to advance their careers.

The scientific topics of this conference series will focus on the analytic theory of operator algebras and operator spaces and their connections to harmonic analysis, ergodic theory, dynamical systems, and quantum information theory. These include free probability methods in the study of quantum groups, Fourier multiplier theory on noncommutative Lp spaces, dynamical systems, and K-theory of C*-algebras and von Neumann algebras. In each meeting, there will be 3 plenary talks given by prominent experts and 6 contributed talks, presented by 3 experts from the region and 3 postdoctoral or upper-level PhD students. The goal is to keep both junior and senior researchers at south-central institutions exposed to and informed of the latest major mathematical developments in noncommutative analysis, and to enhance and advance research on related topics. Additional information is available on the seminar website https://sites.google.com/site/brazosanalysisseminar.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." -"2400112","Collaborative Research: Conference: Brazos Analysis Seminar","DMS","ANALYSIS PROGRAM","04/01/2024","03/25/2024","Zhizhang Xie","TX","Texas A&M University","Standard Grant","Wing Suet Li","03/31/2027","$16,400.00","","xie@math.tamu.edu","400 HARVEY MITCHELL PKY S STE 30","COLLEGE STATION","TX","778454375","9798626777","MPS","128100","7556","$0.00","This award provides three years of funding to help defray the expenses of participants in the semi-annual conference series ""Brazos Analysis Seminar"" 2024-2026, the first meeting of which will be held in Spring 2024 at Texas Christian University. Subsequent meetings will rotate among the University of Texas at Austin, University of Houston, Texas A&M University, and Baylor University. The Brazos Analysis Seminar will bring together analysts at academic institutions within the South-Central region of the United States on a regular basis to communicate their research, with a particular emphasis on providing an opportunity for young researchers and graduate students to meet, collaborate and disseminate their work on a regular basis during the academic year. The format for the seminar provides ample opportunity for graduate students, postdocs, and junior investigators to present their work, start new collaborations, learn about the latest developments in modern analysis, and to advance their careers.

The scientific topics of this conference series will focus on the analytic theory of operator algebras and operator spaces and their connections to harmonic analysis, ergodic theory, dynamical systems, and quantum information theory. These include free probability methods in the study of quantum groups, Fourier multiplier theory on noncommutative Lp spaces, dynamical systems, and K-theory of C*-algebras and von Neumann algebras. In each meeting, there will be 3 plenary talks given by prominent experts and 6 contributed talks, presented by 3 experts from the region and 3 postdoctoral or upper-level PhD students. The goal is to keep both junior and senior researchers at south-central institutions exposed to and informed of the latest major mathematical developments in noncommutative analysis, and to enhance and advance research on related topics. Additional information is available on the seminar website https://sites.google.com/site/brazosanalysisseminar.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2350393","Dynamics Around Translation Surfaces","DMS","ANALYSIS PROGRAM","07/01/2024","04/12/2024","Jonathan Chaika","UT","University of Utah","Standard Grant","Jan Cameron","06/30/2027","$345,497.00","","jonchaika@gmail.com","201 PRESIDENTS CIR","SALT LAKE CITY","UT","841129049","8015816903","MPS","128100","","$0.00","This award will support a project in dynamical systems. The mathematical field of dynamical systems seeks to understand how a system behaves as time evolves; it is an important subfield of mathematical analysis, which enjoys connections and applications to many other areas of the mathematical sciences. The systems at the heart of this project are connected to physics and geometry. A concrete example is that of a point mass traveling inside a polygon, which has elastic collision when it hits the sides. One focus of the project is to understand how prevalent randomness is in these systems. The PI will also investigate the structure of paths in related dynamical systems and aims to deepen our understanding of the connection between geometric and dynamical properties. This project will also stimulate the growth of the next generation of mathematicians by providing graduate student research opportunities.


This project is concerned with two closely related dynamical systems: flows on translation surfaces and the SL(2,R) action on the space of translation surfaces. It seeks to better understand when the spectrum of (the one-parameter unitary group coming from) a flow on a translation surface is continuous (aside from a simple eigenvalue of 0). It seeks to understand the dynamics of the strictly upper triangular subgroup of SL(2,R) on spaces of translation surfaces. In particular, it asks whether there are situations in which the orbit closures are always constrained, and how wildly averages along orbits can behave.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2349868","Testing Theorems in Analytic Function Theory, Harmonic Analysis and Operator Theory","DMS","ANALYSIS PROGRAM","07/01/2024","04/02/2024","Brett Wick","MO","Washington University","Standard Grant","Wing Suet Li","06/30/2027","$310,000.00","","wick@math.wustl.edu","ONE BROOKINGS DR","SAINT LOUIS","MO","63110","3147474134","MPS","128100","","$0.00","This proposal involves fundamental mathematical research at the intersection of analytic function theory, harmonic analysis, and operator theory. Motivation to study these questions can be found in partial differential equations, which are fundamental to the study of science and engineering. The solution to a partial differential equation is frequently given by an integral operator, a Calderon-Zygmund operator, whose properties can be used to deduce corresponding properties of these partial differential equations. In general, studying these Calderon-Zygmund operators is challenging, and one seeks to study their action on certain spaces of functions by checking the behavior only on a simpler class of test functions. By analogy, this can be seen as attempting to understand a complicated musical score by understanding only a simpler finite collection of pure frequencies. The proposed research is based on recent contributions made by the PI, leveraging the skills and knowledge developed through prior National Science Foundation awards. Through this proposal the PI will address open and important questions at the interface of analytic function theory, harmonic analysis, and operator theory. Resolution of questions in these areas will provide for additional lines of inquiry.
Funds from this award will support a diverse group of graduate students whom the PI advises, helping to increase the national pipeline of well-trained STEM students for careers in academia, government, or industry.

The research program of this proposal couples important open questions with the PI's past work. The general theme will be to use methods around ``testing theorems,'' called ``T1 theorems'' in harmonic analysis or the ``reproducing kernel thesis'' in analytic function theory and operator theory, to study questions that arise in analytic function theory, harmonic analysis, and operator theory. In particular, applications of the proof strategy of testing theorems will: (1) be used to characterize when Calderon-Zygmund operators are bounded between weighted spaces both for continuous and dyadic variants of these operators; (2) serve as motivation for a class of questions related to operators on the Fock space of analytic functions that are intimately connected to Calderon-Zygmund operators; and, (3) be leveraged to provide a method to study Carleson measures in reproducing kernel Hilbert spaces of analytic functions. Results obtained will open the door to other lines of investigation.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2349828","Spatial restriction of exponential sums to thin sets and beyond","DMS","ANALYSIS PROGRAM","06/01/2024","04/01/2024","Ciprian Demeter","IN","Indiana University","Standard Grant","Wing Suet Li","05/31/2027","$299,999.00","","demeterc@indiana.edu","107 S INDIANA AVE","BLOOMINGTON","IN","474057000","3172783473","MPS","128100","","$0.00","In recent years, the PI has developed a new tool called decoupling that measures the extent to which waves traveling in different directions interact with each other. While this tool was initially intended to analyze differential equations that describe wave cancellations, it has also led to important breakthroughs in number theory. For example, Diophantine equations are potentially complicated systems of equations involving whole numbers. They are used to generate scrambling and diffusion keys, which are instrumental in encrypting data. Mathematicians are interested in counting the number of solutions to such systems. Unlike waves, numbers do not oscillate, at least not in an obvious manner. But we can think of numbers as frequencies and thus associate them with waves. In this way, problems related to counting the number of solutions to Diophantine systems can be rephrased in the language of quantifying wave interferences. This was the case with the PI's breakthrough resolution of the Main Conjecture in Vinogradov's Mean Value Theorem. The PI plans to further extend the scope of decoupling toward the resolution of fundamental problems in harmonic analysis, geometric measure theory, and number theory. He will seek to make the new tools accessible and useful to a large part of the mathematical community. This project provides research training opportunities for graduate students.


Part of this project is aimed at developing the methodology to analyze the Schrödinger maximal function in the periodic setting. Building on his recent progress, the PI aims to incorporate Fourier analysis and more delicate number theory into the existing combinatorial framework. Decouplings have proved successful in addressing a wide range of problems in such diverse areas as number theory, partial differential equations, and harmonic analysis. The current project seeks to further expand the applicability of this method in new directions. One of them is concerned with finding sharp estimates for the Fourier transforms of fractal measures supported on curved manifolds. The PI seeks to combine decoupling with sharp estimates for incidences between balls and tubes. In yet another direction, he plans to further investigate the newly introduced tight decoupling phenomenon. This has deep connections to both number theory and the Lambda(p) estimates.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." @@ -83,10 +85,10 @@ "2349794","Regularity Problems in Free Boundaries and Degenerate Elliptic Partial Differential Equations","DMS","ANALYSIS PROGRAM","07/01/2024","04/02/2024","Ovidiu Savin","NY","Columbia University","Standard Grant","Marian Bocea","06/30/2027","$273,927.00","","os2161@columbia.edu","615 W 131ST ST","NEW YORK","NY","100277922","2128546851","MPS","128100","","$0.00","The goal of this project is to develop new methods for the mathematical theory in several problems of interest involving partial differential equations (PDE). The problems share some common features and are motivated by various physical phenomena such as the interaction of elastic membranes in contact with one another, jet flows of fluids, surfaces of minimal area, and optimal transportation between the elements of two regions. Advancement in the theoretical knowledge about these problems would be beneficial to the scientific community in general and possibly have applications to more concrete computational aspects of solving these equations numerically. The outcomes of the project will be disseminated at a variety of seminars and conferences.

The project focuses on the regularity theory of some specific free boundary problems and nonlinear PDE. The first part is concerned with singularity formation in the Special Lagrangian equation. The equation appears in the context of calibrated geometries and minimal submanifolds. The Principal Investigator (PI) studies the stability of singular solutions under small perturbations together with certain degenerate Bellman equations that are relevant to their study. The second part of the project is devoted to free boundary problems. The PI investigates regularity questions that arise in the study of the two-phase Alt-Phillips family of free boundary problems. Some related questions concern rigidity of global solutions in low dimensions in the spirit of the De Giorgi conjecture. A second problem of interest involves coupled systems of interacting free boundaries. They arise in physical models that describe the configuration of multiple elastic membranes that are interacting with each other according to some specific potential. Another part of the project is concerned with the regularity of nonlocal minimal graphs and some related questions about the boundary Harnack principle for nonlocal operators.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2400090","Microlocal Analysis and Hyperbolic Dynamics","DMS","ANALYSIS PROGRAM","07/01/2024","04/01/2024","Semyon Dyatlov","MA","Massachusetts Institute of Technology","Continuing Grant","Marian Bocea","06/30/2027","$120,678.00","","dyatlov@MATH.MIT.EDU","77 MASSACHUSETTS AVE","CAMBRIDGE","MA","021394301","6172531000","MPS","128100","","$0.00","This project investigates a broad range of topics at the intersection of microlocal analysis and hyperbolic dynamics. Microlocal analysis, with its roots in physical phenomena such as geometric optics and quantum/classical correspondence, is a powerful mathematical theory relating classical Hamiltonian dynamics to singularities of waves and quantum states. Hyperbolic dynamics is the mathematical theory of strongly chaotic systems, where a small perturbation of the initial data leads to exponentially divergent trajectories after a long time. The project takes advantage of the interplay between these two fields, studying the behavior of waves and quantum states in situations where the underlying dynamics is strongly chaotic, and also exploring the applications of microlocal methods to purely dynamical questions. The project provides research training opportunities for graduate students.

One direction of this project is in the highly active field of quantum chaos, the study of spectral properties of quantum systems where the underlying classical system has chaotic behavior. The Principal Investigator (PI) has introduced new methods in the field coming from harmonic analysis, fractal geometry, additive combinatorics, and Ratner theory, combined together in the concept of fractal uncertainty principle. The specific goals of the project include: (1) understanding the macroscopic concentration of high energy eigenfunctions of closed chaotic systems, such as negatively curved Riemannian manifolds and quantum cat maps; and (2) proving essential spectral gaps (implying in particular exponential local energy decay of waves) for open systems with fractal hyperbolic trapped sets. A second research direction is the study of forced waves in stratified fluids (with similar problems appearing also for rotating fluids), motivated by experimentally observed internal waves in aquaria and by applications to oceanography. A third direction is to apply microlocal methods originally developed for the theory of hyperbolic partial differential equations to study classical objects such as dynamical zeta functions, which is a rare example of the reversal of quantum/classical correspondence. In particular, the PI and his collaborators study (1) how the special values of the dynamical zeta function for a negatively curved manifold relate to the topology of the manifold; and (2) whether dynamical zeta functions can be meromorphically continued for systems with singularities such as dispersive billiards.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2334874","Conference: Pittsburgh Links among Analysis and Number Theory (PLANT)","DMS","ALGEBRA,NUMBER THEORY,AND COM, ANALYSIS PROGRAM","02/01/2024","01/19/2024","Carl Wang Erickson","PA","University of Pittsburgh","Standard Grant","James Matthew Douglass","01/31/2025","$20,000.00","Theresa Anderson, Armin Schikorra","carl.wang-erickson@pitt.edu","4200 FIFTH AVENUE","PITTSBURGH","PA","152600001","4126247400","MPS","126400, 128100","7556","$0.00","This award will support the four-day conference ""Pittsburgh Links among Analysis and Number Theory (PLANT)"" that will take place March 4-7, 2024 in Pittsburgh, PA. The purpose of the conference is to bring together representatives of two disciplines with a shared interface: number theory and analysis. There is a large potential for deeper collaboration between these fields resulting in new and transformative mathematical perspectives, and this conference aims at fostering such an interchange. In particular, the conference is designed to attract PhD students and post-doctoral scholars into working on innovations at this interface.

To encourage the development of new ideas, the conference speakers, collectively, represent many subfields that have developed their own distinctive blend of analysis and number theory, such as analytic number theory, arithmetic statistics, analytic theory of modular and automorphic forms, additive combinatorics, discrete harmonic analysis, and decoupling. While there have been a wide variety of conferences featuring these subfields in relative isolation, the PIs are excited at PLANT's potential for sparking links among all of these subfields and giving early-career participants the opportunity to be part of this exchange. The conference website is https://sites.google.com/view/plant24/home.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." -"2349865","Analysis and Dynamics in Several Complex Variables","DMS","ANALYSIS PROGRAM","06/01/2024","03/21/2024","Xianghong Gong","WI","University of Wisconsin-Madison","Standard Grant","Jeremy Tyson","05/31/2027","$333,182.00","","gong@math.wisc.edu","21 N PARK ST STE 6301","MADISON","WI","537151218","6082623822","MPS","128100","","$0.00","This award supports research at the interface of several complex variables, differential geometry, and dynamical systems. Complex analysis studies the behavior and regularity of functions defined on and taking values in spaces of complex numbers. It remains an indispensable tool across many domains in the sciences, engineering, and economics. This project considers the smoothness of transformations on a domain defined by complex valued functions when the domain is deformed. Using integral formulas, the PI will study how invariants of a domain vary when the underlying structure of the domain changes. Another component of the project involves the study of resonance. The PI will use small divisors that measure non-resonance to classify singularities of the complex structure arising in linear approximations of curved manifolds. The project will involve collaboration with researchers in an early career stage and will support the training of graduate students.

Motivated by recent counterexamples showing that smooth families of domains may be equivalent by a discontinuous family of biholomorphisms, the PI will study the existence of families of biholomorphisms between families of domains using biholomorphism groups and other analytic tools such as Bergman metrics. The PI will construct a global homotopy formula with good estimates for suitable domains in a complex manifold. One of the goals is to construct a global formula in cases when a local homotopy formula fails to exist. The PI will use such global homotopy formulas to investigate the stability of holomorphic embeddings of domains with strongly pseudoconvex or concave boundary in a complex manifold, when the complex structure on the domains is deformed. The PI will use this approach to investigate stability of global Cauchy-Riemann structures on Cauchy-Riemann manifolds of higher codimension. The project seeks a holomorphic classification of neighborhoods of embeddings of a compact complex manifold in complex manifolds via the Levi-form and curvature of the normal bundle. In addition, the PI will study the classification of Cauchy-Riemann singularities for real manifolds using methods from several complex variables and small-divisor conditions in dynamical systems.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2350530","Analysis and Geometry of Conformal and Quasiconformal Mappings","DMS","ANALYSIS PROGRAM","06/01/2024","04/02/2024","Malik Younsi","HI","University of Hawaii","Standard Grant","Jeremy Tyson","05/31/2027","$211,262.00","","malik.younsi@gmail.com","2425 CAMPUS RD SINCLAIR RM 1","HONOLULU","HI","968222247","8089567800","MPS","128100","9150","$0.00","This project aims to better understand the analytic and geometric properties of conformal and quasiconformal mappings. Conformal mappings are planar transformations which locally preserve angles. An important example is the Mercator projection in cartography, used to project the surface of the Earth to a two-dimensional map. More recently, much attention has been devoted to the study of quasiconformal mappings, a generalization of conformal mappings where a controlled amount of angle distortion is permitted. Because of this additional flexibility, quasiconformal mappings have proven over the years to be of fundamental importance in a wide variety of areas of mathematics and applications. Many of these applications involve planar transformations that are quasiconformal inside a given region except possibly for some exceptional set of points inside the region. The study of this exceptional set leads to the notion of removability, central to this research project and closely related to fundamental questions in complex analysis, dynamical systems, probability and related areas. Another focus of this project is on the study of certain families of quasiconformal mappings called holomorphic motions. The principal investigator will study how quantities such as dimension and area change under holomorphic motions, leading to a better understanding of the geometric properties of quasiconformal mappings. 
The project also provides opportunities for the training and mentoring of early career researchers, including graduate students. In addition, the principal investigator will continue to be involved in a science and mathematics outreach program for local high school students.

Two strands of research comprise the planned work. The first component involves the study of conformal removability. Motivated by the long-standing Koebe uniformization conjecture, the principal investigator will investigate the relationship between removability and the rigidity of circle domains. This part of the project also involves the study of conformal welding, a correspondence between planar Jordan curves and functions on the circle. Recent years have witnessed a renewal of interest in conformal welding along with new generalizations and variants, notably in the theory of random surfaces and in connection with applications to computer vision and numerical pattern recognition. The second component of the project concerns holomorphic motions. The principal investigator will study the variation of several notions of dimension under holomorphic motions. A new approach to this topic by the principal investigator and his collaborators using inf-harmonic functions has already yielded a unified treatment of several celebrated theorems about quasiconformal mappings, and many more fruitful connections are anticipated as progress continues to be made towards a better understanding of holomorphic motions. This part of the project also involves the relationship between global quasiconformal dimension and conformal dimension.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2350356","Dynamics of Nonlinear and Disordered Systems","DMS","ANALYSIS PROGRAM","06/01/2024","04/02/2024","Wilhelm Schlag","CT","Yale University","Continuing Grant","Marian Bocea","05/31/2027","$149,265.00","","wilhelm.schlag@yale.edu","150 MUNSON ST","NEW HAVEN","CT","065113572","2037854689","MPS","128100","","$0.00","Observations of solitary waves that maintain their shape and velocity during their propagation were recorded around 200 years ago. First by Bidone in Turin in 1826, and then famously by Russell in 1834 who followed a hump of water moving at constant speed along a channel for several miles. Today these objects are known as solitons. Lying at the intersection of mathematics and physics, they have been studied rigorously since the 1960s. For completely integrable wave equations, many properties of solitons are known, such as their elastic collisions, their stability properties, as well as their role as building blocks in the long-time description of waves. The latter is particularly important, as it for example predicts how waves carrying information decompose into quantifiable units. In quantum physics, quantum chemistry, and material science, these mathematical tools allow for a better understanding of the movement of electrons in various media. This project aims to develop the mathematical foundations which support these areas in applied science, which are of great importance to industry and society at large. The project provides research training opportunities for graduate students.

The project's goal is to establish both new results and new techniques in nonlinear evolution partial differential equations on the one hand, and the spectral theory of disordered systems on the other hand. The long-range scattering theory developed by Luhrmann and the Principal Investigator (PI) achieved the first results on potentials which exhibit a threshold resonance in the context of topological solitons. This work is motivated by the fundamental question about asymptotic kink stability for the phi-4 model. Asymptotic stability of Ginzburg-Landau vortices in their own equivariance class is not understood. The linearized problem involves a non-selfadjoint matrix operator, and the PI has begun to work on its spectral theory. With collaborators, the PI will engage in research on bubbling for the harmonic map heat flow and attempt to combine the recent paper on continuous-in-time bubbling with a suitable modulation theory. The third area relevant to this project is the spectral theory of disordered systems. More specifically, the PI will continue his work on quasiperiodic symplectic cocycles, which arise in several models in condensed matter physics such as graphene, and on non-perturbative methods to analyze them.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2348908","Low Regularity and Long Time Dynamics in Nonlinear Dispersive Flows","DMS","ANALYSIS PROGRAM","08/01/2024","04/02/2024","Mihaela Ifrim","WI","University of Wisconsin-Madison","Standard Grant","Marian Bocea","07/31/2027","$343,401.00","","ifrim@wisc.edu","21 N PARK ST STE 6301","MADISON","WI","537151218","6082623822","MPS","128100","","$0.00","The primary objective of this project is to examine solutions to a broad class of equations that can be described as nonlinear waves. These mathematical equations model a wide range of physical phenomena arising in fluid dynamics (oceanography), quantum mechanics, plasma physics, nonlinear optics, and general relativity. The equations being studied range from semilinear to fully nonlinear, and from local to nonlocal equations, and we aim to investigate them in an optimal fashion both locally and globally in time. This research develops and connects ideas and methods in partial differential equations, and in some cases also draws a clear path towards other problems in fields such as geometry, harmonic analysis, complex analysis, and microlocal analysis. The project provides research training opportunities for graduate students.

The strength of the nonlinear wave interactions is the common feature in the models considered in this proposal, and it significantly impacts both their short-time and their long-time behavior. The project addresses a series of very interesting questions concerning several classes of nonlinear dispersive equations: (i) short-time existence theory in a low regularity setting; (ii) breakdown of waves, where a particular class of equations is provided by the water wave models; and (iii) long-time persistence and/or dispersion and decay of waves, involving either a qualitative aspect, namely an asymptotic description of the nonlinear solution, or a quantitative one, for instance nontraditional scattering statements providing global-in-time dispersive bounds. All of this also depends strongly on the initial data properties, such as size, regularity, and localization.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2349865","Analysis and Dynamics in Several Complex Variables","DMS","ANALYSIS PROGRAM","06/01/2024","03/21/2024","Xianghong Gong","WI","University of Wisconsin-Madison","Standard Grant","Jeremy Tyson","05/31/2027","$333,182.00","","gong@math.wisc.edu","21 N PARK ST STE 6301","MADISON","WI","537151218","6082623822","MPS","128100","","$0.00","This award supports research at the interface of several complex variables, differential geometry, and dynamical systems. Complex analysis studies the behavior and regularity of functions defined on and taking values in spaces of complex numbers. It remains an indispensable tool across many domains in the sciences, engineering, and economics. This project considers the smoothness of transformations on a domain defined by complex valued functions when the domain is deformed. Using integral formulas, the PI will study how invariants of a domain vary when the underlying structure of the domain changes. Another component of the project involves the study of resonance. The PI will use small divisors that measure non-resonance to classify singularities of the complex structure arising in linear approximations of curved manifolds. The project will involve collaboration with researchers in an early career stage and will support the training of graduate students.

Motivated by recent counterexamples showing that smooth families of domains may be equivalent by a discontinuous family of biholomorphisms, the PI will study the existence of families of biholomorphisms between families of domains using biholomorphism groups and other analytic tools such as Bergman metrics. The PI will construct a global homotopy formula with good estimates for suitable domains in a complex manifold. One of the goals is to construct a global formula in cases when a local homotopy formula fails to exist. The PI will use such global homotopy formulas to investigate the stability of holomorphic embeddings of domains with strongly pseudoconvex or concave boundary in a complex manifold, when the complex structure on the domains is deformed. The PI will use this approach to investigate stability of global Cauchy-Riemann structures on Cauchy-Riemann manifolds of higher codimension. The project seeks a holomorphic classification of neighborhoods of embeddings of a compact complex manifold in complex manifolds via the Levi-form and curvature of the normal bundle. In addition, the PI will study the classification of Cauchy-Riemann singularities for real manifolds using methods from several complex variables and small-divisor conditions in dynamical systems.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2350351","Unique continuation and the regularity of elliptic PDEs and generalized minimal submanifolds","DMS","GEOMETRIC ANALYSIS, ANALYSIS PROGRAM","06/01/2024","03/27/2024","Zihui Zhao","MD","Johns Hopkins University","Standard Grant","Jeremy Tyson","05/31/2027","$253,734.00","","zhaozh@jhu.edu","3400 N CHARLES ST","BALTIMORE","MD","212182608","4439971898","MPS","126500, 128100","5920, 5936, 5946, 5950","$0.00","This award supports research on the regularity of solutions to elliptic partial differential equations and regularity of generalized minimal submanifolds. Elliptic differential equations govern the equilibrium configurations of various physical phenomena, for instance, those arising from minimization problems for natural energy functionals. Examples include the shape of free-hanging bridges, the shape of soap bubbles, and the sound of drums. Elliptic differential equations are also used to quantify the degree to which physical objects are bent or distorted, with far-reaching implications and applications in geometry and topology. The proposed research focuses on the regularity of solutions to such equations. Questions to be addressed include the following: Do non-smooth points (singularities) exist? How large can the set of singularities be? What is the behavior of the solution near a singularity? Is it possible to perturb the underlying environment in order to eliminate the singularity? The project will also provide opportunities for the professional development of graduate students, both via individual mentoring and via the organization of a directed learning seminar on geometric analysis and geometric measure theory.

The mathematical objectives of the project are twofold. First, the principal investigator will study unique continuation for solutions to elliptic partial differential equations, with a focus on quantitative estimates on the size and structure of the singular set of these solutions. A second topic for consideration is the regularity theory for generalized minimal submanifolds (a generalized notion of smooth submanifolds which arise as critical points for the area functional under local deformations). In particular, the principal investigator will study branch singular points in the interior as well as at the boundary of a generalized minimal submanifold, under an area-minimizing or stability assumption. Research on the latter topic, which can be viewed as a non-linear analogue of quantitative unique continuation for elliptic equations, requires the integration of ideas from geometric measure theory, partial differential equations and geometric analysis.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2350263","Diffusion in Kinetic Equations","DMS","ANALYSIS PROGRAM","07/01/2024","03/27/2024","Luis Silvestre","IL","University of Chicago","Standard Grant","Marian Bocea","06/30/2027","$363,887.00","","luis@math.uchicago.edu","5801 S ELLIS AVE","CHICAGO","IL","606375418","7737028669","MPS","128100","","$0.00","Kinetic equations model the evolution of densities of a large system of interactive particles. They may be used, for example, to study the evolution of a gas or a plasma. The Principal Investigator (PI) is interested in the study of the Boltzmann and Landau equations, for systems of particles that repel each other by power-law potentials. These equations exhibit a regularization effect. An outstanding open problem is to understand if a singularity could emerge from the natural flow of the equation, or if the regularization effects actually dominate the evolution and keep the solutions smooth. The PI mentors graduate students and postdocs in research on the topics of this project.

This project aims to develop tools in the analysis of nonlocal equations, parabolic equations, and hypoelliptic theory, targeted at their applications in kinetic equations. The Boltzmann collision operator acts as a nonlinear diffusive operator of fractional order. It can be studied in the framework of parabolic integro-differential equations. The Landau equation is a model from statistical mechanics used to describe the dynamics of plasma. It can be obtained as a limiting case of the Boltzmann equation when grazing collisions prevail. It is a second-order, nonlinear parabolic equation. The project connects different areas of mathematics and mathematical physics, relating recent progress in nonlinear integro-differential equations with the classical Boltzmann equation from statistical mechanics. Kinetic equations involve a nonlinear diffusive operator with respect to velocity, combined with a transport equation with respect to space. The regularization effect in all variables requires ideas from hypoelliptic theory. For the Boltzmann equation in the case of very soft potentials, as well as for the Landau equation with Coulomb potentials, the diffusive part of the equations is not strong enough to prevent the solution from blowing up in theory. In that case, new ideas are needed to properly understand the regularization effects of the equation.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2400008","A description of surface dynamics","DMS","ANALYSIS PROGRAM","07/01/2024","04/01/2024","Enrique Pujals","NY","CUNY Graduate School University Center","Standard Grant","Jeremy Tyson","06/30/2026","$249,103.00","","epujals@gc.cuny.edu","365 5TH AVE STE 8113","NEW YORK","NY","100164309","2128177526","MPS","128100","5913, 5918","$0.00","This project seeks to understand the mechanisms that underlie the transition of a dynamical system from an ordered state to a random (chaotic) state. In other words, the aim is to understand the processes through which a system's behavior evolves from periodicity toward chaos, as one or more governing parameters are varied. A related goal is to identify the primary bifurcation responsible for qualitative changes exhibited by a dynamical system. While such comprehension has previously been attained for low-dimensional dynamical systems, this project introduces a novel approach to transcend the low-dimensional limitation. The project will offer new conceptual ideas and approaches to provide fresh perspectives on advances in mathematics and science. Additionally, the project will facilitate the training of graduate students directly engaged in the research, and will afford educational opportunities to undergraduate students through the organization of a summer school presenting topics in mathematics, including topics related to dynamical systems.

The theory of one-dimensional dynamical systems successfully explains the depth and complexity of chaotic phenomena in concert with a description of the dynamics of typical orbits for typical maps. Its remarkable universality properties supplement this understanding with powerful geometric tools. In the two-dimensional setting, the range of possible dynamical scenarios that can emerge is at present only partially understood, and a general framework for those new phenomena that do not occur for one-dimensional dynamics remains to be developed. In prior work supported by the NSF, the principal investigator introduced a large open class of two-dimensional dynamical systems, including the classical Hénon family without the restriction of large area contraction, that is amenable to obtaining results as in the one-dimensional case. Moreover, major progress was made toward understanding the transition from zero entropy to positive entropy using renormalization schemes. The present project has several components. First, existing renormalization schemes will be adapted to the positive entropy realm. Next, initial steps towards a characterization of dissipative diffeomorphisms in more general contexts will be addressed. Finally, the principal investigator will seek to develop the theory of differentiable renormalization without an a priori assumption of proximity to the one-dimensional setting. These results will open the door to a global description of dissipative diffeomorphisms and their behavior under perturbation, bringing both new tools and new perspectives to smooth dynamical systems theory.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." diff --git a/Computational-Mathematics/Awards-Computational-Mathematics-2024.csv b/Computational-Mathematics/Awards-Computational-Mathematics-2024.csv index f21404c..311ddf5 100644 --- a/Computational-Mathematics/Awards-Computational-Mathematics-2024.csv +++ b/Computational-Mathematics/Awards-Computational-Mathematics-2024.csv @@ -1,15 +1,21 @@ "AwardNumber","Title","NSFOrganization","Program(s)","StartDate","LastAmendmentDate","PrincipalInvestigator","State","Organization","AwardInstrument","ProgramManager","EndDate","AwardedAmountToDate","Co-PIName(s)","PIEmailAddress","OrganizationStreet","OrganizationCity","OrganizationState","OrganizationZip","OrganizationPhone","NSFDirectorate","ProgramElementCode(s)","ProgramReferenceCode(s)","ARRAAmount","Abstract" +"2410944","Reliable data-driven optimization with complex physics-based systems","DMS","OFFICE OF MULTIDISCIPLINARY AC, COMPUTATIONAL MATHEMATICS","08/15/2024","07/19/2024","Johannes Milz","GA","Georgia Tech Research Corporation","Standard Grant","Troy D. Butler","07/31/2027","$383,840.00","","johannes.milz@isye.gatech.edu","926 DALNEY ST NW","ATLANTA","GA","303186395","4048944819","MPS","125300, 127100","8248, 8396, 9263","$0.00","Simulation models have greatly enhanced our understanding of various phenomena, such as climate change impacts, weather patterns, the spread of pathogens, and nuclear fusion. These models are essential for advancing clean energy and reaching goals like reducing emissions to net zero. Modern computing environments allow scientists to leverage these models for decision-making. Despite today's advanced computational capabilities, optimizing computational processes with physics-based simulation models involves trade-offs between model fidelity, data utilization, and computational resources. 
This project aims to analyze the computational resources needed for reliable, data-driven decision-making with physics-based models, focusing on enhancing the design of renewable tidal energy farms. Open-source computer code and simulation output will be created and archived. Through outreach activities, students will learn about the importance of computer-aided decision-making.

The project will establish informative estimates of the computational resources required for optimizing physics-based systems, which are challenging infinite-dimensional optimization problems. Specifically, the complexity of deterministic and stochastic optimization problems governed by complex physics-based systems will be analyzed, focusing on those given by partial differential equations. A key aspect is analyzing the accuracy and reliability of low-fidelity and sample-based approximations of deterministic, risk-averse, and chance-constrained optimization problems. The focus is on objective and constraint functions with nonsmooth and nonconvex dependence on decision variables and uncertain parameters. The project employs tools from high-dimensional statistics and nonsmooth analysis to quantify the generalization properties of sample-based and data-driven solutions. Theoretical findings will be empirically validated through simulations, with computer code and simulation outputs published open-source and archived. The results will be disseminated through research publications in scientific journals and presentations at workshops and conferences. Additionally, one Ph.D. student will be integrated into the research of this project.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2409919","MPS/DMS-EPSRC: Advanced Computational Methods for Imperfect/Uncertain Geometries","DMS","COMPUTATIONAL MATHEMATICS","08/01/2024","07/19/2024","Guglielmo Scovazzi","NC","Duke University","Standard Grant","Yuliya Gorb","07/31/2027","$362,568.00","","guglielmo.scovazzi@duke.edu","2200 W MAIN ST","DURHAM","NC","277054640","9196843030","MPS","127100","9263","$0.00","This project aims to develop algorithms for the analysis of mechanical systems with imperfect or uncertain geometries, essential for Digital Twins (DTs). DTs of geometrically complex systems may require months to train because the generation of computational grids is time-consuming and labor-intensive. These difficulties will be bypassed by combining the Shifted Boundary Method (SBM), an immersed geometry computational method, with probabilistic Subdivision Surfaces (SSs) for geometric representation. Probabilistic geometry representations are needed because the exact geometry of the system in operation is usually only partly known. This project aims to transform the current design and analysis cycle by making Computer-Aided Design (CAD) and mesh generation more flexible, automated, and better integrated with the analysis phase. This research will enable an ecosystem of computational methods that can robustly and efficiently interact with the meta-algorithms for DTs (e.g., reduced-order modeling, machine learning, uncertainty quantification, and optimization) and will benefit the 'democratization' of computing to professionals who are non-experts in this field.

Computations in complex geometries pose at least two major challenges: (1) the representation of geometries with imperfect/uncertain CAD models or imaging-based data; and (2) the quantification of the effect of geometric uncertainties on the performance of systems. Unfitted Finite Element Methods (UFEMs; e.g., cutFEM, the Finite Cell Method, Immerso-Geometric Analysis, etc.) simplify mesh generation by immersing geometries in a pre-existing simple grid. However, UFEMs suffer from numerical instabilities or poor matrix conditioning whenever small cut cells are present. UFEMs also require more involved data structures, and cut-cell integration may become extremely complex, or even infeasible, for geometry representations with gaps and overlaps. The as-is geometry of parts may be different from design models due to manufacturing uncertainties and wear caused by operation. The Shifted Boundary Method (SBM) shifts both the location of boundary conditions from the true boundary to an approximate boundary (with no cut cells) and their value by means of Taylor expansions. This yields a simple, robust, accurate, and efficient method for very complex geometries, which may include gaps/overlaps. The SBM will be applied to high-order hierarchical B-spline discretizations, which have superior monotonicity and regularity properties. The SBM can be adapted to uncertain geometries using probabilistic SSs, which combine standard (deterministic) subdivision surfaces with stochastic PDE (SPDE) representations of random fields. Uncertainties will then be propagated to the output quantities of interest. Observation data from the product in operation will be probabilistically synthesized with the uncertain simulation data obtained by forward propagation. This approach will be extended to a variational Bayesian framework for the inference of geometry and other model parameters using the observation data.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2424305","Comparative Study of Finite Element and Neural Network Discretizations for Partial Differential Equations","DMS","COMPUTATIONAL MATHEMATICS","03/15/2024","03/15/2024","Jonathan Siegel","TX","Texas A&M University","Continuing Grant","Yuliya Gorb","07/31/2025","$140,889.00","","jwsiegel@tamu.edu","400 HARVEY MITCHELL PKY S STE 30","COLLEGE STATION","TX","778454375","9798626777","MPS","127100","079Z, 9263","$0.00","This research connects two different fields, machine learning from data science and numerical partial differential equations from scientific and engineering computing, through the comparative study of the finite element method and finite neuron method. Finite element methods have undergone decades of study by mathematicians, scientists and engineers in many fields and there is a rich mathematical theory concerning them. They are widely used in scientific computing and modelling to generate accurate simulations of a wide variety of physical processes, most notably the deformation of materials and fluid mechanics. By contrast, deep neural networks are relatively new and have only been widely used in the last decade. In this short time, they have demonstrated remarkable empirical performance on a wide variety of machine learning tasks, most notably in computer vision and natural language processing. Despite this great empirical success, there is still a very limited mathematical understanding of why and how deep neural networks work so well. We hope to leverage the success of deep learning to improve numerical methods for partial differential equations and to leverage the theoretical understanding of the finite element method to better understand deep learning. 
The interdisciplinary nature of the research will also provide a good training experience for junior researchers. This project will support one graduate student in each year of the three-year project.

Piecewise polynomials represent one of the most important functional classes in approximation theory. In classical approximation theory and numerical methods for partial differential equations, these functional classes are often represented by linear functional spaces associated with a priori given grids, for example, by splines and finite element spaces. In deep learning, function classes are typically represented by a composition of a sequence of linear functions and coordinate-wise non-linearities. One important non-linearity is the rectified linear unit (ReLU) function and its powers (ReLU^k). The resulting functional class, ReLU^k-DNN, does not form a linear vector space but is rather parameterized non-linearly by a high-dimensional set of parameters. This function class can be used to solve partial differential equations and we call the resulting numerical algorithms the finite neuron method (FNM). Proposed research topics include: error estimates for the finite neuron method, universal construction of conforming finite elements for arbitrarily high-order partial differential equations, an investigation into how and why the finite neuron method gives a much better asymptotic error estimate than the corresponding finite element method, and the development and analysis of efficient algorithms for using the finite neuron method.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2410676","Collaborative Research: Data-driven Realization of State-space Dynamical Systems via Low-complexity Algorithms","DMS","COMPUTATIONAL MATHEMATICS","08/01/2024","06/07/2024","Sirani Mututhanthrige-Perera","FL","Embry-Riddle Aeronautical University","Standard Grant","Jodi Mead","07/31/2027","$175,000.00","","pereras2@erau.edu","1 AEROSPACE BLVD","DAYTONA BEACH","FL","321143910","3862267695","MPS","127100","079Z, 9263","$0.00","Data science is evolving rapidly and offers a new perspective on realizing state-space dynamical systems. Predicting time-advanced states of dynamical systems is a challenging problem in STEM disciplines due to their nonlinear and complex nature. This project will utilize data-driven methods and analyze state-space dynamical systems to predict and understand future states, surpassing classical techniques. In addition, the PI team will (i) guide students to obtain cross-discipline PhD/Master's degrees, (ii) guide students to work in a peer-learning environment, and (iii) educate a diverse group of undergraduates.

In more detail, this project will utilize state-of-the-art machine learning (ML) algorithms to efficiently analyze and predict information within data matrices and tensor computations with low-complexity algorithms. Single-dimensional ML models are not efficient at extracting hidden semantic information in the time and space domains. As a result, it becomes challenging to simultaneously capture multi-dimensional spatiotemporal data in state-space dynamical systems. Using efficient ML algorithms to recover multi-dimensional spatiotemporal data simultaneously offers a breakthrough in understanding the chaotic behavior of dynamical systems. This project will (i) utilize ML to predict future states of dynamical systems based on high-dimensional data matrices captured at different time stamps, (ii) realize state-space controllable and observable systems via low-complexity algorithms to simultaneously analyze multiple states of the systems, (iii) analyze noise in state-space systems for uncertainty quantification, predict patterns in real-time states, generate counter-resonance states to suppress them, and optimize performance and stability, (iv) study system resilience via multiple state predictors and perturbations to assess performance and adaptation to disturbances and anomalies, and finally (v) optimize spacecraft trajectories, avoid impact, and use low-complexity algorithms to understand spacecraft launch dynamics on the space coast and support ERAU's mission in aeronautical research.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2337678","CAREER: Gaussian Processes for Scientific Machine Learning: Theoretical Analysis and Computational Algorithms","DMS","APPLIED MATHEMATICS, COMPUTATIONAL MATHEMATICS","06/01/2024","03/11/2024","Bamdad Hosseini","WA","University of Washington","Continuing Grant","Yuliya Gorb","05/31/2029","$120,000.00","","bamdadh@uw.edu","4333 BROOKLYN AVE NE","SEATTLE","WA","981951016","2065434043","MPS","126600, 127100","079Z, 1045, 9263","$0.00","Machine learning and artificial intelligence are increasingly changing our lives at every scale, from our personal day-to-day activities to large scale shifts in our society, economy, and geopolitics. These technologies have also profoundly transformed sciences with new ideas and algorithms being developed at an immense speed. However, our mathematical understanding of these algorithms is still far beyond their practical development and widespread adoption. Put simply, in many cases we do not know how or why machine learning algorithms work so well, which in turn limits our ability to safely deploy them in safety critical engineering and scientific scenarios. The goal of this project is to develop mathematical formalism and understanding of problems at the intersection of machine learning and science (i.e., scientific machine learning) using rigorous mathematics, leading to novel algorithms and computational methodologies that are interpretable, supported by rigorous theory, and aware of uncertainties.

The project is focused on the development of novel Gaussian Process (GP) based computational frameworks for scientific machine learning that are provably well-posed, robust, and stable, thereby meeting the high standards of scientific computing. The developed methodologies will be capable of rigorous uncertainty quantification and inherit the desirable properties of machine learning algorithms such as flexibility, generalizability, and applicability in high-dimensions. The efforts of the project are directed in three primary directions: (1) GPs for solving nonlinear, high-dimensional and parametric PDEs; (2) GPs for operator learning, emulation, and physics discovery; and (3) GPs for high-dimensional sampling, inference, and generative modeling. Each research direction focuses on the development of algorithms, foundational theory, and concrete applications in engineering and science. The project also contains an extensive education plan focused on machine learning and data science education from high-school through graduate levels with extensive opportunities for training of graduate and undergraduate students as well as local and international outreach.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2417818","Conference: NSF Computational Mathematics PI Meeting 2024","DMS","COMPUTATIONAL MATHEMATICS","06/01/2024","02/09/2024","Bamdad Hosseini","WA","University of Washington","Standard Grant","Yuliya Gorb","11/30/2024","$99,999.00","Beatrice Riviere, Akil Narayan","bamdadh@uw.edu","4333 BROOKLYN AVE NE","SEATTLE","WA","981951016","2065434043","MPS","127100","7556, 9263","$0.00","The NSF Computational Mathematics (CompMath) PIs' meeting is the first such event in over a decade and will be held on July 15 and 16, 2024, at the University of Washington in Seattle. The meeting is open to all with interest in CompMath, with a particular emphasis on attracting current and recent PIs of the CompMath program, as well as early career researchers such as junior professors, postdocs, and Ph.D. students working in CompMath. The meeting aims to achieve three primary goals: (1) Community building in CompMath by bringing together the CompMath community to strengthen and expand the network of NSF-supported PIs. Parallel research sessions serve as a platform to highlight the achievements of the PIs and the CompMath program; (2) Engagement of early career scientists through a poster session designed to highlight their works, activities aimed at the professional development of early career researchers, and travel support for these attendees; (3) A forward-looking report will be compiled by the organizers by leveraging community input that presents an overview of discussions during the meeting and summarizes any strategic community recommendations and outcomes from the meeting focusing on the major achievements of CompMath, current challenges, and exciting new research directions.

The CompMath disciplinary program at NSF supports a variety of research endeavors, which include more classical foci of numerical analysis, PDE solving algorithms, and mathematical optimization, along with the more recent and/or nascent subfields of randomized linear algebra, computational imaging, and mathematical aspects of data science and machine learning. Investigators from these subfields indeed attend their own discipline-specific conferences and professional events and also take opportunities to attend meetings in related fields. However, there is a dearth of meeting opportunities among the large collection of researchers who focus on the broad collection of foundational research questions in CompMath. Because the community of PIs supported by project awards from the NSF CompMath program does not have targeted events/conferences where the community broadly meets, the main motivation for this proposal is to organize such a professional meeting. The PI meeting will feature focused presentation sessions where PIs will highlight their NSF-supported work in the previously described technical research areas.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2409841","Hydrodynamic models of electric charge transport: structure-preserving numerical methods","DMS","COMPUTATIONAL MATHEMATICS","09/01/2024","06/04/2024","Ignacio Tomas","TX","Texas Tech University","Standard Grant","Ludmil T. Zikatanov","08/31/2027","$219,782.00","","igtomas@ttu.edu","2500 BROADWAY","LUBBOCK","TX","79409","8067423884","MPS","127100","9263","$0.00","Electric charge transport physics is at the core of several technologies driving our economic and national security interests. For instance, the design of novel semiconductor devices requires a proper understanding of electron transport in the high-frequency regime. Similarly, the operation of directed energy systems hinges on the development of novel microwave sources and high-voltage high-current pulsed-power infrastructure. The project aims to provide innovative and robust numerical methods that will greatly enhance our predictive capabilities in the context of high-frequency electric charge transport simulation. This project will contribute to developing a new educational curriculum targeting the interdisciplinary training of graduate students at the intersection of mathematical modeling, numerical analysis, scientific computation, and physics.

The project will develop numerical methods to solve electrostatic and electrodynamic fluid models of electric charge transport. The Euler-Maxwell and Euler-Poisson systems are some of the simplest electrodynamic and electrostatic (respectively) fluid models of electric charge transport. These models describe electrically non-neutral plasmas, electron inertia effects, high-frequency electrostatic plasma oscillation, and collective cyclotron motions such as the Diocotron instability. This project comprises numerical analysis, scientific computing, and graduate-level education. The research program will advance space and time discretizations for hydrodynamic models of electric charge transport that are mathematically guaranteed to be robust and preserve key mathematical properties of interest. These properties include pointwise stability (e.g., positivity of the density and a minimum principle for the specific entropy), discrete energy stability, and well-posedness of the associated linear algebra systems. This project comprises three research tasks involving the development of: (I) Semi-implicit schemes for Euler-Maxwell and Euler-Poisson systems, (II) Maxwell's equations formulations and solvers, and (III) Graph-based solvers for nonlinear hyperbolic systems (mathematical theory and high-performance implementation). The resulting methods will be implemented using the library deal.II. It will extend the investigator and collaborators' high-performance software developments. This project will also lead to a new graduate-level class to train a new generation of students on the nature of these models and their technological applications.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2411264","NSF-BSF: Scalable Graph Neural Network Algorithms and Applications to PDEs","DMS","COMPUTATIONAL MATHEMATICS","08/01/2024","06/21/2024","Lars Ruthotto","GA","Emory University","Continuing Grant","Troy D. Butler","07/31/2027","$121,190.00","","lruthotto@emory.edu","201 DOWMAN DR NE","ATLANTA","GA","303221061","4047272503","MPS","127100","079Z, 9263","$0.00","This project will advance the fields of geometric machine learning and numerical partial differential equations and strengthen the connections between them. Geometric machine learning provides an effective approach for analyzing unstructured data and has become indispensable for computer graphics and vision, bioinformatics, social network analysis, protein folding, and many other areas. Partial differential equations (PDEs) are ubiquitous in mathematical modeling, and their numerical solution enables the simulation of real-world phenomena in engineering design, medical analysis, and material sciences, to name a few. A unified study of both fields exposes many potential synergies, which the project will seize to improve the efficiency of algorithms in both areas. The first goal is to improve the scalability of geometric machine learning approaches based on graph neural networks (GNNs) to accommodate growing datasets with millions of nodes using insights and ideas from numerical PDEs. The second goal is to accelerate numerical PDE simulations by enhancing numerical solvers on unstructured meshes with GNN components. Through these improvements in computational efficiency, the project will enable more accurate data analysis and PDE simulations for high-impact applications across the sciences, engineering, and industry. 
Graduate students and postdoctoral researchers will be integrated into this research as part of their professional training.

This project will develop computational algorithms that improve the efficiency and scalability of GNNs and create new approaches for GNNs for solving nonlinear PDEs on unstructured meshes. To improve the scalability of GNNs to graphs with millions of nodes, the research team will develop spatial smoothing operators, coarsening operators, and multilevel training schemes. To accelerate PDE simulations on unstructured meshes, the team will train GNNs to produce effective prolongation, restriction, and coarse mesh operators in multigrid methods and preconditioners in Krylov methods. The team will demonstrate that the resulting hybrid schemes accelerate computations and are provably convergent. To show the broad applicability of the schemes, the team will consider challenging PDE problems in computational fluid dynamics and test the scalable GNNs on established geometric learning benchmark tasks such as shape and node classification. The mathematical backbone of these developments is algebraic multigrid techniques, which motivate GNN design and training and are used in the PDE solvers.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2414705","CAREER: Mathematical Modeling from Data to Insights and Beyond","DMS","COMPUTATIONAL MATHEMATICS","01/15/2024","01/22/2024","Yifei Lou","NC","University of North Carolina at Chapel Hill","Continuing Grant","Yuliya Gorb","05/31/2025","$141,540.00","","yflou@unc.edu","104 AIRPORT DR STE 2200","CHAPEL HILL","NC","275995023","9199663411","MPS","127100","1045, 9263","$0.00","This project will develop both analytical and computational tools for data-driven applications. In particular, analytical tools will hold great promise to provide theoretical guidance on how to acquire data more efficiently than current practices. To retrieve useful information from data, numerical methods will be investigated with emphasis on guaranteed convergence and algorithmic acceleration. Thanks to close interactions with collaborators in data science and information technology, the investigator will ensure the practicability of the proposed research, leading to a real impact. The investigator will also devote herself to various outreach activities in the field of data science. For example, she will initiate a local network of students, faculty members, and domain experts to develop close ties between mathematics and industry as well as to broaden career opportunities for mathematics students. This initiative will have a positive impact on the entire mathematical sciences community. In addition, she will advocate for the integration of mathematical modeling into K-16 education by collaborating with The University of Texas at Dallas Diversity Scholarship Program to reach out to mathematics/sciences teachers.

This project addresses important issues in extracting insights from data and training the next generation in the ""big data"" era. The research focuses on signal/image recovery from a limited number of measurements, in which ""limited"" refers to the fact that the amount of data that can be taken or transmitted is limited by technical or economic constraints. When data is insufficient, one often requires additional information from the application domain to build a mathematical model, followed by numerical methods. Questions to be explored in this project include: (1) how difficult is the process of extracting insights from data? (2) how should reasonable assumptions be taken into account to build a mathematical model? (3) how should an efficient algorithm be designed to find a model solution? More importantly, a feedback loop from insights to data will be introduced, i.e., (4) how to improve upon data acquisition so that information becomes easier to retrieve? As these questions mimic the standard procedure in mathematical modeling, the proposed research provides a plethora of illustrative examples to enrich the education of mathematical modeling. In fact, one of this CAREER award's educational objectives is to advocate for the integration of mathematical modeling into K-16 education so that students will develop problem-solving skills at an early age. In addition, the proposed research requires close interactions with domain experts in business, industry, and government (BIG), where real-world problems come from. This requirement helps to fulfill another educational objective, that is, to promote BIG employment by providing adequate training for students in successful approaches to BIG problems together with BIG workforce skills.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2410671","Robust Algorithms Based on Domain Decomposition and Microlocal-Analysis for Wave propagation","DMS","COMPUTATIONAL MATHEMATICS","07/01/2024","06/14/2024","Yassine Boubendir","NJ","New Jersey Institute of Technology","Standard Grant","Ludmil T. Zikatanov","06/30/2027","$200,000.00","","boubendi@njit.edu","323 DR MARTIN LUTHER KING JR BLV","NEWARK","NJ","071021824","9735965275","MPS","127100","9263","$0.00","More than ever, technological advances in industries such as aerospace, microchips, telecommunications, and renewable energy rely on advanced numerical solvers for wave propagation. The aim of this project is the development of efficient and accurate algorithms for acoustic and electromagnetic wave propagation in complex domains containing, for example, inlets, cavities, or a multilayer structure. These geometrical features continue to pose challenges for numerical computation. The numerical methods developed in this project will have application to radar, communications, remote sensing, stealth technology, satellites, and many others. Fundamental theoretical and computational issues as well as realistic complex geometries such as those occurring in aircraft and submarines will be addressed in this project. The obtained algorithms will facilitate the use of powerful computers when simulating industrial high-frequency wave problems. The numerical solvers obtained through this research will be made readily available to scientists in aerospace and other industries, which will contribute to enhancing the U.S. leadership in this field. Several aspects in this project will benefit the education of both undergraduate and graduate students. Graduate students will gain expertise in both scientific computing and mathematical analysis. 
This will reinforce their preparation to face future challenges in science and technology.

The aim of this project is the development of efficient and accurate algorithms for acoustic and electromagnetic wave propagation in complex domains. One of the main goals of this project resides in the design of robust algorithms based on high-frequency integral equations, microlocal and numerical analysis, asymptotic methods, and finite element techniques. The investigator plans to derive rigorous asymptotic expansions for incidences more general than plane waves in order to support the high-frequency integral equation multiple scattering iterative procedure. The investigator will introduce Ray-stabilized Galerkin boundary element methods, based on a new theoretical development on ray tracing, to significantly reduce the computational cost at each iteration and limit the exponentially increasing cost of multiple scattering iterations to a fixed number. Using the theoretical findings in conjunction with the stationary phase lemma, frequency-independent quadratures for approximating the multiple scattering amplitude will also be designed. These new methods will be beneficial for industrial applications involving multi-component radar and antenna design. In addition, this project includes development of new non-overlapping domain decomposition methods with considerably enhanced convergence characteristics. The main idea resides in a novel treatment of the continuity conditions in the neighborhood of the so called cross-points. Analysis of the convergence and stability will be included in parallel to numerical simulations in the two and three dimensional cases using high performance computing.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2411396","Interacting particle system for nonconvex optimization","DMS","COMPUTATIONAL MATHEMATICS","07/01/2024","06/13/2024","Yuhua Zhu","CA","University of California-San Diego","Continuing Grant","Troy D. Butler","06/30/2027","$80,048.00","","yuz244@ucsd.edu","9500 GILMAN DR","LA JOLLA","CA","920930021","8585344896","MPS","127100","9263","$0.00","Collective Intelligence offers profound insights into how groups, whether they be cells, animals, or even machines, can work together to accomplish tasks more effectively than individuals alone. Originating in biology and now influencing fields as varied as management science, artificial intelligence, and robotics, this concept underscores the potential of collaborative efforts in solving complex challenges. On the other hand, the quest for finding global minimizers of nonconvex optimization problems arises in physics and chemistry, as well as in machine learning due to the widespread adoption of deep learning. Building the bridge between these two seemingly disparate realms, this project will utilize Collective Intelligence to leverage the interacting particle systems as a means to address the formidable challenge of finding global minimizers in nonconvex optimization problems. Graduate students will also be integrated within the research team as part of their professional training.

This project will focus on a gradient-free optimization method inspired by a consensus-based interacting particle system to solve different types of nonconvex optimization problems. Effective communication and cooperation among particles within the system play pivotal roles in efficiently exploring the landscape and converging to the global minimizer. Aim 1 targets nonconvex optimization with equality constraints; and Aim 2 addresses nonconvex optimization on convex sets; while Aim 3 applies to Clustered Federated Learning. Additionally, convergence guarantees will be provided for nonconvex and nonsmooth objective functions. Theoretical analyses, alongside practical implementations, will provide valuable insights and tools for addressing different types of nonconvex optimization challenges.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." -"2416250","Theory and algorithms for a new class of computationally amenable nonconvex functions","DMS","COMPUTATIONAL MATHEMATICS","03/01/2024","03/12/2024","Ying Cui","CA","University of California-Berkeley","Standard Grant","Jodi Mead","06/30/2026","$240,330.00","","yingcui@berkeley.edu","1608 4TH ST STE 201","BERKELEY","CA","947101749","5106433891","MPS","127100","079Z, 9263","$0.00","As the significance of data science continues to expand, nonconvex optimization models become increasingly prevalent in various scientific and engineering applications. Despite the field's rapid development, there are still a host of theoretical and applied problems that so far are left open and void of rigorous analysis and efficient methods for solution. Driven by practicality and reinforced by rigor, this project aims to conduct a comprehensive investigation of composite nonconvex optimization problems and games. The technologies developed will offer valuable tools for fundamental science and engineering research, positively impacting the environment and fostering societal integration with the big-data world. Additionally, the project will educate undergraduate and graduate students, cultivating the next generation of experts in the field.

This project seeks to advance state-of-the-art techniques for solving nonconvex optimization problems and games through both theoretical and computational approaches. At its core is the innovative concept of ""approachable difference-of-convex functions,"" which uncovers a hidden, asymptotically decomposable structure within the multi-composition of nonconvex and non-smooth functions. The project will tackle three main tasks: (i) establishing fundamental properties for a novel class of computationally amenable nonconvex and non-smooth composite functions; (ii) designing and analyzing computational schemes for single-agent optimization problems, with objective and constrained functions belonging to the aforementioned class; and (iii) extending these approaches to address nonconvex games.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2410678","Collaborative Research: Data-driven Realization of State-space Dynamical Systems via Low-complexity Algorithms","DMS","COMPUTATIONAL MATHEMATICS","08/01/2024","06/07/2024","Aaron Welters","FL","Florida Institute of Technology","Standard Grant","Jodi Mead","07/31/2027","$125,000.00","Xianqi Li","awelters@fit.edu","150 W UNIVERSITY BLVD","MELBOURNE","FL","329018995","3216748000","MPS","127100","079Z, 9263","$0.00","Data science is evolving rapidly and places a new perspective on realizing state-space dynamical systems. Predicting time-advanced states of dynamical systems is a challenging problem in STEM disciplines due to their nonlinear and complex nature. This project will utilize data-driven methods and analyze state-space dynamical systems to predict and understand future states, surpassing classical techniques. In addition, the PI team will (i) guide students to obtain cross-discipline PhD/Master's degrees, (ii) guide students to work in a peer-learning environment, and (iii) educate a diverse group of undergraduates.

In more detail, this project will utilize state-of-the-art machine learning (ML) algorithms to efficiently analyze and predict information within data matrices and tensor computations with low-complexity algorithms. Single-dimensional ML models are not efficient at extracting hidden semantic information in the time and space domains. As a result, it becomes challenging to simultaneously capture multi-dimensional spatiotemporal data in state-space dynamical systems. Using efficient ML algorithms to recover multi-dimensional spatiotemporal data simultaneously offers a breakthrough in understanding the chaotic behavior of dynamical systems. This project will (i) utilize ML to predict future states of dynamical systems based on high-dimensional data matrices captured at different time stamps, (ii) realize state-space controllable and observable systems via low-complexity algorithms to simultaneously analyze multiple states of the systems, (iii) analyze noise in state-space systems for uncertainty quantification, predict patterns in real-time states, generate counter-resonance states to suppress them, and optimize performance and stability, (iv) study system resilience via multiple state predictors and perturbations to assess performance and adaptation to disturbances and anomalies, and finally (v) optimize spacecraft trajectories, avoid impact, and use low-complexity algorithms to understand spacecraft launch dynamics on the space coast and support ERAU's mission in aeronautical research.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2416250","Theory and algorithms for a new class of computationally amenable nonconvex functions","DMS","COMPUTATIONAL MATHEMATICS","03/01/2024","03/12/2024","Ying Cui","CA","University of California-Berkeley","Standard Grant","Jodi Mead","06/30/2026","$240,330.00","","yingcui@berkeley.edu","1608 4TH ST STE 201","BERKELEY","CA","947101749","5106433891","MPS","127100","079Z, 9263","$0.00","As the significance of data science continues to expand, nonconvex optimization models become increasingly prevalent in various scientific and engineering applications. Despite the field's rapid development, there are still a host of theoretical and applied problems that so far are left open and void of rigorous analysis and efficient methods for solution. Driven by practicality and reinforced by rigor, this project aims to conduct a comprehensive investigation of composite nonconvex optimization problems and games. The technologies developed will offer valuable tools for fundamental science and engineering research, positively impacting the environment and fostering societal integration with the big-data world. Additionally, the project will educate undergraduate and graduate students, cultivating the next generation of experts in the field.

This project seeks to advance state-of-the-art techniques for solving nonconvex optimization problems and games through both theoretical and computational approaches. At its core is the innovative concept of ""approachable difference-of-convex functions,"" which uncovers a hidden, asymptotically decomposable structure within the multi-composition of nonconvex and non-smooth functions. The project will tackle three main tasks: (i) establishing fundamental properties for a novel class of computationally amenable nonconvex and non-smooth composite functions; (ii) designing and analyzing computational schemes for single-agent optimization problems, with objective and constrained functions belonging to the aforementioned class; and (iii) extending these approaches to address nonconvex games.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2409903","Development of novel numerical methods for forward and inverse problems in mean field games","DMS","COMPUTATIONAL MATHEMATICS","07/01/2024","06/11/2024","Yat Tin Chow","CA","University of California-Riverside","Continuing Grant","Troy D. Butler","06/30/2027","$95,280.00","","yattinc@ucr.edu","200 UNIVERSTY OFC BUILDING","RIVERSIDE","CA","925210001","9518275535","MPS","127100","9263","$0.00","Mean field games is the study of strategic decision making in large populations where individual players interact through a certain quantity in the mean field. Mean field games have strong descriptive power in socioeconomics and biology, e.g. in the understanding of social cooperation, stock markets, trading and economics, biological systems, election dynamics, population games, robotic control, machine learning, dynamics of multiple populations, pandemic modeling and control as well as vaccination distribution. It is therefore essential to develop accurate numerical methods for large-scale mean field games and their model recovery. However, current computational approaches for the recovery problem are impractical in high dimensions. This project will comprehensively study new computational methods for both large-scale mean field games and their model recovery. The comprehensive plans will cover algorithmic development, theoretical analysis, numerical implementation and practical applications. The project will also involve research on speeding up the forward and inverse problem computations to speed up the computation for mean field game modeling and turn real life mean field game model recovery problems from computationally unaffordable to affordable. 
The research team will disseminate results through publications, professional presentations, the training of graduate students at the University of California, Riverside as well as through public outreach events that involve public talks and engagement with high school math fairs. The goals of these outreach events are to increase public literacy and public engagement in mathematics, improve STEM education and educator development, and broaden participation of women and underrepresented minorities.

The project will provide novel computational methods for both forward and inverse problems of mean field games. The team will (1) develop two new numerical methods for forward problems in mean field games, namely monotone inclusion with Benamou-Brenier's formulation and extragradient algorithm with moving anchoring; (2) develop three new numerical methods for inverse problems in mean field games with only boundary measurements, namely a three-operator splitting scheme, a semi-smooth Newton acceleration method, and a direct sampling method. Both theoretical analysis and practical implementations will be emphasized. In particular, numerical methods for inverse problems for mean field games, which is a main target of the project, will be designed to work with only boundary measurements. This represents a brand new field in inverse problems and optimization. The project will also seek the simultaneous reconstruction of coefficients in the severely ill-posed case when only noisy boundary measurements from one or two measurement events are available.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2411208","Collaborative Research: Numerical Methods and Differential Geometry","DMS","COMPUTATIONAL MATHEMATICS","06/15/2024","06/04/2024","Evan Gawlik","HI","University of Hawaii","Standard Grant","Yuliya Gorb","05/31/2027","$190,527.00","","egawlik@hawaii.edu","2425 CAMPUS RD SINCLAIR RM 1","HONOLULU","HI","968222247","8089567800","MPS","127100","9150, 9263","$0.00","Partial differential equations (PDEs) model a wide variety of phenomena, ranging from how an airplane wing deforms in response to turbulence, to how radio waves travel through and around objects, to how black holes generate gravitational waves when they merge. Numerical analysts develop algorithms for simulating these systems by solving PDEs on a computer; these simulations enable engineers and scientists to develop prototypes and to interpret data from sensors. For example, the NSF-funded Nobel-winning detection of gravitational waves would not have been possible without advances in numerical analysis. In recent decades, numerical analysts discovered that ideas from differential geometry, an area of pure mathematics, can be used to develop good algorithms for solving PDEs. In fact, these ideas help not only for geometric problems in fields of study like computer vision and general relativity, but also for fields like electromagnetism that have little to do with geometry. Although applying differential geometry to numerical analysis has been very successful, thus far this link has been explored only for a small number of differential geometry ideas. In this project, the investigators will continue exploring this link, taking more ideas from differential geometry and applying them to develop new numerical algorithms. 
These algorithms could then be used both in applied areas, by solving PDEs in science and engineering, and in pure areas, by solving PDEs in differential geometry itself. The project will also support the training of graduate student researchers.

This project focuses on problems at the cusp of numerical analysis and differential geometry. It deals specifically with the design of finite element methods for PDEs that involve vector fields and tensor fields on Riemannian manifolds. In the long term, these efforts have the potential to lead to robust numerical methods for solving geometric PDEs like the Einstein field equations, which are useful for studying gravitational wave signals, as well as PDEs like the elasticity equations, which model how objects deform under stress. This project has three main goals. The first is to develop a new family of finite elements for discretizing algebraic curvature tensors and other bi-forms---tensor products of differential forms---on simplicial triangulations. The second goal is to develop an intrinsic finite element discretization of the Bochner Laplacian, which is a basic differential operator in Riemannian geometry that differs from the familiar Hodge Laplacian from finite element exterior calculus. The third goal is to leverage what we learn to design numerical methods for a wide range of geometric problems, such as computing spectra of elliptic operators on manifolds, simulating intrinsic geometric flows, and solving prescribed curvature problems.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2409918","Structure preservation in nonlinear, degenerate, evolution","DMS","COMPUTATIONAL MATHEMATICS","08/01/2024","06/03/2024","Abner Salgado","TN","University of Tennessee Knoxville","Standard Grant","Ludmil T. Zikatanov","07/31/2027","$204,533.00","","asalgad1@utk.edu","201 ANDY HOLT TOWER","KNOXVILLE","TN","379960001","8659743466","MPS","127100","9263","$0.00","A thorough treatment is feasible for the classical linear problems in the numerical approximation of partial differential equations. The continuous problem is well-posed. The numerical schemes are well-posed, parameter-robust, and convergent. It is even possible to prove convergence rates. However, the situation is more precarious for modern, complex systems of equations. Oftentimes, the uniqueness of solutions is not known. Even when there is uniqueness, the theory is far from complete, and so besides (weak) convergence of numerical solutions, little can be said about their behavior. In these scenarios, one must settle for simpler yet still relevant goals. An important goal in this front is that of structure preservation. The study of structure preservation in numerical methods is not new. Geometric numerical integration, many methods for electromagnetism, the finite element exterior calculus, and some novel approaches to hyperbolic systems of conservation laws, have this goal in mind: geometric, algebraic, or differential constraints must be preserved. This project does not focus on the problems mentioned above. Instead, it studies structure preservation in some evolution problems that have, possibly degenerate, diffusive behavior. This class of problems remains a largely unexplored topic when it comes to numerical discretizations. 
Bridging this gap will enhance modeling and prediction capabilities since diffusive models can be found in every aspect of scientific inquiry.

This project is focused on a class of diffusive problems in which stability of the solution cannot be obtained by standard energy arguments, in other words, by testing the equation with the solution to assert that certain space-time norms are under control. Norms are always convex. Structure preservation may then be a generalization of the approach given above. Instead of norms being under control, a (family of) convex functional(s) evaluated at the solution behave predictably during the evolution. The project aims to develop numerical schemes that mimic this in the discrete setting. While this is a largely unexplored topic, at the same time, many of the problems under consideration can be used to describe a wide range of phenomena. In particular, the project will develop new numerical schemes for an emerging theory of non-equilibrium thermodynamics, active scalar equations, and a class of problems in hyperbolic geometry. These models have a very rich intrinsic structure and a wide range of applications, and the developments of this project will serve as a stepping stone to bring these tools to the numerical treatment of more general problems. The students involved in the project will be trained in exciting, mathematically and computationally challenging, and practically relevant areas of research.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." -"2410676","Collaborative Research: Data-driven Realization of State-space Dynamical Systems via Low-complexity Algorithms","DMS","COMPUTATIONAL MATHEMATICS","08/01/2024","06/07/2024","Sirani Mututhanthrige-Perera","FL","Embry-Riddle Aeronautical University","Standard Grant","Jodi Mead","07/31/2027","$175,000.00","","pereras2@erau.edu","1 AEROSPACE BLVD","DAYTONA BEACH","FL","321143910","3862267695","MPS","127100","079Z, 9263","$0.00","Data science is evolving rapidly and places a new perspective on realizing state-space dynamical systems. Predicting time-advanced states of dynamical systems is a challenging problem in STEM disciplines due to their nonlinear and complex nature. This project will utilize data-driven methods and analyze state-space dynamical systems to predict and understand future states, surpassing classical techniques. In addition, the PI team will (i) guide students to obtain cross-discipline PhD/Master's degrees, (ii) guide students to work in a peer-learning environment, and (iii) educate a diverse group of undergraduates.

In more detail, this project will utilize state-of-the-art machine learning (ML) algorithms to efficiently analyze and predict information within data matrices and tensor computations with low-complexity algorithms. Single-dimensional ML models are not efficient at extracting hidden semantic information in the time and space domains. As a result, it becomes challenging to simultaneously capture multi-dimensional spatiotemporal data in state-space dynamical systems. Using efficient ML algorithms to recover multi-dimensional spatiotemporal data simultaneously offers a breakthrough in understanding the chaotic behavior of dynamical systems. This project will (i) utilize ML to predict future states of dynamical systems based on high-dimensional data matrices captured at different time stamps, (ii) realize state-space controllable and observable systems via low-complexity algorithms to simultaneously analyze multiple states of the systems, (iii) analyze noise in state-space systems for uncertainty quantification, predict patterns in real-time states, generate counter-resonance states to suppress them, and optimize performance and stability, (iv) study system resilience via multiple state predictors and perturbations to assess performance and adaptation to disturbances and anomalies, and finally (v) optimize spacecraft trajectories, avoid impact, and use low-complexity algorithms to understand spacecraft launch dynamics on the space coast and support ERAU's mission in aeronautical research.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2410677","Collaborative Research: Data-driven Realization of State-space Dynamical Systems via Low-complexity Algorithms","DMS","COMPUTATIONAL MATHEMATICS","08/01/2024","06/07/2024","Kshitij Khare","FL","University of Florida","Standard Grant","Jodi Mead","07/31/2027","$89,853.00","","kdkhare@stat.ufl.edu","1523 UNION RD RM 207","GAINESVILLE","FL","326111941","3523923516","MPS","127100","079Z, 9263","$0.00","Data science is evolving rapidly and places a new perspective on realizing state-space dynamical systems. Predicting time-advanced states of dynamical systems is a challenging problem in STEM disciplines due to their nonlinear and complex nature. This project will utilize data-driven methods and analyze state-space dynamical systems to predict and understand future states, surpassing classical techniques. In addition, the PI team will (i) guide students to obtain cross-discipline PhD/Master's degrees, (ii) guide students to work in a peer-learning environment, and (iii) educate a diverse group of undergraduates.

In more detail, this project will utilize state-of-the-art machine learning (ML) algorithms to efficiently analyze and predict information within data matrices and tensor computations with low-complexity algorithms. Single-dimensional ML models are not efficient at extracting hidden semantic information in the time and space domains. As a result, it becomes challenging to simultaneously capture multi-dimensional spatiotemporal data in state-space dynamical systems. Using efficient ML algorithms to recover multi-dimensional spatiotemporal data simultaneously offers a breakthrough in understanding the chaotic behavior of dynamical systems. This project will (i) utilize ML to predict future states of dynamical systems based on high-dimensional data matrices captured at different time stamps, (ii) realize state-space controllable and observable systems via low-complexity algorithms to simultaneously analyze multiple states of the systems, (iii) analyze noise in state-space systems for uncertainty quantification, predict patterns in real-time states, generate counter-resonance states to suppress them, and optimize performance and stability, (iv) study system resilience via multiple state predictors and perturbations to assess performance and adaptation to disturbances and anomalies, and finally (v) optimize spacecraft trajectories, avoid impact, and use low-complexity algorithms to understand spacecraft launch dynamics on the space coast and support ERAU's mission in aeronautical research.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." -"2411208","Collaborative Research: Numerical Methods and Differential Geometry","DMS","COMPUTATIONAL MATHEMATICS","06/15/2024","06/04/2024","Evan Gawlik","HI","University of Hawaii","Standard Grant","Yuliya Gorb","05/31/2027","$190,527.00","","egawlik@hawaii.edu","2425 CAMPUS RD SINCLAIR RM 1","HONOLULU","HI","968222247","8089567800","MPS","127100","9150, 9263","$0.00","Partial differential equations (PDEs) model a wide variety of phenomena, ranging from how an airplane wing deforms in response to turbulence, to how radio waves travel through and around objects, to how black holes generate gravitational waves when they merge. Numerical analysts develop algorithms for simulating these systems by solving PDEs on a computer; these simulations enable engineers and scientists to develop prototypes and to interpret data from sensors. For example, the NSF-funded Nobel-winning detection of gravitational waves would not have been possible without advances in numerical analysis. In recent decades, numerical analysts discovered that ideas from differential geometry, an area of pure mathematics, can be used to develop good algorithms for solving PDEs. In fact, these ideas help not only for geometric problems in fields of study like computer vision and general relativity, but also for fields like electromagnetism that have little to do with geometry. Although applying differential geometry to numerical analysis has been very successful, thus far this link has been explored only for a small number of differential geometry ideas. In this project, the investigators will continue exploring this link, taking more ideas from differential geometry and applying them to develop new numerical algorithms. 
These algorithms could then be used both in applied areas, by solving PDEs in science and engineering, and in pure areas, by solving PDEs in differential geometry itself. The project will also support the training of graduate student researchers.

This project focuses on problems at the cusp of numerical analysis and differential geometry. It deals specifically with the design of finite element methods for PDEs that involve vector fields and tensor fields on Riemannian manifolds. In the long term, these efforts have the potential to lead to robust numerical methods for solving geometric PDEs like the Einstein field equations, which are useful for studying gravitational wave signals, as well as PDEs like the elasticity equations, which model how objects deform under stress. This project has three main goals. The first is to develop a new family of finite elements for discretizing algebraic curvature tensors and other bi-forms---tensor products of differential forms---on simplicial triangulations. The second goal is to develop an intrinsic finite element discretization of the Bochner Laplacian, which is a basic differential operator in Riemannian geometry that differs from the familiar Hodge Laplacian from finite element exterior calculus. The third goal is to leverage what we learn to design numerical methods for a wide range of geometric problems, such as computing spectra of elliptic operators on manifolds, simulating intrinsic geometric flows, and solving prescribed curvature problems.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2410893","Unsteady Reynolds averaged Navier-Stokes models and computational fluid dynamics","DMS","COMPUTATIONAL MATHEMATICS","08/01/2024","06/07/2024","William Layton","PA","University of Pittsburgh","Standard Grant","Ludmil T. Zikatanov","07/31/2027","$220,000.00","","wjl+@pitt.edu","4200 FIFTH AVENUE","PITTSBURGH","PA","152600001","4126247400","MPS","127100","9263","$0.00","The project will conduct research on the numerical solution of turbulent flows. Fluids transport and mix heat, chemical species, and contaminants. Accurate simulation of turbulent flow is essential for safety-critical prediction and design in applications involving these and other effects. Turbulent flow prediction in science, engineering, and industry requires the use of turbulence models. The research project has three objectives: increasing accuracy of these models, decreasing model complexity, and exploring a promising algorithmic idea for computer solution of models. The proposed research also develops the expertise of graduate students in computational and applied mathematics while working on compelling problems addressing human needs. In their development into independent scientists, each student will develop their own research agenda and collaborate at points of contact among the problems studied.

Modeling turbulence presents challenges at every level in every discipline it touches. Two-equation Unsteady Reynolds Averaged Navier-Stokes models are common in applications and are also the ones with the most incomplete mathematical foundation. They have many calibration parameters, work acceptably for flows similar to the calibration data set, and require users to have an intuition about which model predictions to accept and which to ignore. The project's model analysis will address model accuracy, complexity, and reliability. Even after modeling, greater computational resources are often required for their computational solution. In 1991 Ramshaw and Mesina proposed a non-obvious synthesis of penalty and artificial compression methods, resulting in a dispersive regularization of fluid motion. When the two effects were balanced, they reported a dramatic accuracy improvement over the most efficient current methods. The project will develop, improve and test the method based on a new analysis of energy flow.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2410645","Computational Methods for Inverse and Optimal Design Problems in Topological Wave Insulators Based on Constructive Analysis","DMS","COMPUTATIONAL MATHEMATICS, EPSCoR Co-Funding","07/01/2024","06/04/2024","Junshan Lin","AL","Auburn University","Standard Grant","Troy D. Butler","06/30/2027","$300,263.00","","jzl0097@auburn.edu","321-A INGRAM HALL","AUBURN","AL","368490001","3348444438","MPS","127100, 915000","9150, 9263","$0.00","Topological wave insulators are specialized materials for transporting wave energy in various applications in modern science and engineering. This project will develop computational methods for several classes of inverse and optimal design problems arising from the mathematical studies of partial differential equation (PDE) models for topological wave insulators. The goals of this project are to provide efficient computational algorithms that address several theoretical open questions in this area. Successful completion of this project should stimulate mathematical research on topological insulators and beyond. The developed computational frameworks will also provide physical experimentalists and engineers with the computational tools to improve the performance and functionalities of topological materials. The project will also integrate students into the research team as part of their professional training.

The project will address several key scientific challenges arising from the inverse and optimal design of the spectrum of the PDE operators in topological wave insulators. First, based on the spectral analysis of the PDE operators in periodic media, a new optimization framework through the enforcement of parity for the eigenfunctions will be built to solve for wave insulators that attain Dirac points at desired Bloch wave vectors and eigenfrequencies. Numerical algorithms based on the construction of wave propagators in periodic media and the design of the spectral indicator function will be developed to efficiently identify the interface parameters that allow for the existence of edge modes in joint topological wave insulators. Finally, efficient numerical methods based on convex semidefinite programming will be developed for solving the optimization problems that arise from maximizing the band gaps of the PDE operators for topological wave insulators in order to enlarge the spectral bandwidth of edge modes. This project is jointly funded by the Computational Mathematics program and the Established Program to Stimulate Competitive Research (EPSCoR).

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2411229","Taming nonlinearity in PDE systems using lifted Newton nonlinear preconditioning","DMS","COMPUTATIONAL MATHEMATICS","09/01/2024","06/04/2024","Georg Stadler","NY","New York University","Standard Grant","Yuliya Gorb","08/31/2027","$399,998.00","","stadler@courant.nyu.edu","70 WASHINGTON SQ S","NEW YORK","NY","100121019","2129982121","MPS","127100","9263","$0.00","Many important questions in the natural sciences and in engineering involve nonlinear phenomena, mathematically described by nonlinear equations. Solving these problems typically requires iterative algorithms like Newton's method, which linearizes the nonlinear problem in each iteration. Newton's method is known for its rapid local convergence. However, the convergence theory only applies when the initialization is (very) close to the unknown solution. Thus, relying on local convergence theory is often impractical. Farther from the solution, small Newton updates are typically necessary to prevent divergence, leading to slow overall convergence. This project aims to develop better nonlinear solvers. This will benefit outer-loop problems, such as parameter estimation, learning, control, or design problems, which typically require solving many nonlinear (inner) problems. The project will also support the training and research of at least one graduate student, the mentoring of undergraduate students through Courant's Summer Undergraduate Research Experience (SURE) program, and the outreach to K-12 students through the cSplash activity in New York City.

To address issues of slow nonlinear convergence, this project aims to develop methods that lift the nonlinear system to a higher-dimensional space, enabling the application of nonlinear transformations that can mitigate nonlinearity before Newton linearization. The project will develop and systematically study the resulting novel Newton methods for severely nonlinear systems of partial differential equations (PDEs). The proposed lifting and transformation method can be interpreted as nonlinear preconditioning, a research area much less developed than preconditioning for linear systems. The goal of this project is to study for which classes of nonlinear PDE problems this approach improves convergence, to theoretically analyze why, and to make these methods a more broadly accessible tool for solving severely nonlinear systems.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria."
@@ -20,21 +26,20 @@
"2409900","Tensor-valued finite elements and applications","DMS","COMPUTATIONAL MATHEMATICS","09/01/2024","06/03/2024","Jay Gopalakrishnan","OR","Portland State University","Continuing Grant","Yuliya Gorb","08/31/2027","$56,009.00","","gjay@pdx.edu","1600 SW 4TH AVE","PORTLAND","OR","972015508","5037259900","MPS","127100","9263","$0.00","Tensor-valued functions are crucial mathematical abstractions in many areas of science. They are indispensable in modeling solids, fluids, electromagnetics, and even the spacetime we occupy, all areas touched upon in this project. The overarching goal of this project is to build new numerical facilities for approximating certain important tensor fields using new finite elements. The pursuit of this goal is guided not only by the utilitarian considerations of the applications, but also by the elegance of mathematical structures within which the new numerical tools potentially fit. These abstract structures have transdisciplinary connections, including applications of societal impact in material science, fluid dynamics, and optics.

Several specific tensor functions, rich in applications, are targeted in this study for finite element approximation. They include the Riemann curvature tensor on manifolds, the Cauchy stress tensor in solid mechanics, and viscous stresses in incompressible fluids. Certain second-order differential operators, like the incompatibility operator, arising in mechanics and linearized relativity, are targeted for approximation using distributional techniques suited for non-smooth finite elements. The project develops new simulation tools for temporal evolution of certain tensors using symplectic integrators and automatic locally variable timestepping using spacetime tents. Varied elements of this project are unified by a modern viewpoint, exemplified by fitting many existing scalar and vector finite elements into a subcomplex of the de Rham complex. The search for a similar unifying algebraic structure connecting tensor spaces, through certain natural second-order differential operators, is a common mathematical thread running through all aspects of this project.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2409807","Approximating partial differential equations without boundary conditions","DMS","COMPUTATIONAL MATHEMATICS","10/01/2024","06/03/2024","Andrea Bonito","TX","Texas A&M University","Standard Grant","Yuliya Gorb","09/30/2027","$399,583.00","","bonito@math.tamu.edu","400 HARVEY MITCHELL PKY S STE 30","COLLEGE STATION","TX","778454375","9798626777","MPS","127100","9263","$0.00","The predictive power of computational tools is of paramount importance in engineering and science. They offer insights into the behavior of complex systems, modeled by partial differential equations inside a region of interest. Boundary conditions expressing the influence of the surroundings must be provided to complete the mathematical models. However, there are many instances for which the boundary conditions are not available to practitioners: the understanding of the physical processes might be lacking, for instance when modeling the airflow around an airplane, or the boundary data is not accessible. This project aims to design numerical algorithms able to compensate for missing boundary conditions by incorporating physical measurements of the quantity of interest. The problems to be addressed fit under the strategic area of machine learning, and the potential scientific impact of this research is far-reaching. It includes improved meteorological forecasting, discovering biological pathways, and commercial design.

In traditional numerical treatments of elliptic partial differential equations, the solution to be approximated is completely characterized by the given data. However, there are many instances for which the boundary conditions are not available. While not sufficient to pinpoint the solution, measurements of the solution are provided to attenuate the incomplete information. The aim of this research program is to exploit the structure provided by the PDE to design and analyze practical numerical algorithms able to construct the best simultaneous approximation of all functions satisfying the PDE and the measurements. This project embeds the design, analysis, and implementation of numerical methods for PDEs within an optimal recovery framework. It uncovers uncharted problems requiring new mathematical tools.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2410272","Level Set Methods for Multiphase Motion by Mean Curvature","DMS","COMPUTATIONAL MATHEMATICS","06/15/2024","06/03/2024","Selim Esedoglu","MI","Regents of the University of Michigan - Ann Arbor","Standard Grant","Yuliya Gorb","05/31/2027","$349,518.00","","esedoglu@umich.edu","1109 GEDDES AVE, SUITE 3300","ANN ARBOR","MI","481091079","7347636438","MPS","127100","9263","$0.00","This project will develop new algorithms for simulating on the computer a class of mathematical models that describe the evolution in time of a network of surfaces. These models play a prominent role in many applications. A very important example that will receive particular attention is the evolution of the internal structure (microstructure) of polycrystalline materials, such as most metals and ceramics, during manufacturing processes such as heat treatment (annealing). Polycrystalline materials are very common. They are composed of tiny crystallites, known as grains, stuck together. During annealing, the boundaries between the grains, described by a network of surfaces, start to move as some grains get larger, while others shrink and disappear. The shapes and sizes of the grains making up these materials are known to have profound implications for their physical properties, such as their strength and conductivity. Materials scientists have long had mathematical models that describe the motion of the network of grains; what has been lacking is accurate, efficient, reliable, and flexible numerical methods that would allow them to compare large scale simulations of their models against experimental measurements. 
In recent years, as experimental measurements of time evolution of the three-dimensional internal structure of materials have become available, the need for algorithms to simulate the relevant models has become increasingly acute. The project will take steps to address this need. Resulting algorithms will be implemented in software, which will be made available to the broader scientific community. The project will also support the training and research of one graduate student working towards a Ph.D. in mathematics.

The project will take a new approach to designing level set methods for multiphase geometric motions such as motion by mean curvature of networks of surfaces. It will exploit a precise, mathematical connection between a particular discretization of the level set formulation of motion by mean curvature, known as the median filter scheme, and another class of algorithms known as threshold dynamics. This will allow extending advantages of one method to the other. The advantage of threshold dynamics is its generality and highly developed theory of stability and convergence. In particular, recent advances in our theoretical understanding of threshold dynamics enabled its extension to the more elaborate microstructure evolution models of interest to materials scientists. Via its precise connection to median filter schemes, elements of this theory will be carried over to level set methods. The new level set methods will allow arbitrary, normal-dependent (anisotropic) surface tensions and mobilities to be assigned to any interface in the network of surfaces, a level of generality that cannot even be attempted by most existing techniques. They will also allow subgrid accuracy in locating the interface even when implemented on uniform grids, a distinct advantage of the level set method over threshold dynamics.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." -"2409841","Hydrodynamic models of electric charge transport: structure-preserving numerical methods","DMS","COMPUTATIONAL MATHEMATICS","09/01/2024","06/04/2024","Ignacio Tomas","TX","Texas Tech University","Standard Grant","Ludmil T. Zikatanov","08/31/2027","$219,782.00","","igtomas@ttu.edu","2500 BROADWAY","LUBBOCK","TX","79409","8067423884","MPS","127100","9263","$0.00","Electric charge transport physics is at the core of several technologies driving our economic and national security interests. For instance, the design of novel semiconductor devices requires a proper understanding of electron transport in the high-frequency regime. Similarly, the operation of directed energy systems hinges on the development of novel microwave sources and high-voltage high-current pulsed-power infrastructure. The project aims to provide innovative and robust numerical methods that will greatly enhance our predictive capabilities in the context of high-frequency electric charge transport simulation. This project will contribute to developing a new educational curriculum targeting the interdisciplinary training of graduate students at the intersection of mathematical modeling, numerical analysis, scientific computation, and physics.

The project will develop numerical methods to solve electrostatic and electrodynamic fluid models of electric charge transport. The Euler-Maxwell and Euler-Poisson systems are among the simplest electrodynamic and electrostatic fluid models, respectively, of electric charge transport. These models describe electrically non-neutral plasmas, electron inertia effects, high-frequency electrostatic plasma oscillation, and collective cyclotron motions such as the Diocotron instability. This project comprises numerical analysis, scientific computing, and graduate-level education. The research program will advance space and time discretizations for hydrodynamic models of electric charge transport that are mathematically guaranteed to be robust and preserve key mathematical properties of interest. Such properties include pointwise stability (e.g., positivity of density and a minimum principle for the specific entropy), discrete energy stability, and well-posedness of the linear algebra systems. This project comprises three research tasks involving the development of: (I) Semi-implicit schemes for Euler-Maxwell and Euler-Poisson systems, (II) Maxwell's equations formulations and solvers, and (III) Graph-based solvers for nonlinear hyperbolic systems (mathematical theory and high-performance implementation). The resulting methods will be implemented using the library deal.II. It will extend the investigator and collaborators' high-performance software developments. This project will also lead to a new graduate-level class to train a new generation of students on the nature of these models and their technological applications.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2410045","Rational approximation for new structured methods in numerical linear algebra","DMS","COMPUTATIONAL MATHEMATICS","07/01/2024","06/03/2024","Heather Wilber","WA","University of Washington","Standard Grant","Ludmil T. Zikatanov","06/30/2027","$227,891.00","","hdw27@uw.edu","4333 BROOKLYN AVE NE","SEATTLE","WA","981951016","2065434043","MPS","127100","9263","$0.00","This grant supports the development of extremely fast methods for large-scale computing with structured matrices that appear pervasively in applications such as imaging, control theory, and signal processing. The Investigator will leverage new ideas in rational approximation that explain the structures in these matrices and imply the possibility of superfast algorithms. The matrices of primary interest are those with special displacement structures, including Toeplitz, Vandermonde, Hankel, Cauchy, and Loewner matrices, as well as block variants of these matrices. Such matrices and related matrix equations are ubiquitous across the sciences, and improved algorithms are greatly needed to overcome computational bottlenecks that currently impede progress and limit the scale of investigable problems. Collaborating with domain experts, the Investigator will develop open-source software that solves these problems under broader assumptions and at larger scales than what is currently possible. In areas such as MRI, geophysical imaging, Fourier imaging in astrophysics and scattering, and climate modeling, these improvements will ultimately benefit the public with positive impacts on medical technologies and other technologies deployed in the interest of citizens.

The goal and scope of the project are to advance scientific knowledge in two critical ways: (1) It will extend the applicability of rank-structured methods beyond what is currently possible and create new methods for working with rank-structured rectangular matrices. The solvers developed in this work are general and can be applied to any matrix with rectangular hierarchical structure. The work will develop general techniques for efficiently designing preconditioners, solving least squares and minimum norm problems, applying regularization, and solving constrained optimization problems that involve rectangular hierarchical matrices. It will inspire further research into both the design and application of direct methods in settings where previously they were too expensive or underdeveloped to consider. (2) This work tackles a collection of matrix families that lie at the heart of many applications. It supplies a new and general framework from which all of their compression properties can be theoretically understood. The foundation of that framework comes from rational approximation theory.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2409951","Low-rank gradient flow - a first order algorithm for non-convex optimization","DMS","COMPUTATIONAL MATHEMATICS","06/15/2024","06/03/2024","Andrei Draganescu","MD","University of Maryland Baltimore County","Standard Grant","Troy D. Butler","05/31/2027","$250,000.00","","adraganescu@gmail.com","1000 HILLTOP CIR","BALTIMORE","MD","212500001","4104553140","MPS","127100","075Z, 079Z, 9263","$0.00","Non-convex optimization problems are ubiquitous in science and engineering. They often present significant challenges for many existing classes of algorithms due to the presence of multiple suboptimal, undesirable solutions. The methods emerging from this project will circumvent some of these challenges due to their ability to bypass suboptimal solutions more efficiently using a novel set of techniques. They will contribute to the numerical solution of non-convex optimization problems that can be found in a very wide range of applications, such as computer-aided design (shape and topology optimization), radiation therapy, optimization of manufacturing processes, inverse problems, optimal control of partial differential equations, statistics, and artificial intelligence. Open software will be shared with the community in order to facilitate the reproducibility of the results. One summer undergraduate student and one graduate student will benefit from training in areas that are relevant to topics of current interest to both academia and industry. Special attention will be given to the recruitment of students from underrepresented groups.

The project is centered around developing and analyzing a novel class of first order methods for solving optimization problems, called low-rank-gradient-flow (LRGF). The idea behind the method is to develop, at each step, a quadratic surrogate with a low-rank Hessian and to compute analytically the gradient flow on that surrogate. The step will conclude with a line search along the curvilinear gradient flow, with the purpose of finding a point satisfying the Wolfe conditions. Convergence will be accelerated using a multilevel approach based on reduced order models. The convergence properties of the method will be studied, addressing questions related to global convergence, efficient construction of low-rank models, and convergence rates. The method will be applied to maximum likelihood estimation, optimization of hyperbolic partial differential equations, and training of deep neural networks.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2408978","Finite element methods for complex surface fluids","DMS","COMPUTATIONAL MATHEMATICS","07/01/2024","05/30/2024","Maxim Olshanskiy","TX","University of Houston","Standard Grant","Yuliya Gorb","06/30/2027","$319,951.00","","molshan@math.uh.edu","4300 MARTIN LUTHER KING BLVD","HOUSTON","TX","772043067","7137435773","MPS","127100","9263","$0.00","Material interfaces with lateral fluidity are widespread in biology and are vital for processes at various scales, from subcellular to tissue levels. Mathematical models describe these interfaces using systems of partial differential equations on deforming surfaces, sometimes linked to equations in the bulk. These equations govern the movement of interfaces and fluid flow along them and in the surrounding medium. While existing studies often focus on simple, homogeneous fluid flows on steady surfaces, real-life scenarios are more complex. This research project will develop and analyze new computational methods for studying these complex fluid systems. In addition, open-source software for simulating evolving surface PDEs will be developed and the project will provide research training opportunities for students.

This project will develop and analyze a finite element method for the tangential fluid system posed on a moving surface, a multi-component surface flow problem, and a fluid-elastic interface model, all arising in the continuum modeling of inextensible viscous deformable membranes. The numerical approach employed in the project falls into the category of geometrically unfitted discretizations. It will allow for large surface deformations, avoid the need for surface parametrization and triangulation, and have optimal complexity. The developed technique will incorporate an Eulerian treatment of time derivatives in evolving domains and employ physics-based stable and linear splitting schemes. The particular problems that will be addressed include the analysis of finite element methods for the Boussinesq-Scriven fluid problem on a passively evolving surface; the development of a stable linear finite element scheme for a phase-field model of two-phase surface flows on both steady and evolving surfaces; and the construction of a splitting scheme for equations governing the motion of a material surface exhibiting lateral fluidity and out-of-plane elasticity.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2410686","Modeling, discretizations, and solution strategies for multiphysics problems","DMS","COMPUTATIONAL MATHEMATICS","07/01/2024","05/29/2024","Ivan Yotov","PA","University of Pittsburgh","Standard Grant","Yuliya Gorb","06/30/2027","$420,000.00","","yotov@math.pitt.edu","4200 FIFTH AVENUE","PITTSBURGH","PA","152600001","4126247400","MPS","127100","9263","$0.00","The goal of this project is to advance the state-of-the-art in modeling and computation of multiphysics systems that model the physical interactions between two or more media, such as couplings of fluid flows, rigid or deformable porous media, and elastic structures. Typical examples are coupling of free fluid and porous media flows, fluid-structure interaction, and fluid-poroelastic structure interaction (FPSI). The developed methods will be employed for several biomedical and geoscience applications. Biomedical applications include investigation of non-Newtonian and poroelastic effects in arterial flows on important clinical markers such as wall shear stress and relative residence time, modeling LDL transport and drug delivery in blood flows, as well as flows in the eye and the brain. Geoscience applications include tracing organic and inorganic contaminants in coupled surface-subsurface hydrological systems, predicting hydrocarbon extraction in hydraulic fracturing, geothermal energy production, and modeling the effect of proppant particles in injected polymers on the fracture width and flow patterns. While focused on FPSI, the developments in this project will be applicable to modeling and computation of a wide class of multiphysics problems with a broad range of applications.

The project consists of a comprehensive program for mathematical and computational modeling of multiphysics problems that includes 1) development and analysis of new mathematical models, 2) design and analysis of stable, accurate, and robust structure-preserving numerical methods, 3) development and analysis of efficient time-splitting and multiscale domain decomposition algorithms for the solution of the resulting algebraic problems, and 4) applications to the geosciences and biomedicine. Variational formulations of new fluid--poroelastic structure interaction (FPSI) models based on Navier-Stokes - Biot couplings will be developed, extending current model capabilities to flows with higher Reynolds numbers. Fully coupled nonlinear FPSI-transport models, including miscible displacement models with concentration-dependent fluid viscosity, stress-dependent diffusion, and non-isothermal models will also be studied. Novel discretization techniques will be investigated for the numerical approximation of the FPSI models. The focus will be on dual mixed and total pressure discretizations with local conservation of mass and momentum, accurate approximations with continuous normal components for velocities and stresses, and robustness with respect to physical parameters. These include multipoint stress-flux mixed finite element methods and local-stress mimetic finite difference methods that can be reduced to positive definite cell-centered schemes. Efficient multiscale domain decomposition and time-splitting algorithms will be developed for the solution of the resulting algebraic systems. The domain decomposition methodology will be based on space-time variational formulations and will allow for multiple subdomains within each region with non-matching grids along subdomain interfaces and local time-stepping. The convergence of the space-time coarse-scale mortar interface iteration will be studied by analyzing the spectrum of the interface operator. 
Iterative and non-iterative time-splitting methods will also be investigated.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2408912","Equity Beyond the Algorithm: A Mathematical Quest for Fairer-ness in Machine Learning","DMS","COMPUTATIONAL MATHEMATICS","07/01/2024","05/29/2024","Deanna Needell","CA","University of California-Los Angeles","Standard Grant","Troy D. Butler","06/30/2027","$275,000.00","","deanna@math.ucla.edu","10889 WILSHIRE BLVD STE 700","LOS ANGELES","CA","900244200","3107940102","MPS","127100","075Z, 079Z, 9263","$0.00","While machine learning (ML) and artificial intelligence (AI) are seeing widespread and rapid use across the world, very little is understood about many of their underlying mechanisms, and especially those revolving around fairness and bias. More examples are being reported every day that range from racist outputs of ChatGPT to imaging AI that predicts former president Barack Obama's face to be white. The mathematical community has fallen behind the rush to use ML and AI, yet mathematics is at the heart of the algorithmic designs and mechanisms behind ML and AI. This project will study fairness in ML and AI from several angles. First, it will create a framework that identifies fairness metrics throughout the algorithmic pipelines. Second, it will develop technologies to mitigate biases and improve fairness. Third, it will develop mathematical foundations to help us understand the mechanisms at work inside of many of these so-called black-box methods. In addition, medical and social justice applications will be integrated throughout the project, helping many nonprofits with high data driven needs meet their goals. These include medical applications helping to understand manifestations of Lyme disease as well as tools to help Innocence projects that work to free innocent people from prison, make appeal decisions, and synthesize case files. 
This synergistic approach both serves the community and allows those applications to fuel motivation for new and better mathematics. In addition, students will be integrated within the research team as part of their training.

Although ML and AI methods have expanded by leaps and bounds, there are still critical issues around fairness and bias that remain unresolved. The focus of this project consists of two main goals. First, it will create a framework where ML and AI methods generate informative descriptions about fairness across population groups. Subsequently, a mechanism will be applied based on this assessment to promote fairness across the population. This direction will both establish a structured framework for researchers and practitioners to report fairness metrics and emphasize their significance, while also enabling algorithms to adjust for fairness. The majority of the first goal revolves around showcasing this framework in ML applications including dimension reduction, topic modeling, classification, clustering, data completion, and prediction modeling. Second, the project will provide foundational mathematical support for more complex, seemingly opaque techniques such as neural networks and large language models. This includes the investigation of mathematically tangible shallow networks to understand their behavior in benign and non-benign overfitting. The project will also analyze the geometry of embeddings derived from large language models using a linear algebraic topic modeling approach, which is tied to the first goal. Applications with nonprofit community partners will be included throughout the duration of the project, including those in medicine and criminal and social justice. In total, successful completion of the proposed work will provide a pivotal step towards creating a more equitable and mathematically grounded machine learning landscape.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2411198","Collaborative Research: Randomized algorithms for dynamic and hierarchical Bayesian inverse problems","DMS","COMPUTATIONAL MATHEMATICS","08/01/2024","05/30/2024","Arvind Saibaba","NC","North Carolina State University","Standard Grant","Troy D. Butler","07/31/2027","$170,000.00","","asaibab@ncsu.edu","2601 WOLF VILLAGE WAY","RALEIGH","NC","276950001","9195152444","MPS","127100","9263","$0.00","Inverse problems appear in a diverse array of applications - in medical imaging, for X-ray computerized tomography, ultrasound, and magnetic resonance imaging; in geophysics, for atmospheric tomography, electrical resistivity tomography, seismic tomography, and weather prediction; in material sciences, for X-ray ptychography; in homeland security applications, for luggage scanning; in astrophysics, for black hole imaging and cosmic background estimation. The main goal of solving inverse problems is to use measurements to estimate the parameters of physical models. Being able to solve inverse problems efficiently, accurately, and with quantifiable certainty remains an open challenge. Randomized algorithms have made several advances in numerical linear algebra due to their ability to dramatically reduce computational costs without significantly compromising the accuracy of the computations. However, there is a rich and relatively unexplored field of research that lies between randomized numerical linear algebra and inverse problems, in particular for dynamic and hierarchical problems, where randomization can and should be exploited in unique ways. This project will address fundamental issues in the development of computationally efficient solvers for inverse problems and uncertainty quantification. The project will also train graduate students on state-of-the-art randomized algorithms.

The project will develop new and efficient randomized algorithms for mitigating the computational burdens of two types of inverse problems: hierarchical Bayesian inverse problems and dynamical inverse problems. The two main thrusts of this project are (i) to develop efficient algorithms to quantify the uncertainty of the hyperparameters that govern Bayesian inverse problems, and (ii) to develop new iterative methods that leverage randomization to efficiently approximate solutions and enable uncertainty quantification for large-scale inverse problems. This project will advance knowledge in the field of randomized algorithms for computational inverse problems and uncertainty quantification. It will also create numerical methods that are expected to be broadly applicable to many areas of science and engineering.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2410140","Manifold learning in Wasserstein space using Laplacians: From graphs to submanifolds","DMS","COMPUTATIONAL MATHEMATICS","08/01/2024","05/29/2024","Caroline Moosmueller","NC","University of North Carolina at Chapel Hill","Standard Grant","Troy D. Butler","07/31/2027","$419,962.00","Shiying Li","cmoosm@unc.edu","104 AIRPORT DR STE 2200","CHAPEL HILL","NC","275995023","9199663411","MPS","127100","079Z, 9263","$0.00","Manifold learning algorithms are tools used to reveal the underlying structure of high-dimensional datasets. This can be achieved by finding a lower-dimensional representation of the dataset, thereby enhancing the efficiency of subsequent data analysis. They find applications across various fields such as single-cell analysis, natural language processing, and neuroscience. While most existing algorithms are designed for datasets represented in vector spaces, real-world data often comprises distributions or point-clouds, presenting both theoretical and computational challenges for manifold learning algorithms. This project will develop manifold learning algorithms tailored for distributional or point-cloud datasets, with a particular emphasis on theoretical analysis and computational efficiency. Leveraging the framework of optimal transport and established manifold learning theory in vector spaces, the project will address these challenges. This project will also train students in interdisciplinary aspects of the research.

This project will develop and analyze algorithms for uncovering low-dimensional intrinsic structures of data sets within Wasserstein space, a natural space for distributions or point-clouds. This is motivated by the recent success in representing data as elements in Wasserstein space, as opposed to Euclidean space, and the necessity to develop efficient algorithms for their analysis. To accomplish the goals of this project, the research team will leverage the eigenvectors of a Laplacian matrix built from a data-dependent graph. Specifically, consistency theory of operators such as the Laplacian between the discrete (graph) and the continuous (submanifold) setting will be developed, drawing inspiration from the well-established theory for finite-dimensional Riemannian manifolds. The project will develop theoretically provable methods that provide algorithmic insights, which in turn can be used for efficient algorithms. The aims are threefold: (1) define dimensionality reduction algorithms for point-cloud data that can uncover curved submanifolds through suitable embeddings, (2) provide theoretical guarantees for these embeddings, and (3) design efficient algorithms for applications in high-dimensional settings such as single-cell data analysis.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2411141","Nonlinear Eigenvalue Problems: Building a New Paradigm Through the Lens of Systems Theory and Rational Interpolation","DMS","COMPUTATIONAL MATHEMATICS","08/01/2024","05/29/2024","Serkan Gugercin","VA","Virginia Polytechnic Institute and State University","Continuing Grant","Jodi Mead","07/31/2027","$89,958.00","Mark Embree","gugercin@math.vt.edu","300 TURNER ST NW","BLACKSBURG","VA","240603359","5402315281","MPS","127100","9263","$0.00","When building new devices or products, engineers naturally want to optimize performance with respect to some design variables; this process typically involves simulation with large-scale mathematical models. One desirable goal of this optimization is to maximize the stability of a system, to avoid designs for which small disturbances can get magnified until failure occurs. This project will study new approaches for assessing such stability, including a technique for simultaneously analyzing an entire ensemble of systems across a range of design variables, rather than analyzing individual systems one at a time. These techniques involve the symbiotic interplay of data and mathematical models. The project will involve graduate student training and professional development through summer research and capstone projects for Virginia Tech's Computational Modeling and Data Analytics major.

Nonlinear eigenvalue problems (NLEVPs) arise naturally in many applications throughout science and engineering, from networks of vibrating structures to dynamical systems with time delays. In contrast to the linear eigenvalue problem, algorithms for solving NLEVPs remain in an unsettled state due to the fundamental challenges these problems pose. This project approaches NLEVPs through the lens of control theory, identifying contour-based eigenvalue algorithms as examples of system realization techniques. Given this perspective, this research program seeks to develop robust, reliable algorithms and software for NLEVPs, with an eye toward optimal parameter selection and efficiency for large-scale problems. These analyses and computational methods will be extended to handle parameter-dependent NLEVPs, where the problem varies based on one or more physical parameters. The project will also look in the opposite direction, using contour integral algorithms from eigenvalue computations to offer new approaches to data-driven modeling of dynamical systems.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2410699","Collaborative Research: Memory-aware Accelerated Solvers for Nonlinear Problems in High Dimensional Imaging Applications","DMS","COMPUTATIONAL MATHEMATICS","07/01/2024","05/30/2024","Mirjeta Pasha","VA","Virginia Polytechnic Institute and State University","Standard Grant","Troy D. Butler","06/30/2027","$162,490.00","","mpasha@vt.edu","300 TURNER ST NW","BLACKSBURG","VA","240603359","5402315281","MPS","127100","9263","$0.00","Achieving significant advances over the state-of-the-art methods for imaging applications such as reconstruction and high-dimensional image/video compression requires fundamentally new approaches to keep up with the volumes of (multi-dimensional and possibly streaming) data that have become pervasive in science and engineering applications. Consider, for instance, the need to continuously monitor, diagnose, and visualize anomalies within the body through modern magnetic resonance (MR) and computerized tomography (CT) scans, to inspect objects at crowded checkpoints, or check surveillance video for possible threats. These applications present a common challenge: the need to process an ever-increasing amount of data quickly and accurately to enable real-time decisions at a low computational cost while respecting limited memory capacities. This collaborative project will address these challenges through an innovative, multi-pronged, mathematical and algorithmic framework that capitalizes on properties inherent in the data as well as on features in the solutions (i.e. images, video frames) that persist over time and/or as the solutions are being resolved. 
The work produced will have broad scientific impact: for example, the newly offered speed of the image reconstruction methods may improve the ability to detect anomalies in tissue, underground, or in luggage, while the compression algorithms hold promise for other disciplines where the ability to compress and explain multi-way (a.k.a. tensor) data is paramount, such as satellite imaging, biology, and data science. Graduate students will be trained as part of this project.

Digital images and video are inherently multi-way objects. A single, grayscale, digital image is a two-dimensional array of numbers with the numbers coded to appear as shades of gray, whereas a collection of such grayscale images, such as video frames, are three-way arrays, also called third order tensors. The quality benefits of tensor compression (or completion, if some image values are missing or obscured) techniques over more traditional matrix-based methods merit their use. Reconstructing images that preserve edges is also of paramount importance: consider that an image edge defines the boundary between a tumor and normal tissue, for instance. This project will focus on these two distinct imaging problems, edge-based reconstruction and compressed tensor data representation, whose solution requires memory-efficient iterative approaches, but for which the state-of-the-art iterative techniques are slow to converge and memory intensive. The acceleration will be achieved by a combination of judicious choice of limited memory recycled subspaces, classical acceleration approaches (e.g., NGMRES or Anderson Acceleration), and operator approximation. Furthermore, if the data arrives asynchronously or the regularized problem cannot all fit into memory at once, the method will extend to streamed-recycling. The streamed-recycling approach will break the problem up into memory-manageable chunks while keeping a small-dimensional subspace that encodes and retains the most important features to enable solution to the original, large-scale problem. The impact of the accelerated edge-preserving image reconstruction algorithms will be demonstrated on X-ray CT, but the algorithms will have much wider applicability in other imaging modalities.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2411197","Collaborative Research: Randomized algorithms for dynamic and hierarchical Bayesian inverse problems","DMS","COMPUTATIONAL MATHEMATICS","08/01/2024","05/30/2024","Julianne Chung","GA","Emory University","Standard Grant","Troy D. Butler","07/31/2027","$170,000.00","","jmchung@emory.edu","201 DOWMAN DR NE","ATLANTA","GA","303221061","4047272503","MPS","127100","9263","$0.00","Inverse problems appear in a diverse array of applications - in medical imaging, for X-ray computerized tomography, ultrasound, and magnetic resonance imaging; in geophysics, for atmospheric tomography, electrical resistivity tomography, seismic tomography, and weather prediction; in material sciences, for X-ray ptychography; in homeland security applications, for luggage scanning; in astrophysics, for black hole imaging and cosmic background estimation. The main goal of solving inverse problems is to use measurements to estimate the parameters of physical models. Being able to solve inverse problems efficiently, accurately, and with quantifiable certainty remains an open challenge. Randomized algorithms have made several advances in numerical linear algebra due to their ability to dramatically reduce computational costs without significantly compromising the accuracy of the computations. However, there is a rich and relatively unexplored field of research that lies between randomized numerical linear algebra and inverse problems, in particular for dynamic and hierarchical problems, where randomization can and should be exploited in unique ways. This project will address fundamental issues in the development of computationally efficient solvers for inverse problems and uncertainty quantification. The project will also train graduate students on state-of-the-art randomized algorithms.

The project will develop new and efficient randomized algorithms for mitigating the computational burdens of two types of inverse problems: hierarchical Bayesian inverse problems and dynamical inverse problems. The two main thrusts of this project are (i) to develop efficient algorithms to quantify the uncertainty of the hyperparameters that govern Bayesian inverse problems, and (ii) to develop new iterative methods that leverage randomization to efficiently approximate solutions and enable uncertainty quantification for large-scale inverse problems. This project will advance knowledge in the field of randomized algorithms for computational inverse problems and uncertainty quantification. It will also create numerical methods that are expected to be broadly applicable to many areas of science and engineering.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2410698","Collaborative Research: Memory-aware Accelerated Solvers for Nonlinear Problems in High Dimensional Imaging Applications","DMS","COMPUTATIONAL MATHEMATICS","07/01/2024","05/30/2024","Misha Kilmer","MA","Tufts University","Standard Grant","Troy D. Butler","06/30/2027","$162,510.00","","misha.kilmer@tufts.edu","169 HOLLAND ST","SOMERVILLE","MA","021442401","6176273696","MPS","127100","9263","$0.00","Achieving significant advances over the state-of-the-art methods for imaging applications such as reconstruction and high-dimensional image/video compression requires fundamentally new approaches to keep up with the volumes of (multi-dimensional and possibly streaming) data that have become pervasive in science and engineering applications. Consider, for instance, the need to continuously monitor, diagnose, and visualize anomalies within the body through modern magnetic resonance (MR) and computerized tomography (CT) scans, to inspect objects at crowded checkpoints, or check surveillance video for possible threats. These applications present a common challenge: the need to process an ever-increasing amount of data quickly and accurately to enable real-time decisions at a low computational cost while respecting limited memory capacities. This collaborative project will address these challenges through an innovative, multi-pronged, mathematical and algorithmic framework that capitalizes on properties inherent in the data as well as on features in the solutions (i.e. images, video frames) that persist over time and/or as the solutions are being resolved. 
The work produced will have broad scientific impact: for example, the newly offered speed of the image reconstruction methods may improve the ability to detect anomalies in tissue, underground, or in luggage, while the compression algorithms hold promise for other disciplines where the ability to compress and explain multi-way (a.k.a. tensor) data is paramount, such as satellite imaging, biology, and data science. Graduate students will be trained as part of this project.

Digital images and video are inherently multi-way objects. A single, grayscale, digital image is a two-dimensional array of numbers with the numbers coded to appear as shades of gray, whereas a collection of such grayscale images, such as video frames, are three-way arrays, also called third order tensors. The quality benefits of tensor compression (or completion, if some image values are missing or obscured) techniques over more traditional matrix-based methods merit their use. Reconstructing images that preserve edges is also of paramount importance: consider that an image edge defines the boundary between a tumor and normal tissue, for instance. This project will focus on these two distinct imaging problems, edge-based reconstruction and compressed tensor data representation, whose solution requires memory-efficient iterative approaches, but for which the state-of-the-art iterative techniques are slow to converge and memory intensive. The acceleration will be achieved by a combination of judicious choice of limited memory recycled subspaces, classical acceleration approaches (e.g., NGMRES or Anderson Acceleration), and operator approximation. Furthermore, if the data arrives asynchronously or the regularized problem cannot all fit into memory at once, the method will extend to streamed-recycling. The streamed-recycling approach will break the problem up into memory-manageable chunks while keeping a small-dimensional subspace that encodes and retains the most important features to enable solution to the original, large-scale problem. The impact of the accelerated edge-preserving image reconstruction algorithms will be demonstrated on X-ray CT, but the algorithms will have much wider applicability in other imaging modalities.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." -"2410140","Manifold learning in Wasserstein space using Laplacians: From graphs to submanifolds","DMS","COMPUTATIONAL MATHEMATICS","08/01/2024","05/29/2024","Caroline Moosmueller","NC","University of North Carolina at Chapel Hill","Standard Grant","Troy D. Butler","07/31/2027","$419,962.00","Shiying Li","cmoosm@unc.edu","104 AIRPORT DR STE 2200","CHAPEL HILL","NC","275995023","9199663411","MPS","127100","079Z, 9263","$0.00","Manifold learning algorithms are tools used to reveal the underlying structure of high-dimensional datasets. This can be achieved by finding a lower-dimensional representation of the dataset, thereby enhancing the efficiency of subsequent data analysis. They find applications across various fields such as single-cell analysis, natural language processing, and neuroscience. While most existing algorithms are designed for datasets represented in vector spaces, real-world data often comprises distributions or point-clouds, presenting both theoretical and computational challenges for manifold learning algorithms. This project will develop manifold learning algorithms tailored for distributional or point-cloud datasets, with a particular emphasis on theoretical analysis and computational efficiency. Leveraging the framework of optimal transport and established manifold learning theory in vector spaces, the project will address these challenges. This project will also train students in interdisciplinary aspects of the research.

This project will develop and analyze algorithms for uncovering low-dimensional intrinsic structures of data sets within Wasserstein space, a natural space for distributions or point-clouds. This is motivated by the recent success in representing data as elements in Wasserstein space, as opposed to Euclidean space, and the necessity to develop efficient algorithms for their analysis. To accomplish the goals of this project, the research team will leverage the eigenvectors of a Laplacian matrix built from a data-dependent graph. Specifically, consistency theory of operators such as the Laplacian between the discrete (graph) and the continuous (submanifold) setting will be developed, drawing inspiration from the well-established theory for finite-dimensional Riemannian manifolds. The project will develop theoretically provable methods that provide algorithmic insights, which in turn can be used for efficient algorithms. The aims are threefold: (1) define dimensionality reduction algorithms for point-cloud data that can uncover curved submanifolds through suitable embeddings, (2) provide theoretical guarantees for these embeddings, and (3) design efficient algorithms for applications in high-dimensional settings such as single-cell data analysis.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2409868","On Iteratively Regularized Alternating Minimization under Nonlinear Dynamics Constraints with Applications to Epidemiology","DMS","COMPUTATIONAL MATHEMATICS","09/01/2024","05/29/2024","Alexandra Smirnova","GA","Georgia State University Research Foundation, Inc.","Standard Grant","Troy D. Butler","08/31/2027","$200,000.00","Xiaojing Ye","asmirnova@gsu.edu","58 EDGEWOOD AVE NE","ATLANTA","GA","303032921","4044133570","MPS","127100","9263","$0.00","How widely has the virus spread? This important and often overlooked question was brought to light by the recent COVID-19 outbreak. Several techniques have been used to account for silent spreaders along with varying testing and healthcare seeking habits as the main reasons for under-reporting of incidence cases. It has been observed that silent spreaders play a more significant role in disease progression than previously understood, highlighting the need for policymakers to incorporate these hidden figures into their strategic responses. Unlike other disease parameters, such as incubation and recovery rates, the case reporting rate and the time-dependent effective reproduction number are directly influenced by a large number of factors, making it impossible to directly quantify these parameters in any meaningful way. This project will advance iteratively regularized numerical algorithms, which have emerged as a powerful tool for reliable estimation (from noise-contaminated data) of infectious disease parameters that are crucial for future projections, prevention, and control. Apart from epidemiology, the project will benefit all real-world applications involving massive amounts of observation data for multiple stages of the inversion process with a shared model parameter. 
In the course of their theoretical and numerical studies, the PIs will continue to create research opportunities for undergraduate and graduate students, including women and students from groups traditionally underrepresented in STEM disciplines. A number of project topics are particularly suitable for student research and will be used to train some of the next generation of computational mathematicians.

In the framework of this project, the PIs will develop new regularized alternating minimization algorithms for solving ill-posed parameter-estimation problems constrained by nonlinear dynamics. While significant computational challenges are shared by both deterministic trust-region and Bayesian methods (such as the need to solve possibly complex ODE or PDE systems at every step of the iterative process), the team will address these challenges by constructing a family of fast and stable iteratively regularized optimization algorithms, which carefully alternate between updating model parameters and state variables.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2409858","Dynamical Low Rank Methods for Multiscale Kinetic Plasma Simulations","DMS","COMPUTATIONAL MATHEMATICS","08/01/2024","05/29/2024","Jingwei Hu","WA","University of Washington","Standard Grant","Ludmil T. Zikatanov","07/31/2027","$220,000.00","","hujw@uw.edu","4333 BROOKLYN AVE NE","SEATTLE","WA","981951016","2065434043","MPS","127100","9263","$0.00","Plasmas consist of many charged particles, such as electrons and ions. The Boltzmann equation is often regarded as the first-principle model for plasmas; however, its numerical simulation is prohibitively expensive even on today's most powerful supercomputers. The challenges manifest as: 1) High-dimensionality. The Boltzmann equation resides in six-dimensional phase space. Hence, full 6D deterministic simulation requires excessive computing effort and memory. 2) Collision operator. Collisions between particles are described by nonlinear, nonlocal integral operators that are extremely difficult to approximate. Yet, they play a critical role in driving the system towards local thermodynamic equilibrium and must be included in the simulation, especially in transition and fluid regimes. 3) Multiple scales. Plasmas inherently exhibit multiscale physics. Different scaling can lead to different asymptotic models. How to conduct efficient kinetic simulations such that multiscale behaviors are properly captured is a long-standing problem. The overall objective of this project is to develop a set of ultra-efficient deterministic numerical methods for multiscale kinetic plasma simulations. The algorithms to be developed in this project have the potential to provide high-fidelity kinetic plasma simulations across a range of regimes at a manageable computational cost.

The basic framework we will employ is the dynamical low-rank method (DLRM), a robust dimension reduction technique for solving high-dimensional partial differential equations. In essence, DLRM can be viewed as a time-dependent singular value decomposition; instead of solving the 6D equation, it tracks the dynamics of low-rank factors of the solution, which depend on either the three-dimensional position variable or the three-dimensional velocity variable, thus drastically reducing the computational cost and memory footprint. Our focus will be on the nonlinear collisional kinetic equations for plasmas, allowing us to address a broader range of regimes beyond the collisionless ones. We will design an efficient low-rank ansatz inspired by various asymptotic limits of plasma kinetic equations such that the method only requires a few ranks in the limiting regime and is as efficient as solving the reduced fluid models. We will also study the uniform stability and long-time behavior of DLRM rigorously, justifying the method's robustness for treating multiscale problems.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2411120","Improving quantum speedup for solving differential equations","DMS","COMPUTATIONAL MATHEMATICS","09/01/2024","05/30/2024","Xiantao Li","PA","Pennsylvania State Univ University Park","Continuing Grant","Troy D. Butler","08/31/2027","$59,632.00","","xli@math.psu.edu","201 OLD MAIN","UNIVERSITY PARK","PA","168021503","8148651372","MPS","127100","9263","$0.00","The ultimate challenge in many areas of applied science can be attributed to the limited capability of solving large-scale differential equations. Classical computers encounter a fundamental bottleneck due to the nonlinearity, vast number of degrees of freedom, and inherent stochasticity of these equations. Motivated by the emergence of quantum computing, which promises significant speedups over classical methods for many scientific computing problems, particularly those involving quantum dynamics governed by the Schrodinger equation, this research aims to establish an innovative mathematical framework. This framework will transform a broad range of differential equations into the Schrodinger equation, enabling the application of quantum algorithms. Such quantum speedup has the potential to enhance the prediction of physical properties and optimize system performance based on differential equation models. To ensure broader scientific and societal impacts, the research team will disseminate results at quantum information processing conferences and also integrate graduate students within the research plan as part of their professional training.

The principal investigator will develop an encoding scheme to represent large-scale differential equations within unitary dynamics through a shadow Hamiltonian. Using backward error analysis, the research aims to systematically construct a shadow Hamiltonian with an arbitrarily higher order of accuracy. Moreover, a precise procedure will be developed for mapping nonlinear and stochastic differential equations into such unitary evolution, significantly broadening the applicability of the proposed encoding scheme. The quantum algorithms derived from this project will be applied to non-Hermitian dynamics from topological materials and chemical Langevin dynamics from biomolecular modeling, aiming to make a direct impact on critical physics and engineering fields.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." -"2409868","On Iteratively Regularized Alternating Minimization under Nonlinear Dynamics Constraints with Applications to Epidemiology","DMS","COMPUTATIONAL MATHEMATICS","09/01/2024","05/29/2024","Alexandra Smirnova","GA","Georgia State University Research Foundation, Inc.","Standard Grant","Troy D. Butler","08/31/2027","$200,000.00","Xiaojing Ye","asmirnova@gsu.edu","58 EDGEWOOD AVE NE","ATLANTA","GA","303032921","4044133570","MPS","127100","9263","$0.00","How widely has the virus spread? This important and often overlooked question was brought to light by the recent COVID-19 outbreak. Several techniques have been used to account for silent spreaders along with varying testing and healthcare seeking habits as the main reasons for under-reporting of incidence cases. It has been observed that silent spreaders play a more significant role in disease progression than previously understood, highlighting the need for policymakers to incorporate these hidden figures into their strategic responses. Unlike other disease parameters, e.g., incubation and recovery rates, the case reporting rate and the time-dependent effective reproduction number are directly influenced by a large number of factors, making it impossible to directly quantify these parameters in any meaningful way. This project will advance iteratively regularized numerical algorithms, which have emerged as a powerful tool for reliable estimation (from noise-contaminated data) of infectious disease parameters that are crucial for future projections, prevention, and control. Apart from epidemiology, the project will benefit all real-world applications involving massive amounts of observation data for multiple stages of the inversion process with a shared model parameter. 
In the course of their theoretical and numerical studies, the PIs will continue to create research opportunities for undergraduate and graduate students, including women and students from groups traditionally underrepresented in STEM disciplines. A number of project topics are particularly suitable for student research and will be used to train some of the next generation of computational mathematicians.

In the framework of this project, the PIs will develop new regularized alternating minimization algorithms for solving ill-posed parameter-estimation problems constrained by nonlinear dynamics. While significant computational challenges are shared by both deterministic trust-region and Bayesian methods (such as numerical solutions requiring solutions to possibly complex ODE or PDE systems at every step of the iterative process), the team will address these challenges by constructing a family of fast and stable iteratively regularized optimization algorithms, which carefully alternate between updating model parameters and state variables.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2409855","Metric-Dependent Strategies for Inverse Problem Analysis and Computation","DMS","COMPUTATIONAL MATHEMATICS","07/01/2024","05/29/2024","Yunan Yang","NY","Cornell University","Standard Grant","Troy D. Butler","06/30/2027","$275,000.00","","yunan.yang@cornell.edu","341 PINE TREE RD","ITHACA","NY","148502820","6072555014","MPS","127100","9263","$0.00","This project will develop novel approaches to solving inverse problems, which are pivotal in many scientific fields, including biology, geophysics, and medical imaging. Inverse problems often involve deducing unknown parameters from observed data, a task complicated by issues such as sensitivity to measurement noise and complex modeling procedures. The broader significance of this research lies in its potential to significantly enhance the accuracy and efficiency of computational methods used in critical applications such as electrical impedance tomography (EIT), inverse scattering, and cryo-electron microscopy (cryo-EM). For instance, improvements in cryo-EM computation will accelerate breakthroughs in molecular biology and aid in rapid drug development, directly benefiting medical research and public health. Additionally, this project will also (1) engage undergraduate and graduate students in research to foster a new generation of computational mathematicians, and (2) promote STEM careers among K-12 students through outreach activities.

The technical focus of this project will be on the development of metric-dependent strategies to improve the stability and computational efficiency of solving inverse problems. Lipschitz-type stability will be established by selecting metrics tailored to the data and unknown parameters to facilitate more robust algorithmic solutions. A key highlight of the project will be the investigation of the stochastic inverse problem's well-posedness. Sampling methods inspired by metric-dependent gradient flows will serve as the novel computational tool for the practical solution of stochastic inverse problems. These analytical and computational strategies will be designed to handle the randomness inherent in many practical scenarios, shifting the traditional deterministic approach for solving inverse problems to a probabilistic framework that better captures the intricacies of real-world data. This research has the promise to not only advance theoretical knowledge in studying inverse problems but also to develop practical, efficient tools for a wide range of applications in science and engineering.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2432134","Collaborative Research: Computational Methods for Optimal Transport via Fluid Flows","DMS","COMPUTATIONAL MATHEMATICS","05/15/2024","05/17/2024","Yangwen Zhang","LA","University of Louisiana at Lafayette","Continuing Grant","Yuliya Gorb","06/30/2025","$56,877.00","","yangwen.zhang@louisiana.edu","104 E UNIVERSITY AVE","LAFAYETTE","LA","705032014","3374825811","MPS","127100","9150, 9263","$0.00","Transport and mixing in fluids is a topic of fundamental interest in engineering and natural sciences, with broad applications ranging from industrial and chemical mixing on small and large scales, to preventing the spreading of pollutants in geophysical flows. This project focuses on computational methods for control of optimal transport and mixing of some quantity of interest in fluid flows. The question of what fluid flow maximizes mixing rate, slows it down, or even steers a quantity of interest toward a desired target distribution draws great attention from a broad range of scientists and engineers in the area of complex dynamical systems. The goal of this project is to place these problems within a flexible computational framework, and to develop a solution strategy based on optimal control tools, data compression strategies, and methods to reduce the complexity of the mathematical models. This project will also help the training and development of graduate students across different disciplines to conduct collaborative research in optimal transport and mixing, flow control, and computational methods for solving these problems.


The project is concerned with the development and analysis of numerical methods for optimal control for mixing in fluid flows. More precisely, the transport equation is used to describe the non-dissipative scalar field advected by the incompressible Stokes and Navier-Stokes flows. The research aims at achieving optimal mixing via an active control of the flow velocity and constructing efficient numerical schemes for solving this problem. Various control designs will be investigated to steer the fluid flows. Sparsity of the optimal boundary control will be promoted via a non-smooth penalty term in the objective functional. This essentially leads to a highly challenging nonlinear non-smooth control problem for a coupled parabolic and hyperbolic system, or a semi-dissipative system. The project will establish a novel and rigorous mathematical framework and also new accurate and efficient computational techniques for these difficult optimal control problems. Compatible discretization methods for coupled flow and transport will be employed to discretize the controlled system and implement the optimal control designs numerically. Numerical schemes for the highly complicated optimality system will be constructed and analyzed in a systematic fashion. New incremental data compression techniques will be utilized to avoid storing extremely large solution data sets in the iterative solvers, and new model order reduction techniques specifically designed for the optimal mixing problem will be developed to increase efficiency. The synthesis of optimal control and numerical approximation will enable the study of similar phenomena arising in many other complex and real-world flow dynamics.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2347546","Conference: Mathematical models and numerical methods for multiphysics problems","DMS","COMPUTATIONAL MATHEMATICS","01/15/2024","01/10/2024","Ivan Yotov","PA","University of Pittsburgh","Standard Grant","Troy D. Butler","12/31/2024","$30,000.00","","yotov@math.pitt.edu","4200 FIFTH AVENUE","PITTSBURGH","PA","152600001","4126247400","MPS","127100","7556, 9263","$0.00","The Mathematical Models and Numerical Methods for Multiphysics Problems Conference is held May 1-3, 2024 at the University of Pittsburgh in Pittsburgh, PA. The conference aims to bring together experts from different communities in which computational models for multiphysics systems are employed. Multiphysics systems model the physical interactions between two or more media, such as couplings of fluid flows, rigid or deformable porous media, and elastic structures. Typical examples are coupling of free fluid and porous media flows, fluid-structure interaction, and fluid-poroelastic structure interaction. Applications of interest include climate modeling, interaction of surface and subsurface hydrological systems, fluid flows through fractured or deformable aquifers or reservoirs, evolution of soil structures, arterial flows, perfusion of living tissues, and organ modeling, such as the heart, lungs, and brain. The work presented at the conference will cover both rigorous mathematical and numerical analysis and applications to cutting-edge problems.

The mathematical models describing the multiphysics systems of interest consist of couplings of complex systems of partial differential equations. Examples include the Stokes/Navier-Stokes equations for free fluid flows, the linear or nonlinear elasticity equations for structure mechanics, the Darcy equations for porous media flows, and the Biot equations for poroelasticity. Physical phenomena occurring in different regions are coupled through kinematic and dynamic interface conditions. The modeling and simulation process involves well-posedness analysis of the mathematical models, design and analysis of stable, accurate, and robust numerical methods, and development of efficient solution strategies. Despite significant progress in recent years, many challenges remain in all three areas. Examples include, on the mathematical modeling side, the nonlinear advection term in the Navier Stokes equations in coupled settings, nonlinear fully-coupled flow-transport models, nonlinear diffusion, mobility, and elastic parameters, and non-isothermal effects; on the numerical side, structure preserving and parameter robust discretization methods, a posteriori error estimation and mesh adaptivity in both space and time, multiscale and reduced order models; on the solution side, stable and higher-order loosely-coupled time splitting methods, domain decomposition methods, and parameter-robust monolithic solvers and preconditioners. The conference will bring together experts in the field who are actively working to address these challenges. It will provide an environment for them to discuss state-of-the-art results and trends and encourage future collaborations and research directions. The conference website is https://www.mathematics.pitt.edu/events/mathematical-models-and-numerical-methods-multiphysics-systems

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." @@ -44,6 +49,3 @@ "2340631","CAREER: Solving Estimation Problems of Networked Interacting Dynamical Systems Via Exploiting Low Dimensional Structures: Mathematical Foundations, Algorithms and Applications","DMS","APPLIED MATHEMATICS, COMPUTATIONAL MATHEMATICS","09/01/2024","02/02/2024","Sui Tang","CA","University of California-Santa Barbara","Continuing Grant","Stacey Levine","08/31/2029","$241,397.00","","suitang@math.ucsb.edu","3227 CHEADLE HALL","SANTA BARBARA","CA","931060001","8058934188","MPS","126600, 127100","079Z, 1045, 9263","$0.00","Networked Interacting Dynamical Systems (NetIDs) are ubiquitous, displaying complex behaviors that arise from the interactions of agents or particles. These systems have found applications in diverse fields, including ecology, engineering, and social sciences, yet their high-dimensional nature makes them challenging to study. This often leads to significant theoretical and computational difficulties, known as the “curse of dimensionality.” Recent advances in applied mathematics have shed light on these complexities, revealing that complex NetID patterns can arise from low dimensional interactions. Building on these insights, this project is dedicated to developing a theoretical and computational framework to address the estimation problems within these models by exploiting the underlying low dimensional structures. The overarching goal is to create efficient, physically interpretable surrogate models that bridge the gap between qualitative analysis and quantitative data-driven applications, ranging from sensor network optimization to modeling the environmental and climate impacts on fish migration. 
This research program will provide research opportunities for both undergraduate and graduate students, featuring a graduate summer school at the intersection of NetIDs and machine learning. There will be a particular focus on engaging female and underrepresented minority students in this vibrant field, blending machine learning with differential equations. The project's findings will also enrich mathematical data science course materials for both undergraduate and graduate education.

This project aims to make fundamental mathematical, statistical, and computational advances for solving NetIDs' estimation problems. The research will focus on three primary areas: (1) Developing innovative sampling strategies for optimal data recovery in NetIDs with linear interactions by exploiting their inherent low-dimensionality in terms of sparsity, smoothness, low-rankness. (2) Establishing robust statistical estimation of NetIDs with nonlinear time-varying interactions by combining machine learning, numerical analysis, and functional data analysis to create physically consistent estimators that bypass the “curse of dimensionality,” while exploring the identifiability and convergence as sample sizes increase. (3) Investigating the statistical predictive properties of Graph Neural Differential Equations, aiming to derive upper bounds for their transferability and generalization error. The results of this project are expected to address the computational challenges of large-scale Graph Neural Networks and bridge theory and practice in NetIDs research.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2427204","CAREER: Adapting the Fluid Projection Method to Model Elasto-plastic Materials","DMS","COMPUTATIONAL MATHEMATICS","04/15/2024","04/16/2024","Christopher Rycroft","WI","University of Wisconsin-Madison","Continuing Grant","Yuliya Gorb","06/30/2024","$23,580.00","","rycroft@wisc.edu","21 N PARK ST STE 6301","MADISON","WI","537151218","6082623822","MPS","127100","1045, 9263","$0.00","There are two main ways that materials deform under an applied force. The deformation can be elastic, so that when the force is removed the material recovers its original shape. Alternatively, the deformation can be plastic, whereby the material undergoes irreversible changes that may subsequently lead to breakage. Many materials of technological importance exhibit a combination of these two types of deformation depending on the applied force, and are called elasto-plastic. One example is bulk metallic glasses (BMGs), which are alloys that have an amorphous atomic arrangement in contrast to most metals. BMGs have desirable properties, such as the ability to be processed like plastics, making them attractive candidates for many applications (e.g. next-generation smartphone cases) due to considerable improvements in manufacturing efficiency. However, experimental measurements of BMG breakage properties show wide variations, limiting their usage. To overcome these limitations, it is essential to develop predictive theoretical and computational models of BMG elasto-plasticity. This project is based on a surprising similarity between the equations for elasto-plastic materials and the equations for incompressible fluids. Using this similarity, computational approaches that were originally developed for fluid flow will be translated to elasto-plasticity. 
These computational methods will be used in collaboration with theorists and experimentalists to study the fracture properties of BMGs. The ultimate aim is to provide a practical engineering tool for predicting when elasto-plastic materials will break, and how to best design structures using them. This work will be undertaken as part of an integrated program of research, teaching, and mentorship, and will involve outreach activities in New England, including a local library lecture series.

The projection method of Chorin (1968) is a well-established approach for simulating the incompressible Navier-Stokes equations for fluid flow. This proposal is based on a surprising mathematical correspondence between fluids in the incompressible limit and elasto-plastic solids in the quasi-static limit (when inertia can be neglected). In this proposal, this correspondence is harnessed to translate several modern numerical approaches derived from Chorin's projection method to quasi-static elasto-plasticity, resulting in a practical and powerful set of new simulation tools for a different class of physical problem. Compared to existing techniques, the resultant numerical methods are likely to be especially well-suited to problems involving large plastic deformations. An example type of elasto-plastic material is the bulk metallic glasses (BMGs), which are alloys with many favorable properties such as excellent strength and wear resistance. The numerical methods developed here will be used in a collaboration with theorists and experimentalists to study the fracture toughness properties of BMGs, with the aim of predicting BMG toughness over a wide range of experimental conditions. The PI plans to expand the graduate curriculum in numerical methods to address a pressing need in this area. Open source software will be released as part of this project, and the PI will train students in best practices to make software accessible to a broad audience.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2409989","Design and Analysis of Structure Preserving Discretizations to Simulate Pattern Formation in Liquid Crystals and Ferrofluids","DMS","COMPUTATIONAL MATHEMATICS","01/15/2024","01/19/2024","Franziska Weber","CA","University of California-Berkeley","Standard Grant","Yuliya Gorb","06/30/2024","$22,159.00","","fweber@berkeley.edu","1608 4TH ST STE 201","BERKELEY","CA","947101749","5106433891","MPS","127100","9263","$0.00","Complex fluids are mixtures that have a coexistence between two phases. Some examples include shaving cream, blood, and the liquid crystals used in displays (LCD displays) like the one you are probably using right now to read this abstract. On a microscopic scale, the molecules of complex fluids have a special structure, which at a macroscopic scale affects the mechanical response to stress and strain. For instance, the molecules of liquid crystals react to electric fields on a microscopic scale, which on a macroscopic scale changes the polarization of the light passing through the material. Monitors take advantage of this property to allow a certain amount of red, green, or blue light through each pixel. We have barely scratched the surface of what is possible to achieve with complex fluids. Medical researchers hope to exploit the microscopic properties of ferrofluids for magnetic drug targeting, to control with precision the parts of the human body the drug is able to interact with. Materials engineers hope to use complex fluids to assemble nano-structures such as the silicon circuits in CPUs. Mathematical models and computer simulations can be used to describe the dynamics of these fluids. The goal of this research project is to design and analyze new computational algorithms that simulate the behavior of liquid crystals and ferrofluids. 
The algorithms will be used in simulations which may complement and ultimately replace expensive physical experiments. This research activity may also contribute to our general understanding of pattern formation in complex materials.

Mathematical models for ferrofluids and liquid crystals consist of systems of partial differential equations. Due to the inherent fine scale structure of the fluids under consideration, these partial differential equations are highly nonlinear and coupled. Preserving discrete versions of energy balances, length and other constraints of the solutions of these nonlinear partial differential equations is crucial for obtaining fast and stable numerical schemes that capture realistic scenarios of their dynamics. The aim of this research project is to develop efficient and convergent finite volume and discontinuous Galerkin methods for the Rosensweig model of ferrohydrodynamics, multi-phase flow models of ferrofluids, and models of liquid crystal flows, that mimic the intrinsic structure of the underlying partial differential equations at the discrete level. The resulting algorithms will be implemented and used for extensive simulations to compare to physical observations.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." -"2337678","CAREER: Gaussian Processes for Scientific Machine Learning: Theoretical Analysis and Computational Algorithms","DMS","APPLIED MATHEMATICS, COMPUTATIONAL MATHEMATICS","06/01/2024","03/11/2024","Bamdad Hosseini","WA","University of Washington","Continuing Grant","Yuliya Gorb","05/31/2029","$120,000.00","","bamdadh@uw.edu","4333 BROOKLYN AVE NE","SEATTLE","WA","981951016","2065434043","MPS","126600, 127100","079Z, 1045, 9263","$0.00","Machine learning and artificial intelligence are increasingly changing our lives at every scale, from our personal day-to-day activities to large scale shifts in our society, economy, and geopolitics. These technologies have also profoundly transformed sciences with new ideas and algorithms being developed at an immense speed. However, our mathematical understanding of these algorithms still lags far behind their practical development and widespread adoption. Put simply, in many cases we do not know how or why machine learning algorithms work so well, which in turn limits our ability to safely deploy them in safety-critical engineering and scientific scenarios. The goal of this project is to develop mathematical formalism and understanding of problems at the intersection of machine learning and science (i.e., scientific machine learning) using rigorous mathematics, leading to novel algorithms and computational methodologies that are interpretable, supported by rigorous theory, and aware of uncertainties.

The project is focused on the development of novel Gaussian Process (GP) based computational frameworks for scientific machine learning that are provably well-posed, robust, and stable, thereby meeting the high standards of scientific computing. The developed methodologies will be capable of rigorous uncertainty quantification and inherit the desirable properties of machine learning algorithms such as flexibility, generalizability, and applicability in high-dimensions. The efforts of the project are directed in three primary directions: (1) GPs for solving nonlinear, high-dimensional and parametric PDEs; (2) GPs for operator learning, emulation, and physics discovery; and (3) GPs for high-dimensional sampling, inference, and generative modeling. Each research direction focuses on the development of algorithms, foundational theory, and concrete applications in engineering and science. The project also contains an extensive education plan focused on machine learning and data science education from high-school through graduate levels with extensive opportunities for training of graduate and undergraduate students as well as local and international outreach.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." -"2424305","Comparative Study of Finite Element and Neural Network Discretizations for Partial Differential Equations","DMS","COMPUTATIONAL MATHEMATICS","03/15/2024","03/15/2024","Jonathan Siegel","TX","Texas A&M University","Continuing Grant","Yuliya Gorb","07/31/2025","$140,889.00","","jwsiegel@tamu.edu","400 HARVEY MITCHELL PKY S STE 30","COLLEGE STATION","TX","778454375","9798626777","MPS","127100","079Z, 9263","$0.00","This research connects two different fields, machine learning from data science and numerical partial differential equations from scientific and engineering computing, through the comparative study of the finite element method and finite neuron method. Finite element methods have undergone decades of study by mathematicians, scientists and engineers in many fields and there is a rich mathematical theory concerning them. They are widely used in scientific computing and modelling to generate accurate simulations of a wide variety of physical processes, most notably the deformation of materials and fluid mechanics. By contrast, deep neural networks are relatively new and have only been widely used in the last decade. In this short time, they have demonstrated remarkable empirical performance on a wide variety of machine learning tasks, most notably in computer vision and natural language processing. Despite this great empirical success, there is still a very limited mathematical understanding of why and how deep neural networks work so well. We hope to leverage the success of deep learning to improve numerical methods for partial differential equations and to leverage the theoretical understanding of the finite element method to better understand deep learning. 
The interdisciplinary nature of the research will also provide a good training experience for junior researchers. This project will support one graduate student each year of the three-year project.

Piecewise polynomials represent one of the most important functional classes in approximation theory. In classical approximation theory and numerical methods for partial differential equations, these functional classes are often represented by linear functional spaces associated with a priori given grids, for example, by splines and finite element spaces. In deep learning, function classes are typically represented by a composition of a sequence of linear functions and coordinate-wise non-linearities. One important non-linearity is the rectified linear unit (ReLU) function and its powers (ReLUk). The resulting functional class, ReLUk-DNN, does not form a linear vector space but is rather parameterized non-linearly by a high-dimensional set of parameters. This function class can be used to solve partial differential equations and we call the resulting numerical algorithms the finite neuron method (FNM). Proposed research topics include: error estimates for the finite neuron method, universal construction of conforming finite elements for arbitrarily high order partial differential equations, an investigation into how and why the finite neuron method gives a much better asymptotic error estimate than the corresponding finite element method, and the development and analysis of efficient algorithms for using the finite neuron method.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." -"2417818","Conference: NSF Computational Mathematics PI Meeting 2024","DMS","COMPUTATIONAL MATHEMATICS","06/01/2024","02/09/2024","Bamdad Hosseini","WA","University of Washington","Standard Grant","Yuliya Gorb","11/30/2024","$99,999.00","Beatrice Riviere, Akil Narayan","bamdadh@uw.edu","4333 BROOKLYN AVE NE","SEATTLE","WA","981951016","2065434043","MPS","127100","7556, 9263","$0.00","The NSF Computational Mathematics (CompMath) PIs' meeting is the first such event in over a decade and will be held on July 15 and 16, 2024, at the University of Washington in Seattle. The meeting is open to all with interest in CompMath, with a particular emphasis on attracting current and recent PIs of the CompMath program, as well as early career researchers such as junior professors, postdocs, and Ph.D. students working in CompMath. The meeting aims to achieve three primary goals: (1) Community building in CompMath by bringing together the CompMath community to strengthen and expand the network of NSF-supported PIs. Parallel research sessions serve as a platform to highlight the achievements of the PIs and the CompMath program; (2) Engagement of early career scientists through a poster session designed to highlight their works, activities aimed at the professional development of early career researchers, and travel support for these attendees; (3) A forward-looking report will be compiled by the organizers by leveraging community input that presents an overview of discussions during the meeting and summarizes any strategic community recommendations and outcomes from the meeting focusing on the major achievements of CompMath, current challenges, and exciting new research directions.

The CompMath disciplinary program at NSF supports a variety of research endeavors, which include more classical foci of numerical analysis, PDE solving algorithms, and mathematical optimization, along with the more recent and/or nascent subfields of randomized linear algebra, computational imaging, and mathematical aspects of data science and machine learning. Investigators from these subfields indeed attend their own discipline-specific conferences and professional events and also take opportunities to attend meetings in related fields. However, there is a dearth of meeting opportunities between a large collection of researchers who focus on the broad collection of foundational research questions in CompMath. Because the community of PIs supported by project awards from the NSF CompMath program does not have targeted events/conferences where the community broadly meets, the main motivation for this proposal is to organize such a professional meeting. The PI meeting will feature focused presentation sessions where PIs will highlight their NSF-supported work in the previously described technical research areas.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." diff --git a/Statistics/Awards-Statistics-2024.csv b/Statistics/Awards-Statistics-2024.csv index f2ffebc..e4898b2 100644 --- a/Statistics/Awards-Statistics-2024.csv +++ b/Statistics/Awards-Statistics-2024.csv @@ -1,7 +1,10 @@ "AwardNumber","Title","NSFOrganization","Program(s)","StartDate","LastAmendmentDate","PrincipalInvestigator","State","Organization","AwardInstrument","ProgramManager","EndDate","AwardedAmountToDate","Co-PIName(s)","PIEmailAddress","OrganizationStreet","OrganizationCity","OrganizationState","OrganizationZip","OrganizationPhone","NSFDirectorate","ProgramElementCode(s)","ProgramReferenceCode(s)","ARRAAmount","Abstract" -"2413630","Experimental Design, Uncertainty Quantification and Decision Making for Complex Systems","DMS","OFFICE OF MULTIDISCIPLINARY AC, STATISTICS","08/01/2024","07/15/2024","Qiong Zhang","SC","Clemson University","Standard Grant","Yong Zeng","07/31/2027","$170,000.00","","qiongz@clemson.edu","201 SIKES HALL","CLEMSON","SC","296340001","8646562424","MPS","125300, 126900","075Z, 079Z, 9150","$0.00","Modern engineering and scientific research often involve carrying out experiments with complex systems to solve critical decision-making problems. For example, a complex system characterizing a manufacturing process can be used to determine the optimal material choices and manufacturing procedures in different stages. Since these complex systems may include large-scale computer models and expensive experiments across multiple platforms, it is critical to provide high-quality decisions under limited experimental resources. This award supports fundamental research on an experimental design and uncertainty quantification framework for utilizing structural information of complex systems to address decision-making problems. 
This framework has the potential to lead to significant improvement in the efficiency and effectiveness of learning and decision-making of complex systems in various engineering and scientific fields. The computational outcome will be made available as open-source software. Additionally, the results from this project will be used in educational activities to demonstrate how experimental design can enhance efficiency in information collection and decision-making for future engineers and scientists.

Motivated by real applications, the project will investigate three specific types of structural information that appear in complex systems: (1) systems that include coupled and/or multiple experimental platforms; (2) systems that exhibit scientific connections among different outputs; and (3) systems that consist of a combination of subsystems characterizing different stages of a complex process. This project will incorporate these types of structural information into statistical surrogates and data acquisition for new sequential experimental algorithms to extend Bayesian optimization approaches developed for 'black-box' systems. Theoretical results on optimal experimental design strategies for different components of the complex systems will be established to guide sequential data acquisition. The project investigator also plans to develop new decision uncertainty quantification methods to effectively assess the quality of the decisions.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2413294","Objective and reliable methods for inference from modern omics data","DMS","STATISTICS","09/01/2024","07/19/2024","Aaron Molstad","MN","University of Minnesota-Twin Cities","Standard Grant","Jun Zhu","08/31/2027","$150,000.00","","amolstad@umn.edu","200 OAK ST SE","MINNEAPOLIS","MN","554552009","6126245599","MPS","126900","","$0.00","Modern 'omics' (e.g., transcriptomics or proteomics) studies often generate data using single-cell or spatially-resolved sequencing technologies. These technologies enable researchers to study, for example, the spatial variation of gene expression across cells or tissues, offering a high-resolution perspective of complex biological dynamics. This perspective allows researchers to better understand disease mechanisms and can lead to the development of novel treatments. However, the data generated by these technologies are high-dimensional and dependent, which can complicate statistical inference. Existing inferential methods are often subjective or unreliable, either requiring user input that may bias or invalidate results, or requiring rigid model assumptions that are frequently violated in practice. This project will address these issues by developing statistical methods that do not rely on user input, and work reliably in more general settings than existing methods. The new methods will be theoretically justified and equipped with fast computational algorithms. Software implementing these methods will be made publicly available, enabling their wide use in academia and industry. The project will also provide training opportunities for both graduate and undergraduate students.

This project develops new statistical methods for inference with high-dimensional dependent data, motivated by challenges in analyzing single-cell and spatially-resolved sequencing data. Specific challenges include the failure of traditional inferential methods when the parameter is at or near the boundary of the parameter space; the need to both generate and test hypotheses from the same data without inflating Type I error rates; and insufficient model flexibility and scalability. The investigator will address each of these issues directly by (i) developing a new test procedure that resolves a well-known challenge of constructing confidence regions for variance components (or functions thereof) near zero; (ii) providing a unified approach for valid post-clustering inference with high-dimensional data from a broad class of distributions; and (iii) developing a general class of penalized mixture models that accommodates multiple latent sources of heterogeneity. The methodological developments in this project lay the groundwork for more general methods addressing more broad challenges in inference near the boundary of the parameter space, post-selection inference, and modeling heterogeneous high-dimensional data.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2412661","Self-Normalized Inference for High-Dimensional Time Series","DMS","STATISTICS","08/01/2024","07/19/2024","Ting Zhang","GA","University of Georgia Research Foundation Inc","Standard Grant","Jun Zhu","07/31/2027","$175,000.00","","tingzhang@uga.edu","310 E CAMPUS RD RM 409","ATHENS","GA","306021589","7065425939","MPS","126900","","$0.00","The project aims to initiate a new paradigm for statistical inference of high-dimensional time series. High-dimensional time series refer to a sequence of large dimensional data collected over time, and examples include large panel data in economics, functional magnetic resonance imaging data in neuroscience, stock price data for a large set of companies in finance, cellular usage data over time for a large number of users in telecommunication, high-resolution spatio-temporal data in climate science, and many others. An intrinsic feature of high-dimensional time series data is the temporal dependence among high-dimensional data vectors collected at different time points, while many existing results on high-dimensional data analysis require such vectors to be independent of each other. By allowing dependence to exist not only among different components of the data vector at any given time point but also among data vectors collected at different time points, the project results are expected to lead to a new paradigm for statistical inference and uncertainty quantification of high-dimensional time series. In addition, the project products are expected to be transformative and useful in a wide range of applications to provide practitioners with a powerful and convenient statistical toolbox for scientific discoveries involving the analysis of high-dimensional time series data. The research developed is expected to be integrated into undergraduate and graduate education, equipping students with advanced yet accessible statistical tools for analyzing high-dimensional time series data.

The project involves the development of a new paradigm to quantify the accumulative uncertainty of self-normalized statistics over an increasing dimension, and a number of its guided statistical inference problems and real applications. Self-normalization refers to the technique of using recursive estimators to pivotalize the asymptotic distribution of the statistic of interest, which has enjoyed considerable development in the low-dimensional setting. Its extension to the high-dimensional setting, however, can be a nontrivial challenge, and directly applying self-normalization to high-dimensional objects can lead to singularities or substantial size distortions. A major gap that prevented self-normalized statistics from prevailing in high-dimensional time series analysis is their non-standard limiting distribution, which has been mostly tabulated through numerical approximations. To address this, the project seeks a new approach in connection with harmonic analysis to provide an analytical characterization of how the uncertainty of self-normalized distributions accumulates over an increasing dimension. The results would then guide the development of various statistical methods and their asymptotic theory for self-normalized inference of high-dimensional time series. These include, for example, self-normalized high-dimensional feature selection, simultaneous uncertainty quantification of high-dimensional objects, and extensions to general quantities such as the median, variance, quantiles, autocovariances, ratio statistics, and others.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2426575","A Consensus Study on Frontiers of Statistics in Science and Engineering: 2035 and Beyond","DMS","STATISTICS, IUSE, Climate & Large-Scale Dynamics, Networking Technology and Syst, Ecosystem Science, Comm & Information Foundations, ECR-EDU Core Research, DMREF","09/01/2024","07/19/2024","Michelle Schwalbe","DC","National Academy of Sciences","Continuing Grant","Yong Zeng","08/31/2026","$400,000.00","","mschwalbe@nas.edu","2101 CONSTITUTION AVE NW","WASHINGTON","DC","204180007","2023342254","MPS","126900, 199800, 574000, 736300, 738100, 779700, 798000, 829200","054Z, 094Z, 095Z, 1269, 4444, 7556, 8400","$0.00","This project intends to explore the vitality of research in the field of statistics and consider the future of the discipline. The study will highlight the significance of recent discoveries, the rate of progress at the frontiers of statistics, and emerging trends and fields, and will address developments in the realm of deep learning, artificial intelligence, and data science, as well as the statistical questions that motivate this growth. Additionally, the role of statistics in the interactions between mathematics, engineering, and computer science will be examined. The report will also illustrate the importance of statistics in key research domains, such as public health and medicine, and materials science, as well as the impact of research and training in the statistical sciences on science and engineering. This study aims to foster a forward-looking discussion about the state of the statistical sciences and emerging opportunities for the discipline and its stakeholders, including applications relevant to industry and technology, innovation and economic competitiveness, the internet, health and well-being, national security, and other areas of national interest. NSA and NIH also contribute to the study.

The National Academies will assemble an interdisciplinary ad hoc committee to produce a forward-looking assessment of the current state of the statistical sciences and emerging opportunities for the discipline and its stakeholders as they look ahead to 2035 and beyond. The study may make recommendations to funding agencies on how to adjust and expand their portfolios of activities and partnerships to understand the evolution of the field and strengthen the impact of the discipline. Specifically, the study will assess the state of research in the statistical sciences, examining aspects such as the significance of recent developments and highlighting current and emerging trends, challenges, and directions. It will further assess statistical research in allied fields, and interactions between the statistical sciences and the mathematical, computational, engineering, materials science, and related fields such as biostatistics, probability, machine learning, artificial intelligence, and data science. The study will also examine the statistical needs to support scientific and technological advances, including the importance of statistical sciences and the strength of interdisciplinary collaborations, as well as the role of the statistical sciences in key research domains for American competitiveness, such as manufacturing, materials science, blockchain, biomedical and biological sciences, public health, medicine, health equity, geosciences, environmental health and science, astronomy, and energy applications. Finally, the study will assess statistical education, training, and workforce development.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2413107","Collaborative Research: Multi-source Learning: Data-driven Algorithms, Optimality Theory, and Applications","DMS","STATISTICS","08/01/2024","07/17/2024","Zijian Guo","NJ","Rutgers University New Brunswick","Continuing Grant","Tapabrata Maiti","07/31/2027","$40,864.00","","zijguo@stat.rutgers.edu","3 RUTGERS PLZ","NEW BRUNSWICK","NJ","089018559","8489320150","MPS","126900","","$0.00","Massive and diverse high-dimensional datasets are now routinely collected in a wide range of scientific fields. In many instances, in addition to the primary data from the target study, other datasets from different populations or under different environments with a similar structure to the primary data have been collected. Incorporating such related auxiliary data is desirable to make more accurate and informative decisions. For example, the availability of large-scale genomic and proteomic data promises a better understanding of disease processes and suggests the possibility of more accurate prediction of disease outcomes. Efficiently extracting meaningful information from multiple such datasets becomes a critical problem in medical research, which presents unprecedented opportunities to statisticians and data scientists. The project's goal is to devise a collection of advanced statistical tools for efficient integrative analysis of EHR and genomics data.

The PIs aim to address the pressing need for novel statistical methods to perform efficient integrative analysis that combines multiple data sources. The PIs plan to develop new methodologies and optimality theory for efficiently integrating large-scale data from multiple sources and to address critical biomedical problems using the newly developed methods. There are three major research goals to be pursued. One is to develop data-driven algorithms with theoretical optimality guarantees for transfer learning in various settings, including estimation and inference of high-dimensional covariance matrices, covariance functions for functional data, instrumental variable regression, and conformal inference. The second is to develop a class of adversarially robust algorithms that efficiently integrate the heterogeneous information from the multi-source data, including constructing the guided adversarially robust learning and conducting the group significance test for high-dimensional and nonparametric models. The third is to address the urgent needs and new challenges in biomedical studies through the analyses of EHR data and integrative genomics, using the newly developed methods for transfer learning and adversarially robust learning.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2413106","Collaborative Research: Multi-source Learning: Data-driven Algorithms, Optimality Theory, and Applications","DMS","STATISTICS","08/01/2024","07/17/2024","T. Tony Cai","PA","University of Pennsylvania","Standard Grant","Tapabrata Maiti","07/31/2027","$250,000.00","","tcai@wharton.upenn.edu","3451 WALNUT ST STE 440A","PHILADELPHIA","PA","191046205","2158987293","MPS","126900","","$0.00","Massive and diverse high-dimensional datasets are now routinely collected in a wide range of scientific fields. In many instances, in addition to the primary data from the target study, other datasets from different populations or under different environments with a similar structure to the primary data have been collected. Incorporating such related auxiliary data is desirable to make more accurate and informative decisions. For example, the availability of large-scale genomic and proteomic data promises a better understanding of disease processes and suggests the possibility of more accurate prediction of disease outcomes. Efficiently extracting meaningful information from multiple such datasets becomes a critical problem in medical research, which presents unprecedented opportunities to statisticians and data scientists. The project's goal is to devise a collection of advanced statistical tools for efficient integrative analysis of EHR and genomics data.

The PIs aim to address the pressing need for novel statistical methods to perform efficient integrative analysis that combines multiple data sources. The PIs plan to develop new methodologies and optimality theory for efficiently integrating large-scale data from multiple sources and to address critical biomedical problems using the newly developed methods. There are three major research goals to be pursued. One is to develop data-driven algorithms with theoretical optimality guarantees for transfer learning in various settings, including estimation/inference of high-dimensional covariance matrices, covariance functions for functional data, instrumental variable regression, and conformal inference. The second is to develop a class of adversarially robust algorithms that efficiently integrate the heterogeneous information from the multi-source data, including constructing the guided adversarially robust learning and conducting the group significance test for high-dimensional and nonparametric models. The third is to address the urgent needs and new challenges in biomedical studies through the analyses of EHR data and integrative genomics, using the newly developed methods for transfer learning and adversarially robust learning.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2413630","Experimental Design, Uncertainty Quantification and Decision Making for Complex Systems","DMS","OFFICE OF MULTIDISCIPLINARY AC, STATISTICS","08/01/2024","07/15/2024","Qiong Zhang","SC","Clemson University","Standard Grant","Yong Zeng","07/31/2027","$170,000.00","","qiongz@clemson.edu","201 SIKES HALL","CLEMSON","SC","296340001","8646562424","MPS","125300, 126900","075Z, 079Z, 9150","$0.00","Modern engineering and scientific research often involve carrying out experiments with complex systems to solve critical decision-making problems. For example, a complex system characterizing a manufacturing process can be used to determine the optimal material choices and manufacturing procedures in different stages. Since these complex systems may include large-scale computer models and expensive experiments across multiple platforms, it is critical to provide high-quality decisions under limited experimental resources. This award supports fundamental research on an experimental design and uncertainty quantification framework for utilizing structural information of complex systems to address decision-making problems. This framework has the potential to lead to significant improvement in the efficiency and effectiveness of learning and decision-making of complex systems in various engineering and scientific fields. The computational outcome will be made available as open-source software. Additionally, the results from this project will be used in educational activities to demonstrate how experimental design can enhance efficiency in information collection and decision-making for future engineers and scientists.

Motivated by real applications, the project will investigate three specific types of structural information that appear in complex systems: (1) systems that include coupled and/or multiple experimental platforms; (2) systems that exhibit scientific connections among different outputs; and (3) systems that consist of a combination of subsystems characterizing different stages of a complex process. This project will incorporate these types of structural information into statistical surrogates and data acquisition for new sequential experimental algorithms to extend Bayesian optimization approaches developed for 'black-box' systems. Theoretical results on optimal experimental design strategies for different components of the complex systems will be established to guide sequential data acquisition. The project investigator also plans to develop new decision uncertainty quantification methods to effectively assess the quality of the decisions.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2413928","Collaborative Research: Network Analysis via Optimal Transport of Markov Embeddings","DMS","STATISTICS","08/01/2024","07/16/2024","Andrew Nobel","NC","University of North Carolina at Chapel Hill","Standard Grant","Yulia Gel","07/31/2027","$275,000.00","Sreekalyani Bhamidi","nobel@email.unc.edu","104 AIRPORT DR STE 2200","CHAPEL HILL","NC","275995023","9199663411","MPS","126900","1269, 9263","$0.00","Networks have been used for many years to model and study a variety of phenomena, including social interactions, co-authorship of scholarly work, and financial interactions. More recently, networks themselves have become objects of study. This research project will develop new statistical approaches for comparing and aligning networks, and will include applications of these methods to problems in computational neuroscience, systems biology, and urban planning. This research will advance the state of network analysis by providing new statistical methods as well as theoretical support for these methods. The broader impacts of the project include applications, collaborations with disciplinary scientists, educational outreach from high school to the graduate level, and community outreach.

This research project focuses on the statistical analysis of network data, including the design of new methods, the development of rigorous theoretical support for these methods, and the application of these methods in several relevant scientific domains. The project has four specific objectives. First, investigate optimal transport-based distances for Markov embeddings of networks. Second, develop new methods for network alignment and comparison based on these distances, and apply these or new methods to the problems of model fitting, classification, and node feature prediction. Third, establish rigorous theoretical results concerning the properties of optimal transport distances on networks, investigate relationships between distances and different embedding procedures, and provide theoretical support for the associated methods. Fourth, apply the methods to address problems in computational neuroscience, systems biology, and urban planning. This research will bring together ideas from Markov chains and optimal transport in the setting of network analysis, and the applications of this research will involve the development of efficient, scalable algorithms for analyzing network data.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2413924","Deep Learning for Survival Analysis, Causal Inference, and Conformal Inference","DMS","OFFICE OF MULTIDISCIPLINARY AC, STATISTICS","07/15/2024","07/15/2024","Jane-Ling Wang","CA","University of California-Davis","Standard Grant","Jun Zhu","06/30/2027","$250,000.00","","janelwang@ucdavis.edu","1850 RESEARCH PARK DR STE 300","DAVIS","CA","956186153","5307547700","MPS","125300, 126900","079Z","$0.00","This research project will deploy deep learning to three mainstream research areas in statistics: survival analysis, causal inference, and conformal inference. Survival analysis studies event-time data, such as time to death, disease recovery, new employment, or equipment failure. Causal inference aims at assessing the causal effect of one variable (the treatment) on another variable (the outcome), while controlling for potential confounding factors. Conformal inference is a powerful tool to construct prediction intervals without relying on specific assumptions. These three areas have not yet fully benefited from the advantages that deep learning can bring, and the goal of this project is to fill this gap. This research falls squarely in the realm of data-centric AI (artificial intelligence) by tackling some of the key issues where Statistics can make major contributions in the age of AI. A major emphasis is the development of new methodology and theory that will be widely disseminated. The proposed approaches will be applied to various data, including data for breast cancer, hospital care, AIDS studies, and air quality. Codes for the algorithms developed in the project will be posted on CRAN for R or on GitHub for Python. Student researchers will receive training in research, computing and communication skills. The research findings will be incorporated in graduate curricula, undergraduate research projects and short courses at workshops and will be presented at professional meetings.

Project 1 (Deep Learning for Survival Data) will fill a void in deep survival analysis, referring to approaches that employ deep learning for the analysis of incomplete event-time data, by developing hypothesis testing procedures on two fronts: testing the significance of some specific covariates; and goodness-of-fit tests for survival models. A key feature of survival data is that they routinely involve incomplete observations, such as random right censoring, and therefore regression methods must be adjusted to account for such censorship. Deploying existing methods for inference and testing for deep learning approaches is challenging because of the ability of deep learning to detect the null model structure even while performing the optimal search in the full model. Consequently, conventional test statistics will vanish to zero asymptotically even under the null hypothesis. A new framework of hypothesis testing is thus needed to prevent the test statistic from approaching zero under the null hypothesis. To our knowledge, this is the first attempt to perform significance tests for censored survival data when deep learning is employed to model the risk function nonparametrically. Project 2 (Advances in Causal Inference) addresses two problems of high relevance in causal inference: testing for continuous treatments and causal inference for censored survival data. Existing tests for continuous treatment effects fail to attain correct Type-I errors and therefore are not suitable when deploying deep learning. A new test procedure will be designed to resolve this problem with supporting theory. For survival outcomes, the conventional average treatment effect is shown to be ill-suited for causal inference, motivating a new paradigm based on median or other quantiles to quantify treatment effects. Project 3 (A New and Improved Approach for Conformal Inference) explores a better conformal score that leads to improved conditional coverage probabilities compared to existing state-of-the-art score functions. The project will include the first theory for deep learning in conformal inference.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2413929","Collaborative Research: Network Analysis via Optimal Transport of Markov Embeddings","DMS","STATISTICS","08/01/2024","07/16/2024","Kevin McGoff","NC","University of North Carolina at Charlotte","Standard Grant","Yulia Gel","07/31/2027","$149,999.00","","kmcgoff1@uncc.edu","9201 UNIVERSITY CITY BLVD","CHARLOTTE","NC","282230001","7046871888","MPS","126900","1269, 9263","$0.00","Networks have been used for many years to model and study a variety of phenomena, including social interactions, co-authorship of scholarly work, and financial interactions. More recently, networks themselves have become objects of study. This research project will develop new statistical approaches for comparing and aligning networks, and will include applications of these methods to problems in computational neuroscience, systems biology, and urban planning. This research will advance the state of network analysis by providing new statistical methods as well as theoretical support for these methods. The broader impacts of the project include applications, collaborations with disciplinary scientists, educational outreach from high school to the graduate level, and community outreach.

This research project focuses on the statistical analysis of network data, including the design of new methods, the development of rigorous theoretical support for these methods, and the application of these methods in several relevant scientific domains. The project has four specific objectives. First, investigate optimal transport-based distances for Markov embeddings of networks. Second, develop new methods for network alignment and comparison based on these distances, and apply these or new methods to the problems of model fitting, classification, and node feature prediction. Third, establish rigorous theoretical results concerning the properties of optimal transport distances on networks, investigate relationships between distances and different embedding procedures, and provide theoretical support for the associated methods. Fourth, apply the methods to address problems in computational neuroscience, systems biology, and urban planning. This research will bring together ideas from Markov chains and optimal transport in the setting of network analysis, and the applications of this research will involve the development of efficient, scalable algorithms for analyzing network data.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." @@ -21,10 +24,10 @@ "2412832","Collaborative Research: Statistical Modeling and Inference for Object-valued Time Series","DMS","STATISTICS","07/01/2024","06/17/2024","Changbo Zhu","IN","University of Notre Dame","Continuing Grant","Jun Zhu","06/30/2027","$56,755.00","","czhu4@nd.edu","836 GRACE HALL","NOTRE DAME","IN","465566031","5746317432","MPS","126900","","$0.00","Random objects in general metric spaces have become increasingly common in many fields. For example, the intraday return path of a financial asset, the age-at-death distributions, the annual composition of energy sources, social networks, phylogenetic trees, and EEG scans or MRI fiber tracts of patients can all be viewed as random objects in certain metric spaces. For many endeavors in this area, the data being analyzed is collected with a natural ordering, i.e., the data can be viewed as an object-valued time series. Despite its prevalence in many applied problems, statistical analysis for such time series is still in its early development. A fundamental difficulty of developing statistical techniques is that the spaces where these objects live are nonlinear and commonly used algebraic operations are not applicable. This research project aims to develop new models, methodology and theory for the analysis of object-valued time series. Research results from the project will be disseminated to the relevant scientific communities via publications, conference and seminar presentations. The investigators will jointly mentor a Ph.D. student and involve undergraduate students in the research, as well as offering advanced topic courses to introduce the state-of-the-art techniques in object-valued time series analysis.

The project will develop a systematic body of methods and theory on modeling and inference for object-valued time series. Specifically, the investigators propose to (1) develop a new autoregressive model for distributional time series in Wasserstein geometry and a suite of tools for model estimation, selection and diagnostic checking; (2) develop new specification testing procedures for distributional time series in the one-dimensional Euclidean space; and (3) develop new change-point detection methods to detect distribution shifts in a sequence of object-valued time series. The above three projects tackle several important modeling and inference issues in the analysis of object-valued time series, the investigation of which will lead to innovative methodological and theoretical developments, and lay groundwork for this emerging field.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2412403","Robust Extensions to Bayesian Regression Trees for Complex Data","DMS","STATISTICS","08/01/2024","06/17/2024","HENGRUI LUO","TX","William Marsh Rice University","Continuing Grant","Tapabrata Maiti","07/31/2027","$58,710.00","","hl180@rice.edu","6100 MAIN ST","Houston","TX","770051827","7133484820","MPS","126900","","$0.00","This project is designed to extend the capabilities of tree-based models within the context of machine learning. Tree-based models allow for decision-making based on clear, interpretable rules and are widely adopted in diagnostic and learning tasks. This project will develop novel methodologies to enhance their robustness. Specifically, the research will integrate deep learning techniques with tree-based statistical methods to create models capable of processing complex, high-dimensional data from medical imaging, healthcare, and AI sectors. These advancements aim to significantly improve prediction and decision-making processes, enhancing efficiency and accuracy across a broad range of applications. The project also prioritizes inclusivity and education by integrating training components, thereby advancing scientific knowledge and disseminating results through publications and presentations.

The proposed research leverages Bayesian hierarchies and transformation techniques on trees to develop models capable of managing complex transformations of input data. These models will be tailored to improve interpretability, scalability, and robustness, overcoming current limitations in non-parametric machine learning applications. The project will utilize hierarchical layered structures, where outputs from one tree serve as inputs to subsequent trees, forming network architectures that enhance precision in modeling complex data patterns and relationships. Bayesian techniques will be employed to effectively quantify uncertainty and create ensembles, providing reliable predictions essential for critical offline prediction and real-time decision-making processes. This initiative aims to develop pipelines and set benchmarks for the application of tree-based models across diverse scientific and engineering disciplines.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2412015","Statistical methods for point-process time series","DMS","STATISTICS","07/01/2024","06/17/2024","Daniel Gervini","WI","University of Wisconsin-Milwaukee","Standard Grant","Jun Zhu","06/30/2027","$149,989.00","","gervini@uwm.edu","3203 N DOWNER AVE # 273","MILWAUKEE","WI","532113188","4142294853","MPS","126900","","$0.00","This research project will develop statistical models and inference methods for the analysis of random point processes. Random point processes are events that occur at random in time or space according to certain patterns; this project will provide methods for the discovery and analysis of such patterns. Examples of events that can be modelled as random point processes include cyberattacks on a computer network, earthquakes, crimes in a city, spikes of neural activity in humans and animals, car crashes on a highway, and many others. Therefore, the methods to be developed under this project will find applications in many fields, such as national security, economics, neuroscience, and geosciences, among others. The project will also provide training opportunities for graduate and undergraduate students in the field of Data Science.&lt;br/&gt;

This project will specifically develop statistical tools for the analysis of time series of point processes, that is, for point processes that are observed repeatedly over time; for example, when the spatial distribution of crime in a city is observed for several days. These tools will include trend estimation methods, autocorrelation estimation methods, and autoregressive models. Research activities in this project include the development of parameter estimation procedures, their implementation in computer programs, the study of theoretical large sample properties of these methods, the study of small sample properties by simulation, and their application to real-data problems. Other activities in this project include educational activities, such as the supervision of Ph.D. and Master's students, and the development of graduate and undergraduate courses in Statistics and Data Science.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." -"2412628","Collaborative Research: Partial Priors, Regularization, and Valid & Efficient Probabilistic Structure Learning","DMS","STATISTICS","07/01/2024","06/17/2024","Ryan Martin","NC","North Carolina State University","Standard Grant","Yulia Gel","06/30/2027","$160,000.00","","rgmarti3@ncsu.edu","2601 WOLF VILLAGE WAY","RALEIGH","NC","276950001","9195152444","MPS","126900","1269","$0.00","Modern applications of statistics aim to solve complex scientific problems involving high-dimensional unknowns. One feature that these applications often share is that the high-dimensional unknown is believed to satisfy a complexity-limiting, low-dimensional structure. Specifics of the posited low-dimensional structure are mostly unknown, so a statistically interesting and scientifically relevant problem is structure learning, i.e., using data to learn the latent low-dimensional structure. Because structure learning problems are ubiquitous and reliable uncertainty quantification is imperative, results from this project will have an impact across the biomedical, physical, and social sciences. In addition, the project will offer multiple opportunities for career development of new generations of statisticians and data scientists.

Frequentist methods focus on data-driven estimation or selection of a candidate structure, but currently there are no general strategies for reliable uncertainty quantification concerning the unknown structure. Bayesian methods produce a data-dependent probability distribution over the space of structures that can be used for uncertainty quantification, but this distribution comes with no reliability guarantees. A barrier to progress in reliable uncertainty quantification is a pair of oppositely extreme perspectives: frequentists' aversion to modeling structural/parametric uncertainty versus Bayesians' insistence that such uncertainty always be modeled precisely and probabilistically. Overcoming this barrier requires a new perspective falling between these two extremes, and this project will develop a new framework that features a more general and flexible perspective on probability, namely, imprecise probability. Most importantly, this framework will resolve the aforementioned issues by offering new and powerful methods boasting provably reliable uncertainty quantification in structure learning applications.&lt;br/&gt;

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2412408","Monitoring time series in structured function spaces","DMS","STATISTICS","07/01/2024","06/14/2024","Piotr Kokoszka","CO","Colorado State University","Standard Grant","Yulia Gel","06/30/2027","$292,362.00","","Piotr.Kokoszka@colostate.edu","601 S HOWES ST","FORT COLLINS","CO","805212807","9704916355","MPS","126900","1269","$0.00","This project aims to develop new mathematical theory and statistical tools that will enable monitoring for changes in complex systems, for example global trade networks. Comprehensive databases containing details of trade between almost all countries are available. Detecting in real time a change in the typical pattern of trade and identifying countries where this change takes place is an important problem. This project will provide statistical methods that will allow making decisions about an emergence of an atypical pattern in a complex system in real time with certain theoretical guarantees. The project will also offer multiple interdisciplinary training opportunities for the next generation of statisticians and data scientists.

The methodology that will be developed is related to sequential change point detection, but is different because the in-control state is estimated rather than assumed. This requires new theoretical developments because it deals with complex infinite dimensional systems, whereas existing mathematical tools apply only to finite-dimensional systems. Panels of structured functions will be considered and methods for on-line identification of components undergoing change will be devised. All methods will be inferential with controlled probabilities of type I errors. Some of the key aspects of the project can be summarized in the following points. First, statistical theory leading to change point monitoring schemes in infinite dimensional function spaces will be developed. Second, strong approximations valid in Banach spaces will lead to assumptions not encountered in scalar settings and potentially to different threshold functions. Third, for monitoring of random density functions, the above challenges will be addressed in custom metric spaces. Fourth, since random densities are not observable, the effect of estimation will be incorporated. The new methodology will be applied to viral load measurements, investment portfolios, and global trade data.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2413952","Collaborative Research: Statistical Inference for High Dimensional and High Frequency Data: Contiguity, Matrix Decompositions, Uncertainty Quantification","DMS","STATISTICS","07/01/2024","06/21/2024","Per Mykland","IL","University of Chicago","Standard Grant","Jun Zhu","06/30/2027","$219,268.00","","mykland@galton.uchicago.edu","5801 S ELLIS AVE","CHICAGO","IL","606375418","7737028669","MPS","126900","","$0.00","To pursue the promise of the big data revolution, the current project is concerned with a particular form of such data, high dimensional high frequency data (HD2), where series of high-dimensional observations can see new data updates in fractions of milliseconds. With technological advances in data collection, HD2 data occurs in medicine (from neuroscience to patient care), finance and economics, geosciences (such as earthquake data), marine science (fishing and shipping), and, of course, in internet data. This research project focuses on how to extract information from HD2 data, and how to turn this data into knowledge. As part of the process, the project develops cutting-edge mathematics and statistical methodology to uncover the dependence structure governing HD2 data. In addition to developing a general theory, the project is concerned with applications to financial data, including risk management, forecasting, and portfolio management. More precise estimators, with improved margins of error, will be useful in all these areas of finance. The results will be of interest to main-street investors, regulators and policymakers, and the results will be entirely in the public domain. The project will also provide research training opportunities for students.

In more detail, the project will focus on four linked questions for HD2 data: contiguity, matrix decompositions, uncertainty quantification, and the estimation of spot quantities. The investigators will extend their contiguity theory to the common case where observations have noise, which also permits the use of longer local intervals. Under a contiguous probability, the structure of the observations is often more accessible (frequently Gaussian) in local neighborhoods, facilitating statistical analysis. This is achieved without altering the underlying models. Because the effect of the probability change is quite transparent, this approach also enables more direct uncertainty quantification. To model HD2 data, the investigators will explore time-varying matrix decompositions, including the development of a singular value decomposition (SVD) for high frequency data, as a more direct path to a factor model. Both SVD and principal component analysis (PCA) benefit from contiguity, which eases both the time-varying construction, and uncertainty quantification. The latter is of particular importance not only to set standard errors, but also to determine the trade-offs involved in estimation under longitudinal variation: for example, how many minutes or days are required to estimate a covariance matrix, or singular vectors? The investigators also plan to develop volatility matrices for the drift part of a financial process, and their PCAs. The work on matrix decompositions will also benefit from projected results on spot estimation, which also ties in with contiguity. It is expected that the consequences of the contiguity and the HD2 inference will be transformational, leading to more efficient estimators and better prediction, and that this approach will form a new paradigm for high frequency data.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2413748","Collaborative Research: NSF MPS/DMS-EPSRC: Stochastic Shape Processes and Inference","DMS","STATISTICS","08/01/2024","06/20/2024","Anuj Srivastava","FL","Florida State University","Standard Grant","Yulia Gel","07/31/2027","$200,000.00","","anuj@stat.fsu.edu","874 TRADITIONS WAY","TALLAHASSEE","FL","323060001","8506445260","MPS","126900","1269, 7929","$0.00","The intimate link between form, or shape, and function is ubiquitous in science. In biology, for instance, the shapes of biological components are pivotal in understanding patterns of normal behavior and growth; a notable example is protein shape, which contributes to our understanding of protein function and classification. This project, led by a team of investigators from the USA and the UK, will develop ways of modeling how biological and other shapes change with time, using formal statistical frameworks that capture not only the changes themselves, but how these changes vary across objects and populations. This will enable the study of the link between form and function in all its variability. As example applications, the project will develop models for changes in cell morphology and topology during motility and division, and changes in human posture during various activities, facilitating the exploration of scientific questions such as how and why cell division fails, or how to improve human postures in factory tasks. These are proofs of concept, but the methods themselves will have much wider applicability. This project will thus not only progress the science of shape analysis and the specific applications studied; it will have broader downstream impacts on a range of scientific application domains, providing practitioners with general and useful tools.

While there are several approaches for representing and analyzing static shapes, encompassing curves, surfaces, and complex structures like trees and shape graphs, the statistical modeling and analysis of dynamic shapes has received limited attention. Mathematically, shapes are elements of quotient spaces of nonlinear manifolds, and shape changes can be modeled as stochastic processes, termed shape processes, on these complex spaces. The primary challenges lie in adapting classical modeling concepts to the nonlinear geometry of shape spaces and in developing efficient statistical tools for computation and inference in such very high-dimensional, nonlinear settings. The project consists of three thrust areas, dealing with combinations of discrete and continuous time, and discrete and continuous representations of shape, with a particular emphasis on the issues raised by topology changes. The key idea is to integrate spatiotemporal registration of objects and their evolution into the statistical formulation, rather than treating them as pre-processing steps. This project will specifically add to the current state-of-the-art in topic areas such as stochastic differential equations on shape manifolds, time series models for shapes, shape-based functional data analysis, and modeling and inference on infinite-dimensional shape spaces.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2412628","Collaborative Research: Partial Priors, Regularization, and Valid & Efficient Probabilistic Structure Learning","DMS","STATISTICS","07/01/2024","06/17/2024","Ryan Martin","NC","North Carolina State University","Standard Grant","Yulia Gel","06/30/2027","$160,000.00","","rgmarti3@ncsu.edu","2601 WOLF VILLAGE WAY","RALEIGH","NC","276950001","9195152444","MPS","126900","1269","$0.00","Modern applications of statistics aim to solve complex scientific problems involving high-dimensional unknowns. One feature that these applications often share is that the high-dimensional unknown is believed to satisfy a complexity-limiting, low-dimensional structure. Specifics of the posited low-dimensional structure are mostly unknown, so a statistically interesting and scientifically relevant problem is structure learning, i.e., using data to learn the latent low-dimensional structure. Because structure learning problems are ubiquitous and reliable uncertainty quantification is imperative, results from this project will have an impact across the biomedical, physical, and social sciences. In addition, the project will offer multiple opportunities for career development of new generations of statisticians and data scientists.

Frequentist methods focus on data-driven estimation or selection of a candidate structure, but currently there are no general strategies for reliable uncertainty quantification concerning the unknown structure. Bayesian methods produce a data-dependent probability distribution over the space of structures that can be used for uncertainty quantification, but this distribution comes with no reliability guarantees. A barrier to progress in reliable uncertainty quantification is a pair of oppositely extreme perspectives: frequentists' aversion to modeling structural/parametric uncertainty versus Bayesians' insistence that such uncertainty always be modeled precisely and probabilistically. Overcoming this barrier requires a new perspective falling between these two extremes, and this project will develop a new framework that features a more general and flexible perspective on probability, namely, imprecise probability. Most importantly, this framework will resolve the aforementioned issues by offering new and powerful methods boasting provably reliable uncertainty quantification in structure learning applications.&lt;br/&gt;

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2413864","Statistical Properties of Neural Networks","DMS","STATISTICS","07/01/2024","06/18/2024","Sourav Chatterjee","CA","Stanford University","Standard Grant","Tapabrata Maiti","06/30/2027","$225,000.00","","souravc@stanford.edu","450 JANE STANFORD WAY","STANFORD","CA","943052004","6507232300","MPS","126900","1269","$0.00","Neural networks have revolutionized science and engineering in recent years, but their theoretical properties are still poorly understood. The proposed projects aim to gain a deeper understanding of these theoretical properties, especially the statistical ones. It is a matter of intense debate whether neural networks can ""think"" like humans do, by recognizing logical patterns. The project aims to take a small step towards showing that under ideal conditions, perhaps they can. If successful, this will have impact in a vast range of applications of neural networks. This award includes support and mentoring for graduate students.

In one direction, it is proposed to study features of deep neural networks that distinguish them from classical statistical parametric models. Preliminary results suggest that the lack of identifiability is the differentiating factor. Secondly, it is proposed to investigate the extent to which neural networks may be seen as algorithm approximators, going beyond the classical literature on universal function approximation for neural networks. This perspective may shed light on recent empirical phenomena in neural networks, including the surprising emergent behavior of transformers and large language models.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2413834","Collaborative Research: Nonparametric Learning in High-Dimensional Survival Analysis for causal inference and sequential decision making","DMS","STATISTICS","07/01/2024","06/18/2024","Zhezhen Jin","NY","Columbia University","Standard Grant","Jun Zhu","06/30/2027","$100,000.00","","zj7@columbia.edu","615 W 131ST ST","NEW YORK","NY","100277922","2128546851","MPS","126900","","$0.00","Data with survival outcomes are commonly encountered in real-world applications to capture the time duration until a specific event of interest occurs. Nonparametric learning for high dimensional survival data offers promising avenues in practice because of its ability to capture complex relationships and provide comprehensive insights for diverse problems in medical and business services, where vast covariates and individual metrics are prevalent. This project will significantly advance the methods and theory for nonparametric learning in high-dimensional survival data analysis, with a specific focus on causal inference and sequential decision making problems. The study will be of interest to practitioners in various fields, particularly providing useful methods for medical researchers to discover relevant risk factors, assess causal treatment effects, and utilize personalized treatment strategies in contemporary health sciences. It will also provide useful analytics tools beneficial to financial and related institutions for assessing user credit risks and facilitating informed decisions through personalized services. 
The theoretical and empirical studies to incorporate complex nonparametric structures in high-dimensional survival analysis, together with their interdisciplinary applications, will create valuable training and research opportunities for graduate and undergraduate students, including those from underrepresented minority groups.

Under flexible nonparametric learning frameworks, new embedding methods and learning algorithms will be developed for high dimensional survival analysis. First, the investigators will develop a supervised doubly robust linear embedding method and a supervised nonlinear manifold learning method for dimension reduction of high dimensional survival data, without imposing stringent model or distributional assumptions. Second, a robust nonparametric learning framework will be established for estimating causal treatment effects for high dimensional survival data that allows the covariate dimension to grow much faster than the sample size. Third, motivated by applications in personalized service, the investigators will develop a new nonparametric multi-stage algorithm for high dimensional censored bandit problems that accommodates potentially non-linear decision boundaries while achieving optimal regret guarantees.&lt;br/&gt;

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2413405","Collaborative Research: Statistical Optimal Transport: Foundation, Computation and Applications","DMS","STATISTICS","07/01/2024","06/18/2024","Kengo Kato","NY","Cornell University","Standard Grant","Yong Zeng","06/30/2027","$160,000.00","","kk976@cornell.edu","341 PINE TREE RD","ITHACA","NY","148502820","6072555014","MPS","126900","079Z","$0.00","Comparing probability models is a fundamental task in almost every data-enabled problem, and Optimal Transport (OT) offers a powerful and versatile framework to do so. Recent years have witnessed a rapid development of computational OT, which has expanded applications of OT to statistics, including clustering, generative modeling, domain adaptation, distribution-to-distribution regression, dimension reduction, and sampling. Still, much remains to be understood about the fundamental strengths and limitations of OT as a statistical tool. This research project aims to fill this important gap by advancing statistical analysis (estimation and inference) and practical approximation of two fundamental notions (average and quantiles) in statistics and machine learning, demonstrated through modern applications for measure-valued data. The project also provides research training opportunities for graduate students.&lt;br/&gt;

The award contains three main research projects. The first project will develop a new regularized formulation of the Wasserstein barycenter based on the multi-marginal OT and conduct an in-depth statistical analysis, encompassing sample complexity, limiting distributions, and bootstrap consistency. The second project will establish asymptotic distribution and bootstrap consistency results for linear functionals of OT maps and will study sharp asymptotics for entropically regularized OT maps when regularization parameters tend to zero. Building on the first two projects, the third project explores applications of the OT methodology to two important statistical tasks: dimension reduction and vector quantile regression. The research agenda will develop a novel and computationally efficient principal component method for measure-valued data and a statistically valid duality-based estimator for quantile regression with multivariate responses. The three projects will produce novel technical tools integrated from OT theory, empirical process theory, and partial differential equations, which are essential for OT-based inferential methods and will inspire new applications of OT to measure-valued and multivariate data.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." @@ -34,9 +37,9 @@ "2413553","Collaborative Research: Statistical Network Integration","DMS","STATISTICS","07/01/2024","06/17/2024","Jesús Arroyo","TX","Texas A&M University","Continuing Grant","Yulia Gel","06/30/2027","$37,118.00","","jarroyo@tamu.edu","400 HARVEY MITCHELL PKY S STE 30","COLLEGE STATION","TX","778454375","9798626777","MPS","126900","1269","$0.00","This project pursues the contemporary problem of statistical network integration facing scientists, practitioners, and theoreticians. The study of networks and graph-structured data has received growing attention in recent years, motivated by investigations of complex systems throughout the biological and social sciences. Models and methods have been developed to analyze network data objects, often focused on single networks or homogeneous data settings, yet modern available data are increasingly heterogeneous, multi-sample, and multi-modal. Consequently, there is a growing need to leverage data arising from different sources that result in multiple network observations with attributes. This project will develop statistically principled data integration methodologies for neuroimaging studies, which routinely collect multiple subject data across different groups (strains, conditions, edge groups), modalities (functional and diffusion MRI), and brain covariate information (phenotypes, healthy status, gene expression data from brain tissue). The investigators will offer interdisciplinary mentoring opportunities to students participating in the research project and co-teach a workshop based on the proposed research.

The goals of this project are to establish flexible, parsimonious latent space models for network integration and to develop efficient, theoretically justified inference procedures for such models. More specifically, this project will develop latent space models to disentangle common and individual local and global latent features in samples of networks, propose efficient spectral matrix-based methods for data integration, provide high-dimensional structured penalties for dimensionality reduction and regularization in network data, and develop cross-validation methods for multiple network data integration. New theoretical developments spanning concentration inequalities, eigenvector perturbation analysis, and distributional asymptotic results will elucidate the advantages and limitations of these methods in terms of signal aggregation, heterogeneity, and flexibility. Applications of these methodologies to the analysis of multi-subject brain network data will be studied. Emphasis will be on interpretability, computation, and theoretical justification.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2413484","New Approaches to Sensitivity Analysis in Observational Studies","DMS","STATISTICS","09/01/2024","06/17/2024","Colin Fogarty","MI","Regents of the University of Michigan - Ann Arbor","Continuing Grant","Yong Zeng","08/31/2027","$58,516.00","","fogartyc@umich.edu","1109 GEDDES AVE, SUITE 3300","ANN ARBOR","MI","481091079","7347636438","MPS","126900","075Z, 079Z","$0.00","While randomized experiments remain the gold standard for elucidating cause and effect relations, countless societally important ""what-if?"" questions cannot be addressed through clinical trials for a litany of reasons, ranging from ethical concerns to logistical infeasibility. For this reason, observational studies, wherein the assignment of group status to individuals is outside the control of the researcher, often represent the only path forward for inferring causal effects. While observational data are often inexpensive to collect and plentiful, regrettably, they suffer from inescapable biases due to self-selection. In short, associations between group status and outcomes of interest need not reflect causal effects, as the groups being compared might have considerable differences on the basis of factors unavailable for adjustment. This project will develop new methods for sensitivity analysis in observational studies, which answer the question, ""How much unmeasured confounding would need to exist to overturn a study's finding of a causal effect?"" Quantifying the robustness of observational findings to hidden bias will help frame the debate around the reliability of such studies, allowing researchers to highlight findings that are particularly resilient to lurking variables. &lt;br/&gt;
This project provides both theoretical guidance on how to extract the most out of a sensitivity analysis and computationally tractable methods for making this guidance actionable. Moreover, when randomized experimentation is possible, the developed methods will help researchers use existing observational studies for hypothesis generation, enabling them to find sets of promising outcome variables whose causal effects may be verified through follow-up experimentation. This award includes support for work with graduate students.

This project develops a new set of statistical methods for conducting sensitivity analyses after matching. These methods aim to overcome shortcomings of the existing approach, conferring computational, theoretical, and practical benefits. The project will provide a new approach to sensitivity analysis after matching called weighting-after-matching. The project will establish computational benefits, theoretical improvements in design sensitivity, and practical improvements in the power of a sensitivity analysis by using weighting-after-matching in lieu of the traditional unweighted approach. The project will also establish novel methods for sensitivity analysis with multiple outcome variables. These innovations will include a scalable multiple testing procedure for observational studies, facilitating exploratory analysis while providing control of the proportion of false discoveries, and methods for sensitivity analysis using weighting-after-matching for testing both sharp null hypotheses of no effect at all and hypotheses on average treatment effects. Finally, the project will establish previously unexplored benefits from using matching and weighting in combination, two modes of adjustment in observational studies commonly viewed as competitors. This will help bridge the divide between matching estimators and weighting estimators in the context of a sensitivity analysis, thereby providing a natural avenue for theoretical comparisons of these approaches.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2412052","Tackling High Dimensionality for Modern Machine Learning: Theory and Visualization","DMS","STATISTICS","07/01/2024","06/17/2024","Yiqiao Zhong","WI","University of Wisconsin-Madison","Continuing Grant","Tapabrata Maiti","06/30/2027","$67,291.00","","yiqiao.zhong@wisc.edu","21 N PARK ST STE 6301","MADISON","WI","537151218","6082623822","MPS","126900","","$0.00","This research project aims to address the recent challenges of modern machine learning from a statistical perspective. Deep Learning and particularly Large Language Models have the potential to transform our society, yet their scientific underpinning is much less developed. In particular, large-scale black-box models are deployed in applications with little understanding about when they may or may not work as expected. The research is expected to advance the understanding of modern machine learning. It will also provide accessible tools to improve the interpretations and safety of models. This award will involve and support graduate students.

The project is motivated by recent statistical phenomena such as double descent and benign overfitting that involve training a model with many parameters. Motivated by the empirical discoveries in Deep Learning, the project will develop insights into overfitting in imbalanced classification in high dimensions and the effects of reparametrization in contrastive learning. Understanding the generalization errors under overparametrization in practical scenarios, such as imbalanced classification, will likely lead to better practice of reducing overfitting. This project will also explore interpretations for black-box models and complicated methods: (1) in Transformers, high-dimensional embedding vectors are decomposed into interpretable components; (2) in t-SNE, embedding points are assessed by metrics related to map discontinuity. By using classical ideas from factor analysis and leave-one-out, this project will result in new visualization tools for interpretations and diagnosis.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2412629","Collaborative Research: Partial Priors, Regularization, and Valid & Efficient Probabilistic Structure Learning","DMS","STATISTICS","07/01/2024","06/17/2024","Chuanhai Liu","IN","Purdue University","Standard Grant","Yulia Gel","06/30/2027","$160,000.00","","chuanhai@purdue.edu","2550 NORTHWESTERN AVE # 1100","WEST LAFAYETTE","IN","479061332","7654941055","MPS","126900","","$0.00","Modern applications of statistics aim to solve complex scientific problems involving high-dimensional unknowns. One feature that these applications often share is that the high-dimensional unknown is believed to satisfy a complexity-limiting, low-dimensional structure. Specifics of the posited low-dimensional structure are mostly unknown, so a statistically interesting and scientifically relevant problem is structure learning, i.e., using data to learn the latent low-dimensional structure. Because structure learning problems are ubiquitous and reliable uncertainty quantification is imperative, results from this project will have an impact across the biomedical, physical, and social sciences. In addition, the project will offer multiple opportunities for career development of new generations of statisticians and data scientists.

Frequentist methods focus on data-driven estimation or selection of a candidate structure, but currently there are no general strategies for reliable uncertainty quantification concerning the unknown structure. Bayesian methods produce a data-dependent probability distribution over the space of structures that can be used for uncertainty quantification, but it comes with no reliability guarantees. A barrier to progress in reliable uncertainty quantification is the oppositely extreme perspectives: frequentists' anathema of modeling structural/parametric uncertainty versus Bayesians' insistence that such uncertainty always be modeled precisely and probabilistically. Overcoming this barrier requires a new perspective falling between these two extremes, and this project will develop a new framework that features a more general and flexible perspective on probability, namely, imprecise probability. Most importantly, this framework will resolve the aforementioned issues by offering new and powerful methods boasting provably reliable uncertainty quantification in structure learning applications.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2413891","Nonparametric estimation in causal inference: optimality in traditional models and newer ones","DMS","STATISTICS","08/01/2024","06/14/2024","Matteo Bonvini","NJ","Rutgers University New Brunswick","Continuing Grant","Yong Zeng","07/31/2027","$59,393.00","","mb1662@stat.rutgers.edu","3 RUTGERS PLZ","NEW BRUNSWICK","NJ","089018559","8489320150","MPS","126900","075Z","$0.00","This project provides new methods for estimating causal effects from non-randomized studies. Quantifying the causal effect of a variable on another one is of fundamental importance in science because it allows for the understanding of what happens if a certain action is taken, e.g., if a drug is prescribed to a patient. When randomized experiments are not feasible, e.g., because of costs or ethical concerns, quantifying the effect of a treatment on an outcome can be very challenging. Roughly, this is because the analysis must ensure that the treated and untreated units are ""comparable,"" a condition implied by proper randomization. In these settings, the analyst typically proceeds in two steps: 1) they introduce the key assumptions needed to identify the causal effect, and 2) they specify a model for the distribution of the data, often nonparametric, to accommodate modern, complex datasets, as well as the appropriate estimation strategy. One key difficulty in non-randomized studies is that estimating causal effects typically requires estimating nuisance components of the data distribution that are not of direct interest and that can be potentially quite hard to estimate. Focused on the second part of the analysis, this project aims to design optimal methods for estimating causal effects in different settings. Informally, an optimal estimator converges to the true causal effect ""as quickly as possible"" 
as a function of the sample size and thus leads to the most precise inferences. Establishing optimality has thus two fundamental benefits: 1) it leads to procedures that make the most efficient use of the available data, and 2) it serves as a benchmark against which future methods can be evaluated. In this respect, the theoretical and methodological contributions of this project are expected to lead to substantial improvements in the analysis of data from many domains, such as medicine and the social sciences. The project also aims to offer opportunities for training and mentoring graduate and undergraduate students.

For certain estimands and data structures, the principles of semiparametric efficiency theory can be used to derive optimal estimators. However, they are not directly applicable to causal parameters that are ""non-smooth"" or for which the nuisance parts of the data distribution can only be estimated at such slow rates that root-n convergence of the causal effect estimator is not attainable. As part of this project, the Principal Investigator aims to study the optimal estimation of prominent examples of non-smooth parameters, such as causal effects defined by continuous treatments. Furthermore, this project will consider optimal estimation of ""smooth"" parameters, such as certain average causal effects, in newer nonparametric models for which relatively fast rates of convergence are possible, even if certain components of the data distribution can only be estimated at very slow rates. In doing so, the project aims to propose new techniques for reducing the detrimental effect of the nuisance estimators' bias on the quality of the causal effect estimator. It also aims to design and implement inferential procedures for the challenging settings considered, thereby enhancing the adoption of the methods proposed in practice.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2413557","Collaborative Research: Systemic Shock Inference for High-Frequency Data","DMS","STATISTICS","07/01/2024","06/14/2024","Jose Figueroa-Lopez","MO","Washington University","Standard Grant","Jun Zhu","06/30/2027","$99,957.00","","figueroa@math.wustl.edu","ONE BROOKINGS DR","SAINT LOUIS","MO","63110","3147474134","MPS","126900","","$0.00","Unexpected ""shocks,"" or abrupt deviations from periods of stability, naturally occur in time-dependent data-generating mechanisms across a variety of disciplines. Examples include crashes in stock markets, flurries of activity on social media following news events, and changes in animal migratory patterns during global weather events, among countless others. Reliable detection and statistical analysis of shock events is crucial in applications, as shock inference can provide scientists deeper understanding of large systems of time-dependent variables, helping to mitigate risk and manage uncertainty. When large systems of time-dependent variables are observed at high sampling frequencies, information at fine timescales can reveal hidden connections and provide insights into the collective uncertainty shared by an entire system. High-frequency observations of such systems appear in econometrics, climatology, statistical physics, and many other areas of empirical science that can benefit from reliable inference of shock events. This project will develop new statistical techniques for both the detection and analysis of shocks in large systems of time-dependent variables observed at high temporal sampling frequencies. The project will also involve mentoring students, organizing workshops, and promoting diversity in STEM.

The investigators will study shock inference problems in a variety of settings in high dimensions. Special focus will be paid to semi-parametric high-frequency models that display a factor structure. Detection based on time-localized principal component analysis and related techniques will be explored, with a goal towards accounting for shock events that impact a large number of component series in a possibly asynchronous manner. Time-localized bootstrapping methods will also be considered for feasible testing frameworks for quantifying the system-level impact of shocks. Complementary lines of inquiry will concern estimation of jump behavior in high-frequency models in multivariate contexts and time-localized clustering methods.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." -"2412629","Collaborative Research: Partial Priors, Regularization, and Valid & Efficient Probabilistic Structure Learning","DMS","STATISTICS","07/01/2024","06/17/2024","Chuanhai Liu","IN","Purdue University","Standard Grant","Yulia Gel","06/30/2027","$160,000.00","","chuanhai@purdue.edu","2550 NORTHWESTERN AVE # 1100","WEST LAFAYETTE","IN","479061332","7654941055","MPS","126900","","$0.00","Modern applications of statistics aim to solve complex scientific problems involving high-dimensional unknowns. One feature that these applications often share is that the high-dimensional unknown is believed to satisfy a complexity-limiting, low-dimensional structure. Specifics of the posited low-dimensional structure are mostly unknown, so a statistically interesting and scientifically relevant problem is structure learning, i.e., using data to learn the latent low-dimensional structure. Because structure learning problems are ubiquitous and reliable uncertainty quantification is imperative, results from this project will have an impact across the biomedical, physical, and social sciences. In addition, the project will offer multiple opportunities for career development of new generations of statisticians and data scientists.

Frequentist methods focus on data-driven estimation or selection of a candidate structure, but currently there are no general strategies for reliable uncertainty quantification concerning the unknown structure. Bayesian methods produce a data-dependent probability distribution over the space of structures that can be used for uncertainty quantification, but it comes with no reliability guarantees. A barrier to progress in reliable uncertainty quantification is the oppositely extreme perspectives: frequentists' anathema of modeling structural/parametric uncertainty versus Bayesians' insistence that such uncertainty always be modeled precisely and probabilistically. Overcoming this barrier requires a new perspective falling between these two extremes, and this project will develop a new framework that features a more general and flexible perspective on probability, namely, imprecise probability. Most importantly, this framework will resolve the aforementioned issues by offering new and powerful methods boasting provably reliable uncertainty quantification in structure learning applications.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2413425","Collaborative Research: Synergies between Steins Identities and Reproducing Kernels: Modern Tools for Nonparametric Statistics","DMS","STATISTICS","07/01/2024","06/17/2024","Bharath Sriperumbudur","PA","Pennsylvania State Univ University Park","Standard Grant","Yong Zeng","06/30/2027","$179,999.00","","bks18@psu.edu","201 OLD MAIN","UNIVERSITY PARK","PA","168021503","8148651372","MPS","126900","079Z","$0.00","The project aims to conduct comprehensive statistical and computational analyses, with the overarching objective of advancing innovative nonparametric data analysis techniques. The methodologies and theories developed are anticipated to push the boundaries of modern nonparametric statistical inference and find applicability in other statistical domains such as nonparametric latent variable models, time series analysis, and sequential nonparametric multiple testing. This project will enhance the interconnections among statistics, machine learning, and computation and provide training opportunities for postdoctoral fellows, graduate students, and undergraduates.

More specifically, the project covers key problems in nonparametric hypothesis testing, intending to establish a robust framework for goodness-of-fit testing for distributions on non-Euclidean domains with unknown normalization constants. The research also delves into nonparametric variational inference, aiming to create a particle-based algorithmic framework with discrete-time guarantees. Furthermore, the project focuses on nonparametric functional regression, with an emphasis on designing minimax optimal estimators using infinite-dimensional Stein's identities. The study also examines the trade-offs between statistics and computation in all the aforementioned methods. The common thread weaving through these endeavors is the synergy between various versions of Stein's identities and reproducing kernels, contributing substantially to the advancement of models, methods, and theories in contemporary nonparametric statistics.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2412833","Collaborative Research: Statistical Modeling and Inference for Object-valued Time Series","DMS","STATISTICS","07/01/2024","06/17/2024","Xiaofeng Shao","IL","University of Illinois at Urbana-Champaign","Standard Grant","Jun Zhu","06/30/2027","$174,997.00","","xshao@illinois.edu","506 S WRIGHT ST","URBANA","IL","618013620","2173332187","MPS","126900","","$0.00","Random objects in general metric spaces have become increasingly common in many fields. For example, the intraday return path of a financial asset, the age-at-death distributions, the annual composition of energy sources, social networks, phylogenetic trees, and EEG scans or MRI fiber tracts of patients can all be viewed as random objects in certain metric spaces. For many endeavors in this area, the data being analyzed is collected with a natural ordering, i.e., the data can be viewed as an object-valued time series. Despite its prevalence in many applied problems, statistical analysis for such time series is still in its early development. A fundamental difficulty of developing statistical techniques is that the spaces where these objects live are nonlinear and commonly used algebraic operations are not applicable. This research project aims to develop new models, methodology and theory for the analysis of object-valued time series. Research results from the project will be disseminated to the relevant scientific communities via publications, conference and seminar presentations. The investigators will jointly mentor a Ph.D. student and involve undergraduate students in the research, as well as offering advanced topic courses to introduce the state-of-the-art techniques in object-valued time series analysis.

The project will develop a systematic body of methods and theory on modeling and inference for object-valued time series. Specifically, the investigators propose to (1) develop a new autoregressive model for distributional time series in Wasserstein geometry and a suite of tools for model estimation, selection and diagnostic checking; (2) develop new specification testing procedures for distributional time series in the one-dimensional Euclidean space; and (3) develop new change-point detection methods to detect distribution shifts in a sequence of object-valued time series. The above three projects tackle several important modeling and inference issues in the analysis of object-valued time series, the investigation of which will lead to innovative methodological and theoretical developments, and lay groundwork for this emerging field.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2413558","Collaborative Research: Systemic Shock Inference for High-Frequency Data","DMS","STATISTICS","07/01/2024","06/14/2024","Benjamin Boniece","PA","Drexel University","Continuing Grant","Jun Zhu","06/30/2027","$26,626.00","","cooper.boniece@drexel.edu","3141 CHESTNUT ST","PHILADELPHIA","PA","191042875","2158956342","MPS","126900","","$0.00","Unexpected ""shocks,"" or abrupt deviations from periods of stability, naturally occur in time-dependent data-generating mechanisms across a variety of disciplines. Examples include crashes in stock markets, flurries of activity on social media following news events, and changes in animal migratory patterns during global weather events, among countless others. Reliable detection and statistical analysis of shock events is crucial in applications, as shock inference can provide scientists deeper understanding of large systems of time-dependent variables, helping to mitigate risk and manage uncertainty. When large systems of time-dependent variables are observed at high sampling frequencies, information at fine timescales can reveal hidden connections and provide insights into the collective uncertainty shared by an entire system. High-frequency observations of such systems appear in econometrics, climatology, statistical physics, and many other areas of empirical science that can benefit from reliable inference of shock events. This project will develop new statistical techniques for both the detection and analysis of shocks in large systems of time-dependent variables observed at high temporal sampling frequencies. The project will also involve mentoring students, organizing workshops, and promoting diversity in STEM.

The investigators will study shock inference problems in a variety of settings in high dimensions. Special focus will be paid to semi-parametric high-frequency models that display a factor structure. Detection based on time-localized principal component analysis and related techniques will be explored, with a goal towards accounting for shock events that impact a large number of component series in a possibly asynchronous manner. Time-localized bootstrapping methods will also be considered for feasible testing frameworks for quantifying the system-level impact of shocks. Complementary lines of inquiry will concern estimation of jump behavior in high-frequency models in multivariate contexts and time-localized clustering methods.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." @@ -69,10 +72,10 @@ "2342821","Conference: Emerging Statistical and Quantitative Issues in Genomic Research in Health Sciences","DMS","STATISTICS","02/01/2024","01/24/2024","Xihong Lin","MA","Harvard University","Standard Grant","Tapabrata Maiti","01/31/2027","$61,920.00","","xlin@hsph.harvard.edu","1033 MASSACHUSETTS AVE STE 3","CAMBRIDGE","MA","021385366","6174955501","MPS","126900","7556","$0.00","The 2023 Conference of the Program in Quantitative Genomics (PQG), entitled ""Diversity in Genetics and Genomics,"" will take place at the Joseph B. Martin Conference Center at the Harvard Medical School on October 17-18, 2023. This long-standing Harvard T. H. Chan School of Public Health Program in Quantitative Genomics Conference series focuses on timely interdisciplinary discussions on emerging statistical and computational challenges in genetic and genomic science. The focus of each conference evolves in parallel to scientific frontiers. A key feature of the series is its interdisciplinary nature, where quantitative and subject-matter scientists jointly discuss statistical and quantitative issues that arise in cutting-edge genetic and genomic research in human diseases. Conference participants critique existing quantitative methods, discuss emerging statistical and quantitative issues, identify priorities for future research, and disseminate results. Diversity in genetics and genomics has been increasingly embraced not only for enabling more powerful studies but also because of the need to avoid further exacerbation of structured inequalities in healthcare systems and to chart a path forward for their amelioration. 
Significant effort has been made in recent years to improve study participant and workforce diversity in genetics and genomics, including the inclusion of diverse groups in discovery and functional studies and translational efforts to empower or pave the road for equitable clinical impact. The 2023 conference will provide a platform to engage inter-disciplinary researchers to have in-depth discussions on the quantitative challenges and opportunities in increasing diversity in genetic and genomic research. We will make serious efforts to recruit junior researchers, including graduate students, postdoctoral fellows, in particular underrepresented minorities and women, as speakers and participants.

The impetus for the 2023 conference theme comes from the pressing need to address the statistical and quantitative issues in diversity in genetic and genomic research. The three topics of the conference include (1) Diversity for gene mapping and studying variant functions; (2) Diversity for translational genetics: polygenic risk and clinical implementation; (3) How do we move forward while acknowledging the past? Examples of the first topic include multi-ancestry genetic association tests, fine-mapping, and eQTL analysis. Examples of the second topic include trans-ethnic polygenic risk prediction and transfer learning. Examples of the third topic include enhancing transparency in the use of population descriptors in genomics research and building global collaborative genetic research frameworks. The education and research activities discussed at the conference will make important contributions to advance efforts on increasing diversity of genetic and genomic research, and will help create the scientific basis and workforce required to ensure and sustain US competitiveness both economically and technologically, prolonging and saving lives, and promoting national security. For more information, see www.hsph.harvard.edu/pqg-conference/.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2403813","Conference: Theory and Foundations of Statistics in the Era of Big Data","DMS","STATISTICS","02/01/2024","02/01/2024","Xin Zhang","FL","Florida State University","Standard Grant","Tapabrata Maiti","01/31/2025","$14,800.00","Srijan Sengupta","henry@stat.fsu.edu","874 TRADITIONS WAY","TALLAHASSEE","FL","323060001","8506445260","MPS","126900","7556","$0.00","The Department of Statistics at Florida State University (FSU) will host a three-day conference titled ""Theory and Foundations of Statistics in the Era of Big Data"" in Tallahassee, Florida, from April 19 to 21, 2024. The main objective of the conference is to bring together a global community of statisticians and data scientists to chart the state-of-the-art, challenges, and the future trajectory of contemporary statistical foundations, theory, and practice. The format of the conference includes three plenary sessions, six invited sessions showcasing current senior leaders in the field who have made foundational contributions to statistics, two special invited sessions for early-career researchers, a poster session for graduate students, and a banquet talk by a leading expert. The special invited sessions and poster session will provide a unique opportunity for early-career researchers and graduate students not only to showcase their research work but also to benefit from in-depth intellectual interactions with leaders in the field in a small conference setting.

The main objective of the conference is to bring together present-day statistics and science innovators and senior leaders with emerging young researchers to identify, discuss, and decipher solutions to these foundational issues and challenges faced by modern-day statistics and data science. Providing support for junior researchers and graduate students who do not have access to other sources of funding to attend this important and timely gathering of researchers working on the foundational aspects of statistical sciences is also key to maintaining the current leadership of U.S. institutions in this field. It is extremely timely to have such an event to stimulate and comprehend the major contemporary challenges in the foundation, theory, and implementation of the field that is currently playing such an important role in every sphere of social media, economic security, public health, and beyond. This conference will be in partnership with the International Indian Statistical Association (IISA) and co-sponsored by the American Statistical Association (ASA), the National Institute of Statistical Science (NISS), and the Institute of Mathematical Statistics (IMS). The conference website is https://sites.google.com/view/theory-and-foundations-of-stat/

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2349991","Conference: Statistics in the Age of AI","DMS","STATISTICS","03/01/2024","12/18/2023","Xiaoke Zhang","DC","George Washington University","Standard Grant","Tapabrata Maiti","02/28/2025","$18,400.00","","xkzhang@gwu.edu","1918 F ST NW","WASHINGTON","DC","200520042","2029940728","MPS","126900","7556","$0.00","The conference ""Statistics in the Age of AI"" will be held at George Washington University, Washington, DC on May 9-11, 2024. With the boom of artificial intelligence (AI), partly accelerated by the launch of large language models (e.g., ChatGPT), AI tools have reached every corner of our society. In this era where many business and scientific problems are being tackled via AI systems, statistics has become more critical than ever since it can offer uncertainty quantification, causal analysis, and interpretability among others, which most AI systems are lacking. This conference will bring together researchers and practitioners in academia and industry to explore the impact of AI on statistical research, education, and practice and also to brainstorm how statistics can contribute to AI. The conference organizers encourage participation and attendance by students, post-doctoral scholars, early-career researchers, and individuals from underrepresented groups.

The conference features short courses, poster and oral presentations, and panel discussions. The two short courses will focus on causal inference and conformal inference respectively. The presentations and panel discussions will address efficient handling of data for AI models and architectures, uncertainty quantification, and responsible decision-making among other topics. Further information will become available on the conference website: https://statistics.columbian.gwu.edu/statistics-age-ai.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." -"2338760","CAREER: Statistical Inference in Observational Studies -- Theory, Methods, and Beyond","DMS","STATISTICS","07/01/2024","01/10/2024","Rajarshi Mukherjee","MA","Harvard University","Continuing Grant","Jun Zhu","06/30/2029","$81,885.00","","rmukherj@hsph.harvard.edu","1033 MASSACHUSETTS AVE STE 3","CAMBRIDGE","MA","021385366","6174955501","MPS","126900","1045","$0.00","Causal inference refers to a systematic way of deciphering causal relationships between entities from empirical observations, an epistemic framework that underlies past, present, and future scientific and social development. For designing statistical methods for causal inference, the gold standard pertains to randomized clinical trials where the researcher assigns treatment/exposure to subjects under study based on pure chance mechanisms. The random assignment negates systematic bias in the observed relationship between the treatment/exposure and outcome due to unknown common factors referred to as confounders. However, randomized clinical trials are often infeasible, expensive, and ethically challenging. In contrast, modern technological advancement has paved the way for the collection of massive amounts of data across a spectrum of possibilities such as health outcomes, environmental pollution, medical claims, educational policy interventions, and genetic mutations among many others. Since accounting for confounders in such data is the fundamental aspect of conducting valid causal inference, one of the major foci of modern causal inference research has been to design procedures to account for complex confounding structures without pre-specifying unrealistic statistical models. 
Despite the existence of a large canvas of methods in this discourse, the complete picture of the best statistical methods for inferring the causal effect of an exposure on an outcome while adjusting for arbitrary confounders remains largely open. Moreover, several widely used methods require rigorous theoretical justification and subsequent modification for reproducible statistical research in the domain of causal inference. This project is motivated by these gaps and will be divided into two broad, interconnected themes. In the first part, this project provides the first rigorous theoretical lens on the most popular method of confounder adjustment in large-scale genetic studies to find causal variants of diseases. This will in turn bring forth deeper questions about optimal statistical causal inference procedures that will be explored in the second part of the project. Since the project is designed to connect ideas from across statistical methods, probability theory, computer science, and machine learning, it will provide unique learning opportunities to design new courses and discourses. The project will therefore integrate research with education through course development, research mentoring for undergraduate and graduate students, especially those from underrepresented groups, and summer programs.

This project will focus on two broad and interrelated themes tied together by the motivation of conducting statistical and causal inference with modern observational data. The first part of the project involves providing the first detailed theoretical picture of the most popular principal component-based method of population stratification adjustment in genome-wide association studies. This part of the project also aims to provide new methodologies to correct for existing and previously unknown possible biases in the existing methodology as well as guidelines for practitioners for choosing between methods and design of studies. By recognizing the fundamental tenet of large-scale genetic data analysis as the identification of causal genetic determinants of disease phenotypes, the second part of the project develops the first complete picture of optimal statistical inference of causal effects in both high-dimensional models under sparsity and nonparametric models under smoothness conditions. Moreover, this part of the project responds to the fundamental question of tuning learning algorithms for estimating nuisance functions, such as outcome regression and propensity score for causal effect estimation, to optimize the downstream mean-squared error of causal effect estimates instead of prediction errors associated with these regression functions. The overall research will connect ideas from high-dimensional statistical inference, random matrix theory, higher-order semiparametric methods, and information theory.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." -"2338018","CAREER: Single-Fidelity vs. Multi-Fidelity Computer Experiments: Unveiling the Effectiveness of Multi-Fidelity Emulation","DMS","STATISTICS","06/01/2024","12/05/2023","Chih-Li Sung","MI","Michigan State University","Continuing Grant","Jun Zhu","05/31/2029","$79,437.00","","sungchih@msu.edu","426 AUDITORIUM RD RM 2","EAST LANSING","MI","488242600","5173555040","MPS","126900","1045","$0.00","Computer models have become indispensable tools across diverse fields, enabling the simulation of complex phenomena and facilitating decision-making without costly real-world experiments. Traditionally, computer models are simulated using single, high-accuracy simulations, employing a high level of detail and resolution throughout. Recent advancements, however, have shifted attention towards multi-fidelity simulations, balancing computational cost and accuracy by leveraging various levels of detail and resolution in the simulation. A key question arises: is it more effective to use single-fidelity or multi-fidelity simulations? This is a question practitioners often confront when conducting computer simulations. The research aims to address this fundamental question directly, providing valuable insights for practical decision-making. By leveraging insights gained from computational cost comparisons, the research will enhance the ability to predict complex scientific phenomena accurately and has the potential to revolutionize fields such as engineering, medical science, and biology. The project contributes to outreach and diversity efforts, inspiring youth and increasing female representation in STEM research. 
Moreover, collaborations with diverse research groups, as well as involvement in the REU exchange program, provide opportunities to engage undergraduate students, nurturing their interest in research and encouraging them to pursue careers in STEM. Research findings will be disseminated through publications and conferences. The code developed will be shared to foster collaboration and encourage others to build upon these innovative methodologies.

This research addresses the fundamental question of whether to conduct single-fidelity or multi-fidelity computer experiments by investigating the effectiveness of multi-fidelity simulations. It begins by examining the computational cost comparison between the two approaches, finding that multi-fidelity simulations, under certain conditions, can theoretically require more computational resources while achieving the same predictive ability. To mitigate the negative effects of low-fidelity simulations, a novel and flexible statistical emulator, called the Recursive Nonadditive (RNA) emulator, is proposed to leverage multi-fidelity simulations, and a sequential design scheme based on this emulator is developed, which maximizes the effectiveness by selecting inputs and fidelity levels based on a criterion that balances uncertainty reduction and computational cost. Furthermore, two novel multi-fidelity emulators, called ""secure emulators,"" are developed, which theoretically guarantee superior predictive performance compared to single-fidelity emulators, regardless of design choices.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2339241","CAREER: Learning stochastic spatiotemporal dynamics in single-molecule genetics","DMS","Cellular Dynamics and Function, STATISTICS, MATHEMATICAL BIOLOGY","07/01/2024","01/29/2024","Christopher Miles","CA","University of California-Irvine","Continuing Grant","Amina Eladdadi","06/30/2029","$239,517.00","","cemiles@uci.edu","160 ALDRICH HALL","IRVINE","CA","926970001","9498247295","MPS","111400, 126900, 733400","068Z, 079Z, 1045, 7465, 8038","$0.00","The ability to measure which genes are expressed in cells has revolutionized our understanding of biological systems. Discoveries range from pinpointing what makes different cell types unique (e.g., a skin vs. brain cell) to how diseases emerge from genetic mutations. This gene expression data is now a ubiquitously used tool in every cell biologist's toolbox. However, the mathematical theories for reliably extracting insight from this data have lagged behind the amazing progress of the techniques for harvesting it. This CAREER project will develop key theoretical foundations for analyzing imaging data of gene expression. The advances span theory to practice, including developing mathematical models and machine-learning approaches that will be used with data from experimental collaborators. Altogether, the project aims to create a new gold standard of techniques in studying spatial imaging data of gene expression and enable the discovery of new biological and biomedical insights. In addition, this proposed research will involve interdisciplinary graduate students and local community college undergraduates to train the next generation of scientists in the ever-evolving intersection of data science, biology, and mathematics. 
Alongside research activities, the project will create mentorship networks for supporting first-generation student scientists in pursuit of diversifying the STEM workforce.

The supported research is a comprehensive program for studying single-molecule gene expression spatial patterns through the lens of stochastic reaction-diffusion models. The key aim is to generalize mathematical connections between these models and their observation as spatial point processes. The new theory will incorporate factors necessary to describe spatial gene expression at subcellular and multicellular scales including various reactions, spatial movements, and geometric effects. This project will also establish the statistical theory of inference on the resulting inverse problem of inferring stochastic rates from only snapshots of individual particle positions. Investigations into parameter identifiability, optimal experimental design, and model selection will ensure robust and reliable inference. In complement to the developed theory, this project will implement and benchmark cutting-edge approaches for efficiently performing large-scale statistical inference, including variational Bayesian Monte Carlo and physics-informed neural networks. The culmination of this work will be packaged into open-source software that infers interpretable biophysical parameters from multi-gene tissue-scale datasets.

This CAREER Award is co-funded by the Mathematical Biology and Statistics Programs at the Division of Mathematical Sciences and the Cellular Dynamics & Function Cluster in the Division of Molecular & Cellular Biosciences, BIO Directorate.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2340241","CAREER: New Frameworks for Ethical Statistical Learning: Algorithmic Fairness and Privacy","DMS","STATISTICS","07/01/2024","01/23/2024","Linjun Zhang","NJ","Rutgers University New Brunswick","Continuing Grant","Yong Zeng","06/30/2029","$90,127.00","","linjun.zhang@rutgers.edu","3 RUTGERS PLZ","NEW BRUNSWICK","NJ","089018559","8489320150","MPS","126900","1045","$0.00","With the unprecedented impact of data science and machine learning in many aspects of our daily lives, such as healthcare, finance, education, and law, there is an urgent need to design ethical statistical learning algorithms that account for fairness and privacy. This project tackles the challenge of integrating ethical principles into the fabric of statistical learning. The approach prioritizes fairness by enhancing statistical algorithms to perform equitably, particularly in scenarios with limited sample sizes and where sensitive attributes are restricted by legal or societal norms. In parallel, this project addresses privacy by developing a general framework for studying the privacy-accuracy trade-off under new privacy constraints emerging with the advances in generative AI. The practical upshot of this work is the application of these methods to biomedical fields, accompanied by the release of open-source software, broadening the impact and encouraging ethical practices in statistical learning across various domains. This project promotes equitable and private data handling and provides research training opportunities to students.

The research objective of this project is to develop rigorous statistical frameworks for ethical machine learning, with a focus on algorithmic fairness and data privacy. More specifically, the project will: (1) develop innovative statistical methods that ensure fairness in a finite-sample and distribution-free manner; (2) design algorithms that ensure fairness while complying with societal and legal constraints on sensitive data; (3) establish new frameworks to elucidate the trade-off between statistical accuracy and new privacy concepts in generative AI, including machine unlearning and copyright protection. Taken together, the outcome of this research will build a firm foundation of ethical statistical learning and shed light on the development of new theoretical understanding and practical methodology with algorithmic fairness and privacy guarantees.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2338760","CAREER: Statistical Inference in Observational Studies -- Theory, Methods, and Beyond","DMS","STATISTICS","07/01/2024","01/10/2024","Rajarshi Mukherjee","MA","Harvard University","Continuing Grant","Jun Zhu","06/30/2029","$81,885.00","","rmukherj@hsph.harvard.edu","1033 MASSACHUSETTS AVE STE 3","CAMBRIDGE","MA","021385366","6174955501","MPS","126900","1045","$0.00","Causal inference refers to a systematic way of deciphering causal relationships between entities from empirical observations, an epistemic framework that underlies past, present, and future scientific and social development. For designing statistical methods for causal inference, the gold standard pertains to randomized clinical trials, where the researcher assigns treatment/exposure to subjects under study based on pure chance mechanisms. The random assignment negates systematic bias in the observed relationship between the treatment/exposure and outcome due to unknown common factors referred to as confounders. However, randomized clinical trials are often infeasible, expensive, and ethically challenging. In contrast, modern technological advancement has paved the way for the collection of massive amounts of data across a spectrum of possibilities such as health outcomes, environmental pollution, medical claims, educational policy interventions, and genetic mutations, among many others. Since accounting for confounders in such data is the fundamental aspect of conducting valid causal inference, one of the major foci of modern causal inference research has been to design procedures to account for complex confounding structures without pre-specifying unrealistic statistical models. 
Despite the existence of a large canvas of methods in this discourse, the complete picture of the best statistical methods for inferring the causal effect of an exposure on an outcome while adjusting for arbitrary confounders remains largely open. Moreover, several widely used methods require rigorous theoretical justification and subsequent modification for reproducible statistical research in the domain of causal inference. This project is motivated by these gaps and will be divided into two broad, interconnected themes. In the first part, this project provides the first rigorous theoretical lens on the most popular method of confounder adjustment in large-scale genetic studies to find causal variants of diseases. This will in turn bring forth deeper questions about optimal statistical causal inference procedures that will be explored in the second part of the project. Since the project is designed to connect ideas from across statistical methods, probability theory, computer science, and machine learning, it will provide unique learning opportunities to design new courses and discourses. The project will therefore integrate research with education through course development, research mentoring for undergraduate and graduate students, especially those from underrepresented groups, and summer programs.

This project will focus on two broad and interrelated themes tied together by the motivation of conducting statistical and causal inference with modern observational data. The first part of the project involves providing the first detailed theoretical picture of the most popular principal component-based method of population stratification adjustment in genome-wide association studies. This part of the project also aims to provide new methodologies to correct for existing and previously unknown possible biases in the existing methodology as well as guidelines for practitioners for choosing between methods and design of studies. By recognizing the fundamental tenet of large-scale genetic data analysis as the identification of causal genetic determinants of disease phenotypes, the second part of the project develops the first complete picture of optimal statistical inference of causal effects in both high-dimensional models under sparsity and nonparametric models under smoothness conditions. Moreover, this part of the project responds to the fundamental question of tuning learning algorithms for estimating nuisance functions, such as outcome regression and propensity score for causal effect estimation, to optimize the downstream mean-squared error of causal effect estimates instead of prediction errors associated with these regression functions. The overall research will connect ideas from high-dimensional statistical inference, random matrix theory, higher-order semiparametric methods, and information theory.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." +"2338018","CAREER: Single-Fidelity vs. Multi-Fidelity Computer Experiments: Unveiling the Effectiveness of Multi-Fidelity Emulation","DMS","STATISTICS","06/01/2024","12/05/2023","Chih-Li Sung","MI","Michigan State University","Continuing Grant","Jun Zhu","05/31/2029","$79,437.00","","sungchih@msu.edu","426 AUDITORIUM RD RM 2","EAST LANSING","MI","488242600","5173555040","MPS","126900","1045","$0.00","Computer models have become indispensable tools across diverse fields, enabling the simulation of complex phenomena and facilitating decision-making without costly real-world experiments. Traditionally, computer models are simulated using single, high-accuracy simulations, employing a high level of detail and resolution throughout. Recent advancements, however, have shifted attention towards multi-fidelity simulations, balancing computational cost and accuracy by leveraging various levels of detail and resolution in the simulation. A key question arises: is it more effective to use single-fidelity or multi-fidelity simulations? This is a question practitioners often confront when conducting computer simulations. The research aims to address this fundamental question directly, providing valuable insights for practical decision-making. By leveraging insights gained from computational cost comparisons, the research will enhance the ability to predict complex scientific phenomena accurately and has the potential to revolutionize fields such as engineering, medical science, and biology. The project contributes to outreach and diversity efforts, inspiring youth and increasing female representation in STEM research. 
Moreover, collaborations with diverse research groups, as well as involvement in the REU exchange program, provide opportunities to engage undergraduate students, nurturing their interest in research and encouraging them to pursue careers in STEM. Research findings will be disseminated through publications and conferences. The code developed will be shared to foster collaboration and encourage others to build upon these innovative methodologies.

This research addresses the fundamental question of whether to conduct single-fidelity or multi-fidelity computer experiments by investigating the effectiveness of multi-fidelity simulations. It begins by examining the computational cost comparison between the two approaches, finding that multi-fidelity simulations, under certain conditions, can theoretically require more computational resources while achieving the same predictive ability. To mitigate the negative effects of low-fidelity simulations, a novel and flexible statistical emulator, called the Recursive Nonadditive (RNA) emulator, is proposed to leverage multi-fidelity simulations, and a sequential design scheme based on this emulator is developed, which maximizes the effectiveness by selecting inputs and fidelity levels based on a criterion that balances uncertainty reduction and computational cost. Furthermore, two novel multi-fidelity emulators, called ""secure emulators,"" are developed, which theoretically guarantee superior predictive performance compared to single-fidelity emulators, regardless of design choices.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2339829","CAREER: Statistical foundations of particle tracking and trajectory inference","DMS","STATISTICS","04/01/2024","01/19/2024","Jonathan Niles-Weed","NY","New York University","Continuing Grant","Yong Zeng","03/31/2029","$89,991.00","","jdw453@nyu.edu","70 WASHINGTON SQ S","NEW YORK","NY","100121019","2129982121","MPS","126900","1045","$0.00","Many problems in human microbiology, astronomy, high-energy physics, fluid dynamics, and aeronautics involve large collections of moving ""particles"" with complicated dynamics. Learning how these systems work requires developing statistical procedures for estimating these dynamics on the basis of noisy observations. The goal of this research is to develop scalable, practical, and reliable methods for this task, with a particular focus on developing statistical theory for applications in cosmology, cellular biology, and machine learning. This research will also include a large outreach component based on broadening access to research opportunities for undergraduates and graduate students.

The technical goals of this proposal are to develop computationally efficient estimators for multiple particle tracking in d dimensions when the particles evolve based on a known or unknown stochastic process, to develop Bayesian methods for posterior sampling based on observed trajectories, and to extend these methods to obtain minimax estimation procedures for smooth paths in the Wasserstein space of probability measures. The research also aims to develop estimators for more challenging models with the growth and interaction of particles.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2338464","CAREER: Distribution-Free and Adaptive Statistical Inference","DMS","STATISTICS","01/15/2024","01/11/2024","Lihua Lei","CA","Stanford University","Continuing Grant","Yulia Gel","12/31/2028","$75,564.00","","lihualei@stanford.edu","450 JANE STANFORD WAY","STANFORD","CA","943052004","6507232300","MPS","126900","1045","$0.00","Recent years have witnessed a growing trend across scientific disciplines to embrace complex modeling and black-box machine learning algorithms. Despite the remarkable success of handling complex data structures and fitting sophisticated regression functions, there remains a substantial gap regarding the integration of rigorous statistical principles into these pipelines. The main difficulty revolves around achieving reliable uncertainty quantification and robust statistical inference without artificially simplifying the complexity inherent in these advanced tools. Most existing frameworks that aim to bridge the gap rely on strong assumptions under which the machine learning algorithm can accurately estimate the data generating distribution. Nevertheless, these assumptions are often hard to justify, especially for modern machine learning algorithms that have yet to be fully understood. This research project aims to develop new frameworks for statistical inference that wrap around any machine learning algorithm or complex model without concern about its failure modes. The resulting methods are able to address the potential threats to inferential validity caused by black-box machine learning algorithms in a wide range of applied fields, including medicine, healthcare, economics, political science, epidemiology, and climate sciences. 
Open source software will also be developed to help applied researchers integrate rigorous statistical inference into their domain-specific modeling workflows without compromising the effectiveness of modern tools in non-inferential tasks. This may further alleviate hesitation in adopting modern machine learning methods and catalyze collaboration between scientific and engineering fields. Throughout the project, the PI will mentor undergraduate and graduate students, equipping them with a solid understanding of statistical principles so they can become future leaders in the face of rapidly evolving machine learning techniques.

This proposal will focus on distribution-free inference, which is immune to misspecification of parametric models, violation of nonparametric assumptions like smoothness or shape constraints, inaccuracy of asymptotic approximations due to limited sample size, high dimensionality, boundary cases, or irregularity. To avoid making uninformative decisions, an ideal distribution-free inference framework should also be adaptive to good modeling. This means that it should be as efficient as other frameworks that rely on distributional assumptions. Adaptivity alleviates the tradeoff between robustness and efficiency. The PI will develop distribution-free and adaptive inference frameworks for three specific problems. First, in causal inference, a tighter identified set can be obtained for partially identified causal effects by incorporating pre-treatment covariates. However, existing frameworks for sharp inference require estimating conditional distributions of potential outcomes given covariates. The PI will develop a generic framework based on duality theory that is able to wrap around any estimates of conditional distributions and make distribution-free and adaptive inference. Second, many target parameters in medicine, political economy, and causal inference can be formulated through extrema of the conditional expectation of an outcome given covariates. In contrast to classical methods that impose distributional assumptions to enable consistent estimation of the conditional expectation, the PI will develop a distribution-free framework for testing statistical null hypotheses and constructing valid confidence intervals on the extrema directly. Finally, the use of complex models and prediction algorithms in time series nowcasting and forecasting presents challenges for reliable uncertainty quantification. 
To address this, the PI will develop a framework based on model predictive control and conformal prediction that is able to wrap around any forecasting algorithms and calibrate it to achieve long-term coverage, without any assumptions on the distribution of the time series. The ultimate goal of this research is to bring insights and present a suite of tools to empower statistical reasoning with machine learning and augment machine learning with statistical reasoning.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria." "2335569","Collaborative Research: Planning: FIRE-PLAN:High-Spatiotemporal-Resolution Sensing and Digital Twin to Advance Wildland Fire Science","DMS","S&CC: Smart & Connected Commun, STATISTICS, HDBE-Humans, Disasters, and th, Cross-BIO Activities, EPCN-Energy-Power-Ctrl-Netwrks","01/01/2024","08/09/2023","Xiaolin Hu","GA","Georgia State University Research Foundation, Inc.","Standard Grant","Yulia Gel","12/31/2025","$52,000.00","","xhu@cs.gsu.edu","58 EDGEWOOD AVE NE","ATLANTA","GA","303032921","4044133570","MPS","033Y00, 126900, 163800, 727500, 760700","019E, 042E, 042Z, 132Z, 5294, 7275, 9150","$0.00","The number of catastrophic wildfires in the United States has been steadily increasing in recent decades; these fires cause casualties, large property losses, and dramatic environmental changes. However, it is difficult to make accurate predictions of wildland fire spread in real time for firefighters and emergency response teams. Although many fire spread models have been developed, one of the biggest challenges in their operational use is the lack of ground truth fire data at high spatiotemporal resolutions, which are indispensable for model evaluation and improvements. The objective of this planning project is to bring together wildland fire science researchers, fire sensing and data science experts, and diverse stakeholders to develop standards and requirements for high-spatiotemporal-resolution wildland fire sensing and digital twin construction. An organizing committee will be formed from wildland fire science, engineering, and stakeholder communities including fire ecology and behavior modeling, pollution monitoring, robotics, cyber physical systems (CPS), wildfire fighting, indigenous cultural burns, and prescribed fires. 
A series of physical and remote workshops will be held focusing on themes such as open fire data for wildland fire modeling validation, digital twins for prescribed fires, and safe and efficient wildland fire data collection.

Research tasks of this planning project include: 1) identification of key high-spatiotemporal-resolution fire metrics and data representations to support fire model validation and fire operations, 2) proposal of sensing strategies and algorithms for fire sensing and suppression robots and cyber physical systems that can support safe and efficient collection of desired high-resolution fire data, 3) development and evaluation of data assimilation and digital twin construction using high-resolution data to advance fire behavior modeling, coupled fire-atmosphere modeling, and smoke modeling, and 4) prototype and initial fire data ecosystem demonstration including collection of cultural burn data and establishment of GeoFireData, a benchmark fire data sharing and digital twin website, which can support different fire operation types such as fire spread model validation and controlled burn planning. Special attention will be devoted to interdisciplinary training of the next generation of scientists working with wildfire risks at the interface of computational sciences, engineering, ecology, and data sciences.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria."