
fixing small issues
Signed-off-by: Steven Hahn <[email protected]>
quantumsteve committed Aug 22, 2024
1 parent 58a4137 commit 1e81af3
Showing 2 changed files with 2 additions and 1 deletion.
1 change: 1 addition & 0 deletions paper/paper.bib
@@ -124,6 +124,7 @@ @misc{atanasov2017
eprint={1710.09356},
archivePrefix={arXiv},
primaryClass={cs.NA},
+journal={arXiv},
doi = {10.48550/arXiv.1710.09356},
URL = {https://doi.org/10.48550/arXiv.1710.09356}
}
2 changes: 1 addition & 1 deletion paper/paper.md
@@ -81,7 +81,7 @@ bibliography: paper.bib

# Summary

-Many areas of science exhibit physical processes that are described by high dimensional partial differential equations (PDEs), e.g., the 4D [@dorf2013], 5D [@candy2009] and 6D models [@juno2018] describing magnetized fusion plasmas, models describing quantum chemistry, or derivatives pricing [@bandrauk2007]. In such problems, the so called "curse of dimensionality" whereby the number of degrees of freedom (or unknowns) required to be solved for scales as $N^D$ where $N$ is the number of grid points in any given dimension $D$. A simple, albeit naive, 6D example is demonstrated in the left panel of Figure \ref{fig:scaling}. With $N=1000$ grid points in each dimension, the memory required just to store the solution vector, not to mention forming the matrix required to advance such a system in time, would exceed an exabyte - and also the available memory on the largest of supercomputers available today. The right panel of Figure \ref{fig:scaling} demonstrates potential savings for a range of problem dimensionalities and grid resolution. While there are methods to simulate such high-dimensional systems, they are mostly based on Monte-Carlo methods [@e2020], which rely on a statistical sampling such that the resulting solutions include noise. Since the noise in such methods can only be reduced at a rate proportional to $\sqrt{N_p}$ where $N_p$ is the number of Monte-Carlo samples, there is a need for continuum, or grid/mesh-based methods for high-dimensional problems, which both do not suffer from noise and bypass the curse of dimensionality. We present a simulation framework that provides such a method using adaptive sparse grids [@pfluger2010].
+Many areas of science exhibit physical processes that are described by high-dimensional partial differential equations (PDEs), e.g., the 4D [@dorf2013], 5D [@candy2009], and 6D models [@juno2018] describing magnetized fusion plasmas, models describing quantum chemistry, or derivatives pricing [@bandrauk2007]. Such problems are affected by the so-called "curse of dimensionality," whereby the number of degrees of freedom (or unknowns) that must be solved for scales as $N^D$, where $N$ is the number of grid points per dimension and $D$ is the number of dimensions. A simple, albeit naive, 6D example is shown in the left panel of Figure \ref{fig:scaling}. With $N=1000$ grid points in each dimension, the memory required just to store the solution vector, not to mention forming the matrix required to advance such a system in time, would exceed an exabyte, as well as the available memory on the largest supercomputers today. The right panel of Figure \ref{fig:scaling} demonstrates the potential savings for a range of problem dimensionalities and grid resolutions. While there are methods to simulate such high-dimensional systems, they are mostly based on Monte Carlo methods [@e2020], which rely on statistical sampling such that the resulting solutions include noise. Since the noise in such methods can only be reduced at a rate proportional to $1/\sqrt{N_p}$, where $N_p$ is the number of Monte Carlo samples, there is a need for continuum (grid/mesh-based) methods for high-dimensional problems, which avoid statistical noise and bypass the curse of dimensionality. We present a simulation framework that provides such a method using adaptive sparse grids [@pfluger2010].

The Adaptive Sparse Grid Discretization (ASGarD) code is a framework specifically targeted at solving high-dimensional PDEs using a Discontinuous Galerkin finite element solver implemented atop an adaptive sparse grid basis. The adaptivity allows the sparsity of the basis to be tailored to the properties of the problem of interest, which retains the advantages of sparse grids in cases where the standard sparse grid selection rule is not the best match. A prototype of the non-adaptive sparse-grid implementation was used to produce the results of @dazevedo2020 for the 3D time-domain Maxwell's equations. ASGarD's functionality was recently extended to solve the Vlasov–Poisson–Lenard–Bernstein model at lower computational cost [@schnake2024]. The implementation utilizes both CPU and GPU resources and is single- and multi-node capable. Performance portability is achieved by casting the computational kernels as linear algebra operations and relying on vendor-provided BLAS libraries. Several test problems are provided, including advection up to 6D with either explicit or implicit timestepping.

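The exabyte figure in the revised paragraph above is straightforward to sanity-check: a full tensor grid holds $N^D$ unknowns, and storing each one costs a fixed number of bytes. A minimal sketch of that arithmetic (assuming 8-byte double-precision values; illustrative only, not part of the commit or of ASGarD):

```python
# Memory needed just to store a full-grid solution vector of N^D unknowns,
# assuming 8-byte (double-precision) values.
def full_grid_bytes(n: int, d: int, bytes_per_value: int = 8) -> int:
    return bytes_per_value * n ** d

# The paper's naive 6D example: N = 1000 grid points in each of 6 dimensions.
print(full_grid_bytes(1000, 6) / 1e18, "exabytes")  # -> 8.0 exabytes
```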
Expand Down
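The savings shown in the figure's right panel come from keeping only a sparse selection of hierarchical levels. The counting behind such a comparison can be sketched as below, assuming the standard sparse-grid selection rule $|\mathbf{l}|_1 \le L + D - 1$ with a 1D level $l$ contributing $2^{l-1}$ points; this is a hypothetical back-of-the-envelope count, not ASGarD's implementation:

```python
# Count degrees of freedom: full tensor grid vs. standard sparse grid.
# A 1D hierarchical level l contributes 2^(l-1) points (l = 1, ..., L),
# so a full 1D grid of level L has 2^L - 1 points in total.
from itertools import product

def full_grid_dof(level: int, dim: int) -> int:
    return (2 ** level - 1) ** dim

def sparse_grid_dof(level: int, dim: int) -> int:
    # Standard selection rule: keep multi-levels l with |l|_1 <= level + dim - 1.
    total = 0
    for l in product(range(1, level + 1), repeat=dim):
        if sum(l) <= level + dim - 1:
            total += 2 ** (sum(l) - dim)  # = product over dimensions of 2^(l_i - 1)
    return total

L, D = 10, 6  # roughly 10^3 points per dimension, six dimensions
print(f"full grid:   {full_grid_dof(L, D):.2e} unknowns")   # ~1.15e+18
print(f"sparse grid: {sparse_grid_dof(L, D):.2e} unknowns") # ~1.50e+06
```

Under these assumptions the sparse grid carries roughly twelve orders of magnitude fewer unknowns, which is what makes a continuum discretization of a 6D problem tractable at all.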
