PDE and PDAE interface #997
Note that this discussion is missing probably the most important detail, which is that there already is a high-level, discretization-independent PDE interface: the PDESystem. https://docs.sciml.ai/ModelingToolkit/stable/systems/PDESystem This is how finite difference, finite volume, and physics-informed neural network discretizations all use the same interface. How it works was last recorded back in the ModelingToolkit workshop https://www.youtube.com/live/HEVOgSLBzWA?si=JRHE4qro1CKnx_O4&t=1970. I think it needs an updated video that goes into more detail, but it's at least a good overview of the core ideas behind it. One of the things we are looking to do in the near future (as is described in some of the seminar talks that I give) is to incorporate pseudospectral methods, Trixi, VoronoiFVM, and Ferrite into this same interface. @xtalax is on the path to handle pseudospectral, but it would be nice to get some help on getting the two others in. The core issue right now is that we do need to be a bit more careful about domain specifications. PDESystem right now uses DomainSets.jl in order to specify the domains of integration, but since all of the solvers we have right now only support hypercube domains we haven't dealt too much with this aspect. I think the core of what we want is a two-way street between the symbolic domain specification and whatever internal mesh representation a discretizer needs.
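For concreteness, here is roughly what that discretization-independent specification looks like for the 1D heat equation, following the MethodOfLines.jl tutorial (the exact step size and solver choice are just illustrative):
using ModelingToolkit, MethodOfLines, DomainSets, OrdinaryDiffEq

@parameters t x
@variables u(..)
Dt  = Differential(t)
Dxx = Differential(x)^2

eq  = Dt(u(t, x)) ~ Dxx(u(t, x))
bcs = [u(0, x) ~ sin(pi * x),
       u(t, 0) ~ 0.0,
       u(t, 1) ~ 0.0]

# Hypercube domain, specified through DomainSets.jl
domains = [t ∈ Interval(0.0, 1.0),
           x ∈ Interval(0.0, 1.0)]

@named pdesys = PDESystem(eq, bcs, domains, [t, x], [u(t, x)])

# A discretizer consumes the same specification; here finite differences in x:
disc = MOLFiniteDifference([x => 0.1], t)
prob = discretize(pdesys, disc)   # -> ODEProblem
sol  = solve(prob, Tsit5())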
From this specification, a finite difference method can look at that and throw a high level error saying "I only support these domains", while a finite element method would look at a general DomainSets domain and turn it into a mesh before solving. Now the purpose of the common mesh format is that the symbolic interface is supposed to be solver independent. However, when you have to specify the domain in some "discretized" way, that gets a bit hairy. This is why a common mesh format would be required, since at some level you need to be able to talk about such domains but then have it communicate to any solver. As discussed on Slack, the core point here is that it's likely that any solver will have its own internal formats anyway, because it will specialize how it wants to do the compute, and so there will be a step in the symbolic interface where a PDE solver like Ferrite first interprets the symbolic expression, then converts the common mesh into the mesh it wants, and then solves on the converted mesh. The big aspect this then ties into is the common mesh format itself. (Note that there are also discussions of a different kind of PDESystem.)
The answer there is effectively no, it's not possible. The reason is that the functional form can change: a reaction-diffusion-advection equation is described by three functions (MATLAB's pdepe), but then the KS equation needs the 3rd and 4th derivatives, and then you have some nonlinear cases that don't fit in, and ... It's not possible to have an interface that covers all of the possible functions that someone would have to provide on every space, which is why we went back to the drawing board and came back with the symbolic PDESystem interface. In the end, the only general specification is symbolic in some form, since that is what can target different functional forms.
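For instance, the Kuramoto-Sivashinsky equation needs third and fourth derivatives, yet symbolically it is just another equation in the same interface (a sketch in the same style as above):
using ModelingToolkit

@parameters t x
@variables u(..)
Dt = Differential(t)
Dx = Differential(x)

# KS equation: no fixed pdepe-style functional template covers this, but the symbolic form does.
eq = Dt(u(t, x)) + u(t, x) * Dx(u(t, x)) + (Dx^2)(u(t, x)) + (Dx^4)(u(t, x)) ~ 0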
Every PDE is a PDAE though, BTW, with the boundary conditions as algebraic equations. Standard methods simply perform tearing on the algebraic equations in order to get an ODE, but with ModelingToolkit-style transformations you can naively discretize ignoring the BCs and then apply the tearing pass and automatically get the same ODE by reducing the algebraic equations. You can show, for example, that things like ghost points are exactly equivalent to this method. So using ModelingToolkit in this way makes the generalization of PDEs to PDAEs trivial, since the DAEs are just handled the same way as BCs and the BCs require no special handling. For SPDEs there's a similar story. But yes, there can be other ways of representing the PDE systems.
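A small hand-rolled sketch of that idea, naively discretizing the 1D heat equation and keeping the Dirichlet BCs as algebraic equations (ModelingToolkit v8-style syntax; newer versions declare the independent variable with @independent_variables):
using ModelingToolkit

@parameters t
@variables u1(t) u2(t) u3(t) u4(t) u5(t)
D  = Differential(t)
dx = 0.25

eqs = [0 ~ u1,                           # left Dirichlet BC kept as an algebraic equation
       D(u2) ~ (u1 - 2u2 + u3) / dx^2,   # naive interior stencil, ignoring the BCs
       D(u3) ~ (u2 - 2u3 + u4) / dx^2,
       D(u4) ~ (u3 - 2u4 + u5) / dx^2,
       0 ~ u5]                           # right Dirichlet BC kept as an algebraic equation

@named sys = ODESystem(eqs, t)
odesys = structural_simplify(sys)        # tearing/alias elimination removes the BC equations,
                                         # leaving an ODE in the interior unknowns only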
Note that's not quite right. These are "the same thing" in the sense of a PDESystem. The PDESystem interface is all about discretizing a high level PDE description into any of the computable mathematical forms. The computable mathematical forms are things like NonlinearProblem, ODEProblem, SDEProblem, OptimizationProblem, etc. So for example, in this higher level idea of "a discretization is a compiler transformation of PDESystem into XSystem", you have relations like the ones sketched below.
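For example (illustrative, reusing pdesys, t, x from the heat-equation sketch above; constructor details follow the MethodOfLines.jl and NeuralPDE.jl docs and may differ between versions):
using MethodOfLines, NeuralPDE, Lux

# Finite differences: PDESystem -> ODESystem -> ODEProblem
ode_prob = discretize(pdesys, MOLFiniteDifference([x => 0.1], t))

# Physics-informed neural network: PDESystem -> OptimizationProblem
chain    = Lux.Chain(Lux.Dense(2, 16, tanh), Lux.Dense(16, 1))
opt_prob = discretize(pdesys, PhysicsInformedNN(chain, QuadratureTraining()))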
What is important here is that the PDESystem is transformed into a numerically solvable system type, with observable attributes so that the solution can be interpreted back in terms of the original PDE's variables. For PDESystem, there are then two things: the symbolic specification of the PDE, and the discretizer that lowers it to one of those computable forms.
So for example, if someone has a weird PDE discretization they want to build that "requires" lowering to a loop, in the parlance of this general system that's "just" a discretizer which takes in a PDESystem and spits out one of the computable problem types directly.
I think this is covered by the previous answer.
I'll just point to https://docs.sciml.ai/MethodOfLines/stable/tutorials/heatss/ as an example of doing exactly that in the high level interface.
If you take a look at MethodOfLines.jl in its specification, you have to choose which independent variables to discretize. If you choose to discretize all of the variables, you just get a NonlinearSystem instead of an ODESystem. So this kind of thing should be handled by the discretizer.
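A sketch of that steady-state case, in the spirit of the linked tutorial (2D Laplace equation; all independent variables are discretized, so discretize returns a NonlinearProblem):
using ModelingToolkit, MethodOfLines, DomainSets, NonlinearSolve

@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2

eq  = Dxx(u(x, y)) + Dyy(u(x, y)) ~ 0
bcs = [u(0, y) ~ 0.0, u(1, y) ~ y,
       u(x, 0) ~ 0.0, u(x, 1) ~ x]
domains = [x ∈ Interval(0.0, 1.0), y ∈ Interval(0.0, 1.0)]
@named pdesys = PDESystem([eq], bcs, domains, [x, y], [u(x, y)])

disc = MOLFiniteDifference([x => 0.1, y => 0.1], nothing)  # no remaining continuous variable
prob = discretize(pdesys, disc)                            # -> NonlinearProblem
sol  = solve(prob, NewtonRaphson())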
One disadvantage of going purely symbolic is that sometimes people want to specify these things with data (as a vector), but in order to be completely general (since you don't know what the discretizer will do) you do have to specify these things continuously. That's not the worst thing in the world; you simply have to put an interpolation over the data you wish to specify. An example of that is shown here: https://docs.sciml.ai/MethodOfLines/stable/tutorials/icbc_sampled/.
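A sketch of that pattern, assuming DataInterpolations.jl for the interpolant (the linked tutorial shows the full workflow):
using DataInterpolations, ModelingToolkit

xs      = range(0.0, 1.0, length = 11)
samples = sin.(pi .* xs)                    # stand-in for measured/sampled data
itp     = LinearInterpolation(samples, xs)  # DataInterpolations.jl: values first, then knots

u0_fun(x) = itp(x)
@register_symbolic u0_fun(x)                # make it usable inside symbolic equations

# The sampled data then enters the PDESystem as a continuous initial condition, e.g.
#   u(0, x) ~ u0_fun(x)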
So then, as you can see from the previous portions, what's required is that a re-interpretation map is provided with the solution. This is what the PDESolution interface provides.
Right now it has seemed to be "fine", but yes, as we grow the discretization set it may be difficult. I think one nice way to handle this would be for each discretizer package to have its own system type. So for example, MethodOfLines should probably add an extra lowering stage in here with an intermediate system type of its own. If we let users target that lowered form, the connection there would not be discretizer independent, but it could expose the full functionality of the discretizer.
Those are "just" different discretizers. So, what's next? What I'm trying to say here is that we have done a ton with PDESystem / PDESolution and it seems to be working. We wanted to get that working well enough before taking the next step, and I think we're at the right point to be doing that transition. The next step of course is a giant leap, and that's to go beyond what we have with MethodOfLines.jl and NeuralPDE.jl and start incorporating Trixi.jl, Ferrite.jl, VoronoiFVM.jl, and start allowing discretize dispatches that support arbitrary meshes. That's going to be a major leap, but I've been talking with @xtalax about it already and we were going to start doing that very soon. In fact, we have a current grant under which we have to be doing this 😅 , so the process has already started. We would very much appreciate your help getting Ferrite into this system if you're willing to go along for the ride.
Yes, that's the next step here. These are the interfaces we're trying to support, each tested on the heat equation.
Due to grant obligations, we're working on the Decapodes.jl one ASAP (that's already started), and that's one that requires handling unstructured meshes, so that's why this process is already underway. So any help to get Ferrite into the same form could be tremendously helpful.
Thanks for such a detailed answer Chris. I have added the first paragraph as a citation in my answer for visibility. I really forgot about these packages, sorry. I will comment on some parts. Skipped parts are not ignored; I just agree with them. :)
You are probably aware, but I should note it anyway. Discretizing the geometry can quickly become a really hard issue, especially in 3D, and not doing it properly will negatively impact the downstream performance. We also unfortunately lack infrastructure in Julia to do this (which is related to the issue that we don't have a nice common mesh interface). I think Gridap, Ferrite and Trixi can all handle Gmsh (which is commonly used for FEM in the groups I know), so maybe a starter here would be to see how we could build an interface between DomainSets.jl and Gmsh. With OpenCascade we should be able to generate nice tetrahedral grids with moderate effort. However, for the first steps we can constrain ourselves to the geometries provided by DomainSets.jl, because every framework can easily deal with the discretization of these by themselves. I think this is reasonable to get the discussion going and to work our way towards answering how to deal with this part of the pipeline.
Not sure if this is really hard, but maybe I am missing something. Let me elaborate. Speaking for Ferrite at least, we have infrastructure to evaluate the solution at arbitrary points. I haven't implemented this part, but basically we build a KDTree and then search for N candidate elements. For each candidate you then have to solve either a linear or nonlinear problem (depending on whether the geometry mapping is linear or nonlinear) and then you can already evaluate. This should not be hard to port to other frameworks if they don't provide something similar themselves. For the temporal interpolation we should be able to just use whatever the attached time integrator provides. It's not recommended to do visualization with, though. Each framework has at least a basic visualizer and I am actively working on unifying these (in Makie), but this will likely take some more months because of missing spare time and hands. Still, such an evaluation is very helpful for postprocessing and debugging purposes.
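Concretely, in Ferrite this is the PointEvalHandler; a rough sketch (the evaluation function has been renamed between Ferrite releases, so treat the exact call as approximate):
using Ferrite

grid = generate_grid(Quadrilateral, (20, 20))
ip   = Lagrange{RefQuadrilateral, 1}()
dh   = DofHandler(grid); add!(dh, :u, ip); close!(dh)
u    = rand(ndofs(dh))                        # stand-in for a computed solution vector

points = [Vec((0.1, 0.2)), Vec((0.35, 0.8))]  # arbitrary evaluation points in the domain
ph     = PointEvalHandler(grid, points)       # KD-tree candidate search + local inverse mapping
vals   = evaluate_at_points(ph, dh, u, :u)    # get_point_values in older Ferrite releases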
I think I disagree with parts of the answer here, but I think this stems from me not being precise in the question. It should not be impossible. We could make the same argument for time dependent problems (ODEs/SDEs/...), where we have a very nice classification and the trick of reducing them to first order ODEs. For PDEs we can apply a similar trick to get systems of first order PDEs. The evaluation function can then be handled analogously to how you handle DAEs in the ecosystem, i.e. by evaluating the residual. On the other side, I definitely agree on the point that symbolic representations can be super useful!
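To make the reduction concrete: just as $\ddot{y} = g(y, \dot{y})$ becomes a first-order ODE system via the auxiliary variable $v = \dot{y}$, the heat equation $\partial_t u = \partial_{x_1}(k\,\partial_{x_1} u)$ can be rewritten as the first-order (mixed) system $q = \partial_{x_1} u$, $\partial_t u = \partial_{x_1}(k\,q)$.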
I see, and it kind of makes sense to me. I treat them differently because for boundary conditions it is usually clear (in finite elements at least) what to do in the (semi-)discretization. Not disagreeing on this point, though.
Thanks for the detailed view here. I think I get your point: the common (and unquestionably important) thing is that we need to track and convey information about the PDE into all subproblems, no matter how we (semi-)discretize. And I think we basically agree on the high level: we have to track the discretization information recursively. I have to think more about whether leaving some derivatives intact is really something special and how it may affect the internal treatment.
Sure. Happy to help here. I just want to note transparently that my spare time is currently a bit limited, because I agreed to help in several projects already, but I think I should still be able to fit this in. I have already started working on a Symbolics.jl utility layer on top of Ferrite, but it is still super limited. I could only manage to get some simple 1D problem working for now. I haven't bothered to open issues for the problems I encountered, because they basically boil down to JuliaSymbolics/Symbolics.jl#973 . Copy-pasta of my work is below in the spoiler. Maybe this can be helpful as a starter. My idea was basically to start in the inner loops and work my way "to the outside", but I never came very far. It follows a Poisson problem in 1D, which is the translation of the first tutorial https://ferrite-fem.github.io/Ferrite.jl/dev/tutorials/heat_equation/ . This example requires the master branch of Ferrite.
using Ferrite, ModelingToolkit, LinearAlgebra, SparseArrays
using Test  # needed for the @test checks further down
import Symbolics: ClosedInterval
using SymbolicUtils
# TODO correct definitions. For now a 1D stand-in for the gradient, hard-coded to the single coordinate x₁.
function grad(var)
Differential(x₁)(var)
end
# Element
@parameters x₁
Ωₑ = (x₁ ∈ ClosedInterval(-1,1)) # TODO replace with RefLine
∫ = Integral(Ωₑ)
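# Rewrite rules: extract_integrand strips the ∫(...) wrapper to expose the integrand;
# recover_cdot is a helper to turn * back into ⋅ (not used further below).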
extract_integrand = @rule ∫(~ex) => ~ex
recover_cdot = @rule ~x * ~y => ~x ⋅ ~y
# Bilinear form
a(u,v) = ∫(grad(u) ⋅ grad(v))
# Linear form
@parameters f
b(v) = ∫(1.0*v)
@variables u(..) δu(..)
# System
eqs_weak = [a(u(x₁),δu(x₁)) ~ b(δu(x₁))]
grid = generate_grid(Line, (20,));
ip = Lagrange{RefLine, 1}()
qr = QuadratureRule{RefLine}(2)
cellvalues = CellValues(qr, ip);
dh = DofHandler(grid)
add!(dh, :u, ip)
close!(dh);
K = create_sparsity_pattern(dh);
ch = ConstraintHandler(dh);
∂Ω = union(
getfaceset(grid, "left"),
getfaceset(grid, "right"),
);
dbc = Dirichlet(:u, ∂Ω, (x, t) -> 0)
add!(ch, dbc);
close!(ch)
function assemble_element!(Ke::Matrix, fe::Vector, cellvalues::CellValues)
n_basefuncs = getnbasefunctions(cellvalues)
## Reset to 0
fill!(Ke, 0)
fill!(fe, 0)
## Loop over quadrature points
for q_point in 1:getnquadpoints(cellvalues)
## Get the quadrature weight
dΩ = getdetJdV(cellvalues, q_point)
## Loop over test shape functions
for i in 1:n_basefuncs
δu = shape_value(cellvalues, q_point, i)
∇δu = shape_gradient(cellvalues, q_point, i)
## Add contribution to fe
fe[i] += δu * dΩ
## Loop over trial shape functions
for j in 1:n_basefuncs
∇u = shape_gradient(cellvalues, q_point, j)
## Add contribution to Ke
Ke[i, j] += (∇δu ⋅ ∇u) * dΩ
end
end
end
return Ke, fe
end
function assemble_global(cellvalues::CellValues, K::SparseMatrixCSC, dh::DofHandler)
## Allocate the element stiffness matrix and element force vector
n_basefuncs = getnbasefunctions(cellvalues)
Ke = zeros(n_basefuncs, n_basefuncs)
fe = zeros(n_basefuncs)
## Allocate global force vector f
f = zeros(ndofs(dh))
## Create an assembler
assembler = start_assemble(K, f)
## Loop over all cells
for cell in CellIterator(dh)
## Reinitialize cellvalues for this cell
reinit!(cellvalues, cell)
## Compute element contribution
assemble_element!(Ke, fe, cellvalues)
## Assemble Ke and fe into K and f
assemble!(assembler, celldofs(cell), Ke, fe)
end
return K, f
end
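# Variant 2: the same assembly loop, but the quadrature-point contributions are obtained by
# substituting the shape function values/gradients into the symbolic weak-form equation.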
function assemble_element!(Ke::Matrix, fe::Vector, cellvalues::CellValues, eq::Equation)
n_basefuncs = getnbasefunctions(cellvalues)
## Reset to 0
fill!(Ke, 0)
fill!(fe, 0)
## Loop over quadrature points
for q_point in 1:getnquadpoints(cellvalues)
## Get the quadrature weight
dΩ = getdetJdV(cellvalues, q_point)
## Loop over test shape functions
for i in 1:n_basefuncs
δuᵢ = shape_value(cellvalues, q_point, i)
∇δuᵢ = shape_gradient(cellvalues, q_point, i)
## Add contribution to fe
fe[i] += Symbolics.value(substitute(eq.rhs, Dict(δu(x₁) => δuᵢ))) * dΩ
## Loop over trial shape functions
for j in 1:n_basefuncs
∇uᵢ = shape_gradient(cellvalues, q_point, j)
## Add contribution to Ke
Ke[i, j] += Symbolics.value(substitute(eq.lhs, Dict(grad(u(x₁)) => ∇uᵢ[1], grad(δu(x₁)) => ∇δuᵢ[1]))) * dΩ
end
end
end
return Ke, fe
end
function assemble_global(cellvalues::CellValues, K::SparseMatrixCSC, dh::DofHandler, eq::Equation)
eq_element = extract_integrand(eq.lhs) ~ extract_integrand(eq.rhs)  # use the equation passed in, not the global eqs_weak
## Allocate the element stiffness matrix and element force vector
n_basefuncs = getnbasefunctions(cellvalues)
Ke = zeros(n_basefuncs, n_basefuncs)
fe = zeros(n_basefuncs)
## Allocate global force vector f
f = zeros(ndofs(dh))
## Create an assembler
assembler = start_assemble(K, f)
## Loop over all cells
for cell in CellIterator(dh)
## Reinitialize cellvalues for this cell
reinit!(cellvalues, cell)
## Compute element contribution
assemble_element!(Ke, fe, cellvalues, eq_element)
## Assemble Ke and fe into K and f
assemble!(assembler, celldofs(cell), Ke, fe)
end
return K, f
end
K, f = assemble_global(cellvalues, K, dh);
apply!(K, f, ch)
u₁ = K \ f;
K, f = assemble_global(cellvalues, K, dh, eqs_weak[1]);
apply!(K, f, ch)
u₂ = K \ f;
@test u₁ ≈ u₂
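# Variant 3: compile the integrands once into plain Julia functions with build_function,
# then call the compiled kernels inside the assembly loop instead of substituting symbolically.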
eq_qp_lhs = extract_integrand(eqs_weak[1].lhs)
@variables ∇δuᵢ ∇uⱼ
eq_qp_lhs_2 = substitute(eq_qp_lhs, Dict(grad(u(x₁)) => ∇uⱼ, grad(δu(x₁)) => ∇δuᵢ))
bilinearform_evaluation = build_function(eq_qp_lhs_2, ∇δuᵢ, ∇uⱼ, expression = Val{false})
eq_qp_rhs = extract_integrand(eqs_weak[1].rhs)
@variables δuᵢ
eq_qp_rhs_2 = substitute(eq_qp_rhs, Dict(δu(x₁) => δuᵢ))
linearform_evaluation = build_function(eq_qp_rhs_2, δuᵢ, expression = Val{false})
function assemble_element!(Ke::Matrix, fe::Vector, cellvalues::CellValues, bifo::typeof(bilinearform_evaluation), lifo::typeof(linearform_evaluation))
n_basefuncs = getnbasefunctions(cellvalues)
## Reset to 0
fill!(Ke, 0)
fill!(fe, 0)
## Loop over quadrature points
for q_point in 1:getnquadpoints(cellvalues)
## Get the quadrature weight
dΩ = getdetJdV(cellvalues, q_point)
## Loop over test shape functions
for i in 1:n_basefuncs
## Linear form
uᵢ = shape_value(cellvalues, q_point, i)
fe[i] += lifo(uᵢ) * dΩ
## Bilinear form
∇uᵢ = shape_gradient(cellvalues, q_point, i)
## Loop over trial shape functions
for j in 1:n_basefuncs
∇uⱼ = shape_gradient(cellvalues, q_point, j)
## Add contribution to Ke
Ke[i, j] += bifo(∇uᵢ[1], ∇uⱼ[1]) * dΩ
end
end
end
return Ke, fe
end
function assemble_global(cellvalues::CellValues, K::SparseMatrixCSC, dh::DofHandler, bifo::typeof(bilinearform_evaluation), lifo::typeof(linearform_evaluation))
## Allocate the element stiffness matrix and element force vector
n_basefuncs = getnbasefunctions(cellvalues)
Ke = zeros(n_basefuncs, n_basefuncs)
fe = zeros(n_basefuncs)
## Allocate global force vector f
f = zeros(ndofs(dh))
## Create an assembler
assembler = start_assemble(K, f)
## Loop over all cells
for cell in CellIterator(dh)
## Reinitialize cellvalues for this cell
reinit!(cellvalues, cell)
## Compute element contribution
assemble_element!(Ke, fe, cellvalues, bifo, lifo)
## Assemble Ke and fe into K and f
assemble!(assembler, celldofs(cell), Ke, fe)
end
return K, f
end
K, f = assemble_global(cellvalues, K, dh, bilinearform_evaluation, linearform_evaluation);
apply!(K, f, ch)
u₃ = K \ f;
@test u₁ ≈ u₃
With this issue I would like to get the discussion about a PDE/PDAE interface for the DifferentialEquations.jl ecosystem started.
I should also note that I might be quite biased, because I am also helping in the development of Ferrite.jl, so I am reliant on information from other PDE solver developers, especially the ones using more specialized discretization techniques.
Problem specification
For completeness let me start by stating the abstract problem: for some "nice" domain $\Omega \times T$ (where $T$ might be empty) we want to approximate a solution $u(x,t)$ of the problem $F(x,t,u,\partial_t u, \partial_{x_1} u, \partial_{x_2} u, \ldots, \partial^n_{x_1} \partial^m_{x_2} u, \ldots) = 0$ together with a set of boundary conditions.
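For instance, the transient heat equation with a source term $f$ fits this form with $F = \partial_t u - \partial^2_{x_1} u - \partial^2_{x_2} u - f$.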
For the approximation of such problems we usually use two classes of approaches.
A. Direct methods (e.g. space-time finite element discretization for space-time problems, or just classical finite elements for time independent problems) -> Transforms the problem to a (non)linear problem, which we could solve with (Non)linearSolve.jl
B. Partitioned techniques (e.g. semidiscretize only in space with finite elements, or semidiscretize only in time) -> Transforms the problem into a PDE/ODE/... which needs further treatment.
Note: The argument is analogous for e.g. SPDE problems.
Another relevant difficulty is the specific form of the PDE. E.g. strong form? Weak form? Ultra-weak form? Mixed form? Dual form? Other form?
"State of the art"
Currently many large frameworks provide suitable functions to evaluate their discretizations, which are then passed down to DifferentialEquations.jl (e.g. into an ODEProblem; I have not seen a NonlinearProblem yet). I think this is one way to see it. However, in the long run it would be nice to have a common interface to e.g. simplify the comparison of different discretization methods for the same problem without fully implementing everything in different frameworks.
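A minimal sketch of that hand-off, with a toy 1D stencil standing in for the matrices a framework would assemble (semidiscretized heat equation $M \dot{u} = -K u + b$):
using OrdinaryDiffEq, SparseArrays

n = 50
h = 1 / (n + 1)
K = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(2.0, n), 1 => fill(-1.0, n - 1)) ./ h^2
M = spdiagm(0 => ones(n))                # toy (lumped) mass matrix; a real FEM M is not diagonal
b = ones(n)

rhs!(du, u, p, t) = (du .= b .- K * u)   # the framework-provided evaluation function
ode_f = ODEFunction(rhs!, mass_matrix = M)
prob  = ODEProblem(ode_f, zeros(n), (0.0, 1.0))
sol   = solve(prob, Rodas5P())           # a mass-matrix-aware solver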
EDIT 1: For visibility I put part of Chris' answer here, because I totally forgot about these when writing this up. Sorry!
Design considerations
For me the tricky part is always the separation of concerns. I try to summarize which concrete questions make it difficult for me:
- Should there be a single PDEProblem, or should we have multiple ones, and what is the granularity? (Related: What about PDAEs/SDEs/...?)
- Is there a PDEFunction analogon to an ODEFunction?
- An InitialBoundaryValueProblem would be maximally inconsistent as a concrete type. So maybe it should be some different types of FooBarPDEFunctions (not even sure how to name these now :))?

Also a bit further down the road:
- How do we support as many backends as possible, and what should the interface look like?
Example and starting point
I think we should start with time-independent problems (but we should not forget about the space-time case).
Let me try to sketch a workflow on a high level (no time dependence), to elaborate why I ended up with the high granularity: we pick a framework-specific discretization per field, e.g. something like GridAPGalerkinDiscretization(field1 => (LagrangePolynomials, GaussQuadrature), ...).
More specifically, we might think about it like this in some hypothetical code for some basic steady state heat problem:
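(All of the names in this sketch, such as PDEProblem, GalerkinDiscretization, LagrangePolynomials, GaussQuadrature, steady_heat_model, and the discretize overload, are made up for illustration; nothing here is an existing API.)
# Hypothetical API sketch -- none of these names exist as such today.
grid  = generate_grid(Quadrilateral, (20, 20))      # mesh from a framework (Ferrite-style)
model = steady_heat_model(k = 1.0, f = 1.0)         # hypothetical description of -div(k grad u) = f

disc  = GalerkinDiscretization(:u => (LagrangePolynomials(1), GaussQuadrature(2)))
prob  = PDEProblem(model, grid)                     # hypothetical problem type
lprob = discretize(prob, disc)                      # lowers to a (Non)linearProblem
uh    = solve(lprob)                                # handled by LinearSolve.jl / NonlinearSolve.jl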
Implementing this should be straightforward. However, it is not clear what exactly model should be, because the specific model evaluation is highly framework dependent. We might be able to pass some MTK representation, but I am not sure how useful this is, because the frameworks still need to figure out how to discretize the specific form, which can be non-trivial (this is basically what e.g. FEniCS does, and even after big efforts it is still limited). We can also pass the complete symbolic representation of the problem around, but then we might end up eliminating any framework usage (and performance). Alternatively we can pass e.g. "element routines" (e.g. what to do on a single finite element for a finite element route). However, this design comes with its own limitations (e.g. it cannot handle element patches and interfaces). Furthermore, it is not compatible with other approaches like, for example, generalized finite differences or some finite volume techniques.
Maybe a good starting point is to get a PDEProblem to solve the heat equation example with some frameworks?
https://ferrite-fem.github.io/Ferrite.jl/dev/tutorials/heat_equation/
https://gridap.github.io/Tutorials/dev/pages/t001_poisson/
https://j-fu.github.io/VoronoiFVM.jl/stable/examples/Example201_Laplace2D/
Ambiguities to consider
We already have https://github.com/SciML/BoundaryValueDiffEq.jl to solve boundary value problems, however it is designed around 1D problems. The PDE community often uses the term "boundary value problem" synonymously for a PDE problem with given boundary conditions in any dimension (in contrast to initial value and initial-boundary value problems).
Relevant issues
References