add the ability to calculate the gradient of some output function wrt parameters using adjoint sensitivities (faster than forward sensitivities for models with 100s of parameters)
refactor solver state into a solver-specific struct implementing a new trait SolverState, so that each solver can save or load its state and resume
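A minimal sketch of what such a trait could look like, with explicit Euler's state standing in for a real solver's. All names here (`SolverState`, `BdfState`, `save`) are assumptions for illustration, not the crate's actual API:

```rust
// Hypothetical SolverState trait: common accessors plus Clone, so a
// state can be saved (cloned) and later handed back to the solver.
pub trait SolverState: Clone {
    fn t(&self) -> f64;
    fn y(&self) -> &[f64];
}

// Solver-specific state: a BDF solver would also carry its current
// order and step size, which other solvers do not need.
#[derive(Clone, Debug, PartialEq)]
pub struct BdfState {
    pub t: f64,
    pub y: Vec<f64>,
    pub order: usize, // solver-specific: current BDF order
    pub h: f64,       // solver-specific: current step size
}

impl SolverState for BdfState {
    fn t(&self) -> f64 { self.t }
    fn y(&self) -> &[f64] { &self.y }
}

// Saving is just cloning the struct; loading hands the clone back to
// the solver, which resumes stepping from state.t().
pub fn save<S: SolverState>(state: &S) -> S {
    state.clone()
}

fn main() {
    let state = BdfState { t: 1.0, y: vec![0.5, 0.25], order: 2, h: 0.1 };
    let saved = save(&state);
    assert_eq!(saved, state);
    println!("resume from t = {}", saved.t());
}
```

Keeping the per-solver fields (order, step size) inside the concrete struct means a resumed solve continues exactly where it left off, rather than restarting step-size selection from scratch.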
write a generic checkpointing struct that can (a) save a particular solve as a sequence of states defining a list of n segments of the solution trajectory, (b) activate segment i and (c) interpolate the solution at any point in segment i
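A sketch of (a)-(c), with linear interpolation standing in for the solver's own dense output; the struct and method names are hypothetical:

```rust
// Checkpoint times t_0 < t_1 < ... < t_n define n segments of the
// solution trajectory; ys[i] is the saved state at t_i.
pub struct Checkpointing {
    ts: Vec<f64>,
    ys: Vec<Vec<f64>>,
    active: usize, // index of the currently active segment
}

impl Checkpointing {
    // (a) save a particular solve as a sequence of states
    pub fn new(ts: Vec<f64>, ys: Vec<Vec<f64>>) -> Self {
        assert_eq!(ts.len(), ys.len());
        Checkpointing { ts, ys, active: 0 }
    }

    // (b) activate segment i, spanning [t_i, t_{i+1}]
    pub fn activate(&mut self, i: usize) {
        assert!(i + 1 < self.ts.len());
        self.active = i;
    }

    // (c) interpolate the solution at any t within the active segment
    // (linear here; a real solver would use its dense output instead)
    pub fn interpolate(&self, t: f64) -> Vec<f64> {
        let i = self.active;
        let (t0, t1) = (self.ts[i], self.ts[i + 1]);
        let theta = (t - t0) / (t1 - t0);
        self.ys[i]
            .iter()
            .zip(&self.ys[i + 1])
            .map(|(y0, y1)| y0 + theta * (y1 - y0))
            .collect()
    }
}

fn main() {
    let mut cp = Checkpointing::new(
        vec![0.0, 1.0, 2.0],
        vec![vec![0.0], vec![2.0], vec![4.0]],
    );
    cp.activate(1);
    let y = cp.interpolate(1.5);
    println!("y(1.5) = {:?}", y);
}
```

In practice a segment would activate by re-solving from the saved state at `t_i` (trading compute for memory), but the interface is the same.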
solve_integrate_adjoint(t_max): solves the adjoint problem with a functional given by f = \int_0^{t_max} out(t) dt. Returns dfdp, where p is the parameter vector and dfdp is a dense matrix. Steps are (a) do the forward solve, (b) save the forward solve via checkpointing and create the adjoint equations, (c) solve the adjoint equations in reverse time, (d) return the solution of the adjoint equations at the final time as the result
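Steps (a)-(d) can be sketched end-to-end on a scalar test problem (a stand-in, not the library API): dy/dt = -p*y, y(0) = 1, out(t) = y(t), so f = \int_0^T y dt = (1 - exp(-p*T))/p. The adjoint satisfies dlambda/dt = p*lambda - 1 backwards from lambda(T) = 0, and dfdp = \int_0^T -lambda*y dt. Explicit Euler stands in for the solver:

```rust
pub fn adjoint_dfdp(p: f64, t_max: f64, n: usize) -> f64 {
    let dt = t_max / n as f64;
    // (a) forward solve, (b) store the trajectory (checkpointing stand-in)
    let mut ys = vec![1.0f64];
    for k in 0..n {
        let y = ys[k];
        ys.push(y + dt * (-p * y));
    }
    // (c) solve the adjoint in reverse time, accumulating the
    // quadrature for dfdp = \int -lambda*y dt along the way
    let mut lam = 0.0f64;
    let mut dfdp = 0.0f64;
    for k in (1..=n).rev() {
        dfdp += dt * (-lam * ys[k]);
        lam -= dt * (p * lam - 1.0);
    }
    // (d) the accumulated integral at t = 0 is the gradient
    dfdp
}

// Closed-form d/dp of f = (1 - exp(-p*T))/p, for checking.
pub fn analytic_dfdp(p: f64, t_max: f64) -> f64 {
    t_max * (-p * t_max).exp() / p - (1.0 - (-p * t_max).exp()) / (p * p)
}

fn main() {
    let got = adjoint_dfdp(0.5, 2.0, 20_000);
    let want = analytic_dfdp(0.5, 2.0);
    assert!((got - want).abs() < 1e-2);
    println!("adjoint dfdp = {got:.4}, analytic = {want:.4}");
}
```

The key property: the reverse sweep costs one adjoint solve regardless of how many parameters p has, which is where the speedup over forward sensitivities comes from.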
solve_sum_squared_adjoint(t_discrete, data): solves the adjoint problem with a functional given by f = \sum_i (out(t_i) - data_i)^2. Returns the same as above. Steps are (a) do the forward solve, (b) save the forward solve via checkpointing and create the adjoint equations, (c) solve the adjoint equations in reverse time, using event pullbacks to adjust the state at each data point, (d) return the solution of the adjoint equations at the final time as the result
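The same scalar stand-in model (dy/dt = -p*y, out = y; not the real API) illustrates the event pullbacks: between data points the adjoint obeys dlambda/dt = p*lambda with no source term, and at each data time t_i (walking backwards) the pullback adds 2*(y(t_i) - data_i) to lambda. The result is checked against a central finite difference of the same discrete functional:

```rust
// Forward solve of f = sum_i (y(t_i) - d_i)^2 with explicit Euler.
pub fn sum_squared(p: f64, t_data: &[f64], data: &[f64], n: usize) -> f64 {
    let t_max = *t_data.last().unwrap();
    let dt = t_max / n as f64;
    let (mut y, mut f) = (1.0f64, 0.0f64);
    for k in 0..=n {
        let t = k as f64 * dt;
        for (ti, di) in t_data.iter().zip(data) {
            if (t - ti).abs() < 0.5 * dt {
                f += (y - di) * (y - di); // grid point nearest t_i
            }
        }
        y += dt * (-p * y);
    }
    f
}

pub fn sum_squared_adjoint_dfdp(p: f64, t_data: &[f64], data: &[f64], n: usize) -> f64 {
    let t_max = *t_data.last().unwrap();
    let dt = t_max / n as f64;
    // (a) forward solve, (b) store the trajectory
    let mut ys = vec![1.0f64];
    for k in 0..n {
        let y = ys[k];
        ys.push(y + dt * (-p * y));
    }
    // (c) reverse-time adjoint with a pullback at each data point
    let mut lam = 0.0f64;
    let mut dfdp = 0.0f64;
    for k in (1..=n).rev() {
        let t = k as f64 * dt;
        for (ti, di) in t_data.iter().zip(data) {
            if (t - ti).abs() < 0.5 * dt {
                lam += 2.0 * (ys[k] - di); // event pullback
            }
        }
        dfdp += dt * (-lam * ys[k]);
        lam -= dt * (p * lam);
    }
    dfdp // (d)
}

fn main() {
    let (t_data, data) = (vec![0.5, 1.0, 1.5], vec![0.8, 0.6, 0.45]);
    let (p, n, eps) = (0.5, 15_000, 1e-5);
    let grad = sum_squared_adjoint_dfdp(p, &t_data, &data, n);
    let fd = (sum_squared(p + eps, &t_data, &data, n)
        - sum_squared(p - eps, &t_data, &data, n)) / (2.0 * eps);
    assert!((grad - fd).abs() < 1e-3);
    println!("adjoint = {grad:.5}, finite difference = {fd:.5}");
}
```

A real implementation would locate the data times exactly (as solver events) rather than snapping to the nearest grid point as this sketch does.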
add solve_integrate and solve_sum_squared, same as above but returning the value of f instead
add solve_integrate_fwd and solve_sum_squared_fwd, same as above but use forward sensitivities instead of adjoints
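For contrast, a forward-sensitivity version of the integral functional on the same stand-in model (dy/dt = -p*y, out = y; names are illustrative): augment the state with s = dy/dp, which obeys ds/dt = -p*s - y with s(0) = 0, and accumulate dfdp = \int_0^T s dt alongside the forward solve:

```rust
pub fn forward_dfdp(p: f64, t_max: f64, n: usize) -> f64 {
    let dt = t_max / n as f64;
    // state y, sensitivity s = dy/dp, and the running quadrature
    let (mut y, mut s, mut dfdp) = (1.0f64, 0.0f64, 0.0f64);
    for _ in 0..n {
        dfdp += dt * s;
        // step the augmented (state + sensitivity) system together
        let (dy, ds) = (-p * y, -p * s - y);
        y += dt * dy;
        s += dt * ds;
    }
    dfdp
}

// Closed-form d/dp of f = (1 - exp(-p*T))/p, for checking.
pub fn analytic_dfdp(p: f64, t_max: f64) -> f64 {
    t_max * (-p * t_max).exp() / p - (1.0 - (-p * t_max).exp()) / (p * p)
}

fn main() {
    let got = forward_dfdp(0.5, 2.0, 20_000);
    let want = analytic_dfdp(0.5, 2.0);
    assert!((got - want).abs() < 1e-2);
    println!("forward dfdp = {got:.4}, analytic = {want:.4}");
}
```

No reverse sweep or checkpointing is needed, but each extra parameter adds another sensitivity ODE to the forward system, which is why this scales worse than the adjoint for models with 100s of parameters.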