Below I describe an algorithm for solving an initial value problem (IVP) with any embedded Runge–Kutta method, of any order and for any dimensionality of the system, based on the algorithm I implemented in General-algorithm-of-the-explicit-Runge-Kutta-method. I haven't implemented adaptive step size here; maybe I'll add it in a while. There are two scripts: one for methods with the FSAL property and one for methods without it. The FSAL (First Same As Last) property means that the last stage of the current step is evaluated at the same point as the first stage of the next step, so the function value computed at the end of the current step can be reused as the first stage of the next step without recalculation. Compared to my algorithm for explicit methods, the output only adds an array with the estimates of local errors (ELE).
- Embedded Runge—Kutta methods. Butcher tableau
- Description of the implemented algorithm
- Example
- Notes
- Planned Features
- References
Let an initial value problem (IVP) be specified as follows:

$$\frac{d\mathbf{x}}{dt} = \mathbf{f}(t, \mathbf{x}), \qquad \mathbf{x}(t_0) = \mathbf{x}_0,$$

where $\mathbf{x}(t) \in \mathbb{R}^m$ is the unknown vector function and $\mathbf{f}(t, \mathbf{x})$ is its right-hand side.
The idea behind the embedded Runge–Kutta methods is to provide two approximations on each step:

- $\mathbf{x}_{n+1}$ is of order $p$;
- $\mathbf{\hat{x}}_{n+1}$ is of order $\hat{p}$ (usually $\hat{p} = p \pm 1$).

The difference between them gives an estimate of the local error of the less accurate result and can be used to control the step size:

$$\mathbf{e}_{n+1} = \mathbf{\hat{x}}_{n+1} - \mathbf{x}_{n+1} = \tau \sum_{i=1}^{s} \left(\hat{b}_i - b_i\right) \mathbf{k}_i.$$

The approximation $\mathbf{x}_{n+1}$ is used to advance the solution:

$$\mathbf{x}_{n+1} = \mathbf{x}_n + \tau \sum_{i=1}^{s} b_i \mathbf{k}_i, \qquad \mathbf{\hat{x}}_{n+1} = \mathbf{x}_n + \tau \sum_{i=1}^{s} \hat{b}_i \mathbf{k}_i.$$

Butcher tableau for an embedded method with $s$ stages has the form:

$$\begin{array}{c|ccccc}
c_1 & & & & & \\
c_2 & a_{21} & & & & \\
c_3 & a_{31} & a_{32} & & & \\
\vdots & \vdots & & \ddots & & \\
c_s & a_{s1} & a_{s2} & \cdots & a_{s,s-1} & \\
\hline
 & b_1 & b_2 & \cdots & b_{s-1} & b_s \\
 & \hat{b}_1 & \hat{b}_2 & \cdots & \hat{b}_{s-1} & \hat{b}_s
\end{array}$$
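As a concrete illustration (my example, not one of the methods used in the repository), the simplest embedded pair, Heun–Euler RK2(1), has the tableau:

$$\begin{array}{c|cc}
0 & & \\
1 & 1 & \\
\hline
 & 1/2 & 1/2 \\
 & 1 & 0
\end{array}$$

Here $\mathbf{b}$ gives Heun's second-order method, which advances the solution, while $\mathbf{\hat{b}}$ gives the first-order Euler method, so $\mathbf{\hat{b}} - \mathbf{b}$ estimates the local error of the less accurate (Euler) result.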
The procedure for filling the matrix $\mathbf{K}$ is identical to the one in my algorithm for Explicit Runge–Kutta methods. At each iteration (also take a look here) we need to initialize the matrix of stage derivatives

$$\mathbf{K} = \begin{bmatrix} \mathbf{k}_1 & \mathbf{k}_2 & \cdots & \mathbf{k}_s \end{bmatrix} \in \mathbb{R}^{m \times s}$$

and the matrix of solution values, whose columns are filled step by step. Then the formulas for filling the matrix $\mathbf{K}$ at step $n$ are:

$$\mathbf{k}_1 = \mathbf{f}(t_n, \mathbf{x}_n), \qquad \mathbf{k}_i = \mathbf{f}\Bigl(t_n + c_i \tau,\; \mathbf{x}_n + \tau \sum_{j=1}^{i-1} a_{ij} \mathbf{k}_j\Bigr), \quad i = 2, \dots, s.$$
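A single step of this scheme can be sketched in Python/NumPy (my translation of the idea, not part of the MATLAB scripts; the Heun–Euler coefficients below are only an illustrative choice):

```python
import numpy as np

def embedded_rk_step(c, A, b, b_hat, odefun, t_n, x_n, tau):
    """One step of a generic embedded Runge-Kutta method.

    Returns the advanced solution (computed with b) and the local
    error estimate tau * K @ (b_hat - b)."""
    s = len(c)
    m = len(x_n)
    K = np.zeros((m, s))
    K[:, 0] = odefun(t_n, x_n)
    for i in range(1, s):
        # Each stage uses only the previously computed columns of K.
        K[:, i] = odefun(t_n + tau * c[i],
                         x_n + tau * K[:, :i] @ A[i, :i])
    x_next = x_n + tau * K @ b
    err = tau * K @ (b_hat - b)
    return x_next, err

# Heun-Euler RK2(1) pair on dx/dt = x, one step from x(0) = 1:
c = np.array([0.0, 1.0])
A = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([0.5, 0.5])        # order 2 (Heun)
b_hat = np.array([1.0, 0.0])    # order 1 (Euler)
x1, e1 = embedded_rk_step(c, A, b, b_hat,
                          lambda t, x: x, 0.0, np.array([1.0]), 0.1)
# x1 = 1.105, e1 = -0.005
```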
The ExampleOfUse.mlx file shows how to obtain The Lotka–Volterra Attractor

and The TSUCS2 Attractor

The Lotka–Volterra Attractor has been obtained with the DVERK method, which has no FSAL property, so it's important to use the odeEmbeddedGeneral (odeEmbeddedGeneral_optimized) function.

The TSUCS2 Attractor has been obtained with the RK5(4)7M method, which has the FSAL property, so it's recommended to use odeFSALEmbeddedGeneral (odeFSALEmbeddedGeneral_optimized), which takes this property into account; it's also possible to use the odeEmbeddedGeneral (odeEmbeddedGeneral_optimized) function, only in that case FSAL won't be exploited.
- `c_vector`: vector of coefficients $\mathbf{c}$ of the Butcher tableau for the selected method;
- `A_matrix`: matrix of coefficients $\mathbf{A}$ of the Butcher tableau for the selected method;
- `b_vector`: vector of coefficients $\mathbf{b}$ of the Butcher tableau for the selected method;
- `b_hat_vector`: vector of coefficients $\mathbf{\hat{b}}$ of the Butcher tableau for the selected method;
- `odefun`: functions to solve, specified as a function handle that defines the functions to be integrated;
- `tspan`: interval of integration, specified as a two-element vector;
- `tau`: time discretization step;
- `incond`: vector of initial conditions.

The functions return:

- `t`: vector of evaluation points used to perform the integration;
- `xsol`: solution matrix in which each row corresponds to the solution at the value returned in the corresponding row of `t`;
- `ELE`: vector of estimated local errors, in which each element corresponds to the estimated local error at the value returned in the corresponding row of `t`.
The code in the odeEmbeddedGeneral.m & odeFSALEmbeddedGeneral.m scripts shows a more illustrative integration procedure, for understanding from a theoretical point of view. The optimized versions of these scripts, odeEmbeddedGeneral_optimized.m & odeFSALEmbeddedGeneral_optimized.m, look as follows:
```matlab
function [t, xsol, ELE] = odeEmbeddedGeneral_optimized(c_vector, A_matrix, b_vector, b_hat_vector, odefun, tspan, tau, incond)
    s_stages = length(c_vector);
    m = length(incond);
    c_vector = reshape(c_vector, [s_stages 1]);
    b_vector = reshape(b_vector, [s_stages 1]);
    b_hat_vector = reshape(b_hat_vector, [s_stages 1]);
    d_vector = b_hat_vector - b_vector; % weights of the local error estimate
    incond = reshape(incond, [m 1]);
    t = (tspan(1) : tau : tspan(2))';
    xsol = zeros(length(incond), length(t));
    xsol(:, 1) = incond(:);
    K_matrix = zeros(m, s_stages); % allocated once, columns overwritten below
    ELE = zeros(length(t), 1);
    for n = 1:length(t)-1
        K_matrix(:, 1) = odefun(t(n), xsol(:, n));
        for i = 2:s_stages
            K_matrix(:, i) = odefun(t(n) + tau * c_vector(i), xsol(:, n) + tau * K_matrix(:, 1:i-1) * A_matrix(i, 1:i-1)');
        end
        xsol(:, n+1) = xsol(:, n) + tau * K_matrix * b_vector;
        ELE(n+1) = norm(tau * K_matrix * d_vector, "inf");
    end
    xsol = xsol';
end
```
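For readers without MATLAB, the same fixed-step driver can be sketched in Python/NumPy (my translation, not part of the repository; the Heun–Euler RK2(1) coefficients below are only an illustrative choice of method):

```python
import numpy as np

def ode_embedded_general(c, A, b, b_hat, odefun, tspan, tau, incond):
    """Fixed-step generic embedded RK integrator (no FSAL).

    Returns the time grid t, the solution (one row per time point)
    and the infinity-norm local error estimates ELE."""
    s = len(c)
    x0 = np.asarray(incond, dtype=float)
    m = len(x0)
    d = np.asarray(b_hat, dtype=float) - np.asarray(b, dtype=float)
    n_steps = round((tspan[1] - tspan[0]) / tau)
    t = tspan[0] + tau * np.arange(n_steps + 1)
    xsol = np.zeros((m, n_steps + 1))
    xsol[:, 0] = x0
    K = np.zeros((m, s))          # allocated once, columns overwritten
    ELE = np.zeros(n_steps + 1)
    for n in range(n_steps):
        K[:, 0] = odefun(t[n], xsol[:, n])
        for i in range(1, s):
            K[:, i] = odefun(t[n] + tau * c[i],
                             xsol[:, n] + tau * K[:, :i] @ A[i, :i])
        xsol[:, n + 1] = xsol[:, n] + tau * K @ b
        ELE[n + 1] = np.max(np.abs(tau * K @ d))
    return t, xsol.T, ELE

# Heun-Euler RK2(1) pair on dx/dt = x, x(0) = 1, over [0, 1]:
c = np.array([0.0, 1.0])
A = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([0.5, 0.5])
b_hat = np.array([1.0, 0.0])
t, xsol, ELE = ode_embedded_general(c, A, b, b_hat,
                                    lambda t, x: x, (0.0, 1.0), 1e-3, [1.0])
```

The final value `xsol[-1, 0]` approximates $e \approx 2.71828$ with an error on the order of $\tau^2$, as expected for a second-order method.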
```matlab
function [t, xsol, ELE] = odeFSALEmbeddedGeneral_optimized(c_vector, A_matrix, b_vector, b_hat_vector, odefun, tspan, tau, incond)
    s_stages = length(c_vector);
    m = length(incond);
    c_vector = reshape(c_vector, [s_stages 1]);
    b_vector = reshape(b_vector, [s_stages 1]);
    b_hat_vector = reshape(b_hat_vector, [s_stages 1]);
    d_vector = b_hat_vector - b_vector; % weights of the local error estimate
    incond = reshape(incond, [m 1]);
    t = (tspan(1) : tau : tspan(2))';
    xsol = zeros(length(incond), length(t));
    xsol(:, 1) = incond(:);
    K_matrix = zeros(m, s_stages);
    ELE = zeros(length(t), 1);
    K_matrix(:, s_stages) = odefun(t(1), xsol(:, 1)); % seed for the first step
    for n = 1:length(t)-1
        K_matrix(:, 1) = K_matrix(:, s_stages); % FSAL: reuse the last stage
        for i = 2:s_stages
            K_matrix(:, i) = odefun(t(n) + tau * c_vector(i), xsol(:, n) + tau * K_matrix(:, 1:i-1) * A_matrix(i, 1:i-1)');
        end
        xsol(:, n+1) = xsol(:, n) + tau * K_matrix * b_vector;
        ELE(n+1) = norm(tau * K_matrix * d_vector, "inf");
    end
    xsol = xsol';
end
```
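The FSAL bookkeeping can be sketched the same way in Python/NumPy (again my translation, not part of the repository; the Bogacki–Shampine RK3(2) coefficients are one well-known FSAL pair, used here only as an illustration):

```python
import numpy as np

def ode_fsal_embedded_general(c, A, b, b_hat, odefun, tspan, tau, incond):
    """Fixed-step embedded RK integrator for FSAL methods.

    For an FSAL tableau the last row of A equals b and c[-1] = 1, so the
    final stage of step n equals the first stage of step n + 1 and one
    function evaluation per step is saved."""
    s = len(c)
    x0 = np.asarray(incond, dtype=float)
    m = len(x0)
    d = np.asarray(b_hat, dtype=float) - np.asarray(b, dtype=float)
    n_steps = round((tspan[1] - tspan[0]) / tau)
    t = tspan[0] + tau * np.arange(n_steps + 1)
    xsol = np.zeros((m, n_steps + 1))
    xsol[:, 0] = x0
    K = np.zeros((m, s))
    ELE = np.zeros(n_steps + 1)
    K[:, -1] = odefun(t[0], x0)   # seed: becomes stage 1 of the first step
    for n in range(n_steps):
        K[:, 0] = K[:, -1]        # FSAL: reuse, no new function evaluation
        for i in range(1, s):
            K[:, i] = odefun(t[n] + tau * c[i],
                             xsol[:, n] + tau * K[:, :i] @ A[i, :i])
        xsol[:, n + 1] = xsol[:, n] + tau * K @ b
        ELE[n + 1] = np.max(np.abs(tau * K @ d))
    return t, xsol.T, ELE

# Bogacki-Shampine RK3(2), an FSAL pair, on dx/dt = x over [0, 1]:
c = np.array([0.0, 0.5, 0.75, 1.0])
A = np.array([[0.0,  0.0,  0.0, 0.0],
              [0.5,  0.0,  0.0, 0.0],
              [0.0,  0.75, 0.0, 0.0],
              [2/9,  1/3,  4/9, 0.0]])  # last row equals b: FSAL
b = np.array([2/9, 1/3, 4/9, 0.0])
b_hat = np.array([7/24, 1/4, 1/3, 1/8])
t, xsol, ELE = ode_fsal_embedded_general(c, A, b, b_hat,
                                         lambda t, x: x, (0.0, 1.0), 1e-2, [1.0])
```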
Only 27 & 28 lines for such a powerful instrument; it looks awesome, doesn't it?
Here no unnecessary variables are created, and K_matrix is initialized as a zero matrix only once, because the algorithm doesn't need to refill it with zeros at each iteration: each stage simply overwrites its column, never reading the columns with coefficients left over from the previous iteration:

K_matrix(:, i) = odefun(t(n) + tau * c_vector(i), xsol(:, n) + tau * K_matrix(:, 1:i-1) * A_matrix(i, 1:i-1)')
- based on this script, add specific integrators, as I did here with the Runge–Kutta method of order 4;
- add adaptive step size.
- Butcher, J. C. (2016). Numerical Methods for Ordinary Differential Equations (3rd ed.). Wiley. https://doi.org/10.1002/9781119121534
- Hairer, E., Nørsett, S. P., & Wanner, G. (1993). Solving Ordinary Differential Equations I: Nonstiff Problems (2nd ed.). Springer. https://doi.org/10.1007/978-3-540-78862-1
- Dormand, J. R., & Prince, P. J. (1980). A family of embedded Runge-Kutta formulae. Journal of Computational and Applied Mathematics, 6(1), 19–26. https://doi.org/10.1016/0771-050x(80)90013-3
- Samardzija, N., & Greller, L. D. (1988). Explosive route to chaos through a fractal torus in a generalized Lotka-Volterra model. Bulletin of Mathematical Biology, 50(5), 465–491. https://doi.org/10.1007/BF02458847
- Li, D. (2008). A three-scroll chaotic attractor. Physics Letters A, 372(4), 387–393. https://doi.org/10.1016/j.physleta.2007.07.045