
Add Interpolation & ParametrizedInterpolation #314

Merged
46 commits merged into SciML:main from the smc/interp branch on Oct 24, 2024

Conversation

SebastianM-C
Contributor

@SebastianM-C SebastianM-C commented Aug 2, 2024

Checklist

  • Appropriate tests were added
  • Any code changes were done in a way that does not break public API
  • All documentation related to code changes were updated
  • The new code follows the
    contributor guidelines, in particular the SciML Style Guide and
    COLPRAC.
  • Any new documentation only uses public API

Additional context

This PR proposes a new way of doing interpolations with MTKStdLib, taking advantage of the new MTK@v9 features.

As an example, this would look like

using ModelingToolkit
using ModelingToolkit: t_nounits as t, D_nounits as D
using ModelingToolkitStandardLibrary.Blocks
using DataInterpolations
using OrdinaryDiffEq

@variables y(t) = 0
@parameters u[1:15] = rand(15)
@parameters x[1:15] = 0:14

@named i = ParametrizedInterpolation(LinearInterpolation, u, x)
eqs = [D(y) ~ i.output.u]

@named model = ODESystem(eqs, t, systems=[i])
sys = structural_simplify(model)

prob = ODEProblem(sys, [], (0.0, 4))
sol = solve(prob)

The main difference when compared to the approaches of TimeVaryingFunction and SampledData is that the interpolation data is now represented via symbolic parameters, taking advantage of the new array symbolics.

This allows us to change the interpolation data without rebuilding the system, via the standard remake and setp, making it SII-compatible, as opposed to SampledData.

using Plots
plot(sol)

prob_new = remake(prob, p=[sys.i.u=>rand(15)])
sol_new = solve(prob_new)
plot!(sol_new)

[image: plot of the original and remade solutions]
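
For completeness, a minimal sketch of the setp route mentioned above (setp comes from SymbolicIndexingInterface and operates on the already-built problem; whether the interpolation object itself is rebuilt from the new data depends on the parameter-dependency handling discussed later in this thread):

using SymbolicIndexingInterface: setp

set_data! = setp(sys, sys.i.u)  # build a setter for the interpolation data parameter
set_data!(prob, rand(15))       # mutate the parameter values stored in prob in place
sol_inplace = solve(prob)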

Moreover, the codegen is simplified, thus avoiding inlining the interpolation objects in the expression.

using RuntimeGeneratedFunctions
expr = RuntimeGeneratedFunctions.get_expression(prob.f.f.f_iip)

gives

:((ˍ₋out, ˍ₋arg1, ˍ₋arg2, ˍ₋arg3, ˍ₋arg4, t)->begin
          #= /home/sebastian/.julia/packages/SymbolicUtils/EGhOJ/src/code.jl:373 =#
          #= /home/sebastian/.julia/packages/SymbolicUtils/EGhOJ/src/code.jl:374 =#
          #= /home/sebastian/.julia/packages/SymbolicUtils/EGhOJ/src/code.jl:375 =#
          begin
              begin
                  begin
                      var"i₊output₊u(t)" = (ModelingToolkitStandardLibrary.Blocks.apply_interpolation)(t, ˍ₋arg4[1])
                      begin
                          #= /home/sebastian/.julia/packages/Symbolics/2UpZj/src/build_function.jl:546 =#
                          #= /home/sebastian/.julia/packages/SymbolicUtils/EGhOJ/src/code.jl:422 =# @inbounds begin
                                  #= /home/sebastian/.julia/packages/SymbolicUtils/EGhOJ/src/code.jl:418 =#
                                  ˍ₋out[1] = var"i₊output₊u(t)"
                                  #= /home/sebastian/.julia/packages/SymbolicUtils/EGhOJ/src/code.jl:420 =#
                                  nothing
                              end
                      end
                  end
              end
          end
      end)

There are a couple of things that I would like to mention about the implementation:

  • I tried to make this independent of the interpolation package, but we need a function that expresses calling the interpolation (apply_interpolation) and this needs to be registered for each interpolation type. I implemented an extension for DataInterpolations, so that's preferred over other packages, but I believe that it should be possible to use this with a custom interpolation (see the sketch after this list).

  • I'm assuming 1D interpolation. I think this can be theoretically extended to more than that, but I'm not sure if that's something that's an immediate priority.

  • In the extension implementation I have to register apply_interpolation, but this requires Symbolics. Since Symbolics is a direct dependency of MTKStdLib, I am loading it via using ModelingToolkitStandardLibrary.Blocks.Symbolics, is this how I am supposed to do it?

  • Should this replace or deprecate SampledData? This should be able to do more things (it's not restricted to a particular interpolation type) with a less complex implementation.
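
As a rough illustration of the custom-interpolation point above, here is a hedged sketch of what hooking up one's own interpolation type might look like; MyZeroOrderHold and my_apply_interpolation are hypothetical names, not the actual apply_interpolation implementation from this PR:

using Symbolics

struct MyZeroOrderHold{U <: AbstractVector, X <: AbstractVector}
    u::U  # data values
    x::X  # sample points, assumed sorted
end

# Evaluate the interpolation at time t (piecewise-constant hold).
(itp::MyZeroOrderHold)(t) = itp.u[clamp(searchsortedlast(itp.x, t), 1, length(itp.u))]

# A generic "apply" entry point for the generated code to call.
my_apply_interpolation(t, itp) = itp(t)

# Register it so symbolic tracing treats the call as an opaque primitive.
Symbolics.@register_symbolic my_apply_interpolation(t, itp::MyZeroOrderHold)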

cc @ChrisRackauckas @baggepinnen

Edit: There have been a couple of changes in this PR, but the main idea is that it introduces interpolations as specialized blocks in MTKStdLib. We have two types of blocks:

  • Interpolation: this is the recommended one in most cases
  • ParametrizedInterpolation: this is recommended if one wants to optimize the data or change it without rebuilding the model. Currently this is slower due to using GeneralLazyBufferCache for AD compatibility.

Both blocks have inputs and outputs, and the DataInterpolations extension is no longer needed. The docs were updated to demonstrate how to use these blocks, and they no longer use TimeVaryingFunction.
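
For reference, a hedged sketch of the Interpolation block in a toy system, assuming it takes the same (interp_type, u, x) arguments as ParametrizedInterpolation; the clock-driven input connector mirrors the reproduction example later in this thread:

using ModelingToolkit
using ModelingToolkit: t_nounits as t, D_nounits as D
using ModelingToolkitStandardLibrary.Blocks
using DataInterpolations
using OrdinaryDiffEq

u_data = rand(15)
x_data = 0.0:14.0

@variables y(t) = 0
@named src = Interpolation(LinearInterpolation, u_data, x_data)
@named clk = ContinuousClock()

eqs = [connect(clk.output, src.input)
       D(y) ~ src.output.u]

@named model = ODESystem(eqs, t; systems = [src, clk])
sys = structural_simplify(model)
prob = ODEProblem(sys, [], (0.0, 4.0))
sol = solve(prob)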

@ChrisRackauckas
Member

This is what I was proposing should just replace the current SampledData and TimeVaryingFunction.

In the extension implementation I have to register apply_interpolation, but this requires Symbolics. Since Symbolics is a direct dependency of MTKStdLib, I am loading it via using ModelingToolkitStandardLibrary.Blocks.Symbolics, is this how I am supposed to do it?

Yes

I tried to make this independent of the interpolation package, but we need a function that expresses calling the interpolation (apply_interpolation) and this needs to be registered for each interpolation type. I implemented an extension for DataInterpolations, so that's preferred over other packages, but I believe that it should be possible to use this with a custom interpolation.

We only really need DataInterpolations. We can make a SciML package InterpolationInterface.jl if we need to, but right now DataInterpolations is the only useful interpolations package since Interpolations.jl has way too many limitations, so I wouldn't worry about it.

@baggepinnen
Contributor

This is what I was proposing should just replace the current SampledData and TimeVaryingFunction.

It should replace SampledData, but TimeVaryingFunction is more generally useful for non-interpolation functions.

@SebastianM-C
Contributor Author

Regarding derivatives, I know that DataInterpolations has special handling, so I just forward the Symbolics.derivative call on apply_interpolation to the underlying interpolation. Is there anything else that I need to add?
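
A hedged, self-contained sketch of what that forwarding could look like for a registered call, using hypothetical names (itp_call) rather than the actual apply_interpolation code from this PR; the derivative rule simply builds a call to a second registered function that evaluates DataInterpolations.derivative:

using Symbolics
using DataInterpolations

itp_call(t, itp) = itp(t)
itp_call_derivative(t, itp) = DataInterpolations.derivative(itp, t)

Symbolics.@register_symbolic itp_call(t, itp::DataInterpolations.AbstractInterpolation)
Symbolics.@register_symbolic itp_call_derivative(t, itp::DataInterpolations.AbstractInterpolation)

# The derivative with respect to the first argument (t) is forwarded to the
# interpolation's own derivative.
function Symbolics.derivative(::typeof(itp_call), args::NTuple{2, Any}, ::Val{1})
    itp_call_derivative(args[1], args[2])
end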

@SebastianM-C
Contributor Author

SebastianM-C commented Aug 5, 2024

The docs failures are quite strange. Looking at the dc motor code and running it locally, I don't understand how

connect(sys.speed_sensor.w, :y, feedback.input2)
connect(sys.pi_controller.ctr_output, :u, source.V)

runs in CI given that feedback & source are defined inside the model. Locally this errors (as expected), but CI errors a bit after.

Moving on, the error is

ERROR: Initialization expression model₊inertia₊phi(t) is currently not supported. If its a higher order derivative expression, then only the dummy derivative expressions are supported.

which I believe is just a misleading error message; using sys.inertia.phi instead of model.inertia.phi works, indicating that the namespacing of the component is the issue.

I don't understand how this tutorial worked in the first place 😅

I also bumped the MTK compat since this PR needs parameter dependencies and that was broken for some time. In any case, [email protected] is ancient, and it again surprises me that no one had to bump it before to get the downgrade CI working 😅

@baggepinnen
Contributor

runs in CI given that feedback & source are defined inside the model. Locally this errors (as expected), but CI errors a bit after.

It doesn't run in CI, it's a julia block and not an @example block.

Blocks.get_comp_sensitivity(model, :y, op = Dict(model.inertia.phi => 0.0, model.inertia.w => 0.0))

It could have worked since 0 was the default value before, but it no longer is.

@SebastianM-C
Contributor Author

@AayushSabharwal I think there might be a codegen bug in the latest MTK.

The prob.f.f.f_iip looks like

julia> prob.f.f.f_iip
RuntimeGeneratedFunction(#=in ModelingToolkit=#, #=using ModelingToolkit=#, :((ˍ₋out, ˍ₋arg1, ˍ₋arg2, ˍ₋arg3, ˍ₋arg4, ˍ₋arg5, t)->begin
          #= /home/sebastian/.julia/packages/SymbolicUtils/EGhOJ/src/code.jl:373 =#
          #= /home/sebastian/.julia/packages/SymbolicUtils/EGhOJ/src/code.jl:374 =#
          #= /home/sebastian/.julia/packages/SymbolicUtils/EGhOJ/src/code.jl:375 =#
          begin
              begin
                  i₊x = reshape(view(ˍ₋arg2, 1:15), (15,))
                  i₊u = reshape(view(ˍ₋arg2, 16:30), (15,))
                  begin
                      var"i₊output₊u(t)" = (ModelingToolkitStandardLibrary.Blocks.apply_interpolation)(ˍ₋arg5[1], t)
                      begin
                          #= /home/sebastian/.julia/packages/Symbolics/qKoME/src/build_function.jl:546 =#
                          #= /home/sebastian/.julia/packages/SymbolicUtils/EGhOJ/src/code.jl:422 =# @inbounds begin
                                  #= /home/sebastian/.julia/packages/SymbolicUtils/EGhOJ/src/code.jl:418 =#
                                  ˍ₋out[1] = var"i₊output₊u(t)"
                                  #= /home/sebastian/.julia/packages/SymbolicUtils/EGhOJ/src/code.jl:420 =#
                                  nothing
                              end
                      end
                  end
              end
          end
      end))

but the parameters are

julia> collect(prob.p)
4-element Vector{Any}:
 [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0  …  0.4452005375902247, 0.27260738099007953, 0.12211828219228682, 0.07903709080712873, 0.45123745321084807, 0.872779020856403, 0.7417928230058485, 0.7812799868464684, 0.6814546815330172, 0.32821342669957054]
 LinearInterpolation{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Vector{Float64}, Vector{Float64}, Float64}[LinearInterpolation{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Vector{Float64}, Vector{Float64}, Float64}([0.16623813954545685, 0.8138788623862058, 0.936459137051247, 0.6545058313889524, 0.29096219036766036, 0.4452005375902247, 0.27260738099007953, 0.12211828219228682, 0.07903709080712873, 0.45123745321084807, 0.872779020856403, 0.7417928230058485, 0.7812799868464684, 0.6814546815330172, 0.32821342669957054], 0.0:1.0:14.0, Float64[], DataInterpolations.LinearParameterCache{Vector{Float64}}(Float64[]), false, Base.RefValue{Int64}(1), false, false)]
 UnionAll[LinearInterpolation]
 Tuple[()]

The problem is that

var"i₊output₊u(t)" = (ModelingToolkitStandardLibrary.Blocks.apply_interpolation)(ˍ₋arg5[1], t)

calls apply_interpolation with ˍ₋arg5, but that's not the correct argument. It should be the one that corresponds to the dependent parameter (prob.p.dependent[1][1]), the interpolation object in this case.

This worked on [email protected], so I'm assuming that the recent changes in [email protected] might have introduced this.

@SebastianM-C
Contributor Author

The current MTK#master does not have this error, but in the codegen I see

julia> prob.f.f.f_iip
RuntimeGeneratedFunction(#=in ModelingToolkit=#, #=using ModelingToolkit=#, :((ˍ₋out, ˍ₋arg1, ˍ₋arg2, ˍ₋arg3, ˍ₋arg4, t)->begin
          #= /home/sebastian/.julia/packages/SymbolicUtils/EGhOJ/src/code.jl:373 =#
          #= /home/sebastian/.julia/packages/SymbolicUtils/EGhOJ/src/code.jl:374 =#
          #= /home/sebastian/.julia/packages/SymbolicUtils/EGhOJ/src/code.jl:375 =#
          begin
              begin
                  i₊x = reshape(view(ˍ₋arg2, 1:15), (15,))
                  i₊u = reshape(view(ˍ₋arg2, 16:30), (15,))
                  begin
                      i₊interpolator = (ModelingToolkitStandardLibrary.Blocks.build_interpolation)(ˍ₋arg3[1], i₊u, i₊x, ˍ₋arg4[1])
                      begin
                          var"i₊output₊u(t)" = (ModelingToolkitStandardLibrary.Blocks.apply_interpolation)(i₊interpolator, t)
                          begin
                              #= /home/sebastian/.julia/packages/Symbolics/qKoME/src/build_function.jl:546 =#
                              #= /home/sebastian/.julia/packages/SymbolicUtils/EGhOJ/src/code.jl:422 =# @inbounds begin
                                      #= /home/sebastian/.julia/packages/SymbolicUtils/EGhOJ/src/code.jl:418 =#
                                      ˍ₋out[1] = var"i₊output₊u(t)"
                                      #= /home/sebastian/.julia/packages/SymbolicUtils/EGhOJ/src/code.jl:420 =#
                                      nothing
                                  end
                          end
                      end
                  end
              end
          end
      end))

Is it expected that the parameter dependencies are always executed in the problem function? The parameter dependency implementation pre SciML/ModelingToolkit.jl#2934 would have cached the dependent parameters in the MTKParameters.

@ChrisRackauckas Will dependent parameter caching no longer be available in the future?

@AayushSabharwal
Member

I'm not sure about the bug, nothing immediately stands out as a potential cause.

Not caching parameter dependencies is intentional. It was problematic to maintain, especially with AD and SciMLSensitivity. The new implementation of lazy computation is more in line with observed variables, which is exactly what parameter dependencies are.

@SebastianM-C
Contributor Author

Needs the [email protected] release to get CI in a working state.

@hersle

hersle commented Sep 10, 2024

Was this completed with MTK v9.34+? 😃

@SebastianM-C
Contributor Author

Almost. I mainly stopped because I'm questioning whether this approach is good versus just having support for callable parameters in MTK. I think that for most people callable parameters would be enough, since having the interpolation data as a vector parameter is not something one needs in general.
Maybe one potential application is doing an ensemble solve over multiple datasets, but the way the current PR works is not that performant (again due to avoiding callable parameters).

Do you have an application where this PR would be useful?

@hersle

hersle commented Sep 11, 2024

I am solving a "big" ODE model with a block-lower-triangular (BLT) structure (e.g. SciML/ModelingToolkit.jl#337), such as

D(x1) ~ 0
D(x2) ~ f(x1)
D(x3) ~ f(x1, x2)
...

Here the solution for xn(t) depends only on the solutions for lower n, so instead of solving the big system all at once, I am solving it in several stages: first the smaller system D(x1) ~ 0, then the smaller system D(x2) ~ f(x1) using the full solution of x1(t) instead of re-integrating it together with D(x1) ~ 0, and so on. This strategy is necessary for numerical stability and performance of my full model (which is more complicated).

Splines are what I use to pass on the full lower-n solutions when solving for xn(t). Ideally, I want to automate the task of replacing equations of motion with splined full solutions, so having good support for splines as parameters is important. I also want derivatives of the splines to work automatically (e.g. if D(x3) ~ f(x1, x2, D(x1), D(x2))).

I am currently relying on workarounds (e.g. the "proxy function" in SciML/ModelingToolkit.jl#2823) that do not fulfill these requirements. It looks to me like this PR improves this situation a lot! But as you said, I think properly working callable parameters would also solve my problems (e.g. what I want to work in SciML/ModelingToolkit.jl#2823).
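
To make the workflow concrete, here is a hedged sketch of that staged strategy using the blocks from this PR; all names are illustrative, and the Interpolation constructor is assumed to take (interp_type, u, x) as elsewhere in this thread:

using ModelingToolkit
using ModelingToolkit: t_nounits as t, D_nounits as D
using ModelingToolkitStandardLibrary.Blocks
using DataInterpolations
using OrdinaryDiffEq

# Stage 1: solve the first block on its own
@variables x1(t) = 1.0
@named stage1 = ODESystem([D(x1) ~ -x1], t)
sol1 = solve(ODEProblem(structural_simplify(stage1), [], (0.0, 1.0)), Tsit5())

# Sample the stage-1 solution on a grid to build the spline data
ts = range(0.0, 1.0, length = 100)
x1_samples = [sol1(τ; idxs = x1) for τ in ts]

# Stage 2: D(x2) ~ f(x1), with x1(t) supplied through an Interpolation block
@variables x2(t) = 0.0
@named x1_itp = Interpolation(LinearInterpolation, x1_samples, collect(ts))
@named clk = ContinuousClock()
eqs = [connect(clk.output, x1_itp.input)
       D(x2) ~ x1_itp.output.u^2]
@named stage2 = ODESystem(eqs, t; systems = [x1_itp, clk])
sol2 = solve(ODEProblem(structural_simplify(stage2), [], (0.0, 1.0)), Tsit5())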

@SebastianM-C
Contributor Author

This looks like something that MTK should automate entirely 😅 I think this is what SciML/NonlinearSolve.jl#388 could help with in the future.
Also, I'm not sure I understand why you don't use the DiffEq solutions, which have solver-defined interpolations and derivatives, instead of splines, but that's off topic 😅

The main advantage of the approach of this PR is that the parameters of an interpolation object are represented symbolically in the system, but I'm not so sure anymore when this is useful...
I suppose it could be a replacement for the SampledData component, which has similar aims.

@ChrisRackauckas
Member

Yeah... don't do this by hand 😅 , you'll get some fancy new tools for exactly this kind of problem over the next two months. I actually am trying to avoid some of the other work in order to work on the general tooling within the solvers for that, so hold your horses 😅

@hersle

hersle commented Sep 12, 2024

Thanks a lot! I completely agree that MTK should automate this process. I would kill for this feature, Chris 😅 Would it be helpful if I created a separate issue for this?

Yes, I really want to use the custom ODESolution interpolations instead of manual splines. I don't think it's feasible right now, so that is why I'm manually splining the solutions instead. But it is less accurate and more error-prone than what I believe using the ODESolution would be. For the purposes of this PR, I suppose callable parameters would be "better" if they could accommodate both callable splines and callable ODESolutions.

@AayushSabharwal
Member

Roadmap for callable parameters is in SciML/ModelingToolkit.jl#2910

@SebastianM-C
Contributor Author

Looks like the latest MTK breaks a lot of stuff due to initialization issues.

@SebastianM-C
Contributor Author

Needs SciML/ModelingToolkit.jl#3079 since currently parameter deps don't allow callable parameters.

With parameter deps, the implementation was simplified and I think we can remove the DataInterpolations extension as the API doesn't need it anymore.

I also looked a bit into the performance and I think that it's better now compared to the previous implementation.
The main issue is that, in order to make it ForwardDiff compatible, this uses GeneralLazyBufferCache, and because of that the return type can't be inferred.
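
(For reference, a hedged sketch of the GeneralLazyBufferCache pattern from PreallocationTools, not the exact code in this PR: the interpolation object is rebuilt per input element type, e.g. for ForwardDiff Dual numbers, which is why the concrete return type cannot be inferred.)

using PreallocationTools: GeneralLazyBufferCache
using DataInterpolations: LinearInterpolation

# The cache builds (and memoizes, keyed by the argument type) a fresh
# interpolation object for each distinct (u, x) type combination.
const ITP_CACHE = GeneralLazyBufferCache() do (u, x)
    LinearInterpolation(u, x)
end

# Evaluate at time t; with Dual-valued u this transparently creates a
# Dual-typed interpolation, at the cost of a type-unstable cache lookup.
# Note: the cache is keyed by the argument *type*, so data changes with the
# same type still need to be detected separately.
interp_eval(t, u, x) = ITP_CACHE[(u, x)](t)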

This leads to flamegraphs like

[image: flame graph of the solve without the type assert]

As we can see there's some runtime dispatch and GC inside the RGF. Looking more closely with Cthulhu, I saw that this is due to the interpolation type not being inferred.

To assess the impact, we can add a type assert on the return of function (f::CachedInterpolation{T})(u, x, args) where T

return interp::T

and that leads to

[image: flame graph of the solve with the type assert]

In this adversarial case, where the ODE is trivial,

@named i = ParametrizedInterpolation(LinearInterpolation, u, x)
eqs = [D(y) ~ i.output.u]
@named model = ODESystem(eqs, t, systems = [i])

and we provide the solver, the problem is

prob = ODEProblem{true, SciMLBase.FullSpecialize}(sys, [], (0.0, 4))

and saving is turned off with save_everystep=false, we get:

# without the type assert
julia> @btime solve($prob, Tsit5(), save_everystep=false)
  5.910 μs (178 allocations: 6.70 KiB)
retcode: Success
Interpolation: 1st order linear
t: 2-element Vector{Float64}:
 0.0
 4.0
u: 2-element Vector{Vector{Float64}}:
 [0.0]
 [2.9736997481549214]

vs

# with the type assert
julia> @btime solve($prob, Tsit5(), save_everystep=false)
  5.041 μs (40 allocations: 4.55 KiB)
retcode: Success
Interpolation: 1st order linear
t: 2-element Vector{Float64}:
 0.0
 4.0
u: 2-element Vector{Vector{Float64}}:
 [0.0]
 [2.9736997481549214]

(Of course, the type assert is rendering the GeneralLazyBufferCache useless, so this was done only for the purpose of a fair benchmark.)

Moreover, if we compare against a model where the interpolation is directly used as a callable parameter,

@parameters (i::LinearInterpolation)(..)
eqs = [D(y) ~ i(t)]
@named model = ODESystem(eqs, t)
sys = structural_simplify(model)

prob = ODEProblem{true, SciMLBase.FullSpecialize}(sys, [i=>LinearInterpolation(u, x)], (0.0, 4))

we get
[image: flame graph for the callable-parameter version]

and the timings are

julia> @btime solve($prob, Tsit5(), save_everystep=false)
  5.280 μs (178 allocations: 6.70 KiB)
retcode: Success
Interpolation: 1st order linear
t: 2-element Vector{Float64}:
 0.0
 4.0
u: 2-element Vector{Vector{Float64}}:
 [0.0]
 [2.9736997481549214]

So the type-unstable version in this PR is only a bit worse than the implementation that doesn't represent the data in the model (due to some issue in the RGF that leads to dynamic dispatch and GC; I don't know yet what's happening there, and I'll update the post when I find out). The allocation count is identical, so I suspect that maybe the interpolation is again at fault.

@ChrisRackauckas I think that this implementation could be useful for cases where SampledData was used (as a replacement), and it doesn't seem much worse than just not having the data in the model. I think it could work as a more advanced option, and I will rewrite the docs to mention this at the end instead of the beginning.

I think that depending on what we want from the AD support, we can improve this further. I haven't looked into reverse mode AD for this, as that would have entirely different challenges.

@AayushSabharwal
Member

@parameters (i::LinearInterpolation)(..)

LinearInterpolation is not a concrete type, and as a result it won't be inferred properly. If the function is typed more concretely, it would help with inference.

@SebastianM-C
Contributor Author

Oh, right, thanks! Fixing that leads to
[image: flame graph with the concretely typed callable parameter]

I redid the benchmark above on a different computer and with the fixed type inference, we have

This PR:

julia> @btime solve($prob, Tsit5(), save_everystep=false)
  13.200 μs (212 allocations: 5.42 KiB)
retcode: Success
Interpolation: 1st order linear
t: 2-element Vector{Float64}:
 0.0
 4.0
u: 2-element Vector{Vector{Float64}}:
 [0.0]
 [2.3600656448688886]

This PR with the type assert:

julia> @btime solve($prob, Tsit5(), save_everystep=false)
  10.300 μs (61 allocations: 3.05 KiB)
retcode: Success
Interpolation: 1st order linear
t: 2-element Vector{Float64}:
 0.0
 4.0
u: 2-element Vector{Vector{Float64}}:
 [0.0]
 [2.3600656448688886]

and the minimal case with

itp = LinearInterpolation(u, x)
@parameters (i::typeof(itp))(..)
eqs = [D(y) ~ i(t)]
@named model = ODESystem(eqs, t)
sys = structural_simplify(model)

prob = ODEProblem{true, SciMLBase.FullSpecialize}(
    sys, [i => LinearInterpolation(u, x)], (0.0, 4))

gives

julia> @btime solve($prob, Tsit5(), save_everystep=false)
  7.550 μs (61 allocations: 3.05 KiB)
retcode: Success
Interpolation: 1st order linear
t: 2-element Vector{Float64}:
 0.0
 4.0
u: 2-element Vector{Vector{Float64}}:
 [0.0]
 [2.3600656448688886]

So, as expected, with the type assert it's a bit slower than the minimal case, and that's because we always have to check whether the data was changed, which iterates over the entire data. Maybe there's some further optimization that can be done, but I would leave that for another PR 😅 and finish the docs part here.

@SebastianM-C SebastianM-C marked this pull request as ready for review October 3, 2024 11:03
@SebastianM-C SebastianM-C changed the title [RFC] add ParametrizedInterpolation Add Interpolation & ParametrizedInterpolation Oct 3, 2024
@SebastianM-C
Contributor Author

The tests error due to a bug in codegen,

RuntimeGeneratedFunction(#=in ModelingToolkit=#, #=using ModelingToolkit=#, :((ˍ₋out, ˍ₋arg1, ˍ₋arg2, t)->begin
          #= /home/sebastian/.julia/packages/SymbolicUtils/ij6YM/src/code.jl:385 =#
          #= /home/sebastian/.julia/packages/SymbolicUtils/ij6YM/src/code.jl:386 =#
          #= /home/sebastian/.julia/packages/SymbolicUtils/ij6YM/src/code.jl:387 =#
          begin
              begin
                  begin
                      begin
                          var"i₊output₊u(t)" = (ˍ₋arg2[1])(var"i₊input₊u(t)")
                          var"i₊input₊u(t)" = t
                          begin
                              #= /home/sebastian/.julia/packages/Symbolics/rtkf9/src/build_function.jl:546 =#
                              #= /home/sebastian/.julia/packages/SymbolicUtils/ij6YM/src/code.jl:434 =# @inbounds begin
                                      #= /home/sebastian/.julia/packages/SymbolicUtils/ij6YM/src/code.jl:430 =#
                                      ˍ₋out[1] = var"i₊output₊u(t)"
                                      #= /home/sebastian/.julia/packages/SymbolicUtils/ij6YM/src/code.jl:432 =#
                                      nothing
                                  end
                          end
                      end
                  end
              end
          end
      end))

As we can see, var"i₊input₊u(t)" is used before it is assigned to t.

ChrisRackauckas and others added 2 commits (co-authored by Aayush Sabharwal) on October 16, 2024 11:25
@ChrisRackauckas
Member

Rebase onto the working master?

@SebastianM-C
Contributor Author

I was about to rebase, but you were faster with the merge 😅

@SebastianM-C
Contributor Author

I'm not sure what's up with the formatting on the docs page; the VS Code formatter doesn't change anything, and as far as I'm aware it's still JuliaFormatter.

@SebastianM-C
Contributor Author

SebastianM-C commented Oct 20, 2024

julia> prob.f.initializeprob.f.f.f_iip
RuntimeGeneratedFunction(#=in ModelingToolkit=#, #=using ModelingToolkit=#, :((ˍ₋out, ˍ₋arg1, ˍ₋arg2, ˍ₋arg3)->begin
          #= C:\Users\sebastian\.julia\packages\SymbolicUtils\jf8aQ\src\code.jl:385 =#
          #= C:\Users\sebastian\.julia\packages\SymbolicUtils\jf8aQ\src\code.jl:386 =#
          #= C:\Users\sebastian\.julia\packages\SymbolicUtils\jf8aQ\src\code.jl:387 =#
          begin
              begin
                  src₊ts = reshape(view(ˍ₋arg2, 7:257), (251,))
                  src₊data = reshape(view(ˍ₋arg2, 258:508), (251,))
                  begin
                      begin
                          var"model₊x(t)" = 0.0
                          var"model₊dx(t)" = 0.0
                          var"model₊f(t)" = 0.0
                          var"model₊ddx(t)" = 0.0
                          var"clk₊output₊u(t)" = (+)(ˍ₋arg2[4], if (<)(ˍ₋arg2[3], ˍ₋arg2[2])
                                      0
                                  else
                                      (+)((*)(-1, ˍ₋arg2[2]), ˍ₋arg2[3])
                                  end)
                          var"src₊input₊u(t)" = var"clk₊output₊u(t)"
                          var"src₊output₊u(t)" = src₊interpolator(var"src₊input₊u(t)")
                          var"model₊input₊u(t)" = var"model₊f(t)"
                          begin
                              #= C:\Users\sebastian\.julia\packages\Symbolics\kErIg\src\build_function.jl:546 =#
                              #= C:\Users\sebastian\.julia\packages\SymbolicUtils\jf8aQ\src\code.jl:434 =# @inbounds begin
                                      #= C:\Users\sebastian\.julia\packages\SymbolicUtils\jf8aQ\src\code.jl:430 =#
                                      ˍ₋out[1] = var"src₊output₊u(t)"
                                      ˍ₋out[2] = (+)((*)(-1, var"model₊ddx(t)"), (*)(1//10, (+)((+)(var"model₊f(t)", (*)(ˍ₋arg2[6], var"model₊dx(t)")), (*)(ˍ₋arg2[1], var"model₊x(t)"))))
                                      #= C:\Users\sebastian\.julia\packages\SymbolicUtils\jf8aQ\src\code.jl:432 =#
                                      nothing
                                  end
                          end
                      end
                  end
              end
          end
      end))

I think there's a bug in the initialization, as the generated code does not include the callable parameter (src₊interpolator is not defined). In the prob.f it appears correctly (and that's why the tests pass).

cc @AayushSabharwal

Also, the SampledData component is broken with the latest MTK since there are stricter checks on parameter types and the Parameter type pretending to be a Real no longer works in the docs example, so I avoided running that in the doc build.

@AayushSabharwal
Member

How can I reproduce the initialization issue you're referring to?

@SebastianM-C
Contributor Author

using ModelingToolkit
using ModelingToolkit: t_nounits as t, D_nounits as D
using ModelingToolkitStandardLibrary.Blocks
using DataInterpolations
using OrdinaryDiffEq
using DataFrames
using Plots

function MassSpringDamper(; name)
    @named input = RealInput()
    vars = @variables f(t)=0 x(t)=0 dx(t)=0 ddx(t)=0
    pars = @parameters m=10 k=1000 d=1

    eqs = [
           f ~ input.u
           ddx * 10 ~ k * x + d * dx + f
           D(x) ~ dx
           D(dx) ~ ddx]

    ODESystem(eqs, t, vars, pars; name, systems = [input])
end

function MassSpringDamperSystem(data, time; name)
    @named src = ParametrizedInterpolation(LinearInterpolation, data, time)
    @named clk = ContinuousClock()
    @named model = MassSpringDamper()

    eqs = [
        connect(model.input, src.output)
        connect(src.input, clk.output)
    ]

    ODESystem(eqs, t; name, systems = [src, clk, model])
end

function generate_data()
    dt = 4e-4
    time = 0:dt:0.1
    data = sin.(2 * pi * time * 100)

    return DataFrame(; time, data)
end

df = generate_data() # example data

@named system = MassSpringDamperSystem(df.data, df.time)
sys = structural_simplify(system)
prob = ODEProblem(sys, [], (0, df.time[end]))
sol = solve(prob)
plot(sol)

I think that the equations from parameter deps might not be included in the initialization system.

@AayushSabharwal
Member

Right, yeah I think I know exactly what's missing here :D Missed a spot in parameter initialization

@AayushSabharwal
Member

SciML/ModelingToolkit.jl#3135 should fix this

@SebastianM-C
Contributor Author

I don't understand why the docs build doesn't see the latest MTK release as available.

I think we can fix the docs in a separate PR, the tests pass now.

@ChrisRackauckas
Member

Okay please follow up with the docs fix.

@ChrisRackauckas ChrisRackauckas merged commit bca6e28 into SciML:main Oct 24, 2024
8 of 12 checks passed
@SebastianM-C SebastianM-C deleted the smc/interp branch October 24, 2024 16:40