
Errors when trying to include the calculation of eigenvalues (in the neuralFMU simulation and training) #125

juguma commented Feb 8, 2024

Problem description and MWE

The attached script is a reduced and modified variant of simple_hybrid_ME.ipynb.
Everything that is not necessary to reproduce the error has been removed.
The modifications are:

  • inclusion of recordEigenvaluesSensitivity=:ForwardDiff, recordEigenvalues=true in the loss function (lossSum)
  • renaming of train! to _train!*, running only one iteration, removing some arguments, and adding gradient=:ForwardDiff
# imports
using FMI
using FMIFlux
using FMIFlux.Flux
using FMIZoo
using DifferentialEquations: Tsit5
import Plots

# set seed
import Random
Random.seed!(42);

tStart = 0.0
tStep = 0.01
tStop = 5.0
tSave = collect(tStart:tStep:tStop)

realFMU = fmiLoad("SpringFrictionPendulum1D", "Dymola", "2022x")
fmiInfo(realFMU)

initStates = ["s0", "v0"]
x₀ = [0.5, 0.0]
params = Dict(zip(initStates, x₀))
vrs = ["mass.s", "mass.v", "mass.a", "mass.f"]

realSimData = fmiSimulate(realFMU, (tStart, tStop); parameters=params, recordValues=vrs, saveat=tSave)
posReal = fmi2GetSolutionValue(realSimData, "mass.s")
fmiUnload(realFMU)

simpleFMU = fmiLoad("SpringPendulum1D", "Dymola", "2022x")

# loss function for training
function lossSum(p)
    global posReal
    solution = neuralFMU(x₀; p=p, recordEigenvaluesSensitivity=:ForwardDiff, recordEigenvalues=true)
    posNet = fmi2GetSolutionState(solution, 1; isIndex=true)   
    FMIFlux.Losses.mse(posReal, posNet) 
end

# NeuralFMU setup
numStates = fmiGetNumberOfStates(simpleFMU)
net = Chain(x -> simpleFMU(x=x, dx_refs=:all),
            Dense(numStates, 16, tanh),
            Dense(16, 16, tanh),
            Dense(16, numStates))
neuralFMU = ME_NeuralFMU(simpleFMU, net, (tStart, tStop), Tsit5(); saveat=tSave);

# train
paramsNet = FMIFlux.params(neuralFMU)
optim = Adam()
FMIFlux._train!(lossSum, paramsNet, Iterators.repeated((), 1), optim; gradient=:ForwardDiff)

Reported error

MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{typeof(lossSum), Float64}, Float64, 32})

Closest candidates are:
(::Type{T})(::Real, ::RoundingMode) where T<:AbstractFloat
@ Base rounding.jl:207
(::Type{T})(::T) where T<:Number
@ Core boot.jl:792
Float64(::IrrationalConstants.Fourπ)
@ IrrationalConstants C:\Users\JUR\.julia\packages\IrrationalConstants\vp5v4\src\macro.jl:112
...

Stacktrace:
[1] convert(::Type{Float64}, x::ForwardDiff.Dual{ForwardDiff.Tag{typeof(lossSum), Float64}, Float64, 32})
@ Base .\number.jl:7
[2] setindex!(A::Vector{Float64}, x::ForwardDiff.Dual{ForwardDiff.Tag{typeof(lossSum), Float64}, Float64, 32}, i1::Int64)
@ Base .\array.jl:1021
[3] _generic_matvecmul!(C::Vector{…}, tA::Char, A::Matrix{…}, B::Vector{…}, _add::LinearAlgebra.MulAddMul{…})
@ LinearAlgebra C:\Users\JUR\AppData\Local\Programs\julia-1.10.0\share\julia\stdlib\v1.10\LinearAlgebra\src\matmul.jl:743
[4] generic_matvecmul!
@ LinearAlgebra C:\Users\JUR\AppData\Local\Programs\julia-1.10.0\share\julia\stdlib\v1.10\LinearAlgebra\src\matmul.jl:687 [inlined]
[5] mul!
@ LinearAlgebra C:\Users\JUR\AppData\Local\Programs\julia-1.10.0\share\julia\stdlib\v1.10\LinearAlgebra\src\matmul.jl:66 [inlined]
[6] mul!
@ LinearAlgebra C:\Users\JUR\AppData\Local\Programs\julia-1.10.0\share\julia\stdlib\v1.10\LinearAlgebra\src\matmul.jl:237 [inlined]
[7] jvp!(jac::FMISensitivity.FMU2Jacobian{…}, x::Vector{…}, v::Vector{…})
@ FMISensitivity C:\Users\JUR\.julia\packages\FMISensitivity\Yt2rV\src\FMI2.jl:1323
[...]
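
For illustration only, a minimal, self-contained sketch of the failure mode the stack trace points at: a preallocated Vector{Float64} work buffer cannot store ForwardDiff.Dual numbers, so the convert/setindex! call fails with exactly this kind of MethodError. The buffer and function names below are made up for the illustration and are not taken from FMISensitivity.

import ForwardDiff

buffer = zeros(Float64, 2)            # stands in for the Float64 work buffer used during mul!/jvp!
f(x) = (buffer[1] = 2x; buffer[1]^2)  # under ForwardDiff, 2x is a Dual and cannot be written into a Float64 slot
ForwardDiff.derivative(f, 1.0)        # raises: MethodError: no method matching Float64(::ForwardDiff.Dual{...})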

Remarks

  • The same happens if you use recordEigenvaluesSensitivity=:none in lossSum.
  • You can replace the gradient with :ReverseDiff (in lossSum and in _train!) and end up with another error, so that doesn't work either.
  • The combination :none (in lossSum) and :ReverseDiff (in _train!) works (see the sketch after this list); however, if one wants to include the eigenvalues in the sensitivity calculation, this is not an option, is it?
  • I haven't tried Zygote; I don't care about Zygote ;-)
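
For reference, a minimal sketch of the working combination from the third remark, i.e. recordEigenvaluesSensitivity=:none in the loss together with gradient=:ReverseDiff in the training call. It only swaps those two options in the script above; the function name lossSumReverseDiff is made up here.

# loss with eigenvalues recorded, but excluded from the sensitivity calculation
function lossSumReverseDiff(p)
    global posReal
    solution = neuralFMU(x₀; p=p, recordEigenvaluesSensitivity=:none, recordEigenvalues=true)
    posNet = fmi2GetSolutionState(solution, 1; isIndex=true)
    FMIFlux.Losses.mse(posReal, posNet)
end

FMIFlux._train!(lossSumReverseDiff, paramsNet, Iterators.repeated((), 1), optim; gradient=:ReverseDiff)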

*It is actually a bug that this has not been updated; however, _train! is probably not the preferred resolution. Rather, train! should be called with the neuralFMU instead of the params as the second argument.
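
A sketch of what that preferred call could look like, assuming train! keeps the same remaining arguments as _train! in the script above (this signature is an assumption, not verified against the current FMIFlux API):

# preferred entry point as suggested above: pass the neuralFMU instead of the parameter vector
FMIFlux.train!(lossSum, neuralFMU, Iterators.repeated((), 1), optim; gradient=:ForwardDiff)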
