Replies: 2 comments 7 replies
-
Hi! :D I don't think this is the wrong place to ask, but maybe the #science channel on Discord/Matrix would be better, as I'm not sure how many people get notifications on these discussions. But I can answer here 😄
I don't really understand the question here. There is an …
-
Thanks, it really helped! It seems I made some progress; however, the optimization procedure (steepestDescent()) seems to require a float, and my formula returns a Tensor. I will look into how to optimize based on the covariance matrix (result). Also, this only works with OpenBLAS on my machine.

```nim
import ggplotnim, numericalnim, arraymancer, benchy
#     /  0.00            0.00  0.00  0.00  0.00  \
#     |  1.00            0.00  0.00  0.00  0.00  |
# A = |  \lambda_2=0.33  0.00  0.00  0.00  0.00  |
#     |  \lambda_3=0.33  0.00  0.00  0.00  0.00  |
#     \  0.00            0.00  0.00  0.00  0.00  /
#
#
#     /  sigma_s=1.00  0.00     0.00     0.00     0.00  \
#     |  0.00          e1=1.00  0.00     0.00     0.00  |
# S = |  0.00          0.00     e2=1.00  0.00     0.00  |
#     |  0.00          0.00     0.00     e3=1.00  0.00  |
#     \  0.00          0.00     0.00     0.00     0.00  /
#
#
#     /  0  1  0  0  0  \
# F = |  0  0  1  0  0  |
#     \  0  0  0  1  0  /
#
# Model covariance matrix = F (I-A)^{-1} S (I-A)^{-T} F^T
proc ramPath(F: Tensor[float], A: Tensor[float], S: Tensor[float]): Tensor[float] =
  # Filter matrix F (3x5)
  var F = [[0.0, 1.0, 0.0, 0.0, 0.0],
           [0.0, 0.0, 1.0, 0.0, 0.0],
           [0.0, 0.0, 0.0, 1.0, 0.0]].toTensor().reshape(3, 5) # Reshape to 3x5
  # Matrix A (5x5): regression paths
  var A = [[0.0, 0.0, 0.0, 0.0, 0.0],
           [1.0, 0.0, 0.0, 0.0, 0.0],
           [0.33, 0.0, 0.0, 0.0, 0.0],
           [0.33, 0.0, 0.0, 0.0, 0.0],
           [0.0, 0.0, 0.0, 0.0, 0.0]].toTensor()
  # Matrix S (5x5): covariances
  var S = [[1.0, 0.0, 0.0, 0.0, 0.0],
           [0.0, 1.0, 0.0, 0.0, 0.0],
           [0.0, 0.0, 1.0, 0.0, 0.0],
           [0.0, 0.0, 0.0, 1.0, 0.0],
           [0.0, 0.0, 0.0, 0.0, 0.0]].toTensor()
  # Identity matrix I (5x5)
  let I = [[1.0, 0.0, 0.0, 0.0, 0.0],
           [0.0, 1.0, 0.0, 0.0, 0.0],
           [0.0, 0.0, 1.0, 0.0, 0.0],
           [0.0, 0.0, 0.0, 1.0, 0.0],
           [0.0, 0.0, 0.0, 0.0, 1.0]].toTensor()
  let invA = pinv(I - A) # Calculate (I - A)^{-1}
  result = F * invA * S * invA.transpose() * F.transpose()
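# Note: the F, A, S parameters are shadowed by the local definitions above, so
# ramPath currently ignores its arguments. Dimension-wise, F is 3x5 and the
# three 5x5 factors sit in between, so the returned model covariance is 3x3.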
let theta0 = [
  [1, 1, 1, 1, 1],
  [1, 1, 1, 1, 1],
  [1, 1, 1, 1, 1],
  [1, 1, 1, 1, 1],
  [1, 1, 1, 1, 1]].toTensor()
let solutionSteepest = steepestDescent(ramPath, theta0)
echo "Steepest: ", solutionSteepest /home/luis/Documents/OneDrive/Coding/Nim/Learning/MLestimator.nim(56, 39) Error: type mismatch
Expression: steepestDescent(ramPath, theta0)
[1] ramPath: proc (F: Tensor[system.float], A: Tensor[system.float], S: Tensor[system.float]): Tensor[system.float]{.gcsafe.}
[2] theta0: Tensor[system.int]
Expected one of (first mismatch at [position]):
[1] proc steepestDescent[U; T: not Tensor](f: proc (x: Tensor[U]): T; x0: Tensor[U];
options: OptimOptions[U, StandardOptions] = steepestDescentOptions[U]();
analyticGradient: proc (x: Tensor[U]): Tensor[T] = nil): Tensor[U]
[1] proc steepest_descent(deriv: proc (x: float64): float64; start: float64;
gamma: float64 = 0.01; precision: float64 = 0.00001;
max_iters: Natural = 1000): float64 |
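The mismatch is exactly what the first overload describes: steepestDescent wants an objective of the form proc (x: Tensor[U]): T with a scalar (non-Tensor) return value, plus a starting tensor of the same element type U, so theta0 needs to hold floats, not ints. A minimal sketch of one way to reshape the RAM code into that form, assuming a hypothetical free-parameter vector theta = [lam2, lam3, e1, e2, e3] and a made-up observed covariance obsCov (neither is from the original post), might look like this:

```nim
import arraymancer, numericalnim

# Hypothetical observed 3x3 covariance of the measured variables; in a real fit
# this would come from data, the values here are made up for illustration only.
let obsCov = [[1.00, 0.40, 0.40],
              [0.40, 1.20, 0.50],
              [0.40, 0.50, 1.20]].toTensor()

proc modelCov(theta: Tensor[float]): Tensor[float] =
  ## RAM model covariance F (I-A)^{-1} S (I-A)^{-T} F^T, with the free
  ## parameters theta = [lam2, lam3, e1, e2, e3] (an assumed parameterization).
  let F = [[0.0, 1.0, 0.0, 0.0, 0.0],
           [0.0, 0.0, 1.0, 0.0, 0.0],
           [0.0, 0.0, 0.0, 1.0, 0.0]].toTensor()
  var A = zeros[float](5, 5)
  A[1, 0] = 1.0       # first loading fixed to 1 for identification
  A[2, 0] = theta[0]  # lam2
  A[3, 0] = theta[1]  # lam3
  var S = zeros[float](5, 5)
  S[0, 0] = 1.0       # latent variance fixed to 1
  S[1, 1] = theta[2]  # e1
  S[2, 2] = theta[3]  # e2
  S[3, 3] = theta[4]  # e3
  var eye5 = zeros[float](5, 5)
  for i in 0 ..< 5:
    eye5[i, i] = 1.0
  let invIA = pinv(eye5 - A)
  result = F * invIA * S * invIA.transpose() * F.transpose()

proc objective(theta: Tensor[float]): float =
  ## Scalar discrepancy: squared Frobenius distance between model and data covariance.
  let diff = modelCov(theta) - obsCov
  result = sum(diff *. diff)

let theta0 = [0.5, 0.5, 1.0, 1.0, 1.0].toTensor()  # float starting values
let solutionSteepest = steepestDescent(objective, theta0)
echo "Steepest: ", solutionSteepest
```

The squared Frobenius distance is just the simplest scalar discrepancy that makes the types line up; a proper SEM fit would normally minimize a maximum-likelihood or weighted least-squares fit function instead, which only changes the body of objective. With analyticGradient left at its nil default, numericalnim should fall back to a numerical gradient.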
-
I have been trying to generate a minimal working optimization of the Reticular Action Model from McArdle (1984), with the general formula F (I-A)^{-1} S (I-A)^{-T} F^T, where F is a filter matrix, S holds the covariances, and A the regression paths. The objective is to work on this until I get a general structural equation model optimizer in Nim, as an exercise for myself and maybe a future little package.
I figured I could start from the Rosenbrock example in the docs, and with help from OpenAI, the farthest I got was:
I am still very far from a solution; right now I just need help identifying the correct procedure for generating the identity matrix, to substitute for this dummy code: `let I = eyefloat # Identity matrix.`
Sorry if this is not the correct place to ask for this kind of help.
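As for the identity matrix: a minimal way to build one with Arraymancer, using only zeros plus index assignment (sized to the 5x5 case used in this thread), would be something like:

```nim
import arraymancer

# Build a 5x5 identity matrix by zero-initialising and filling the diagonal.
var I = zeros[float](5, 5)
for i in 0 ..< 5:
  I[i, i] = 1.0

echo I
```

If your Arraymancer version provides a dedicated identity/eye constructor, that would be the more direct replacement for the `eyefloat` placeholder; the loop above is just a dependency-free fallback.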