torch_Levenberg_Marquardt_optimizer

PyTorch implementation of MATLAB's trainlm, which uses Levenberg-Marquardt backpropagation for training neural networks.

Levenberg-Marquardt typically converges in far fewer iterations than stochastic gradient descent, but its memory requirements restrict it to smaller networks. The repository is built on torchimize, which enables convex optimization on GPUs based on the torch.Tensor class. Make sure to pip install torchimize before running this code.
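For context, the step that incurs this memory cost is the standard Levenberg-Marquardt update (general background, not specific to this repository):

\Delta p = -\left(J^\top J + \lambda I\right)^{-1} J^\top r

where J is the m-by-n Jacobian of the residual vector r with respect to the n network parameters and \lambda is the damping factor. Storing J and solving the n-by-n system is what confines the method to smaller networks.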

Our contribution is a test script in which the parameters live inside a torch.nn.Module, the conventional way of defining neural networks in PyTorch. The pipeline follows this thread to compute the Jacobian and update the network parameters.

One can define their own opt_loss_function and opt_jacobian_function similarly to ours (a sketch is given after the snippet below) and call the optimizer function lsq_lma to obtain the learned parameters:

import torch
from torchimize.functions import lsq_lma

# Flatten all network parameters into the single 1-D vector that lsq_lma optimizes.
p = torch.cat([param.view(-1).clone() for param in net.parameters()])

result = lsq_lma(
    p=p,
    function=opt_loss_function,
    jac_function=opt_jacobian_function,
    args=(x, y, net),
    max_iter=500,
    gtol=1e-11,
)

# lsq_lma returns the sequence of parameter iterates; the last entry is the converged solution.
print("learned parameters", result[-1])
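For completeness, here is a minimal sketch of what the two callbacks might look like, assuming PyTorch >= 2.0 (for torch.func.functional_call) and the torchimize convention that function returns the residual vector; the unflattening logic and names are illustrative, not the repository's actual code:

import torch
from torch.func import functional_call

def opt_loss_function(p, x, y, net):
    # Rebuild a name -> tensor dict from the flat vector p and run the
    # network functionally, so autograd can differentiate w.r.t. p.
    params, offset = {}, 0
    for name, param in net.named_parameters():
        n = param.numel()
        params[name] = p[offset:offset + n].view_as(param)
        offset += n
    out = functional_call(net, params, (x,))
    # Return the residual vector; lsq_lma minimizes its squared norm.
    return (out - y).flatten()

def opt_jacobian_function(p, x, y, net):
    # Jacobian of the residuals w.r.t. the flat parameter vector,
    # obtained by automatic differentiation.
    return torch.autograd.functional.jacobian(
        lambda q: opt_loss_function(q, x, y, net), p
    )

Using functional_call, rather than copying values into the module in place, keeps the computation differentiable with respect to the flat vector p, so the Jacobian can be obtained directly from autograd.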
