This repository has been archived by the owner on Jun 15, 2023. It is now read-only.

Gradient based optimizer: how to control learning rate #482

Open
dyang37 opened this issue Aug 17, 2022 · 2 comments

Comments


dyang37 commented Aug 17, 2022

This is more of a question than an issue, but since I couldn't find a better place to post it, any help would be appreciated :)
In vanilla SimpleITK we have control over optimization hyperparameters such as the learning rate. This can be done with something like

    registration_method.SetOptimizerAsGradientDescent(
        learningRate=0.1,
        numberOfIterations=500,
        convergenceMinimumValue=1e-6,
        convergenceWindowSize=20,
    )

Question: is there a similar way to control the learning rate in SimpleElastix?
I know that certain optimization parameters can be controlled through the parameter map, something like

    parameterMapVector = sitk.GetDefaultParameterMap('rigid')
    parameterMapVector['MaximumNumberOfIterations'] = ['5000']
    parameterMapVector['NumberOfResolutions'] = ['4']

But I was not able to find any information regarding the learning rate.
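For what it's worth, the elastix manual (not this repository's docs) describes the step size of its gradient-descent optimizers as a decaying gain sequence a_k = a / (A + k + 1)^alpha, controlled by the parameter-map entries `SP_a`, `SP_A`, and `SP_alpha` when `(Optimizer "StandardGradientDescent")` is used. A minimal sketch of that schedule, with a plain dict standing in for the parameter map (elastix stores every value as a list of strings) so it runs without a SimpleElastix install; the numeric values are illustrative, not tuned:

```python
# Sketch: elastix-style step-size (learning-rate) schedule.
# Assumption from the elastix manual: with StandardGradientDescent the
# step size at iteration k is the gain a_k = a / (A + k + 1)^alpha,
# set via the SP_a, SP_A, SP_alpha parameter-map entries.
pmap = {
    'Optimizer': ['StandardGradientDescent'],
    'SP_a': ['1000.0'],     # gain numerator a: larger => bigger steps
    'SP_A': ['50.0'],       # offset A: delays the decay
    'SP_alpha': ['0.602'],  # decay exponent alpha
}

def gain(k, pmap):
    """Step size a_k = a / (A + k + 1)^alpha at iteration k."""
    a = float(pmap['SP_a'][0])
    A = float(pmap['SP_A'][0])
    alpha = float(pmap['SP_alpha'][0])
    return a / (A + k + 1) ** alpha
```

With SimpleElastix these entries would presumably be set on the map returned by `sitk.GetDefaultParameterMap`, e.g. `pmap['SP_a'] = ['1000.0']`. Note that the default optimizer, `AdaptiveStochasticGradientDescent`, estimates its gain automatically; per the manual, `(AutomaticParameterEstimation "false")` should disable that so the manual values take effect.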

@dyang37 dyang37 changed the title control learning rate Gradient based optimizer: how to control learning rate Aug 17, 2022

dyang37 commented Aug 17, 2022

Adding some more information: the reason I would like direct control over the learning rate is that the problem I'm working on requires an extremely accurate estimate of the transformation parameters.
While the default behavior of the multi-resolution optimizer works okay, it does not reach the level of accuracy I'm hoping for (which is fine; I understand the optimizer's defaults are designed to work well across most image registration applications).
So the question could alternatively be: is there a way to use the previously estimated parameters as the initial condition and then perform a more accurate estimation of the transformation parameters (e.g. by setting the learning rate or maximum step size)?
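Elastix itself supports this kind of chaining: a finished registration writes a `TransformParameters.N.txt` file, and a follow-up run can take it as its starting point (the `-t0` flag on the elastix CLI, or `SetInitialTransformParameterFileName` on SimpleElastix's `ElastixImageFilter`). A sketch of a second-stage parameter file that refines the first result with a smaller gain; all values here are illustrative assumptions, not tuned defaults:

```
// fine.txt (sketch): second-stage refinement started from a prior result
// e.g.  elastix -f fixed.mha -m moving.mha -p fine.txt \
//               -t0 stage1/TransformParameters.0.txt -out stage2
(Optimizer "StandardGradientDescent")
(SP_a "100.0")                     // smaller gain than the coarse stage
(SP_A "50.0")
(SP_alpha "0.602")
(MaximumNumberOfIterations "2000")
(NumberOfResolutions "1")          // single fine level for refinement
```

The idea is a coarse-to-fine pipeline: let the defaults get close, then restart from that transform with a deliberately small step size for the final, high-accuracy iterations.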

