Question about the scheduler
#100
-
```python
# optimizer
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
# learning policy
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=0.001,
    step=[8, 11])
runner = dict(type='EpochBasedRunner', max_epochs=12)
```

This is the result of running the config above, but the lr drops far too sharply.
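For context, MMCV's `step` policy multiplies the base lr by `gamma` once for each milestone in `step` that has been passed, and `gamma` defaults to `0.1` when not set. A minimal sketch of the resulting schedule under that assumption (the 500-iteration linear warmup is ignored here for simplicity; `step_lr` is a hypothetical helper, not an MMCV API):

```python
# Sketch of the 'step' policy: lr = base_lr * gamma ** (milestones passed).
# Assumes gamma defaults to 0.1, as in MMCV's step LR updater.
def step_lr(base_lr, steps, gamma, epoch):
    passed = sum(1 for s in steps if epoch >= s)
    return base_lr * gamma ** passed

for epoch in range(12):
    print(epoch, step_lr(0.02, [8, 11], 0.1, epoch))
```

With `gamma=0.1` the lr falls from 0.02 to 0.002 at epoch 8 and to 0.0002 at epoch 11, which matches the "drops too sharply" observation.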
Answered by ppskj178 on Oct 6, 2021
-
The default value of `gamma` is 0.1, so I'm re-testing after changing it.