Commit dd3fb52
Default TPU optimization to much more common values `learning_rate=3e-4`, `batch_size=64` (a common source of errors).

This changes the behavior of TPU training for the original ICLR 2020 NSynth paper, but the values can be reset to 1e-5 and 128 manually.

PiperOrigin-RevId: 321403423
jesseengel authored and Magenta Team committed Jul 15, 2020
1 parent ed3764c commit dd3fb52
Showing 3 changed files with 6 additions and 3 deletions.
ddsp/training/gin/optimization/base_tpu.gin (1 addition, 2 deletions)
@@ -2,5 +2,4 @@
 include 'optimization/base.gin'

 # Larger batch size for TPU.
-learning_rate = 1e-5
-batch_size = 128  # (4x2, 8 per core)
+batch_size = 64  # (4x2, 4 per core)
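With the `learning_rate` line removed, TPU runs now inherit the learning rate from the included `optimization/base.gin`. A minimal sketch of the effective TPU optimization values after this commit, assuming base.gin supplies the 3e-4 default named in the commit title:

# Sketch of effective TPU optimization values after this commit.
# learning_rate is inherited from optimization/base.gin
# (3e-4 per the commit title; assumed, not shown in this diff).
learning_rate = 3e-4
batch_size = 64  # (4x2, 4 per core)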
ddsp/training/gin/papers/iclr2020/nsynth_ae.gin (4 additions, 0 deletions)
@@ -1,3 +1,7 @@
 # -*-Python-*-
 include 'models/ae.gin'
 include 'datasets/nsynth.gin'
+
+# To recreate original experiment optimization params, uncomment lines below.
+# learning_rate = 1e-5
+# batch_size = 128
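An alternative to uncommenting those lines is a small override config that re-pins the original values after the include, since gin lets later bindings override earlier ones. A sketch; the file name `nsynth_ae_original.gin` is hypothetical, not part of this commit:

# Hypothetical override config (e.g. nsynth_ae_original.gin), a sketch.
# Bindings placed after the include take precedence over the new defaults.
include 'papers/iclr2020/nsynth_ae.gin'

learning_rate = 1e-5
batch_size = 128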
ddsp/version.py (1 addition, 1 deletion)
@@ -19,4 +19,4 @@
 pulling in all the dependencies in __init__.py.
 """

-__version__ = '0.7.0'
+__version__ = '0.8.0'
