DEBLEU example with TeacherMaskSoftmaxEmbeddingHelper and Triggers #45

Open

wants to merge 68 commits into base: master

Changes from 1 commit (of 68 commits)

Commits:
d7e3910  add differentiable_expected_bleu loss (wwt17, Sep 30, 2018)
b83a145  modify DEBLEU loss interface from logits to probs (wwt17, Oct 5, 2018)
87bd449  add TeacherMaskSoftmaxEmbeddingHelper (wwt17, Oct 5, 2018)
a730a25  change API of sess (wwt17, Oct 5, 2018)
86c2f9e  add xe ; refine configs (wwt17, Oct 5, 2018)
e10c78b  fix a typo in doc (wwt17, Oct 6, 2018)
bdbca3b  add summary and checkpoints ; add train configs (wwt17, Oct 6, 2018)
ffbf14d  remove duplicated config (wwt17, Oct 6, 2018)
7e5b92b  copy tf.batch_gather (wwt17, Oct 6, 2018)
d8f7449  config dataset val=test (zcyang, Oct 6, 2018)
4bf0d1e  Merge branch 'DEBLEU' of https://github.com/wwt17/texar into DEBLEU (zcyang, Oct 6, 2018)
9bdbe09  add triggers ; now the whole code is runnable (wwt17, Oct 7, 2018)
d04e4e0  add learning rate (zcyang, Oct 7, 2018)
92f1498  Merge branch 'DEBLEU' of https://github.com/wwt17/texar into DEBLEU (zcyang, Oct 7, 2018)
3f74126  add mask summary ; fix action (zcyang, Oct 8, 2018)
69129f4  fix random shift bug (zcyang, Oct 8, 2018)
142665f  don't restore Adam status (zcyang, Oct 8, 2018)
c836e53  fix save path (Oct 8, 2018)
ffe568d  add flags.restore_adam (Oct 8, 2018)
73d1c7b  add global_step onto saved ckpt (Oct 8, 2018)
bf92f2e  add flags.restore_mask (Oct 10, 2018)
9fe74cb  remove config_model_full.py ; rename debleu ; rename some arguments ;… (Oct 13, 2018)
9dcde6a  fix checkpoint save and restore bug (Oct 13, 2018)
038478e  refine trigger (Oct 13, 2018)
101d5a1  refine trigger (Oct 13, 2018)
b293bc1  add trigger save & restore (not tested yet) (Oct 14, 2018)
9b2b382  move module triggers into texar/utils (Oct 14, 2018)
190d5b3  refine codes (Oct 14, 2018)
a4fdd5a  add comments to debleu.py (Oct 14, 2018)
77c0a52  add name_scope to TeacherMaskSoftmaxEmbeddingHelper (Oct 14, 2018)
c70b8e2  fix lr decay boundaries (Oct 14, 2018)
6daaac8  fix save trigger path (Oct 14, 2018)
afacfe9  add docs (wwt17, Oct 14, 2018)
0794ddc  add more trigger docs (wwt17, Oct 15, 2018)
0d3e187  update README.md (wwt17, Oct 15, 2018)
b095785  rename some filenames ; add val/test datasets (Oct 15, 2018)
06c5727  add config_train_iwslt14_en-fr.py (Oct 15, 2018)
5305d38  update README.md (Oct 15, 2018)
3aab0a6  replace moses bleu by nltk bleu (Oct 16, 2018)
8ca85a9  modify model (Oct 16, 2018)
78b6994  refine models (Oct 17, 2018)
82bc6a8  refine summary ; batch_size=160 (Oct 18, 2018)
fffd648  remove exponetial decay configs ; fix summary bug (Oct 18, 2018)
6d07aa1  add stages (Oct 19, 2018)
923ea8c  add config_train (Oct 19, 2018)
56b44c7  modify 2-layer encoder to 1-layer (Oct 20, 2018)
c6991c8  change configs to bowen's (Oct 20, 2018)
7e89acf  open trigger file in binary mode (Oct 20, 2018)
de78471  add binary mode (Oct 20, 2018)
b822f39  use new datasets ; reinitialize optimizer when annealing ; modify con… (Oct 22, 2018)
9d6e4bb  replace name_scope by variable_scope in TeacherMaskSoftmaxEmbeddingHe… (Oct 22, 2018)
ed1f6f3  fix lr bug (Oct 22, 2018)
3cfc217  reset model and configs to those in pytorch codes ; fix connector bug… (Oct 29, 2018)
18da2c7  anneal to bs160 4:2 mask ; reinitialize mask after restoring (Oct 29, 2018)
1ac619d  add lr1e6_1_0.py config (Oct 30, 2018)
c227c28  add more model configs (Nov 2, 2018)
316e41c  refine code ; now everything is automatical (Nov 3, 2018)
0f157e8  make mask pattern Tensors and use placeholder (Nov 4, 2018)
c4c4288  reconstruct triggers ; modify code (Nov 4, 2018)
2b1fe5a  add test units for triggers (Nov 5, 2018)
ec20a9e  rewrite ScheduledStepsTrigger; correct and refine some docs TODO: 1.… (Nov 5, 2018)
ad56c3e  fix final annealing bug (Nov 5, 2018)
1f3e212  add config restore_from (Nov 5, 2018)
8988209  add test units for ScheduledStepsTrigger and fix some bugs (Nov 6, 2018)
5851220  fix docs for triggers (wwt17, Nov 6, 2018)
8fdf62e  remove unfinished MovingAverageConvergenceTrigger (wwt17, Nov 6, 2018)
3b58883  update README.md (wwt17, Nov 6, 2018)
7b673ab  merge master (Nov 7, 2018)
add test units for triggers
Wentao Wang committed Nov 5, 2018
commit 2b1fe5a325b0028e5043f25ad3348fcc937e9513
68 changes: 68 additions & 0 deletions texar/utils/triggers_test.py
@@ -0,0 +1,68 @@
"""
Unit tests for triggers.
"""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import random

import tensorflow as tf

from texar.utils.triggers import Trigger, BestEverConvergenceTrigger


class TriggerTest(tf.test.TestCase):
    """Tests :class:`~texar.utils.Trigger`.
    """

    def test(self):
        trigger = Trigger(0, lambda x: x+1)
        for step in range(100):
            trigger.trigger()
            self.assertEqual(trigger.user_state, step+1)


class BestEverConvergenceTriggerTest(tf.test.TestCase):
    """Tests :class:`~texar.utils.BestEverConvergenceTrigger`.
    """

    def test(self):
        for i in range(100):
            n = random.randint(1, 100)
            seq = list(range(n))
            random.shuffle(seq)
            threshold_steps = random.randint(0, n // 2 + 1)
            minimum_interval_steps = random.randint(0, n // 2 + 1)
            trigger = BestEverConvergenceTrigger(
                0, lambda x: x+1, threshold_steps, minimum_interval_steps)

            best_ever_step, best_ever_score, last_triggered_step = -1, -1, None

            for step, score in enumerate(seq):
                # Reference implementation of the expected trigger logic.
                if score > best_ever_score:
                    best_ever_step = step
                    best_ever_score = score

                triggered_ = step - best_ever_step >= threshold_steps and \
                    (last_triggered_step is None or
                     step - last_triggered_step >= minimum_interval_steps)
                if triggered_:
                    last_triggered_step = step

                triggered = trigger(step, score)

                self.assertEqual(trigger.best_ever_step, best_ever_step)
                self.assertEqual(trigger.best_ever_score, best_ever_score)
                self.assertEqual(trigger.last_triggered_step,
                                 last_triggered_step)
                self.assertEqual(triggered, triggered_)

        trigger = BestEverConvergenceTrigger(0, lambda x: x+1, 0, 0)
        for step in range(100):
            trigger.trigger()
            self.assertEqual(trigger.user_state, step+1)


if __name__ == "__main__":
    tf.test.main()
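For context, the interface these tests exercise can be sketched as follows. This is a hypothetical minimal implementation inferred purely from the assertions above; the actual `texar/utils/triggers.py` added by this PR is not shown in this diff and may differ in details (e.g. save/restore support, TensorFlow integration):

```python
class Trigger(object):
    """Holds a user state and an action; trigger() applies the action."""

    def __init__(self, user_state, action):
        self._user_state = user_state
        self._action = action

    def trigger(self):
        # Advance the user state by applying the user-supplied action.
        self._user_state = self._action(self._user_state)

    @property
    def user_state(self):
        return self._user_state


class BestEverConvergenceTrigger(Trigger):
    """Fires once the best-ever score has not improved for `threshold_steps`
    steps, firing at most once per `minimum_interval_steps` steps.

    Note: the initial best-ever score of -1 assumes non-negative scores,
    matching the test above, which draws scores from range(n)."""

    def __init__(self, user_state, action, threshold_steps,
                 minimum_interval_steps):
        super(BestEverConvergenceTrigger, self).__init__(user_state, action)
        self._threshold_steps = threshold_steps
        self._minimum_interval_steps = minimum_interval_steps
        self._best_ever_step = -1
        self._best_ever_score = -1
        self._last_triggered_step = None

    def __call__(self, step, score):
        if score > self._best_ever_score:
            self._best_ever_step = step
            self._best_ever_score = score

        triggered = (
            step - self._best_ever_step >= self._threshold_steps and
            (self._last_triggered_step is None or
             step - self._last_triggered_step >= self._minimum_interval_steps))
        if triggered:
            self._last_triggered_step = step
            self.trigger()
        return triggered

    @property
    def best_ever_step(self):
        return self._best_ever_step

    @property
    def best_ever_score(self):
        return self._best_ever_score

    @property
    def last_triggered_step(self):
        return self._last_triggered_step
```

For example, with `threshold_steps=2` and scores `[3, 2, 1, 0]`, the best score is seen at step 0, so the trigger first fires at step 2 (two steps without improvement) and the user action runs once per firing.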