misc notes
- [done] 6/17 Make the old chatbot compatible with TensorFlow 1.0
- Make basic chatbot
- 6/11 Prepare a PyCharm env with Python 3.3 and the latest TensorFlow
- 6/12 Port data.py to Python 3
- 6/12 fix O(1) slowness
- 6/12 Compare generated file with the original ones.
- 6/12 Investigate why data.py is so slow
- 6/17 Write own train.py
- 6/13 make an empty file
- 6/13 commit it
- 6/14 Read https://www.tensorflow.org/install/migration
- 6/17 Make model
- 6/17 Add a comment for each unknown line
- 6/17 write model function
- 6/17 See if it's easy to port the old chatbot to 0.1.0 using https://github.com/tensorflow/models/blob/master/tutorials/rnn/translate/seq2seq_model.py
- 6/17 Very naive implementation with the same data as CS20SI
- 6/17 Compare the two bots
- 6/17 Probably rewrite it with the new seq2seq API (see the seq2seq sketch at the bottom of this page)
- 6/18 Find the best way to record trial and error
- 6/18 model parameters (do we have to keep all the parameter files?)
- Yes, with directory in tag name
- 6/18 How we construct model
- 6/18 Maybe tag or release?
- 6/18 Think about what should be done while training. Read papers?
- 6/18 Re-read http://web.stanford.edu/class/cs20si/assignments/a3.pdf and make a list of todo again
- 6/18 Re-read seq2seq model API https://www.tensorflow.org/tutorials/seq2seq
- 6/18 Improvement 1: Support TensorBoard (see the TensorBoard/Adagrad sketch at the bottom of this page)
- 6/19 Try AdagradOptimizer and compare how much faster it converges
- 6/19 Check if the dataset is in memory
- Improvement 2: Construct the response in a non-greedy way
- 6/24 Understand general beam search
- 6/25 Read the Wikipedia article
- 6/25 Find an actual TensorFlow beam search example
- 6/25 Make a list of items to implement
- 6/25 Wait, maybe implementing it in the graph is the right way?
- 6/25 Understand the current greedy model thoroughly? (see the beam search sketch at the bottom of this page)
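
Below are a few sketches referenced in the items above. First, a minimal TF 1.x seq2seq training graph built on the legacy helpers that the translate tutorial uses; the sequence length, vocabulary size, and other numbers here are placeholder assumptions, not the project's real settings.

```python
# Minimal TF 1.x seq2seq skeleton (assumed shapes and vocab sizes).
import tensorflow as tf

seq_len, vocab_size, emb_size, num_units = 10, 10000, 128, 256

# One int32 placeholder per timestep, as the legacy seq2seq API expects.
encoder_inputs = [tf.placeholder(tf.int32, [None], name="enc_%d" % i) for i in range(seq_len)]
decoder_inputs = [tf.placeholder(tf.int32, [None], name="dec_%d" % i) for i in range(seq_len)]
# Targets are the decoder inputs shifted by one; pad the last step with zeros.
targets = decoder_inputs[1:] + [tf.zeros_like(decoder_inputs[0])]
weights = [tf.ones_like(t, dtype=tf.float32) for t in targets]

cell = tf.contrib.rnn.BasicLSTMCell(num_units)
outputs, _ = tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq(
    encoder_inputs, decoder_inputs, cell,
    num_encoder_symbols=vocab_size, num_decoder_symbols=vocab_size,
    embedding_size=emb_size, feed_previous=False)

loss = tf.contrib.legacy_seq2seq.sequence_loss(outputs, targets, weights)
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
```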
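For the TensorBoard and Adagrad items, a self-contained TF 1.x sketch; the quadratic toy loss stands in for the real sequence loss, and `logs/adagrad` is an assumed log directory.

```python
import tensorflow as tf

# Toy quadratic loss so this snippet runs on its own; in the real bot this
# would be the sequence_loss from the seq2seq graph above.
x = tf.Variable(5.0)
loss = tf.square(x)

# Swap GradientDescentOptimizer for AdagradOptimizer to compare convergence speed.
train_op = tf.train.AdagradOptimizer(learning_rate=0.5).minimize(loss)

tf.summary.scalar("loss", loss)      # shows up as a "loss" curve in TensorBoard
merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter("logs/adagrad", sess.graph)
    sess.run(tf.global_variables_initializer())
    for step in range(100):
        summary, _ = sess.run([merged, train_op])
        writer.add_summary(summary, step)
    writer.close()
```

Running this twice with different optimizers and log directories lets TensorBoard overlay the two loss curves for the comparison.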
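For the beam search items, a plain-Python sketch of the idea outside the graph: keep the `beam_width` highest-scoring partial sequences at each step instead of only the greedy best. `next_token_log_probs` and `toy_step` are hypothetical stand-ins for the decoder's per-step distribution.

```python
import math

def beam_search(next_token_log_probs, start_token, end_token, beam_width=3, max_len=20):
    """Keep the `beam_width` best partial sequences at every decoding step.

    `next_token_log_probs(seq)` is a hypothetical hook returning
    {token: log_prob} for the next token given the sequence so far.
    """
    beams = [([start_token], 0.0)]          # (sequence, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for token, logp in next_token_log_probs(seq).items():
                candidates.append((seq + [token], score + logp))
        # Prune to the best `beam_width` candidates.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates[:beam_width]:
            (finished if seq[-1] == end_token else beams).append((seq, score))
        if not beams:
            break
    return max(finished + beams, key=lambda c: c[1])

# Toy usage: a fixed next-token distribution, just to show the mechanics.
def toy_step(seq):
    return {"hello": math.log(0.6), "world": math.log(0.3), "</s>": math.log(0.1)}

print(beam_search(toy_step, "<s>", "</s>"))
```

The current greedy decoder is the `beam_width=1` special case, which is one way to check the two paths against each other.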