TensorFlow Implementation #23

Open
1 of 5 tasks
3ygun opened this issue Apr 19, 2017 · 4 comments

Comments

3ygun commented Apr 19, 2017

Goals

Use TensorFlow for model back-ends to enable better extensibility. This issue will serve as the base of operations for progress and/or discussion around the implementation.

Aspects

@tylermzeller and I believe the following would be the core of the implementation plan needed to enable the above goal:

  • Rewrite the server to use TensorFlow as its testing interface
    • A simple extension of the existing NodeJS server application wouldn't be possible because TensorFlow has no JavaScript bindings.
    • In light of this, a rewrite in Python would probably be most appropriate: Python covers the largest share of TensorFlow's implemented models and documentation while remaining accessible and performant. (A rough sketch of such a server follows this list.)
  • Rewrite the Android app to use the TensorFlow bindings and TrainingInterface explored and developed in TensorFlow on Android
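
As a rough, hypothetical illustration of the Python rewrite (not an agreed design: Flask, the route name, and the tiny stand-in model below are all assumptions), a minimal testing server could look like this:

# Hypothetical sketch of a Python testing server; the framework choice (Flask),
# route name, and stand-in model are assumptions, not the agreed design.
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

app = Flask(__name__)

# Tiny stand-in model: a zero-initialized 784 -> 10 linear layer.
graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, [None, 784], name="x")
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    logits = tf.matmul(x, W) + b
    init = tf.global_variables_initializer()

sess = tf.Session(graph=graph)
sess.run(init)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON of the form {"x": [[...784 floats...], ...]}.
    batch = np.array(request.get_json()["x"], dtype=np.float32)
    out = sess.run(logits, feed_dict={x: batch})
    return jsonify({"logits": out.tolist()})

if __name__ == "__main__":
    app.run(port=5000)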

Prerequisites

NOTE: Most of these can be explored through desktop TensorFlow applications.

  • Validate that weights can be changed after a model is loaded (a sketch follows this list)
    • If this is not the case, a workaround must be found (such as a model reload)
    • EDIT: see the TF documentation on variables.
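
A minimal sketch of the kind of check meant here (TF 1.x, assuming a plain tf.Variable; the names below are only illustrative): overwrite a variable's value after the graph exists and read it back.

# Sketch: verify a variable can be overwritten after the graph is built/loaded.
# Variable and placeholder names here are illustrative assumptions.
import numpy as np
import tensorflow as tf

W = tf.Variable(tf.zeros([784, 10]), name="weights")
new_W = tf.placeholder(tf.float32, shape=[784, 10], name="weights_in")
assign_W = tf.assign(W, new_W, name="assign_weights")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    before = sess.run(W).sum()
    sess.run(assign_W, feed_dict={new_W: np.ones([784, 10], dtype=np.float32)})
    after = sess.run(W).sum()
    print(before, after)  # 0.0 -> 7840.0 if the assignment took effect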

Structure the project as

Single app vs. library

tylermzeller commented Apr 21, 2017

StackOverflow article on how to assign values to variables in TF

3ygun commented Jun 6, 2017

Testing with:

import tensorflow as tf

with tf.Session() as sess:

    # Placeholders for input images, one-hot labels, and externally supplied weights.
    x = tf.placeholder(tf.float32, shape=[None, 784], name="x")
    y = tf.placeholder(tf.float32, [None, 10], name="y")
    w = tf.placeholder(tf.float32, [784, 10], name="weights_in")

    # Trainable parameters of the single-layer MNIST model.
    W = tf.Variable(tf.zeros([784, 10]), name="weights")
    b = tf.Variable(tf.zeros([10]))

    y_out = tf.add(tf.matmul(x, W), b, name="y_out")

    # Softmax cross-entropy loss and a named Adam training op.
    #cross_entropy = tf.reduce_mean(-tf.reduce_sum(y * tf.log(y_out), reduction_indices=[1]))
    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_out))
    train_step = tf.train.AdamOptimizer(0.005).minimize(cross_entropy, name="train")

    # Accuracy op, named so it can be fetched after export.
    correct_prediction = tf.equal(tf.argmax(y_out, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name="test")

    init = tf.variables_initializer(tf.global_variables(), name="init")

    # Write the graph definition (structure only, no variable values) as a binary .pb.
    tf.train.write_graph(sess.graph_def,
                         './',
                         'mnist_adam_0.005_mlp.pb', as_text=False)
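
For completeness, a minimal sketch (assuming TF 1.x APIs and the op names defined above) of how the exported .pb could be imported and its "init", "train", and "test" ops driven by name:

# Sketch only: import the exported GraphDef and drive it by op/tensor name.
# The dummy batch just exercises the ops; real MNIST batches would be fed instead.
import numpy as np
import tensorflow as tf

with tf.gfile.GFile('mnist_adam_0.005_mlp.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    sess.run(graph.get_operation_by_name("init"))

    batch_x = np.zeros([32, 784], dtype=np.float32)
    batch_y = np.zeros([32, 10], dtype=np.float32)
    batch_y[:, 0] = 1.0

    sess.run(graph.get_operation_by_name("train"),
             feed_dict={"x:0": batch_x, "y:0": batch_y})
    acc = sess.run("test:0", feed_dict={"x:0": batch_x, "y:0": batch_y})
    print(acc)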

3ygun commented Jun 18, 2017

Performance

When running PR #32 I get the following performance bottleneck, with the data file reading taking up the large majority of the time:

(screenshot: profiling breakdown showing data file reads dominating the calculation time)

tylermzeller commented Jun 18, 2017

I don't know that there are good alternatives to buffered IO. The only other option I can see is reading the data once and keeping all the samples in memory. Just to hold the training features, we need 784 four-byte floats per feature vector; 60,000 samples gives 784 × 4 × 60,000 bytes, which is ~188 MB of memory. That's a bit much.
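
The arithmetic, spelled out (training features only, float32; labels and any extra copies excluded):

# Back-of-the-envelope memory estimate for keeping all MNIST training features in RAM.
num_samples = 60000
features_per_sample = 784   # 28 x 28 pixels
bytes_per_float = 4         # float32

total_bytes = num_samples * features_per_sample * bytes_per_float
print(total_bytes)        # 188160000
print(total_bytes / 1e6)  # ~188.2 MB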
