
Does this package work for tensorflow 1.15? #49

Open
Jingnan-Jia opened this issue Jan 2, 2020 · 7 comments

@Jingnan-Jia

I found that the last commit was 2 years ago, so maybe this package does not support TensorFlow 1.15? Can anyone confirm this? It does not work in my code with tf 1.15, and I need to know whether the problem is the TensorFlow version or my own code.

@wen8411

wen8411 commented Feb 6, 2020

1.15 could be the last version on which this works. The package uses tf.contrib.graph_editor, and tf.contrib was removed starting from TF 2.0. Is there a plan to have this work in TF 2?
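
For reference, a minimal sketch (not code from this repository) of the dependency in question: the import succeeds on TF 1.x but fails on TF 2.x because tf.contrib was removed.

```python
# Minimal sketch: check whether the tf.contrib.graph_editor dependency
# used by this package is importable in the installed TensorFlow.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)

try:
    from tensorflow.contrib import graph_editor  # available in TF 1.x only
    print("tf.contrib.graph_editor is importable")
except (ImportError, AttributeError):
    # tf.contrib (and graph_editor with it) was removed in TF 2.0
    print("tf.contrib.graph_editor is not available")
```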

@fernandocamargoai

@wen8411 but do you know if it works in the eager mode of tf 1.15? And the contrib module was basically moved to tf-addons, so it would just need to be refactored here.
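
As a rough check (a sketch assuming TF 1.15): graph_editor-based rewriting operates on the static graph, so it can only apply while eager execution stays disabled.

```python
# Sketch, assuming TF 1.15: graph rewriting only applies in graph mode.
import tensorflow as tf

print(tf.__version__)          # e.g. 1.15.x
print(tf.executing_eagerly())  # False by default in TF 1.15 (graph mode)

# tf.enable_eager_execution()  # once eager is on, there is no graph to rewrite
```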

@wen8411

wen8411 commented Feb 7, 2020 via email

@golden0080

@wen8411 is it possible to describe what kind of models worked with this repo?

I'm adopting this in my model, but I got errors like:

ValueError: Operation 'training/Adam/gradients/header/conv_4/batch_normalization_63/cond/ReadVariableOp_2/Switch' has no attr named '_XlaCompile'.

@golden0080

@wen8411 And to your point about graph_editor in TF 2.x, it has basically been removed from the TF source code. They were sunsetting the contrib modules, and that included graph_editor.

@nouiz

nouiz commented Oct 6, 2020

The errors give me the impression that this repo doesn't support XLA. Try disabling XLA.
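
For TF 1.x, one way to do that is through the session config (a sketch, assuming a graph-mode tf.Session; the Keras hookup at the end is only needed if the model is built with tf.keras):

```python
# Sketch: turn off the XLA JIT for a TF 1.x session via the session config.
import tensorflow as tf

config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.OFF
sess = tf.Session(config=config)

# Make tf.keras use this session as well, if applicable.
tf.keras.backend.set_session(sess)
```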

@golden0080

Thank you @nouiz for the suggestion. After disabling XLA, I got different errors:

ValueError: Tensor conversion requested dtype float32 for Tensor with dtype resource: <tf.Tensor 'training/Adam/gradients/gradients_1/training/Adam/gradients/header/conv_4/batch_normalization_63/cond/ReadVariableOp_2/Switch_grad/Switch_1:1' shape=() dtype=resource>
TypeError: Tensors in list passed to 'inputs' of 'Merge' Op have types [float32, resource] that don't all match.

I know this may be related to my model architecture, but does anyone have an idea why this is happening?
