Remove softmax classification layer and compute gradient #5
Comments
Hi @Jamesswiz! I remember that `pop` does not cleanly remove the last layer. You need to do something like `model = Model(model.layers[0], model.layers[-1])`. I know that it's hacky, but I don't know of other ways to fix it.
Hi @experiencor, thanks for the help! Well, the correct way is `inp = model.input`, but again it doesn't work with your code, and I get the error: `The name 'dense_2_13/BiasAdd:0' refers to a Tensor which does not exist.` Now the code is trying to find the bias-add op in the TF graph! Any idea how to deal with this?
@Jamesswiz Sorry, I cannot help you any further. I'm quite slow with TensorFlow bugs.
@experiencor No problem, I think I have figured it out. I am going to test it on various examples to check that things are right. Anyway, it would be good if you could update your code and mention this case as well: the softmax layer should be removed before inspecting gradients, since the softmax output depends on all of the previous output nodes, which we don't want. I am not sure what the usual convention is in image processing, as I work in speech. I guess it's better to take the pre-softmax layer and then propagate the error backwards? Again, great stuff! Thanks for sharing the code and helping me resolve the bugs.
@Jamesswiz Yes, sure. Can you share the way that you resolved it? Or, better, make a pull request 😁.
@experiencor Hi, well, I am new to git, so I'm not sure how pull requests work. Actually, the error was due to the saved model files in the /tmp/ directory; because of them, you need to restart the Jupyter notebook kernel each time you change the TF graph while the current session is open. The code is working fine and the results are interesting. I haven't tested on images, though. To remove the softmax layer from a Keras model:
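The snippet itself was not preserved in this thread; based on the hints above (`pop()` plus rebuilding the model from `inp = model.input`), a minimal sketch of the fix, assuming the softmax is the model's last layer, might look like this:

```python
from keras.models import Model

# Drop the softmax classifier from the layer list, then rebuild the model
# so the computation graph actually ends at the new last (pre-softmax) layer.
model.layers.pop()
model = Model(inputs=model.input, outputs=model.layers[-1].output)
```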
Tip: one can compute gradients with respect to any layer's input or output by defining a new model in this way.
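As an illustration of this tip, here is a hedged sketch of taking the gradient of a class score with respect to an intermediate layer's output; the layer index, `class_idx`, and input `x` are placeholders, not from the original thread:

```python
import keras.backend as K

intermediate = model.layers[3].output         # any layer of interest (index is illustrative)
score = model.output[0, class_idx]            # scalar pre-softmax score for one class
grads = K.gradients(score, intermediate)[0]   # symbolic gradient tensor
grad_fn = K.function([model.input], [grads])  # compile a callable
grad_values = grad_fn([x])[0]                 # x: one preprocessed input batch
```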
Hi @experiencor, regarding this line: `self.imported_y = self.guided_graph.get_tensor_by_name(model.output.name)[0][output_index]`. By default, `output_index` is set to 0, so the gradients appear to be taken with respect to output node 0 rather than the node with the maximum activation. Waiting for a reply! Best
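A hedged sketch of one way to address this: run a forward pass first and pass the index of the most activated output node as `output_index` (assuming the `GuidedBackprop` constructor accepts such a parameter, as the default value quoted above suggests; `x` is a placeholder input):

```python
import numpy as np

preds = model.predict(x)                 # forward pass; x has shape (1, ...)
output_index = int(np.argmax(preds[0]))  # index of the most activated output node
guided_bprop = GuidedBackprop(model, output_index=output_index)  # assumed parameter
```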
Original issue:
Hi,
I want to remove the softmax layer and then compute gradients using guided backpropagation. I used `pop()` for this:
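The snippet was not preserved; presumably it was something like:

```python
model.layers.pop()  # remove the last (softmax) layer from the layer list
```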
However, I am getting an error while using:

```python
from guided_backprop import GuidedBackprop
guided_bprop = GuidedBackprop(model)
```

I guess `pop()` does not change the TF network graph. Can you please help me figure out how to handle this?
Q: does your code ensure that the gradients are with respect to the output node with maximum activation?