feature request: export graph and invariant output names for boxes and scores #16

Open

fastlater opened this issue Oct 23, 2017 · 2 comments

fastlater commented Oct 23, 2017

Using the code below in demo.py, I wrote the graph proto to a file so I can see all the nodes in TensorBoard.

tf.train.write_graph(sess.graph.as_graph_def(),'./tmp', 'output_inference_graph.pb', as_text=False)
writer = tf.summary.FileWriter("./tmp/demo", sess.graph)

I then tried to freeze the model using the latest checkpoints, but I get a "graph_def invalid" error when I try to use the frozen graph. Maybe it is because the detection outputs come from a list of tensors rather than a single output, and also because, without an export function, the file becomes large (250 MB).

    from tensorflow.python.framework import graph_util

    nodes1 = 'Gather_2,Gather_6,Gather_10,Gather_14,Gather_18,Gather_22,Gather_26,Gather_30,Gather_34,Gather_38,Gather_42,Gather_46,Gather_50,Gather_54,Gather_58,Gather_62,Gather_66,Gather_70,Gather_74,Gather_78,'
    nodes2 = 'Gather_3,Gather_7,Gather_11,Gather_15,Gather_19,Gather_23,Gather_27,Gather_31,Gather_35,Gather_39,Gather_43,Gather_47,Gather_51,Gather_55,Gather_59,Gather_63,Gather_67,Gather_71,Gather_75,Gather_79,'
    nodes3 = 'Reshape,mean_iou/mean_iou,mean_iou/AssignAdd'
    output_node_names = nodes1 + nodes2 + nodes3
    print(output_node_names)

    # input_graph_def is the session's graph definition, e.g. sess.graph.as_graph_def()
    output_graph_def = graph_util.convert_variables_to_constants(
        sess, input_graph_def, output_node_names.split(","))

    output_graph = "./tmp/blitznet_frozen_model.pb"
    with tf.gfile.GFile(output_graph, "wb") as f:
        f.write(output_graph_def.SerializeToString())

In the Object Detection API, they use tf.identity to create dummy nodes for the final three outputs [classes, scores, num_detections]. From what I understand of the BlitzNet graph, the boolean_mask_1 subgraph handles the object detection task and the mean_iou subgraph handles the segmentation task. However, in the BlitzNet graph, the outputs [detection boxes, detection scores, detection categories] are not visible; mean_iou ('mean_iou/mean_iou') is the only one I can find. Is it necessary that the detection and score outputs be gathered in a list (net_out) with 20 tensors for detections and 20 tensors for scores? Could a dummy identity operation be created to hold these results? Or do they already exist in the graph and I simply didn't notice them?
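
For reference, here is a minimal sketch of the tf.identity naming pattern I mean. The placeholders and node names below are my own assumptions (following the Object Detection API convention), not something that exists in the BlitzNet code:

    import tensorflow as tf

    # Hypothetical detection outputs; in BlitzNet these would come from the
    # detection post-processing. The node names are my own choice here.
    boxes = tf.placeholder(tf.float32, [None, 4], name='raw_boxes')
    scores = tf.placeholder(tf.float32, [None], name='raw_scores')
    classes = tf.placeholder(tf.int64, [None], name='raw_classes')

    # Dummy identity ops give the outputs fixed, exportable names.
    detection_boxes = tf.identity(boxes, name='detection_boxes')
    detection_scores = tf.identity(scores, name='detection_scores')
    detection_classes = tf.identity(classes, name='detection_classes')

    # These fixed names can then be passed as output_node_names when freezing.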

@dvornikita I will try my best to do something about it and post any results. Let me know if you have any suggestions, correct me if I'm wrong, and comment on my observations.

PS: I know there is nothing wrong with the code and we don't need to freeze the model to get the results, but I would like to try this model in C#, and so far the only way I know to get results is to feed a frozen graph and to separate the outputs into independent nodes so I can run the operations (tf.identity of each output) and get the results. Something like:
Results = Run(Output[] inputs, Tensor[] inputValues, Output[] outputs, Operation[] targetOperations)
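
For comparison, here is the Python equivalent of that Run call, just to illustrate why fixed output names matter. The node names are assumptions matching the tf.identity sketch above, not the actual BlitzNet names:

    import numpy as np
    import tensorflow as tf

    # Load the frozen graph and run it referring only to named tensors.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("./tmp/blitznet_frozen_model.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
        image_in = graph.get_tensor_by_name('input_placeholder:0')   # assumed input name
        boxes = graph.get_tensor_by_name('detection_boxes:0')        # assumed output names
        scores = graph.get_tensor_by_name('detection_scores:0')
        dummy_image = np.zeros((1, 300, 300, 3), np.float32)          # shape depends on the model
        with tf.Session(graph=graph) as sess:
            out_boxes, out_scores = sess.run([boxes, scores],
                                             feed_dict={image_in: dummy_image})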

dvornikita (Owner) commented

Hi @fastlater, I really appreciate your help in debugging and developing this project.
Unfortunately, this time I can't answer the question because I don't know how to help you in this situation, at least not right now :)
I'm going to look into the graph later, probably next week, and write back to you.

fastlater commented Oct 24, 2017

@dvornikita Thank you for your honest answer. I will keep trying. I hope you have some free time next week to check it and give me some ideas.

Update October 26, 2017:
Using the code shown below, I can now store the graph properly:

    output_graph_def = graph_util.convert_variables_to_constants(
        sess, input_graph_def, output_node_names.split(","))
    with tf.gfile.GFile(output_graph, "wb") as f:
        f.write(output_graph_def.SerializeToString())

optimize_for_inference.py removes only a few nodes, but the model is clearer in TensorBoard and ready to use. Thus, BlitzNet can be exported using the optimize_for_inference tool.
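
For completeness, here is a sketch of how the same optimization can be done from Python with optimize_for_inference_lib; the input and output node names below are placeholders and have to be replaced with the actual BlitzNet node names:

    import tensorflow as tf
    from tensorflow.python.tools import optimize_for_inference_lib

    # Load the frozen graph produced by convert_variables_to_constants.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("./tmp/blitznet_frozen_model.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    optimized_def = optimize_for_inference_lib.optimize_for_inference(
        graph_def,
        ["input_placeholder"],              # assumed input node name
        ["mean_iou/mean_iou", "Reshape"],   # output node names used when freezing
        tf.float32.as_datatype_enum)

    with tf.gfile.GFile("./tmp/blitznet_optimized.pb", "wb") as f:
        f.write(optimized_def.SerializeToString())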

I keep trying with tf.identity because simpler output nodes (one for the n locations and one for the n scores) would make the outputs of this model easier to use once it is exported:

all_boxes = tf.identity(self.detection_list,name='all_boxes')
all_scores = tf.identity(self.score_list, name='all_scores')

However, these lines gave me an error: InvalidArgumentError: Shapes of all inputs must match: values[0].shape = [0] != values[4].shape = [50]. I tried tf.concat, but then all the detections end up in a single flat array rather than an array of arrays (one per class). I will need help with this.
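
One possible workaround, only a sketch under my own assumptions (detection_list holding 20 tensors of shape [n_i, 4], score_list holding 20 tensors of shape [n_i], and a chosen maximum of 50 detections per class): pad each per-class result to the same length before stacking, and export the true counts as a separate node so the padding can be stripped afterwards:

    import tensorflow as tf

    MAX_DET = 50  # assumed maximum number of detections per class

    def pad_to_max(t, max_det=MAX_DET):
        # Zero-pad the first dimension so every class tensor has the same shape.
        pad = max_det - tf.shape(t)[0]
        paddings = [[0, pad]] + [[0, 0]] * (t.shape.ndims - 1)
        return tf.pad(t, paddings)

    all_boxes = tf.identity(
        tf.stack([pad_to_max(t) for t in self.detection_list]), name='all_boxes')
    all_scores = tf.identity(
        tf.stack([pad_to_max(t) for t in self.score_list]), name='all_scores')
    # True number of detections per class, needed to remove the padding later.
    num_dets = tf.identity(
        tf.stack([tf.shape(t)[0] for t in self.detection_list]), name='num_detections')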

@fastlater fastlater changed the title feature request: export_inference_graph feature request: export graph and invariant output names for boxes and scores Oct 30, 2017