Finding TensorFlow op kernel targets when you add new operations to the graph
When the inference graph changes and uses new ops, and those ops and their kernels are not included in the libdeepspeech.so binary, you'll get an error similar to the one below when trying to load the exported model:
$ ./deepspeech --model ../ldc93s1_export/output_graph.pb --alphabet ../data/alphabet.txt --audio ../data/ldc93s1/LDC93S1.wav
TensorFlow: v1.14.0-14-g1aad02a78e
DeepSpeech: v0.6.0-alpha.4-7-gff8f405
Warning: reading entire model file into memory. Transform model file into an mmapped graph to reduce heap usage.
2019-07-17 09:57:35.976319: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.2 AVX AVX2 FMA
Invalid argument: No OpKernel was registered to support Op 'Split' used by {{node cudnn_lstm/rnn/cond/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/split}}with these attrs: [T=DT_FLOAT, num_split=4]
Registered devices: [CPU]
Registered kernels:
<no registered kernels>
[[cudnn_lstm/rnn/cond/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/split]]
Could not create model.
In the case above, the missing op was "Split". First, double-check that the change is intentional. Did you really mean to change the inference graph, and is the change correct? You can inspect the exported model with a tool like Netron to check that it looks correct with your changes.
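If you prefer a command-line check of which ops the exported graph actually uses, TensorFlow 1.x ships a summarize_graph tool that lists the op types found in a frozen graph. This is only a sketch, not part of the original workflow: the model path is a placeholder and the tool's Bazel target may differ in your TensorFlow checkout.
$ cd ../tensorflow
$ bazel run //tensorflow/tools/graph_transforms:summarize_graph -- --in_graph=/path/to/ldc93s1_export/output_graph.pb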
If you really need to use the new op, you should then add the relevant dependencies to the libdeepspeech.so rule in native_client/BUILD. You can find the appropriate targets in the TensorFlow source by searching for the op and kernel registration macros. Both the op and the kernel need to be included in libdeepspeech.so, so make sure you add both.
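To locate the rule you need to edit, you can search native_client/BUILD for the libdeepspeech target from the root of the DeepSpeech checkout. This is just a hint; the exact rule name and layout depend on your version of the code.
$ rg -n 'libdeepspeech' native_client/BUILD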
To find the relevant dependencies for the "Split" op, here's what I did (using ripgrep):
$ cd ../tensorflow
$ rg 'REGISTER_OP\("Split'
tensorflow/core/ops/array_ops.cc
564:REGISTER_OP("Split")
598:REGISTER_OP("SplitV")
$ rg 'REGISTER_KERNEL_BUILDER\(Name\("Split'
tensorflow/core/kernels/split_op.cc
399: REGISTER_KERNEL_BUILDER(Name("Split") \
413: REGISTER_KERNEL_BUILDER(Name("Split") \
429: REGISTER_KERNEL_BUILDER(Name("Split") \
tensorflow/core/kernels/split_v_op.cc
442: REGISTER_KERNEL_BUILDER(Name("SplitV") \
462: REGISTER_KERNEL_BUILDER(Name("SplitV") \
484: REGISTER_KERNEL_BUILDER(Name("SplitV") \
This means the op is defined in tensorflow/core/ops/array_ops.cc and the kernel is defined in tensorflow/core/kernels/split_op.cc. You can then look at the relevant BUILD files, in this case tensorflow/core/BUILD and tensorflow/core/kernels/BUILD, to find the needed rules.
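As a starting point, you can grep those BUILD files for rules that mention the source files; the exact rule names and matches will vary with your TensorFlow version.
$ cd ../tensorflow
$ rg -n 'array_ops' tensorflow/core/BUILD
$ rg -n 'split_op' tensorflow/core/kernels/BUILD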
The op is usually in what TensorFlow calls an "op lib". They're created with tf_gen_op_libs rules. For array_ops.cc, the generated rule ends up being array_ops_op_lib, so I can add //tensorflow/core:array_ops_op_lib to the libdeepspeech.so dependencies.
The kernel is usually in what TensorFlow calls a "kernel library". They're created with tf_kernel_library rules. For split_op.cc, the generated rule ends up being split_op, so I can add //tensorflow/core/kernels:split_op to the libdeepspeech.so dependencies.
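Before editing native_client/BUILD, it can be worth checking that both target labels actually exist in your TensorFlow checkout, for example with bazel query (this assumes Bazel is already set up for the TensorFlow workspace):
$ cd ../tensorflow
$ bazel query //tensorflow/core:array_ops_op_lib
$ bazel query //tensorflow/core/kernels:split_op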
If you don't find the op or kernel registration when searching, it could be due to whitespace, for example if there's a line break between REGISTER_KERNEL_BUILDER and Name("Split"). In that case, you can search across lines:
$ cd ../tensorflow
$ find . -type f | xargs grep -A2 REGISTER_KERNEL_BUILDER | grep 'Name("Split'
(The -A2 flag makes grep include the two lines that follow each match in its output.)
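Alternatively, ripgrep can match across line breaks with its multiline flag, which runs essentially the same search in one step:
$ cd ../tensorflow
$ rg -U 'REGISTER_KERNEL_BUILDER\(\s*Name\("Split'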