Tf2onnx converts a TensorFlow graph to an ONNX graph.
Tf2onnx is in early development. Mileage will vary since TensorFlow supports roughly 4 times as many operations as the current ONNX version, but standard models seem to use mostly ops that ONNX does support.
Basic nets and conv nets should work. A list of models that pass tests can be found here
Install dependencies:
pip install onnx==1.1
pip install tensorflow
If you want to run unit tests against the Caffe2 onnx backend, build and install Caffe2 and onnx-caffe2:
https://github.com/caffe2/caffe2
pip install onnx-caffe2==1.0
We tested with TensorFlow 1.5/1.6 and Anaconda Python 3.5/3.6.
Once dependencies are installed, from the tf2onnx root folder call:
python setup.py install
or
python setup.py develop
To create a wheel for distribution:
python setup.py bdist_wheel
python -m tf2onnx.convert
usage: convert.py [-h] --input INPUT [--output OUTPUT] [--target TARGET] --inputs INPUTS --outputs OUTPUTS [--continue_on_error] [--verbose]
For example:
python -m tf2onnx.convert --input tests/models/fc-layers/frozen.pb --inputs X:0 --outputs output:0 --output tests/models/fc-layers/model.onnx --pretty --verbose
To convert a TensorFlow model, tf2onnx expects a frozen TensorFlow graph, and the user needs to specify the graph's inputs and outputs by passing the input and output names with --inputs INPUTS and --outputs OUTPUTS.
The model developer usually knows the graph's inputs and outputs; if not, TensorFlow's summarize_graph tool can find them, for example:
summarize_graph --in_graph=tests/models/fc-layers/frozen.pb
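If summarize_graph is not built, a few lines of Python can list the node names and ops in a frozen graph (a quick helper for inspection, not part of tf2onnx):
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("tests/models/fc-layers/frozen.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    print(node.name, node.op)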
TensorFlow's freeze_graph tool is used to freeze the graph. For example:
python -m tensorflow.python.tools.freeze_graph \
--input_graph=my_checkpoint_dir/graphdef.pb \
--input_binary=true \
--input_names=input:0 \
--output_node_names=output:0 \
--input_checkpoint=my_checkpoint_dir \
--output_graph=tests/models/fc-layers/frozen.pb
Different ONNX versions and workarounds for specific runtimes can be selected with --target TARGET. The default is onnx-1.1 and caffe2, which generates a graph that can be executed on an onnx-1.0/onnx-1.1 runtime, such as caffe2 and winml.
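For example, to generate a graph aimed at a caffe2 runtime you might pass the following (run python -m tf2onnx.convert --help to see the exact target values accepted):
python -m tf2onnx.convert --input tests/models/fc-layers/frozen.pb --inputs X:0 --outputs output:0 --output tests/models/fc-layers/model.onnx --target caffe2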
There are 2 types of tests: unit tests and tests that run pre-trained models.
To run the unit tests:
python setup.py test
To run the pre-trained model tests:
python tests/run_pretrained_models.py
usage: run_pretrained_models.py [-h] [--cache CACHE] [--tests TESTS] [--backend BACKEND] [--verbose] [--debug] [--config yaml-config]
optional arguments:
  -h, --help            show this help message and exit
  --cache CACHE         pre-trained models cache dir
  --tests TESTS         tests to run
  --backend BACKEND     backend to use
  --config yaml-config  yaml config file
  --verbose             verbose output
  --debug               dump generated graph with shape info
run_pretrained_models.py will run the TensorFlow model, capture the TensorFlow output, and run the same test against the specified ONNX backend after converting the model. The only practical backend to use at this time is Caffe2, and you need to install Caffe2 for this to work.
For example, call it with:
python tests/run_pretrained_models.py --backend caffe2 --config tests/run_pretrained_models.yaml
While the protobuf format of a TensorFlow graph is not all that different from ONNX, mileage will vary because TensorFlow supports roughly 4x the ops of the current ONNX version. The converter needs to take care of a few things:
- Convert the protobuf format. Since the formats are similar, this step is straightforward.
- TensorFlow types need to be mapped to their ONNX equivalent.
- For many ops, TensorFlow passes parameters like shapes as inputs where ONNX wants to see them as attributes. Since we use a frozen graph, the converter will fetch such an input as a constant, convert it to an attribute, and remove the original input.
- TensorFlow in many cases composes ops out of multiple simpler ops. The converter needs to identify such subgraphs, slice them out, and replace them with the ONNX equivalent. This can become fairly complex, so we use a graph matching library for it. A good example of this is the TensorFlow transpose op.
- TensorFlow's default data format is NHWC while ONNX requires NCHW. The converter will insert transpose ops to deal with this.
- There are some ops, like relu6, that are not supported in ONNX but can be composed from other ONNX ops (see the sketch after this list).
- ONNX backends are new and their implementations are not complete yet. For some ops the converter generates ops that work around issues in existing backends.
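As an illustration of the relu6 case, the composition could be expressed with onnx.helper roughly like this (a minimal sketch; the converter's actual relu6_op() implementation may differ):
from onnx import helper

# relu6(x) = min(relu(x), 6), expressed here as Relu followed by Clip
relu_node = helper.make_node("Relu", inputs=["X"], outputs=["relu_X"])
clip_node = helper.make_node("Clip", inputs=["relu_X"], outputs=["Y"], min=0.0, max=6.0)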
tf2onnx starts with a frozen graph. This is because of the third item in the list above: parameters passed as inputs can only be folded into attributes when they are constants in a frozen graph.
tf2onnx first does a simple conversion from the TensorFlow protobuf format to the ONNX protobuf format without looking at individual ops. We do this so we can use the ONNX graph as the internal representation and write helper functions around it. The code that does this conversion is in tensorflow_to_onnx(). tensorflow_to_onnx() returns the ONNX graph and a dictionary with shape information from TensorFlow. The shape information is helpful in some cases when processing individual ops. The ONNX graph is wrapped in a Graph object, and nodes in the graph are wrapped in Node objects, to allow easier graph manipulations. All code that deals with nodes and graphs is in graph.py.
In the next step we apply graph matching code on the graph to rewrite subgraphs for ops like transpose and lstm. For an example, look at rewrite_transpose().
In the fourth step we look at individual ops that need attention. The dictionary _OPS_MAPPING maps TensorFlow op types to a method that is used to process the op. The simplest case is direct_op(), where the op can be taken as is. Whenever possible we try to group ops into common processing; for example, all ops that require dealing with broadcasting are mapped to broadcast_op(). For an op that composes the TensorFlow op from multiple ONNX ops, see relu6_op().
Once all ops are converted, we need to do a topological sort since ONNX requires it. process_tf_graph() is the method that takes care of all the above steps.
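A rough sketch of driving the conversion from Python, assuming process_tf_graph() accepts a tf.Graph (the exact entry point and signature may differ; see tfonnx.py):
import tensorflow as tf
from tf2onnx import tfonnx

# load the frozen graph and import it into a tf.Graph
graph_def = tf.GraphDef()
with tf.gfile.GFile("tests/models/fc-layers/frozen.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as tf_graph:
    tf.import_graph_def(graph_def, name="")

onnx_graph = tfonnx.process_tf_graph(tf_graph)  # assumed signature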
If you would like to contribute and add new conversions to tf2onnx, the process is something like:
- See if the op fits into one of the existing mappings. If so, adding it to _OPS_MAPPING is all that is needed.
- If the new op needs extra processing, start a new mapping function.
- If the TensorFlow op is composed of multiple ops, consider using a graph rewrite. While this might be a little harder initially, it works better for complex patterns.
- Add a unit test in tests/test_backend.py. The unit tests mostly create the TensorFlow graph, run it and capture the output, then convert it to ONNX, run it against an ONNX backend, and compare the TensorFlow and ONNX results (see the sketch after this list).
- If there are pre-trained models that use the new op, consider adding those to tests/run_pretrained_models.py.
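A rough, standalone sketch of the unit test pattern (illustrative only; the real tests in tests/test_backend.py use the repo's own helpers):
import numpy as np
import tensorflow as tf

x_val = np.random.rand(2, 4).astype(np.float32)
with tf.Session() as sess:
    x = tf.placeholder(tf.float32, [2, 4], name="input")
    y = tf.nn.relu6(x * 10, name="output")
    expected = sess.run(y, feed_dict={x: x_val})
# next: convert sess.graph with tf2onnx (see process_tf_graph() above), run the
# resulting model on an ONNX backend such as caffe2 with x_val, and compare
# that output against `expected`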
What is still missing:
- lstm/gru support (working on this)
- more testing
- more model coverage