Bug Report
This test was run on s390x, which is a big-endian architecture.
Failure log from pytest:
```
_______________________ TestOptimizer.test_fuse_bn_into_conv_simple _______________________

self = <optimizer_test.TestOptimizer testMethod=test_fuse_bn_into_conv_simple>

    def test_fuse_bn_into_conv_simple(self):  # type: () -> None
        for (tensor_type, np_type) in [(TensorProto.FLOAT, np.float32),
                                       (TensorProto.DOUBLE, np.float64)]:
            conv = helper.make_node("Conv", ["X", "W", "B"], ["Y"])
            bn = helper.make_node("BatchNormalization",
                                  ["Y", "scale", "b", "mean", "var"], ["Z"])

            W = np.random.randn(3, 2, 5, 5).astype(np_type) + 2
            B = np.random.randn(3,).astype(np_type) + 2
            scale = np.random.randn(3,).astype(np_type) + 2
            b = np.random.randn(3,).astype(np_type) + 2
            mean = np.random.randn(3,).astype(np_type) + 2
            var = np.abs(np.random.randn(3,).astype(np_type)) + 2

            initializers = [
                helper.make_tensor(name, tensor_type, npa.shape, npa.tobytes(), raw=True)
                for name, npa in [('W', W), ('B', B), ('scale', scale),
                                  ('b', b), ('mean', mean), ('var', var)]
            ]
            graph = helper.make_graph(
                [conv, bn],
                "test",
                [helper.make_tensor_value_info("X", tensor_type, (5, 2, 28, 28)),
                 helper.make_tensor_value_info("W", tensor_type, (3, 2, 5, 5)),
                 helper.make_tensor_value_info("B", tensor_type, (3,)),
                 helper.make_tensor_value_info("scale", tensor_type, (3,)),
                 helper.make_tensor_value_info("b", tensor_type, (3,)),
                 helper.make_tensor_value_info("mean", tensor_type, (3,)),
                 helper.make_tensor_value_info("var", tensor_type, (3,))],
                [helper.make_tensor_value_info("Z", tensor_type, (5, 3, 24, 24))],
                initializer=initializers,
                value_info=[
                    helper.make_tensor_value_info("Y", tensor_type, (5, 3, 24, 24))
                ]
            )
            optimized_model = self._optimized(graph, ["fuse_bn_into_conv"])

            self.assertEqual(len(optimized_model.graph.node), 1)
            self.assertEqual(optimized_model.graph.node[0].op_type, 'Conv')
            self.assertEqual(len(optimized_model.graph.initializer), 2)

            new_W = numpy_helper.to_array(optimized_model.graph.initializer[0])
            new_b = numpy_helper.to_array(optimized_model.graph.initializer[1])

            f = scale / np.sqrt(var + 1e-5)
>           np.testing.assert_almost_equal((B - mean) * f + b, new_b)
E           AssertionError:
E           Arrays are not almost equal to 7 decimals
E
E           Mismatched elements: 3 / 3 (100%)
E           Max absolute difference: 2.7824624e+14
E           Max relative difference: 1.0692023e+32
E            x: array([-1.3619891,  2.4206262,  2.2576501], dtype=float32)
E            y: array([-4.7447069e-14, -2.2639551e-32,  2.7824624e+14], dtype=float32)

optimizer_test.py:1509: AssertionError
```
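For reference, the fusion the test verifies folds BatchNormalization into the preceding Conv's weights and bias. A minimal NumPy sketch of that arithmetic (the helper name and variable layout here are illustrative, not the actual optimizer code):

```python
import numpy as np

def fuse_bn_into_conv(W, B, scale, b, mean, var, eps=1e-5):
    """Fold BatchNormalization(scale, b, mean, var) into Conv(W, B).

    After fusion, Conv(x, new_W, new_b) should equal BN(Conv(x, W, B)).
    """
    f = scale / np.sqrt(var + eps)          # per-output-channel factor
    new_W = W * f[:, None, None, None]      # scale each output channel's filter
    new_b = (B - mean) * f + b              # scale and shift the conv bias
    return new_W, new_b

# Same shapes and dtype as the failing test case
rng = np.random.RandomState(0)
W = rng.randn(3, 2, 5, 5).astype(np.float32) + 2
B = rng.randn(3).astype(np.float32) + 2
scale = rng.randn(3).astype(np.float32) + 2
b = rng.randn(3).astype(np.float32) + 2
mean = rng.randn(3).astype(np.float32) + 2
var = np.abs(rng.randn(3).astype(np.float32)) + 2

new_W, new_b = fuse_bn_into_conv(W, B, scale, b, mean, var)

# new_b matches the expression asserted in the failing test above
np.testing.assert_almost_equal((B - mean) * (scale / np.sqrt(var + 1e-5)) + b, new_b)
```

The failing assertion checks exactly this `(B - mean) * f + b` expression against the bias initializer produced by the optimizer, which is why a byte-order bug in reading the initializers shows up as wildly wrong magnitudes rather than small numeric drift.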
The fuse_bn_into_conv.h code does not account for the possibility of running on a big-endian machine when executing the optimization routines.
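To illustrate the class of bug (this is a hypothetical sketch, not the actual patch): ONNX stores a tensor's `raw_data` as little-endian bytes regardless of host byte order, so code that reinterprets those bytes with the host's native byte order silently produces garbage on a big-endian machine such as s390x.

```python
import sys
import numpy as np

def decode_raw_floats(raw_bytes):
    """Hypothetical endian-safe decoder for float32 raw_data.

    Using an explicit little-endian dtype ('<f4') instead of the native
    'f4' makes the read correct on both little- and big-endian hosts.
    """
    arr = np.frombuffer(raw_bytes, dtype='<f4')
    return arr.astype(np.float32)  # convert to native order for computation

vals = np.array([1.5, -2.0, 3.25], dtype=np.float32)
raw = vals.astype('<f4').tobytes()       # what raw_data would contain

decoded = decode_raw_floats(raw)
assert np.array_equal(decoded, vals)     # correct on any host

# The bug pattern: reading the same bytes with the wrong byte order
# yields values with absurd magnitudes, like those in the failure log.
wrong = np.frombuffer(raw, dtype='>f4')
print(sys.byteorder, decoded, wrong)
```

The huge mismatches in the log (e.g. `2.7824624e+14` vs. `2.2576501`) are characteristic of byte-swapped float32 values being treated as native.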
Attached is a patch file with updated source files containing a proposed fix for this optimization issue.
onnx-opt.patch.zip
A pull request can be submitted if needed.
The ONNX optimizer is being moved to a standalone repo under the onnx tree. Transferring this issue to onnx/optimizer.