I've experienced a bug regarding the data format BatchSize x Channels x H x W (NCHW). By default, when TensorFI parses a layer that depends on the data format, such as conv2d, it does not save that variable, and it then crashes at run time.
How do I implement the operator tf.layers.BatchNorm? It takes a tensor and four algebraic parameters, and applies normalization to the tensor.
Hi, to implement BN, you should add this operator to `class Ops` in fiConfig.py, and then implement a fi_version of BN in injectFault.py (see the ProgrammerGuide in the Manual).
Since it takes several parameters, you may also want to look into the modifyGraph.py module, where each fi_op is created, and make sure all the variables and parameters are passed to the fi_op.
I'm afraid we don't support batch normalization, as you've discovered. If you'd like to implement it, you'd need to add an injectFault function for this operator in injectFault.py. You can look at how we've implemented some of the other TF operators in the same file. Thanks.
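As a starting point, here is a minimal sketch of the math such an injectFault function would need to reproduce. The parameter names (`gamma`, `beta`, `moving_mean`, `moving_var`) and the `condPerturb`/`Ops.BATCHNORM` wrapping shown in the comment are assumptions based on how other operators in injectFault.py are typically structured, not the actual TensorFI API for this operator:

```python
import numpy as np

def batch_norm(x, gamma, beta, moving_mean, moving_var, eps=1e-3):
    """Reference batch normalization: normalize x with the tracked
    mean/variance, then scale by gamma and shift by beta.

    These are the four algebraic parameters the question refers to."""
    x_hat = (x - moving_mean) / np.sqrt(moving_var + eps)
    return gamma * x_hat + beta

# In a TensorFI-style fi_version of this operator, one would compute the
# result as above and then route it through the fault-injection wrapper,
# e.g. (hypothetical, following the pattern of other fi_ops):
#   res = condPerturb(Ops.BATCHNORM, batch_norm(x, gamma, beta, mean, var))
```

With `eps=0`, `mean=2`, `var=1`, `gamma=2`, `beta=0.5`, the input `[1, 2, 3]` maps to `[-1.5, 0.5, 2.5]`, which is a quick way to sanity-check an implementation.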