
BatchNorm and data format. #9

Open
AlessandroToschi opened this issue Sep 16, 2019 · 2 comments

Comments

@AlessandroToschi

I've hit a bug related to the data format BatchSize x Channels x H x W (NCHW). By default, when TensorFI parses a layer that depends on the data format, such as conv2d, it doesn't save that attribute, and it then crashes at run time.

How do I implement the operator tf.layers.BatchNorm? It takes a tensor plus four algebraic parameters used to normalize the tensor.
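For reference, the computation those four parameters control can be sketched in a few lines of NumPy. This is a minimal sketch of the standard batch-norm formula, assuming the parameters are mean, variance, offset, and scale (as in tf.nn.batch_normalization), not TensorFI code:

```python
import numpy as np

def batch_norm(x, mean, variance, offset, scale, eps=1e-3):
    # Standard batch normalization:
    #   y = scale * (x - mean) / sqrt(variance + eps) + offset
    return scale * (x - mean) / np.sqrt(variance + eps) + offset

# Normalizing a batch with its own statistics gives roughly
# zero mean and unit variance.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = batch_norm(x, mean=x.mean(), variance=x.var(), offset=0.0, scale=1.0)
```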

@zitaoc
Collaborator

zitaoc commented Sep 16, 2019

Hi, to implement BN you should add this operator to the "Ops" class in ifconfig.py, and then implement a fi_version of BN in injectFault.py (see the Programmer Guide in the Manual).

Since it takes several parameters, you may also want to look into the modifyGraph.py module, where each fi_op is created, and make sure all the variables and parameters are passed to the fi_op.

@karthikp-ubc
Contributor

I'm afraid we don't support batch normalization, as you've discovered. If you'd like to implement it, you'd need to add an "injectFault" function for this operator in injectFault.py (see below). You can look at how we've implemented some of the other TF operators in the same file. Thanks.

https://github.com/DependableSystemsLab/TensorFI/blob/master/TensorFI/injectFault.py
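To make the suggestion concrete, here is a hypothetical sketch of what such a fault-injecting batch-norm operator could look like: compute the correct result, then corrupt it according to a fault model. The names injectFaultBatchNorm and perturb, and the single-element sign-flip fault model, are illustrative assumptions, not the actual TensorFI API; a real implementation would follow the patterns of the existing functions in injectFault.py.

```python
import numpy as np

def perturb(value):
    # Placeholder fault model (assumption, not TensorFI's): flip the
    # sign of one randomly chosen element of the output tensor.
    flat = value.reshape(-1)          # view into the same buffer
    idx = np.random.randint(flat.size)
    flat[idx] = -flat[idx]
    return value

def injectFaultBatchNorm(x, mean, variance, offset, scale, eps=1e-3):
    # Compute the fault-free batch-norm result first, then inject
    # a fault into the output before returning it.
    result = scale * (x - mean) / np.sqrt(variance + eps) + offset
    return perturb(np.array(result, dtype=float))
```

Usage: calling injectFaultBatchNorm with the same arguments as the clean operator returns a tensor of the same shape in which exactly one element has been corrupted.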
