Classifier.predict() says my pretrained and accurate model is inaccurate. #1628
-
**Describe the bug**
I included a code snippet below. Basically, I use the example located at https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/main/examples/get_started_tensorflow.py, with some variations. Instead of training a model, I create my model and initialize it with weights loaded from a pickle file. To verify that my model works properly, I call `classifier.predict()`. Instead of getting a high percentage, I get 8%-10% accuracy, which does not agree with the test accuracy I find when I don't use `classifier.predict()`. For some reason unknown to me, `classifier.predict()` gives me numbers indicative of random guesses. I assume that my pretrained model is not being read correctly by the Adversarial Robustness Toolbox framework.

**To Reproduce**

```python
# Step 1: Load the MNIST dataset
(x_train, y_train), (x_test, y_test), min_pixel_value, max_pixel_value = load_mnist()

# Load layer weights
with open(r'C:\Users\valentin\Desktop\my_name\file_adv.pkl', 'rb') as f:

# Step 2: Create the model
input_ph = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
model = Sequential()
# set logits equal to model output
# check that my model is accurate
sess = tf.Session()

# Step 3: Create the ART classifier
classifier = TensorFlowClassifier(

# Step 5: Evaluate the ART classifier on benign test examples
predictions = classifier.predict(x_test)
```
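For reference, the accuracy check in the linked example script looks roughly like this (a minimal sketch, assuming the `classifier`, `x_test`, and one-hot `y_test` from the snippet above):

```python
import numpy as np

# Compare predicted classes against the one-hot test labels returned by load_mnist()
predictions = classifier.predict(x_test)
accuracy = np.sum(np.argmax(predictions, axis=1) == np.argmax(y_test, axis=1)) / len(y_test)
print("Accuracy on benign test examples: {}%".format(accuracy * 100))
```

With ten MNIST classes, 8%-10% is what random guessing would give, which matches the behaviour described above.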
-
Hi @animalcroc Are you able to extend/modify this script to make correct predictions with just your model alone?
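For example, something along these lines could check the restored weights without ART (a rough sketch; it assumes the `sess`, `input_ph`, and `logits` tensors from your snippet, and that `y_test` is the one-hot array returned by `load_mnist()`):

```python
import numpy as np

# Run the network directly through the TF1 session, bypassing ART,
# to check whether the restored weights predict correctly on their own.
raw_logits = sess.run(logits, feed_dict={input_ph: x_test})
accuracy = np.mean(np.argmax(raw_logits, axis=1) == np.argmax(y_test, axis=1))
print("Accuracy without ART: {:.2f}%".format(accuracy * 100))
```

If this direct check is accurate while `classifier.predict()` is not, the problem is likely in how the classifier is constructed; if both are around 10%, the weights are probably not being restored into the session.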
-
ART_error_pretrained.txt