
Releases: Trusted-AI/adversarial-robustness-toolbox

ART 1.6.2

20 May 22:47

This release of ART 1.6.2 provides updates to ART 1.6.

Added

  • Added targeted option to RobustDPatch (#1069)
  • Added option standardise_output to define the provided label format (#1069); see the sketch after this list
  • Added property native_label_is_pytorch_format to object detectors to define the label format expected by the model (#1069)
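
As a brief illustration of the standardised label format, the sketch below builds labels in PyTorchFasterRCNN's (torchvision) convention, which is the format the attacks now use internally. The detector object is assumed to exist, the constructor arguments shown are illustrative rather than the full API, and labels given in TensorFlowFasterRCNN's convention would instead rely on the standardise_output option described above.

```python
import numpy as np
from art.attacks.evasion import RobustDPatch

# `detector` is assumed to be an ART object detector, e.g.
# art.estimators.object_detection.PyTorchFasterRCNN (construction omitted).
attack = RobustDPatch(detector, patch_shape=(40, 40, 3), targeted=True, max_iter=100)

# Standardised labels: one dict per image with absolute [x1, y1, x2, y2] boxes.
# TensorFlowFasterRCNN-style labels (normalised [y1, x1, y2, x2] boxes) would be
# converted by the attack when the standardise_output option is used, per the notes above.
y = [
    {
        "boxes": np.array([[25.0, 40.0, 120.0, 200.0]], dtype=np.float32),
        "labels": np.array([3], dtype=np.int64),
        "scores": np.array([1.0], dtype=np.float32),
    }
]

x = np.random.rand(1, 416, 416, 3).astype(np.float32)
patch = attack.generate(x=x, y=y)
```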

Changed

  • Changed DPatch and RobustDPatch to work internally with PyTorchFasterRCNN's object detection label format and to convert labels provided in TensorFlowFasterRCNN's format using the option standardise_output (#1069)
  • Changed setup.py to only contain core dependencies in install_requires and added additional install options tensorflow_image, tensorflow_audio, pytorch_image, and pytorch_audio (#1116)
  • Changed check for version of torch and torchvision in AdversarialPatchPyTorch to account for suffixes like +cu102 (#1115)
  • Changed art.utils.load_iris to use sklearn.datasets.load_iris instead of downloading from https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data (#1097); see the example below
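
For reference, the Iris data can still be loaded through ART's utilities in the usual way. This is a minimal sketch assuming the "iris" dataset name is accepted by art.utils.load_dataset, which dispatches to load_iris; only the data source behind it changes in this release.

```python
from art.utils import load_dataset

# Since 1.6.2 the underlying load_iris uses sklearn.datasets.load_iris instead of
# downloading the data from the UCI archive.
(x_train, y_train), (x_test, y_test), min_, max_ = load_dataset("iris")

print(x_train.shape, y_train.shape)  # training features and labels
```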

Removed

  • Removed unnecessary requirement for scores in labels y for TensorFlowFasterRCNN.loss_gradient and PyTorchFasterRCNN.loss_gradient (#1069)

Fixed

  • Fixed docstrings of methods predict and loss_gradient to correctly describe the expected and provided label format (#1069)
  • Fixed bug of missing transfer of tensor to device in ProjectedGradientDescentPyTorch (#1076)
  • Fixed bug resulting in wrong loss gradients calculated with ScikitlearnLogisticRegression.loss_gradient (#1065)

ART 1.6.1

16 Apr 23:43

This release of ART 1.6.1 provides updates to ART 1.6.

Added

  • Added a notebook showing an example of Expectation over Transformation (EoT) sampling with ART to generate adversarial examples that are robust against rotation in image classification tasks. (#1051)
  • Added a check for valid combinations of stride, freq_dim and image size in SimBA attack. (#1037)
  • Added accurate gradient estimation to LFilter audio preprocessing. (#1002)
  • Added support for multiple layers to be targeted by BullseyePolytopeAttackPyTorch attack to increase effectiveness in end-to-end scenarios. (#1003)
  • Added check and ValueError to provide explanation for too large nb_parallel values in ZooAttack. (#988)

Changed

  • Changed TensorFlowV2Classifier.get_activations to accept negative layer indexes; see the sketch after this list. (#1054)
  • Tested BoundaryAttack and HopSkipJump attacks with batch_size larger than 1 and changed default value to batch_size=64. (#971)
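
A minimal sketch of the negative layer index, assuming a small illustrative tf.keras model wrapped in TensorFlowV2Classifier; any model with named layers would do.

```python
import numpy as np
import tensorflow as tf
from art.estimators.classification import TensorFlowV2Classifier

# Toy model, purely illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(10),
])

classifier = TensorFlowV2Classifier(
    model=model,
    nb_classes=10,
    input_shape=(28, 28, 1),
    loss_object=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    clip_values=(0.0, 1.0),
)

x = np.random.rand(4, 28, 28, 1).astype(np.float32)

# Since 1.6.1, negative indexes count from the end of classifier.layer_names,
# so layer=-1 addresses the last layer without knowing its position.
activations = classifier.get_activations(x, layer=-1, batch_size=4)
```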

Removed

[None]

Fixed

  • Fixed bug in DPatch attack where the patch being optimised was not updated onto the images used for loss gradient calculation, so all iterations ran with the constant, initially applied patches. (#1049)
  • Fixed bug in BullseyePolytopeAttackPyTorch attack where attacking multiple layers of the underlying model only perturbed the first of all input images. (#1046)
  • Fixed return value of TensorFlowV2Classifier.get_activations to a list of strings. (#1011)
  • Fixed bug in TensorFlowV2Classifier.loss_gradient by adding labels to application of preprocessing step to enable EoT preprocessing steps that increase the number of samples and labels. This change does not affect the accuracy of previously calculated loss gradients. (#1010)
  • Fixed bug in ElasticNet attack to apply the confidence parameter when generating adversarial examples. (#995)
  • Fixed bug in art.attacks.poisoning.perturbations.image_perturbations.insert_image to correctly transpose input images when channels_first=True. (#1009)
  • Fixed bug of missing method compute_loss in PyTorchDeepSpeech, TensorFlowFasterRCNN and BlackBoxClassifier. (#994, #1000)

ART 1.6.0

16 Mar 17:32

This release of ART 1.6.0 introduces new evaluation tools for the three threat types of poisoning, inference and evasion: the clean-label poisoning attack Bullseye Polytope, a baseline attribute inference attack, and a PyTorch-specific implementation of the Adversarial Patch attack with perspective transformation sampling. Furthermore, this release contains the first set of Expectation over Transformation (EoT) preprocessing tools for image processing and natural corruptions.

Added

  • Added the Bullseye Polytope clean-label poisoning attack in art.attacks.poisoning.BullseyePolytopeAttackPyTorch (#962)
  • Added the Pointwise Differential Training Privacy (PDTP) metric measuring training data membership leakage of a trained model in art.metrics.PDTP (#958)
  • Added an attribute inference baseline attack art.attacks.inference.attribute_inference.AttributeInferenceBaseline defining the minimal attribute inference performance achievable without access to the evaluated model (#956)
  • Added a first set of Expectation over Transformation (EoT) preprocessing in art.preprocessing.expectation_over_transformation for image processing and natural image corruptions including brightness, contrast, Gaussian noise, shot noise, and zoom blur. These EoTs enable sampling multiple transformed samples in each forward pass and are fully differentiable for accurate loss gradient calculation in PyTorch and TensorFlow v2. They can be chained together in sequence and are implemented fully framework-specific (#919)
  • Added a function for image trigger perturbations that blends images (#913)
  • Added a method insert_transformed_patch to all adversarial patch attacks art.attacks.evasion.AdversarialPatch* applying adversarial patches onto a perspective transformed square defined by the coordinates of its four corners (#891)
  • Added a framework-specific PyTorch implementation of the Adversarial Patch attack in art.attacks.evasion.AdversarialPatchPyTorch with additional functionality to support sampling over perspective transformations (#876); see the sketch after this list
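
A minimal sketch of the new PyTorch-specific Adversarial Patch attack. The classifier is assumed to be an ImageNet-style PyTorchClassifier, the constructor arguments are illustrative, and the name distortion_scale_max is assumed here to be the knob controlling perspective-transformation sampling.

```python
import numpy as np
from art.attacks.evasion import AdversarialPatchPyTorch

# `classifier` is assumed to be an art.estimators.classification.PyTorchClassifier
# for 1000-class inputs of shape (3, 224, 224) (construction omitted).
attack = AdversarialPatchPyTorch(
    classifier,
    rotation_max=22.5,          # rotations sampled up to +/- 22.5 degrees
    scale_min=0.4,
    scale_max=1.0,
    distortion_scale_max=0.3,   # assumed knob for perspective-transformation sampling
    learning_rate=0.01,
    max_iter=250,
    batch_size=16,
    patch_shape=(3, 224, 224),
)

x = np.random.rand(8, 3, 224, 224).astype(np.float32)
y = np.eye(1000)[np.full(8, 407)]               # one-hot target class for all images

patch, patch_mask = attack.generate(x=x, y=y)   # trained patch and its mask
x_patched = attack.apply_patch(x, scale=0.5)    # apply at 50% of the image size
```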

Changed

  • Changed handling of NaN values in loss gradients in art.attacks.evasion.FastGradientMethod and art.attacks.evasion.ProjectedGradientDescent* by replacing NaN values with 0.0 and logging a warning message. This should prevent losing expensive attack runs in late iterations and still return an adversarial example while alerting the user. (#883)
  • Changed permitted ranges for eps_step and eps in art.attacks.evasion.ProjectedGradientDescent* to allow eps_step to be larger than eps for all norms, allow eps_step=np.inf to immediately project towards the norm ball or clip_values, and support eps=0.0 to run the attack without any attack budget. The latter two changes are intended to facilitate the verification of attack setups; see the example after this list. (#882)
  • Changed in the unit tests the marker skipMlFramework to skip_framework and the pytest argument mlFramework to framework (#961)
  • Changed art.preprocessing.standardisation_mean_std for standardisation with mean and std to provide extended support for broadcasting by automatically adapting 1-dimensional arrays for mean and std to be broadcastable on NCHW inputs (#839)
  • Changed art.estimators.object_detection.PyTorchFasterRCNN.loss_gradient to not overwrite the input label array with tensors (#954)
  • Changed the setting of model states by removing method set_learning_phase from all estimators and automatically setting the model into the most likely appropriate state for each operation in methods predict (eval mode, training_mode=False), fit (train mode, training_mode=True), loss_gradient (eval mode), class_gradient (eval mode), etc. The default is defined by a new method argument training_mode which can be changed, for example, for debugging purposes. An exception are RNN-type models in PyTorch, where loss_gradient and class_gradient will run the model in train mode but freeze the model's batch-norm and dropout layers if training_mode=False. (#781)
  • Changed art.attacks.evasion.BoundaryAttack in its normal (L282) and suboptimal (L287) termination to return the adversarial example candidate with the smallest perturbation norm instead of the first candidate in its list; this facilitates finding minimum-L2-perturbation adversarial examples (#948)
  • Changed art.attacks.inference.attribute_inference.AttributeInferenceBlackBox to support one-hot encoded features that have been scaled and lie in-between 0 and 1 instead of just 0 and 1 (#927)
  • Changed imports of tensorflow in TensorFlow v1 specific tools to enable backward compatibility and application with TensorFlow v2 (#880)
  • Changed optimizer of art.attacks.evasion.AdversarialPatchTensorFlowV2 from SGD to Adam for better performance (#878)
  • Changed art.attacks.evasion.BrendelBethgeAttack to include support for numba, following the reference implementation, which leads to great acceleration of the attack (#868)
  • Changed art.estimators.classification.ScikitlearnClassifier and all model specific scikit-learn estimators to provide the new argument use_logits to define returning probability or logit predictions in their methods predict (#872)
  • Changed metric clever_t, and with it clever and clever_u, to reduce long runtimes by computing the class gradients of all samples in rand_pool before looping through the batches. To reduce the risk of ResourceExhaustedError, batching is now also applied on rand_pool to compute class gradients on smaller batches of size pool_factor (#762)
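
The relaxed eps/eps_step ranges make quick sanity checks of an attack setup possible; below is a minimal sketch assuming `classifier` is any ART classifier with loss gradients (construction omitted).

```python
import numpy as np
from art.attacks.evasion import ProjectedGradientDescent

x = np.random.rand(16, 28, 28, 1).astype(np.float32)

# eps=0.0 runs the attack without any budget, so the output should equal the
# clean input -- a quick check that the attack setup is wired correctly.
attack_no_budget = ProjectedGradientDescent(classifier, norm=np.inf, eps=0.0,
                                            eps_step=0.1, max_iter=10)
assert np.allclose(attack_no_budget.generate(x), x)

# eps_step=np.inf projects straight onto the eps-ball (or clip_values) in one step.
attack_project = ProjectedGradientDescent(classifier, norm=np.inf, eps=0.1,
                                          eps_step=np.inf, max_iter=1)
x_adv = attack_project.generate(x)
assert np.max(np.abs(x_adv - x)) <= 0.1 + 1e-6
```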

Removed

  • Removed deprecated argument and property channel_index from all estimators. channel_index has been replaced by channels_first. (#869)

Fixed

  • Fixed the criterion of targeted art.attacks.evasion.BoundaryAttack to correctly check during sampling that adversarial predictions are different from the original image prediction, instead of identical to it (#948)

ART 1.5.3

14 Mar 00:14

This release of ART 1.5.3 provides updates to ART 1.5.

Added

[None]

Changed

  • Changed argument names of art.attacks.evasion.ImperceptibleASR, art.attacks.evasion.ImperceptibleASRPyTorch and art.attacks.evasion.CarliniWagnerASR where necessary to use the same names in all three attacks. (#955, #959)
  • Changed optimisation in art.attacks.evasion.ImperceptibleASRPyTorch to use torch.float64 instead of torch.float32 to prevent NaN as loss value. (#931)
  • Changed art.attacks.evasion.ImperceptibleASR to improve the psychoacoustic model and stabilize the imperceptible loss by switching to librosa's STFT and using scalar PSD maximum. (#930)
  • Changed art.attacks.evasion.ImperceptibleASR to use a periodic window for the STFT instead of the symmetric window option. (#930)
  • Changed art.attacks.evasion.ImperceptibleASR with early stopping if loss theta < 0.05 to avoid running into gradients with NaN values. (#930)
  • Changed art.attacks.evasion.ImperceptibleASRPyTorch to reset its optimisers for each internal batch in method generate to guarantee the same optimiser performance on each batch, this is especially important for adaptive optimisers. (#917)
  • Changed art.attacks.evasion.ImperceptibleASRPyTorch to use torch.stft instead of torchaudio.transforms.Spectrogram to correctly compute the spectrogram. (#914)
  • Changed art.estimators.speech_recognition.PyTorchDeepSpeech to freeze batch-norm layers of the Deep Speech model in method loss_gradient to obtain gradients using dataset statistics instead of batch statistics and avoid changing dataset statistics of the batch-norm layers with each call. (#912)

Removed

[None]

Fixed

  • Fixed bug of missing argument model in art.estimators.object_detection.TensorFlowFasterRCNN which caused instantiation to fail. (#951)
  • Fixed bug of missing square in calculation of loss and class gradients for art.estimators.classification.ScikitlearnSVC using Radial Basis Function (RBF) kernels. (#921)
  • Fixed missing support for preprocessing=None in art.estimators.BaseEstimator. (#916)

ART 1.5.2

20 Feb 01:11

This release of ART 1.5.2 provides updates to ART 1.5.

Added

  • Added new method reset_patch to art.attacks.evasion.adversarial_patch.* to reset patch (#863)
  • Added passing kwargs to internal attacks of art.attacks.evasion.AutoAttack (#850)
  • Added art.estimators.classification.BlackBoxClassifierNeuralNetwork as a black-box classifier for neural network models (#849); see the sketch after this list
  • Added support for channels_first=False for art.attacks.evasion.ShadowAttack in PyTorch (#848)
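
A minimal sketch of the new black-box neural network classifier; the prediction function here is a random stand-in for, e.g., a remote model behind an API, and the exact keyword names should be checked against the API docs.

```python
import numpy as np
from art.estimators.classification import BlackBoxClassifierNeuralNetwork


def predict_fn(x: np.ndarray) -> np.ndarray:
    """Stand-in for a remote model: returns class probabilities of shape (n, 10)."""
    rng = np.random.default_rng(0)
    logits = rng.random((x.shape[0], 10))
    return logits / logits.sum(axis=1, keepdims=True)


# Wrapping the prediction function lets neural-network-only attacks such as
# SquareAttack (see the Changed section below) accept it as an estimator.
classifier = BlackBoxClassifierNeuralNetwork(
    predict_fn,
    (28, 28, 1),     # input_shape
    10,              # nb_classes
    clip_values=(0.0, 1.0),
)

preds = classifier.predict(np.random.rand(4, 28, 28, 1).astype(np.float32))
```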

Changed

  • Changed Numpy requirements to be less strict to resolve conflicts in dependencies (#879)
  • Changed estimator requirements for art.attacks.evasion.SquareAttack and art.attacks.evasion.SimBA to include NeuralNetworkMixin requiring neural network models (#849)

Removed

[None]

Fixed

  • Fixed BaseEstimator.set_params to set preprocessing and preprocessing_defences correctly by accounting for art.preprocessing.standardisation_mean_std (#901)
  • Fixed support for CUDA in art.attacks.inference.membership_inference.MembershipInferenceBlackBox.infer (#899)
  • Fixed return in art.preprocessing.standardisation_mean_std.StandardisationMeanStdPyTorch to maintain correct dtype (#890)
  • Fixed type conversion in art.evaluations.security_curve.SecurityCurve to be explicit (#886)
  • Fixed dtype in art.attacks.evasion.SquareAttack for norm=2 to maintain correct type (#877)
  • Fixed missing CarliniWagnerASR in art.attacks.evasion namespace (#873)
  • Fixed support for CUDA in art.estimators.classification.PyTorchClassifier.loss (#862)
  • Fixed bug in art.attacks.evasion.AutoProjectedGradientDescent for targeted attack to correctly detect successful iteration steps and added robust stopping criteria if loss becomes zero (#860)
  • Fixed bug in initialisation of search space in art.attacks.evasion.SaliencyMapMethod (#843)
  • Fixed bug in support for video data in art.attacks.evasion.adversarial_patch.AdversarialPatchNumpy (#838)
  • Fixed bug in logged success rate of art.attacks.evasion.ProjectedGradientDescentPyTorch and art.attacks.evasion.ProjectedGradientDescentTensorFlowV2 to use correct labels (#833)

ART 1.5.1

09 Jan 02:02

This release of ART 1.5.1 provides updates to ART 1.5.

Added

  • Added an option to select probability values in addition to index labels for model extraction attacks in art.attacks.extraction.CopycatCNN and art.attacks.extraction.KnockoffNets. (#825)
  • Added a new notebook demonstrating model extraction attacks and defences. (#825)
  • Added art.attacks.evasion.CarliniWagnerASR as a special case of art.attacks.evasion.ImperceptibleASR with max_iter_stage_2=0, skipping the second stage of ImperceptibleASR. (#784)

Changed

  • Changed method generate of art.attacks.evasion.ProjectedGradientDescentPyTorch and art.attacks.evasion.ProjectedGradientDescentTensorFlowV2 to create a copy of the input data to guard the input data from being overwritten by a model that unexpectedly overwrites its input data. This change follows the implementation of art.attacks.evasion.ProjectedGradientDescentNumpy and provides an extra layer of protection against unexpected model behavior. (#805)
  • Changed numerical precision in art.attacks.evasion.Wasserstein from float to double to reduce numerical overflow in numpy.log and replaced input pixel values of 0 with EPS_LOG=10^-10 to prevent division by zero in numpy.log. (#780)
  • Changed tqdm imports to use tqdm.auto to automatically run its Jupyter widgets where supported. (#799)
  • Improved documentation, argument value checks and added support for index labels in art.attacks.inference.membership_inference.LabelOnlyDecisionBoundary. (#790)

Removed

[None]

Fixed

  • Fixed bug in art.estimators.classification.KerasClassifier.custom_loss_gradient() to support keras and tensorflow.keras. (#810)
  • Fixed bug in art.attacks.evasion.PixelThreshold.generate to correctly scale images in range [0, 255]. (#802)
  • Fixed bug in art.attacks.evasion.PixelThreshold to run CMA Evolution Strategy max_iter iterations instead of 1 iteration. (#802)
  • Fixed bug in art.estimators.object_detection.PyTorchFasterRCNN by adding missing argument model in super().__init__(). (#789)

ART 1.5.0

01 Dec 01:59

Added

  • Added a new module art.evaluations for evaluation tools that go beyond creating adversarial examples, provide insights into the robustness of machine learning models beyond adversarial accuracy, and build on art.estimators and art.attacks as much as possible. The first implemented evaluation tool is art.evaluations.SecurityCurve, which calculates the security curve, a popular tool to evaluate robustness against evasion, using art.attacks.evasion.ProjectedGradientDescent, and provides evaluation of potential gradient masking in the evaluated model. (#654)

  • Added support for perturbation masks in art.attacks.evasion.AutoProjectedGradientDescent, similar to art.attacks.evasion.ProjectedGradientDescent, and added Boolean masks for patch location sampling in DPatch and all AdversarialPatch attacks to enable pixel masks defining regions from which patch locations are sampled during patch training or where trained patches can be applied.

  • Added preprocessing for Infinite Impulse Response (IIR) and Finite Impulse Response (FIR) filtering for Room Acoustics Modelling in framework-agnostic (art.preprocessing.audio.LFilter) and PyTorch-specific (art.preprocessing.audio.LFilterPyTorch) implementations as the first tool for physical environment simulation for audio data in art.preprocessing.audio. Additional tools will be added in future releases. (#744)

  • Added Expectation over Transformation (EoT) to art.preprocessing.expectation_over_transformation with a first implementation of sampling image rotation for classification tasks framework-specific for TensorFlow v2 (art.preprocessing.expectation_over_transformation.EOTImageRotationTensorFlowV2) providing full support for gradient backpropagation through EoT. Additional EoTs will be added in future releases. (#744)

  • Added support for multi-modal inputs in ProjectedGradientDescent attacks and FastGradientMethod attack with broadcastable arguments eps and eps_step as np.ndarray to enable attacks against, for example, images with multi-modal color channels; see the sketch after this list. (#691)

  • Added Database Reconstruction attack in the new module art.attacks.inference.reconstruction.DatabaseReconstruction enabling evaluation of the privacy of machine learning models by reconstructing one removed sample of the training dataset. The attack is demonstrated in a new notebook on models trained non-privately and with differential privacy using the Differential Privacy Library (DiffPrivLib) as defense. (#759)

  • Added support for one-hot encoded feature definition in black-box attribute inference attacks. (#768)

  • Added a new model-specific speech recognition estimator for Lingvo ASR in art.estimators.speech_recognition.TensorFlowLingvoASR. (#584)

  • Added a framework-independent implementation of the Imperceptible ASR attack with loss support for TensorFlow and PyTorch in art.attacks.evasion.ImperceptibleASR. (#719, #760)

  • Added Clean Label Backdoor poisoning attack in art.attacks.poisoning.PoisoningAttackCleanLabelBackdoor. (#725)

  • Added Strong Intentional Perturbation (STRIP) defense against poisoning attacks in art.defences.transformer.poisoning.STRIP. (#656)

  • Added Label-only Boundary Distance Attack art.attacks.inference.membership_inference.LabelOnlyDecisionBoundary and Label-only Gap Attack art.attacks.inference.membership_inference.LabelOnlyGapAttack for membership inference attacks on classification estimators. (#720)

  • Added support for preprocessing and preprocessing defences in the PyTorch-specific implementation of the Imperceptible ASR attack in art.attacks.evasion.ImperceptibleASRPyTorch. (#763)

  • Added a robust version of evasion attack DPatch in art.attacks.evasion.RobustDPatch against object detectors by adding improvements like expectation over transformation steps, fixed patch location, etc. (#751)

  • Added optional support for Automatic Mixed Precision (AMP) in art.estimators.classification.PyTorchClassifier to facilitate mixed-precision computations and increase performance. (#619)

  • Added the Brendel & Bethge evasion attack in art.attacks.evasion.BrendelBethgeAttack based on the original reference implementation. (#626)

  • Added framework-agnostic support for Randomized Smoothing estimators in addition to framework-specific implementations in TensorFlow v2 and PyTorch. (#738)

  • Added an optional progress bar to art.utils.get_file to facilitate downloading large files. (#698)

  • Added support for perturbation masks in HopSkipJump evasion attack in art.attacks.evasion.HopSkipJump. (#653)
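
A minimal sketch of the broadcastable per-feature budgets, assuming `classifier` is an ART image classifier on NHWC inputs and that standard NumPy broadcasting applies along the channel axis.

```python
import numpy as np
from art.attacks.evasion import FastGradientMethod

# A different L-infinity budget per colour channel of (H, W, 3) images.
eps = np.array([4.0, 8.0, 2.0], dtype=np.float32) / 255.0
eps_step = eps / 4.0

attack = FastGradientMethod(classifier, norm=np.inf, eps=eps, eps_step=eps_step)

x = np.random.rand(8, 32, 32, 3).astype(np.float32)
x_adv = attack.generate(x)
```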

Changed

  • Changed preprocessing defenses and input standardisation with mean and standard deviation by combining all preprocessing into a single preprocessing API defined in the new module art.preprocessing. Existing preprocessing defenses remain in art.defences.preprocessor, but are treated as equal and run with the same API and code as general preprocessing tools in art.preprocessing. The standardisation is now a preprocessing tool that is implemented framework-specifically for PyTorch and TensorFlow v2 in forward and backward direction. Estimators for art.estimators.classification and art.estimators.object_detection in TensorFlow v2 and PyTorch set up with all framework-specific preprocessing steps will prepend the preprocessing directly to the model to evaluate output and backpropagate gradients in a single step through the model and (chained) preprocessing, instead of the previous two separate steps, for improved performance. Framework-independent preprocessing tools will continue to be evaluated in a step separate from the model. This change also enables full support for any model-specific standardisation/normalisation functions for the model inputs and their gradients; see the sketch after this list. (#629)

  • Changed Preprocessor and Postprocessor APIs to simplify them by defining reused methods and the most common property values as defaults in the API. The default for art.defences.preprocessor.preprocessor.Preprocessor.estimate_gradient in framework-agnostic preprocessing is Backward Pass Differentiable Approximation (BPDA) with identity function, which can be customized with accurate or better approximations by implementing estimate_gradient. (#752)

  • Changed random restarts in all ProjectedGradientDescent implementations to collect the successful adversarial examples of each random restart instead of previously only keeping the adversarial examples of the most successful random restart. Adversarial examples of previous random restart iterations are overwritten by adversarial examples of later random restart iterations. This leads to equal or better adversarial accuracies compared to previous releases and changes the order of processing the input samples to first complete all random restarts of a batch before processing the next batch instead of looping over all batches in each random restart. (#765)

  • Changed order of mask application and normalization of the perturbation in all ProjectedGradientDescent and FastGradientMethod attacks to now first apply the mask to the loss_gradients and subsequently normalize only the remaining, un-masked perturbation. That way the resulting perturbation can directly be compared to the attack budget eps. (#711)

  • Changed location of implementation and default values of properties channels_first, clip_values, and input_shape in art.estimators to facilitate the creation of customs estimators not present in art.estimators.

  • Changed Spectral Signature Defense by removing argument num_classes and replacing it with the estimator’s nb_classes property and renaming parameter ub_pct_poison to expected_pp_poison. (#678)

  • Changed the ART directory path for datasets and model data stored in ART_DATA_PATH to be configurable after importing ART. (#701)

  • Changed preprocessing defence art.defences.preprocessor.JpegCompression to support any number of channels in addition to the already supported inputs with 1 and 3 channels. (#700)

  • Changed calculation of perturbation and direction in art.attacks.evasion.BoundaryAttack to follow the reference implementation. These changes result in faster convergence and smaller perturbations. (#761)
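
A minimal sketch of the combined preprocessing API from the estimator's point of view: the (mean, std) standardisation passed as preprocessing to a framework-specific estimator is now prepended to the model itself. The toy network is illustrative only.

```python
import torch
from art.estimators.classification import PyTorchClassifier

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))

# The standardisation (x - 0.5) / 0.5 is applied and differentiated in the same
# pass as the model rather than in a separate NumPy step.
classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(3, 32, 32),
    nb_classes=10,
    clip_values=(0.0, 1.0),
    preprocessing=(0.5, 0.5),
)
```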

Removed

[None]

Fixed

  • Fixed bug in definition and application of norm p in cost matrix in Wasserstein evasion attack art.attacks.evasion.Wasserstein present in the reference implementation. (#712)

  • Fixed handling of fractional batches in Zeroth Order Optimization (ZOO) attack in art.attacks.evasion.ZOOAttack to prevent errors caused by shape mismatches for batches smaller than batch_size. (#755)

ART 1.4.3

21 Nov 01:23

This release of ART v1.4.3 provides updates to ART 1.4.

Added

[None]

Changed

  • Changed argument y of method infer of art.attacks.inference.attribute_inference.AttributeInferenceBlackBox from optional to required. (#750)

Removed

[None]

Fixed

  • Fixed bug in art.data_generators.PyTorchDataGenerator and art.data_generators.MXDataGenerator where method get_batch always returned the same first batch of the dataset; get_batch now returns a different batch for each call by iterating over the entire dataset. (#731)
  • Fixed format of return value of method infer of art.attacks.inference.membership_inference.MembershipInferenceBlackBox for attack_model_type="nn". (#741)

ART 1.4.2

04 Nov 14:27

This release of ART v1.4.2 provides updates to ART 1.4.

Added

  • Added implementation of method loss for art.estimators.classification.TensorFlowClassifier. (#685)
  • Added support for variable length input to art.defences.preprocessor.MP3Compression to make it compatible with estimator art.estimators.speech_recognition.PyTorchDeepSpeech. (#684)
  • Added support for mask in non-classification tasks with art.attacks.evasion.ProjectedGradientDescent; see the sketch after this list. (#682)
  • Added support for torch.Tensor as input for loss_gradient of art.estimators.object_detection.PyTorchFasterRCNN. (#679)
  • Added support for art.attacks.evasion.ProjectedGradientDescent and art.attacks.evasion.FastGradientMethod attacks on art.estimators.speech_recognition.PyTorchDeepSpeech. (#669)
  • Added exception and explanation if target labels are not provided in generate of art.attacks.evasion.ImperceptibleASRPytorch. (#677)
  • Added support for preprocessing defences in art.estimators.speech_recognition.PyTorchDeepSpeech. (#663)
  • Added support for type List in argument patch_shape of art.attacks.evasion.DPatch. (#662)
  • Added support for option verbose to all art.attacks and art.defences to adjust output of progress bars. (#647)
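
A minimal sketch of masked PGD against a non-classification estimator; the detector object, its input layout, and the use of its own predictions as labels are assumptions for illustration.

```python
import numpy as np
from art.attacks.evasion import ProjectedGradientDescent

# `detector` is assumed to be a loss-gradient-capable ART estimator for a
# non-classification task, e.g. art.estimators.object_detection.PyTorchFasterRCNN.
attack = ProjectedGradientDescent(detector, eps=8.0 / 255.0, eps_step=2.0 / 255.0,
                                  max_iter=10, verbose=False)

x = np.random.rand(1, 416, 416, 3).astype(np.float32)
y = detector.predict(x)   # self-label with the detector's own predictions

# Boolean mask with the input's shape: True marks features the attack may perturb,
# here restricted to a central image region.
mask = np.zeros(x.shape, dtype=bool)
mask[:, 100:300, 100:300, :] = True

x_adv = attack.generate(x=x, y=y, mask=mask)
```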

Changed

  • Changed art.attacks.evasion.AutoProjectedGradientDescent to support estimators for classification of all frameworks using the estimator's loss function, to use the new method loss of the Estimator API replacing internal custom loss functions, and to disable for now the loss type difference_logits_ratio for art.estimators.classification.TensorFlowClassifier (TensorFlow v1.x) because of inaccurate loss calculation. (#685)
  • Changed default format of returned values of method predict in art.estimators.speech_recognition.PyTorchDeepSpeech from a tuple of probabilities and sequence lengths to an array of transcriptions (array of predicted strings), which is the same format as labels y and the returned values of other estimators in art.estimators.speech_recognition. The former output can still be obtained with option transcription_output=False. This change also enables using PyTorchDeepSpeech with ProjectedGradientDescent and FastGradientMethod in cases where no labels are provided to their method generate and these attacks use the labels predicted by PyTorchDeepSpeech's method predict; see the sketch after this list. (#689)
  • Changed art.attacks.evasion.DPatch to improve initialisation of the patch for input ranges other than [0, 255] and updated the iteration over batches. (#681)
  • Changed art.attacks.evasion.DPatch to accept the updated return format of method predict of estimators in art.estimators.object_detection. (#667)
  • Changed return format of method predict of estimators in art.estimators.object_detection to follow the format of art.estimators.object_detection.PyTorchFasterRCNN and type np.ndarray. (#660)
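
A minimal sketch of the new default predict output of PyTorchDeepSpeech; the pretrained_model value and the object-array input layout for variable-length audio are illustrative assumptions.

```python
import numpy as np
from art.estimators.speech_recognition import PyTorchDeepSpeech

# Load a pretrained Deep Speech model (the model name is illustrative).
asr = PyTorchDeepSpeech(pretrained_model="librispeech")

# Two raw waveforms of different lengths, packed into an object array.
x = np.array([np.random.randn(8000).astype(np.float32),
              np.random.randn(12000).astype(np.float32)], dtype=object)

# Since 1.4.2 predict returns an array of transcriptions (strings) by default ...
transcriptions = asr.predict(x)

# ... while the former (probabilities, sequence lengths) output remains available:
probs, sizes = asr.predict(x, transcription_output=False)
```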

Removed

  • Removed unsupported argument loss_scale in art.estimators.speech_recognition.PyTorchDeepSpeech. (#642)

Fixed

  • Fixed missing setting of property targeted in art.attacks.evasion.ImperceptibleASRPytorch. (#676)
  • Fixed bug in method loss of art.estimators.classification.KerasClassifier. (#651)
  • Fixed missing attribute batch_size in art.attacks.evasion.SquareAttack. (#646)
  • Fixed missing imports in art.estimators.object_detection.TensorFlowFasterRCNN. (#648)
  • Fixed bug in art.attacks.evasion.ImperceptibleASRPytorch to correctly apply learning_rate_2nd_stage instead of learning_rate_1st_stage in the second stage. (#642)

ART 1.4.1

02 Oct 20:14

This release of ART v1.4.1 provides updates to ART 1.4.

Added

  • Added a notebook demonstrating the Imperceptible ASR evasion attack on the DeepSpeech model for speech recognition tasks. (#639)

Changed

  • Changed the detection of Keras type (keras vs. tensorflow.keras) in art.estimators.classification.KerasClassifier to enable customised models inheriting from the Keras base models (#631)

Removed

[None]

Fixed

  • Fixed bug in model-specific estimator for DeepSpeech art.estimators.speech_recognition.PyTorchDeepSpeech to correctly handle the case of batches of samples with identical length including the special case of a batch of a single sample. (#635)
  • Fixed bug in model-specific estimator for DeepSpeech art.estimators.speech_recognition.PyTorchDeepSpeech by adding missing imports (#621)
  • Fixed bug to make all tools of ART accessible using import art (#612)
  • Fixed bug by removing top-level imports of tool-specific dependencies and adapting default values (#613)
  • Fixed wrong progress bar description in art.attacks.evasion.projected_gradient_descent.* from iterations to batches (#611)