Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
PhD/MSc course on Machine Learning Security (Univ. Cagliari)
A Python library for Secure and Explainable Machine Learning
The official implementation of the Narcissus clean-label backdoor attack from the CCS'23 paper: it poisons a face recognition dataset with only three correctly labeled images and achieves a 99.89% attack success rate.
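For context, a minimal sketch of the general clean-label backdoor idea: a small trigger pattern is blended into a handful of correctly labeled target-class images, so the model learns to associate the trigger with that class. This is an illustrative NumPy toy, not the Narcissus implementation; the trigger placement, blend ratio, and array layout are assumptions.

```python
import numpy as np

def apply_trigger(images, trigger, alpha=0.1):
    """Blend a small trigger patch into images without changing their labels.

    images : (N, H, W, C) float array in [0, 1]
    trigger: (h, w, C) float array in [0, 1]; placed bottom-right (an assumption)
    alpha  : blend strength; small values keep the change inconspicuous
    """
    poisoned = images.copy()
    h, w, _ = trigger.shape
    # Overlay the trigger in the bottom-right corner of every image.
    poisoned[:, -h:, -w:, :] = (1 - alpha) * poisoned[:, -h:, -w:, :] + alpha * trigger
    return np.clip(poisoned, 0.0, 1.0)

# Toy usage: poison only three target-class images; labels stay intact (clean-label).
rng = np.random.default_rng(0)
clean = rng.random((3, 32, 32, 3))        # stand-in for target-class images
trigger = rng.random((4, 4, 3))           # stand-in for an optimized trigger pattern
poisoned = apply_trigger(clean, trigger)
```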
A paper collection for federated learning: accepted papers from conferences and journals (2019 to 2021), hot topics, notable research groups, and paper summaries.
A Survey of Poisoning Attacks and Defenses in Recommender Systems
TensorFlow implementation of TrialAttack (Triple Adversarial Learning for Influence-based Poisoning Attack in Recommender Systems, KDD 2021)
Adversarial Machine Learning workshop
TensorFlow implementation of APT (Fight Fire with Fire: Towards Robust Recommender Systems via Adversarial Poisoning Training, SIGIR 2021)
FedAnil is a secure, blockchain-enabled federated deep learning model that addresses non-IID data and privacy concerns. This repo hosts a Python simulation of FedAnil.
A hacking tool for local networks: man-in-the-middle attacks, host scanning, ARP poisoning, and router and DNS poisoning.
FedAnil+ is a novel, lightweight, and secure federated deep learning model that addresses non-IID data, privacy concerns, and communication overhead. This repo hosts a Python simulation of FedAnil+.
FedAnil++ is a privacy-preserving and communication-efficient federated deep learning model that addresses non-IID data, privacy concerns, and communication overhead. This repo hosts a Python simulation of FedAnil++.
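As background for the FedAnil family above: federated models of this kind aggregate locally trained client updates into a global model. Below is a minimal sketch of plain federated averaging (FedAvg) over flattened weight vectors; it is a generic illustration under assumed inputs, not FedAnil's actual protocol, whose blockchain, privacy, and compression machinery are not shown.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client models by dataset-size-weighted averaging.

    client_weights: list of 1-D NumPy arrays, one flattened model per client
    client_sizes  : list of local dataset sizes used as aggregation weights
    """
    total = sum(client_sizes)
    stacked = np.stack(client_weights)                    # (num_clients, num_params)
    coeffs = np.array(client_sizes, dtype=float) / total  # per-client weight
    return coeffs @ stacked                               # weighted average

# Toy round: three clients with non-IID (unequal) data sizes.
clients = [np.random.default_rng(i).normal(size=10) for i in range(3)]
global_model = fedavg(clients, client_sizes=[100, 40, 260])
```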
Poisoning attack methods against adversarial training algorithms
Indirect Invisible Poisoning Attacks on Domain Adaptation
Test tool to simulate two types of poisoning attacks on AI models
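For illustration, a minimal sketch of one common poisoning type such a tool might simulate, label flipping, using scikit-learn. The synthetic dataset, 20% flip rate, and logistic regression model are assumptions for the toy, not the tool's actual behavior.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Simulate label-flipping poisoning: corrupt a fraction of the training labels.
X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
flip = rng.random(len(y_tr)) < 0.2            # poison ~20% of the training set
y_poisoned = np.where(flip, 1 - y_tr, y_tr)   # flip binary labels

clean_acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
poisoned_acc = LogisticRegression().fit(X_tr, y_poisoned).score(X_te, y_te)
print(f"clean: {clean_acc:.3f}  poisoned: {poisoned_acc:.3f}")
```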
Guides to poisoning continuous integration and continuous delivery (CI/CD) pipelines
Test tool to simulate defenses against poisoning attacks on AI models
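A matching sketch of one simple defense idea: train once on the possibly poisoned data, drop the highest-loss training points, where flipped labels tend to concentrate, and retrain. This continues the label-flipping toy above and is an assumed illustrative defense, not the tool's actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def filter_high_loss(X, y, drop_frac=0.2):
    """Drop the training points the model finds hardest to fit.

    Mislabeled (poisoned) points tend to incur high loss under a model
    trained on the full set, so removing them acts as a crude sanitizer.
    """
    model = LogisticRegression().fit(X, y)
    proba = model.predict_proba(X)[np.arange(len(y)), y]  # P(observed label)
    loss = -np.log(np.clip(proba, 1e-12, None))           # per-sample log loss
    keep = loss <= np.quantile(loss, 1.0 - drop_frac)     # keep low-loss points
    return X[keep], y[keep]

# Usage with the poisoned toy data from the previous sketch:
# X_clean, y_clean = filter_high_loss(X_tr, y_poisoned)
# defended_acc = LogisticRegression().fit(X_clean, y_clean).score(X_te, y_te)
```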