Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667). Updated May 10, 2024. C++.
A Python library for Secure and Explainable Machine Learning
A collection of federated learning papers: accepted papers from conferences and journals (2019–2021), hot topics, notable research groups, and paper summaries.
PhD/MSc course on Machine Learning Security (Univ. Cagliari)
The official implementation of the CCS '23 paper on the Narcissus clean-label backdoor attack, which needs only three images to poison a face recognition dataset in a clean-label way and achieves a 99.89% attack success rate.
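For readers unfamiliar with backdoor poisoning, the sketch below shows the classic trigger-patch (BadNets-style) mechanism: a small pixel pattern is stamped onto training images so a model learns to associate the pattern with an attacker-chosen class. This is NOT the Narcissus clean-label method; the image shapes and trigger values are assumptions chosen purely for illustration.

```python
# Generic trigger-patch backdoor sketch (illustrative only).
import numpy as np

def stamp_trigger(images, size=3, value=1.0):
    """Stamp a bright square trigger in the bottom-right corner of each image."""
    poisoned = images.copy()
    poisoned[:, -size:, -size:] = value
    return poisoned

rng = np.random.default_rng(1)
images = rng.uniform(0.0, 0.5, size=(4, 8, 8))  # four toy 8x8 grayscale "images"
backdoored = stamp_trigger(images)

# Only the 3x3 corner patch differs from the originals; in a real attack the
# stamped copies would be added to the training set with the target label.
diff = backdoored != images
print(diff.sum(axis=(1, 2)))  # -> [9 9 9 9]
```

Clean-label variants such as Narcissus are subtler: the poisoned samples keep their correct labels and the perturbation is optimized rather than a fixed patch.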
A Survey of Poisoning Attacks and Defenses in Recommender Systems
Continuous Integration and Continuous Delivery (CI/CD) poisoning guides.
This project uses Python and machine learning to classify plant species as poisonous or non-poisonous. It aims to provide an efficient way to identify safe and harmful plants, useful for botanists, hikers, and the agricultural sector.
A test tool that simulates two types of poisoning attacks on AI models.
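The simplest poisoning attack simulated by tools like this is label flipping. The minimal sketch below (not the implementation of any repository listed here; the toy dataset, 1-nearest-neighbor classifier, and 40% flip rate are all assumptions) shows how flipping a fraction of training labels degrades test accuracy:

```python
# Minimal label-flipping poisoning sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def make_blobs(n_per_class, rng):
    """Two well-separated 2-D Gaussian clusters, classes 0 and 1."""
    X = np.vstack([rng.normal(-3.0, 1.0, (n_per_class, 2)),
                   rng.normal(+3.0, 1.0, (n_per_class, 2))])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

def nn1_predict(X_train, y_train, X_test):
    """1-nearest-neighbor classifier: copy the label of the closest training point."""
    dists = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[dists.argmin(axis=1)]

def flip_labels(y, fraction, rng):
    """Poison the training set by flipping a random fraction of its labels."""
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

X_train, y_train = make_blobs(200, rng)
X_test, y_test = make_blobs(100, rng)

clean_acc = (nn1_predict(X_train, y_train, X_test) == y_test).mean()
y_poisoned = flip_labels(y_train, 0.4, rng)
poisoned_acc = (nn1_predict(X_train, y_poisoned, X_test) == y_test).mean()

print(f"clean accuracy:    {clean_acc:.2f}")     # near 1.0 on this easy data
print(f"poisoned accuracy: {poisoned_acc:.2f}")  # degraded by the flipped labels
```

Real poisoning tools typically add a second, targeted mode (flipping only chosen classes or crafting optimized poison points) alongside this random-flip baseline.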
FedAnil+ is a lightweight, secure federated deep learning model that addresses non-IID data, privacy concerns, and communication overhead. This repo hosts a Python simulation of FedAnil+.
FedAnil is a secure, blockchain-enabled federated deep learning model that addresses non-IID data and privacy concerns. This repo hosts a Python simulation of FedAnil.
Tensorflow implementation of TrialAttack (Triple Adversarial Learning for Influence based Poisoning Attack in Recommender Systems. KDD 2021)
Poisoning attack methods against adversarial training algorithms
M. Anisetti, C. A. Ardagna, A. Balestrucci, N. Bena, E. Damiani, C. Y. Yeun, "On the Robustness of Random Forest Against Data Poisoning: An Ensemble-Based Approach," IEEE TSUSC, vol. 8, no. 4.
Workshop on Adversarial Machine Learning (in Spanish).
Tensorflow implementation of APT (Fight Fire with Fire: Towards Robust Recommender Systems via Adversarial Poisoning Training. SIGIR 2021)
Indirect Invisible Poisoning Attacks on Domain Adaptation
FedAnil++ is a privacy-preserving, communication-efficient federated deep learning model that addresses non-IID data, privacy concerns, and communication overhead. This repo hosts a Python simulation of FedAnil++.