A list of backdoor learning resources
Updated Jul 31, 2024
A curated list of papers & resources on data poisoning, backdoor attacks, and defenses against them (no longer maintained)
A curated list of papers & resources on backdoor attacks and defenses in deep learning.
This is an implementation demo of the ICLR 2021 paper [Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks](https://openreview.net/pdf?id=9l0K4OM-oXE) in PyTorch.
BackdoorSim: An Educational Tool for Exploring Remote Administration Tools
Codes for NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models"
Fast integration of backdoor attacks in machine learning and federated learning.
Official Implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient"
[ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning
[ICLR2023] Distilling Cognitive Backdoor Patterns within an Image
This repository provides studies on the security of language models for code (CodeLMs).
This is an implementation demo of the IJCAI 2022 paper [Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation](https://arxiv.org/abs/2204.09975) in PyTorch.
[IEEE S&P 2024] Exploring the Orthogonality and Linearity of Backdoor Attacks
This repository is the official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms." ASSET achieves state-of-the-art reliability in detecting poisoned samples in end-to-end supervised, self-supervised, and transfer learning.
Backdoor attack and defense resources in the AI/ML domain
A lightweight, fast web-shell finder that greps files against a curated worldwide wordlist of regex signatures.
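The grep-plus-wordlist approach above can be sketched in a few lines: match file contents against a small list of regex signatures and report which ones hit. The `SIGNATURES` patterns below are illustrative examples, not the tool's actual wordlist, and `scan_text` is a hypothetical helper name.

```python
import re

# Illustrative signatures only; a real scanner ships a much larger wordlist.
SIGNATURES = [
    r"eval\s*\(\s*base64_decode",  # common obfuscated PHP web-shell idiom
    r"passthru\s*\(",              # direct command execution
    r"shell_exec\s*\(",            # shell command execution
]

def scan_text(text: str) -> list[str]:
    """Return the signature patterns that match the given file contents."""
    return [pat for pat in SIGNATURES if re.search(pat, text)]

# A file containing an obfuscated eval/base64_decode call would trigger
# the first signature; a clean file would return an empty list.
sample = '<?php eval(base64_decode($_POST["x"])); ?>'
hits = scan_text(sample)
```

Precompiling the patterns with `re.compile` and streaming files line by line keeps the scan fast even over large directory trees, which matches the "lightweight and fast" design goal.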
[ECCV24] T2IShield: Defending Against Backdoors on Text-to-Image Diffusion Models
[Findings of EMNLP 2022] Expose Backdoors on the Way: A Feature-Based Efficient Defense against Textual Backdoor Attacks
Implementation of "Beating Backdoor Attack at Its Own Game" (ICCV-23).
The resources are collected from various sources, including arXiv, NeurIPS, ICML, ICLR, ACL, EMNLP, AAAI, IJCAI, KDD, CVPR, ICCV, ECCV, IEEE, ACM, Springer, ScienceDirect, Wiley, Nature, Science, and other top AI/ML conferences and journals.