PasaLab/PGA

Introduction

This repository implements the paper "Simple and Efficient Partial Graph Adversarial Attack: A New Perspective". Under the global attack setting, the method treats different nodes differently in order to perform more efficient adversarial attacks.
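The core idea of "treating different nodes differently" can be sketched as a node-selection step: rank nodes by how easily their prediction can be flipped and attack only the most vulnerable subset. The scoring below (classification margin) is a common heuristic used for illustration, not necessarily the paper's exact criterion; the function name is hypothetical.

```python
def select_target_nodes(probs, labels, k):
    """Hypothetical sketch of a partial-attack selection step: keep the k
    nodes with the smallest classification margin, i.e. the smallest gap
    between the true-class probability and the best competing class."""
    margins = []
    for i, (p, y) in enumerate(zip(probs, labels)):
        runner_up = max(v for j, v in enumerate(p) if j != y)
        margins.append((p[y] - runner_up, i))
    margins.sort()  # smallest margin first; ties broken by node index
    return [i for _, i in margins[:k]]

# Toy example: 4 nodes, 3 classes, all labeled class 0.
probs = [[0.6, 0.3, 0.1],
         [0.4, 0.35, 0.25],
         [0.9, 0.05, 0.05],
         [0.5, 0.3, 0.2]]
labels = [0, 0, 0, 0]
print(select_target_nodes(probs, labels, k=2))  # prints [1, 3]
```

Attacking only this subset is what makes the attack "partial": the perturbation budget is spent where predictions are least confident, instead of uniformly across all nodes.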

Main Structure

  • models: implementation of GNN models
  • victims: experiments for training victim models
    • configs: configurations of the models
    • models: trained models
  • attackers: implementation of attack methods
  • attack: experiments for attacking
    • configs: hyperparameters of the attackers
    • perturbed_adjs: generated adversarial adjacency matrices

Running Steps

  1. Training victim models
> cd victims
> python train.py --model=gcn --dataset=cora
  2. Performing attacks
> cd attack
> python gen_attack.py

PGA

  1. Training victim models
> cd victims
> python train.py
  2. Performing the attack
> cd attack
> python gen_attack.py --attack=pga --dataset=cora

Evaluation (evasion attack)

> python evasion_attack.py --victim=robust --dataset=cora
> python evasion_attack.py --victim=normal --dataset=cora

Evaluation (poisoning attack)

> python poison_attack.py --victim=gcn --dataset=cora
> python poison_attack.py --victim=gat --dataset=cora
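The difference between the two evaluation modes can be sketched abstractly (the helpers below are toy stand-ins, not this repository's API): in an evasion attack the victim is trained on the clean graph and only tested on the perturbed one, while in a poisoning attack the victim is retrained on the perturbed graph itself.

```python
def train(adj):
    # Toy stand-in for victim training: the "model" just memorizes
    # the adjacency it was trained on.
    return {"train_adj": adj}

def evaluate(model, adj):
    # Toy stand-in metric: fraction of entries that match the graph
    # the model was trained on (higher = graphs agree more).
    same = sum(1 for a, b in zip(model["train_adj"], adj) if a == b)
    return same / len(adj)

clean_adj = [1, 1, 0, 1]       # toy "adjacency" flattened to a list
perturbed_adj = [1, 0, 0, 1]   # attacker flips one entry

# Evasion: train on the clean graph, evaluate on the perturbed one.
evasion_model = train(clean_adj)
print(evaluate(evasion_model, perturbed_adj))   # prints 0.75

# Poisoning: the victim is retrained on the perturbed graph itself.
poisoned_model = train(perturbed_adj)
print(evaluate(poisoned_model, perturbed_adj))  # prints 1.0
```

This is why the scripts differ: evasion_attack.py only needs a trained victim plus a perturbed adjacency, whereas poison_attack.py must rerun training on the perturbed graph.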

Requirements

  • deeprobust
  • torch_geometric
  • torch_sparse
  • torch_scatter
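The dependencies above can be installed with pip; note the exact wheel for torch_sparse and torch_scatter must usually match the installed torch and CUDA versions, so the plain commands below are a best-effort sketch rather than a guaranteed-compatible setup.

```shell
# Core attack/defense library and PyTorch Geometric.
pip install deeprobust torch_geometric
# Sparse-tensor extensions; prebuilt wheels matching your torch/CUDA
# version may be required (see the PyTorch Geometric install docs).
pip install torch_sparse torch_scatter
```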
