# Adversarial Attacks on Node Embeddings via Graph Poisoning

Preliminary reference implementation of the attack proposed in the paper:

> **Adversarial Attacks on Node Embeddings via Graph Poisoning**
> Aleksandar Bojchevski and Stephan Günnemann, ICML 2019

## Requirements

- gensim
- tensorflow
- sklearn (only for evaluation)
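Assuming a standard Python environment with pip available, the dependencies can be installed along the lines of (note that `sklearn` is distributed on PyPI as `scikit-learn`):

```shell
# Install the dependencies listed above; package names are the
# PyPI names, not necessarily the import names used in the code.
pip install gensim tensorflow scikit-learn
```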

## Example

The notebook `example.ipynb` demonstrates our general attack and a comparison with the baselines.

## Cite

Please cite our paper if you use this code in your own work:

```
@inproceedings{bojchevski2019adversarial,
  title     = {Adversarial Attacks on Node Embeddings via Graph Poisoning},
  author    = {Aleksandar Bojchevski and Stephan G{\"{u}}nnemann},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning, {ICML}},
  series    = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
  year      = {2019},
}
```