
Awesome-model-inversion-attack


A curated list of resources on model inversion attacks (MIA).

Please star or watch this repository to keep track of the latest updates! Contributions are welcome!

Outline:

- What is a model inversion attack?
- Survey
- Computer vision domain
- Graph learning domain
- Natural language processing domain
- Tools
- Others
- Related repositories
- Star History

What is a model inversion attack?

A model inversion attack is a privacy attack in which the attacker reconstructs, from a generated synthetic dataset, the original samples that were used to train the generative model. (Mostly.ai)

The goal of a model inversion attack is to recreate training data or sensitive attributes. (Chen et al., 2021)

In a model inversion attack, a malicious user attempts to recover the private dataset used to train a supervised neural network. A successful model inversion attack should generate realistic and diverse samples that accurately describe each of the classes in the private dataset. (Wang et al., 2021)
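All three definitions describe the same core recipe: treat the trained model as a fixed function and search for inputs that it maps to a chosen class or attribute with high confidence. Below is a minimal sketch of this gradient-based inversion loop in PyTorch, in the spirit of Fredrikson et al. (2015); the network `net`, the input shape, and all hyperparameters are illustrative assumptions rather than any specific paper's settings.

```python
# Minimal sketch of gradient-based model inversion: treat the trained
# classifier as fixed and optimize an input until the model assigns
# maximum confidence to the target class. All names and hyperparameters
# here are illustrative assumptions.
import torch
import torch.nn as nn

def invert_class(net: nn.Module, target_class: int,
                 input_shape=(1, 1, 28, 28), steps=500, lr=0.1):
    net.eval()
    x = torch.zeros(input_shape, requires_grad=True)  # start from a blank input
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Negative log-probability of the target class; minimizing it is
        # gradient ascent on the model's confidence in that class.
        loss = -torch.log_softmax(net(x), dim=1)[0, target_class]
        loss.backward()
        opt.step()
        x.data.clamp_(0.0, 1.0)  # keep the reconstruction in a valid pixel range
    return x.detach()

# Usage, assuming `net` is a trained grayscale-image classifier:
# reconstruction = invert_class(net, target_class=3)
```

The generative attacks in the tables below (e.g., GAN- or diffusion-based ones) keep essentially the same objective but optimize in a generator's latent space instead of pixel space, which is what makes the reconstructed samples realistic and diverse.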

Survey

(Dai et al., 2022) A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [paper]

(Zhang et al., 2022) Trustworthy Graph Neural Networks: Aspects, Methods and Trends [paper]

(Wu et al., 2022) A Survey of Trustworthy Graph Learning: Reliability, Explainability, and Privacy Protection [paper]

(Veale et al., 2018) Algorithms that remember: model inversion attacks and data protection law (Philosophical Transactions of the Royal Society A) [paper]

(Rigaki and Garcia, 2020) A Survey of Privacy Attacks in Machine Learning [paper]

(De Cristofaro, 2020) An Overview of Privacy in Machine Learning [paper]

(Fan et al., 2020) Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks [paper]

(Liu et al., 2021) Privacy and Security Issues in Deep Learning: A Survey [paper]

(Liu et al., 2021) ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [paper]

(Hu et al., 2021) Membership Inference Attacks on Machine Learning: A Survey [paper]

(Jegorova et al., 2021) Survey: Leakage and Privacy at Inference Time [paper]

(Joud et al., 2021) A Review of Confidentiality Threats Against Embedded Neural Network Models [paper]

(Wainakh et al., 2021) Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups [paper]

(Oliynyk et al., 2022) I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences [paper]

(Dibbo, 2023) SoK: Model Inversion Attack Landscape: Taxonomy, Challenges, and Future Roadmap [paper]

Computer vision domain

| Year | Title | Adversarial Knowledge | Venue | Paper | Code |
| --- | --- | --- | --- | --- | --- |
| 2014 | Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing | white-box (both) | USENIX Security | paper | - |
| 2015 | Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures | white-box (both) | CCS | paper | code1, code2, code3, code4 |
| 2015 | Regression model fitting under differential privacy and model inversion attack | white-box (defense) | IJCAI | paper | code |
| 2016 | A Methodology for Formalizing Model-Inversion Attacks | black & white-box | CSF | paper | - |
| 2017 | Machine Learning Models that Remember Too Much | white-box | CCS | paper | code |
| 2017 | Model inversion attacks for prediction systems: Without knowledge of non-sensitive attributes | white-box | PST | paper | - |
| 2018 | Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting | white-box | CSF | paper | - |
| 2019 | An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack | white-box | arXiv | paper | - |
| 2019 | MLPrivacyGuard: Defeating Confidence Information based Model Inversion Attacks on Machine Learning Systems | black-box (defense) | GLSVLSI | paper | - |
| 2019 | Model inversion attacks against collaborative inference | black & white-box (collaborative inference) | ACSAC | paper | - |
| 2019 | Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment | black-box | CCS | paper | code |
| 2019 | GAMIN: An Adversarial Approach to Black-Box Model Inversion | black-box | arXiv | paper | - |
| 2020 | The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks | white-box | CVPR | paper | code |
| 2020 | Overlearning Reveals Sensitive Attributes | white-box | ICLR | paper | - |
| 2020 | Deep Face Recognizer Privacy Attack: Model Inversion Initialization by a Deep Generative Adversarial Data Space Discriminator | white-box | APSIPA ASC | paper | - |
| 2020 | Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning | black-box | USENIX Security | paper | - |
| 2020 | Attacking and Protecting Data Privacy in Edge-Cloud Collaborative Inference Systems | black-box (collaborative inference) | IoT-J | paper | code |
| 2020 | Black-Box Face Recovery from Identity Features | black-box | ECCV Workshop | paper | - |
| 2020 | MixCon: Adjusting the Separability of Data Representations for Harder Data Recovery | white-box | arXiv | paper | - |
| 2020 | Privacy Preserving Facial Recognition Against Model Inversion Attacks | white-box (defense) | GLOBECOM | paper | - |
| 2020 | Broadening Differential Privacy for Deep Learning Against Model Inversion Attacks | white-box (defense) | Big Data | paper | - |
| 2020 | Evaluation Indicator for Model Inversion Attack | metric | AdvML | paper | - |
| 2021 | Variational Model Inversion Attacks | white-box | NeurIPS | paper | code |
| 2021 | Exploiting Explanations for Model Inversion Attacks | white-box | ICCV | paper | - |
| 2021 | Knowledge-Enriched Distributional Model Inversion Attacks | white-box | ICCV | paper | code |
| 2021 | Improving Robustness to Model Inversion Attacks via Mutual Information Regularization | white-box (defense) | AAAI | paper | - |
| 2021 | Practical Defences Against Model Inversion Attacks for Split Neural Networks | black-box (defense, collaborative inference) | ICLR Workshop | paper | code |
| 2021 | Feature inference attack on model predictions in vertical federated learning | white-box (VFL) | ICDE | paper | code |
| 2021 | PRID: Model Inversion Privacy Attacks in Hyperdimensional Learning Systems | black-box (both, collaborative inference) | DAC | paper | - |
| 2021 | Defending Against Model Inversion Attack by Adversarial Examples | black-box (defense) | CSR Workshops | paper | - |
| 2021 | Practical Black Box Model Inversion Attacks Against Neural Nets | black-box | ECML PKDD | paper | - |
| 2021 | Model Inversion Attack against a Face Recognition System in a Black-Box Setting | black-box | APSIPA | paper | - |
| 2022 | Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks | white-box | ICML | paper | code |
| 2022 | Label-Only Model Inversion Attacks via Boundary Repulsion | black-box | CVPR | paper | code |
| 2022 | ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning | white-box (defense, SFL) | CVPR | paper | code |
| 2022 | Bilateral Dependency Optimization: Defending Against Model-inversion Attacks | white-box (defense) | KDD | paper | code |
| 2022 | ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models | holistic risk assessment | USENIX Security | paper | code |
| 2022 | Model Inversion Attack by Integration of Deep Generative Models: Privacy-Sensitive Face Generation From a Face Recognition System | white-box | TIFS | paper | - |
| 2022 | One Parameter Defense—Defending Against Data Inference Attacks via Differential Privacy | black-box (defense) | TIFS | paper | - |
| 2022 | Reconstructing Training Data from Diverse ML Models by Ensemble Inversion | white-box | WACV | paper | - |
| 2022 | SecretGen: Privacy Recovery on Pre-trained Models via Distribution Discrimination | white-box | ECCV | paper | - |
| 2022 | UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning | black-box (split learning) | WPES | paper | code |
| 2022 | MIRROR: Model Inversion for Deep Learning Network with High Fidelity | white-box | NDSS | paper | code |
| 2022 | Reconstructing Training Data with Informed Adversaries | white-box | S&P | paper | - |
| 2022 | Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks | white-box | BMVC | paper | - |
| 2022 | Reconstructing Training Data from Trained Neural Networks | white-box | NeurIPS | paper | - |
| 2023 | Sparse Black-Box Inversion Attack with Limited Information | black-box | ICASSP | paper | code |
| 2023 | Breaching FedMD: Image Recovery via Paired-Logits Inversion Attack | black-box | CVPR | paper | code |
| 2023 | Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network | white-box | AAAI | paper | code |
| 2023 | C2FMI: Corse-to-Fine Black-box Model Inversion Attack | black-box | TDSC | paper | - |
| 2023 | Boosting Model Inversion Attacks with Adversarial Examples | black-box | TDSC | paper | - |
| 2023 | Reinforcement Learning-Based Black-Box Model Inversion Attacks | black-box | CVPR | paper | code |
| 2023 | Re-thinking Model Inversion Attacks Against Deep Neural Networks | white-box | CVPR | paper | code |
| 2023 | Purifier: Defending Data Inference Attacks via Transforming Confidence Scores | black-box (defense) | AAAI | paper | - |
| 2023 | Unstoppable Attack: Label-Only Model Inversion via Conditional Diffusion Model | black-box | CCS | paper | - |

Graph learning domain

| Year | Title | Adversarial Knowledge | Venue | Paper | Code |
| --- | --- | --- | --- | --- | --- |
| 2020 | Stealing Links from Graph Neural Networks | - | USENIX Security | paper | code |
| 2020 | Improving Robustness to Model Inversion Attacks via Mutual Information Regularization | black & white-box | AAAI | paper | - |
| 2020 | Reducing Risk of Model Inversion Using Privacy-Guided Training | black & white-box | arXiv | paper | - |
| 2020 | Quantifying Privacy Leakage in Graph Embedding | - | MobiQuitous | paper | code |
| 2021 | A Survey on Gradient Inversion: Attacks, Defenses and Future Directions | white-box | IJCAI | paper | - |
| 2021 | NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data | black-box | ICDE | paper | code |
| 2021 | DeepWalking Backwards: From Node Embeddings Back to Graphs | - | ICML | paper | code |
| 2021 | GraphMI: Extracting Private Graph Data from Graph Neural Networks | white-box | IJCAI | paper | code |
| 2021 | Node-Level Membership Inference Attacks Against Graph Neural Networks | - | arXiv | paper | - |
| 2022 | A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability | black & white-box | arXiv | paper | - |
| 2022 | Learning Privacy-Preserving Graph Convolutional Network with Partially Observed Sensitive Attributes | - | WWW | paper | - |
| 2022 | Inference Attacks Against Graph Neural Networks | - | USENIX Security | paper | code |
| 2022 | Model Stealing Attacks Against Inductive Graph Neural Networks | - | IEEE S&P | paper | code |
| 2022 | Differentially Private Graph Classification with GNNs | - | arXiv | paper | - |
| 2022 | GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation | - | arXiv | paper | - |
| 2022 | SoK: Differential Privacy on Graph-Structured Data | - | arXiv | paper | - |
| 2022 | Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy | - | arXiv | paper | - |
| 2022 | Private Graph Extraction via Feature Explanations | - | arXiv | paper | - |
| 2022 | Privacy and Transparency in Graph Machine Learning: A Unified Perspective | - | arXiv | paper | - |
| 2022 | Finding MNEMON: Reviving Memories of Node Embeddings | - | CCS | paper | - |
| 2022 | Defense against membership inference attack in graph neural networks through graph perturbation | - | IJIS | paper | - |
| 2022 | Model Inversion Attacks against Graph Neural Networks | - | TKDE | paper | - |
| 2023 | On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation | white-box | ICML | paper | code |
| 2023 | Model Inversion Attacks on Homogeneous and Heterogeneous Graph Neural Networks | white-box | SecureComm | paper | - |

Natural language processing domain

| Year | Title | Adversarial Knowledge | Venue | Paper | Code |
| --- | --- | --- | --- | --- | --- |
| 2020 | Extracting Training Data from Large Language Models | black-box | USENIX Security | paper | code |
| 2020 | Privacy Risks of General-Purpose Language Models | black & white-box | S&P | paper | - |
| 2020 | Information Leakage in Embedding Models | black & white-box | CCS | paper | - |
| 2021 | TAG: Gradient Attack on Transformer-based Language Models | white-box | EMNLP | paper | - |
| 2021 | Dataset Reconstruction Attack against Language Models | black-box | CEUR Workshop | paper | - |
| 2022 | KART: Parameterization of Privacy Leakage Scenarios from Pre-trained Language Models | black-box | arXiv | paper | code |
| 2022 | Text Revealer: Private Text Reconstruction via Model Inversion Attacks against Transformers | white-box | arXiv | paper | - |
| 2022 | Canary Extraction in Natural Language Understanding Models | white-box | ACL | paper | - |
| 2022 | Are Large Pre-Trained Language Models Leaking Your Personal Information? | white-box | NAACL | paper | code |
| 2022 | Recovering Private Text in Federated Learning of Language Models | white-box | NeurIPS | paper | code |
| 2023 | Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Recover the Whole Sentence | black-box | ACL | paper | code |
| 2023 | Deconstructing Classifiers: Towards A Data Reconstruction Attack Against Text Classification Models | white-box | arXiv | paper | - |
| 2023 | Model Inversion Attack with Least Information and an In-depth Analysis of its Disparate Vulnerability | black-box | SaTML | paper | - |
| 2023 | Text Embeddings Reveal (Almost) As Much As Text | black-box | EMNLP | paper | code |
| 2024 | Extracting Prompts by Inverting LLM Outputs | black-box | arXiv | paper | code |
| 2024 | Do Membership Inference Attacks Work on Large Language Models? | white-box | arXiv | paper | - |
| 2024 | Language Model Inversion | black-box | ICLR | paper | code |

Tools

AIJack: Implementation of algorithms for AI security.

Privacy-Attacks-in-Machine-Learning: Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch.

ml-attack-framework: Universität des Saarlandes - Privacy Enhancing Technologies 2021 - Semester Project.

(Trail of Bits) PrivacyRaven [GitHub]

(TensorFlow) TensorFlow Privacy [GitHub]

(NUS Data Privacy and Trustworthy Machine Learning Lab) Machine Learning Privacy Meter [GitHub]

(IQT Labs/Lab 41) CypherCat (archive-only) [GitHub]

(IBM) Adversarial Robustness Toolbox (ART) [GitHub]
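Of the tools above, ART ships a ready-made implementation of the confidence-based inversion attack (MIFace). The snippet below is a usage sketch under assumptions: `model` and `loss_fn` stand for a trained PyTorch classifier and its loss, and the shapes and parameter values are illustrative, not recommendations.

```python
# Usage sketch of ART's MIFace model inversion attack. `model` and
# `loss_fn` are assumed to be a trained PyTorch classifier and its loss.
import numpy as np
from art.estimators.classification import PyTorchClassifier
from art.attacks.inference.model_inversion import MIFace

classifier = PyTorchClassifier(
    model=model, loss=loss_fn,
    input_shape=(1, 28, 28), nb_classes=10,
    clip_values=(0.0, 1.0),  # keep reconstructions in the valid pixel range
)

attack = MIFace(classifier, max_iter=10000, learning_rate=0.1)
# Recover one class-representative image per label.
x_inverted = attack.infer(x=None, y=np.arange(10))
```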

Others

2019 - Uncovering a model’s secrets. [blog1] [blog2]

2019 - Model Inversion Attacks Against Collaborative Inference. [slides]

2020 - Attacks against Machine Learning Privacy (Part 1): Model Inversion Attacks with the IBM-ART Framework. [blog]

2021 - ML and DP. [slides]

2022 - Synthetic Data – Anonymisation Groundhog Day (USENIX Security). [paper] [code]

2023 - A Linear Reconstruction Approach for Attribute Inference Attacks against Synthetic Data (arXiv). [paper] [code]

Related repositories

awesome-ml-privacy-attacks [repo]

Star History

[Star History Chart]
