
Adversarial Reinforcement Learning

A curated reading list for the adversarial perspective in deep reinforcement learning. The list covers topics ranging from adversarial attacks on deep reinforcement learning policies to adversarial training techniques, interpretability of deep reinforcement learning robustness, and adversarial state detection algorithms for robust decision making. A minimal illustrative sketch of an adversarial perturbation on a policy's observation follows this paragraph.
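The sketch below is a generic, hedged illustration of the first topic (adversarial attacks on deep RL policies) and is not taken from any paper in this list. It assumes a PyTorch policy network mapping states to action logits; the network sizes, the `fgsm_state_perturbation` helper, and the epsilon value are hypothetical choices for demonstration, and the perturbation rule is a standard FGSM-style step applied to the state observation.

```python
# Minimal sketch (illustration only, not from any paper listed here) of an
# FGSM-style adversarial perturbation on a deep RL policy's observation,
# assuming a PyTorch policy network that maps states to action logits.
import torch
import torch.nn as nn

# Hypothetical small policy network for a 4-dimensional state, 2 actions.
policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))

def fgsm_state_perturbation(policy, state, epsilon=0.05):
    """Perturb `state` to push the policy away from its chosen action."""
    state = state.clone().detach().requires_grad_(True)
    logits = policy(state)
    chosen_action = logits.argmax(dim=-1)
    # Attacker's objective: increase the loss of the action the
    # unperturbed policy would take.
    loss = nn.functional.cross_entropy(logits, chosen_action)
    loss.backward()
    # One signed-gradient step, bounded by epsilon in the l-infinity norm.
    return (state + epsilon * state.grad.sign()).detach()

# Usage: compare the action on a clean vs. perturbed state.
state = torch.randn(1, 4)
adversarial_state = fgsm_state_perturbation(policy, state)
print(policy(state).argmax(dim=-1), policy(adversarial_state).argmax(dim=-1))
```

With a larger epsilon the perturbed observation is more likely to flip the policy's action, which is the basic failure mode studied in the attack papers below; the training, detection, and interpretability papers address how to mitigate or diagnose it.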

Delving Into Adversarial Attacks on Deep Policies. ICLR Workshop 2017. [Link]

Adversarial Attacks on Neural Network Policies. ICLR Workshop 2017. [Link]

Robust Adversarial Reinforcement Learning. ICML 2017. [Link]

Adversarial Policies: Attacking Deep Reinforcement Learning. ICLR 2020. [Link]

Stealthy and Efficient Adversarial Attacks Against Deep Reinforcement Learning. AAAI 2020. [Link]

Nesterov Momentum Adversarial Perturbations in the Deep Reinforcement Learning Domain. ICML Workshop 2020. [Link]

Investigating Vulnerabilities of Deep Neural Policies. UAI 2021. [Link]

Deep Reinforcement Learning Policies Learn Shared Adversarial Features Across MDPs. AAAI 2022. [Link]

Adversarial Robust Deep Reinforcement Learning Requires Redefining Robustness. AAAI 2023. [Link]

Detecting Adversarial Directions in Deep Reinforcement Learning to Make Robust Decisions. ICML 2023. [Link]

Understanding and Diagnosing Deep Reinforcement Learning. ICML 2024. [Link]