I am Zhengyu Zhao, an Associate Professor at Xi’an Jiaotong University (XJTU), China. My general research interest is Machine Learning Security; most of my work has concentrated on analyzing the vulnerability of deep neural networks to various attacks, e.g., (test-time) adversarial examples and (training-time) data poisoning.
Machine Learning Security and Privacy
- Xi'an Jiaotong University, China
- https://zhengyuzhao.github.io/
- @JeremyZhaozy
Pinned Repositories
- TransferAttackEval (Public): Revisiting Transferable Adversarial Images (arXiv)
- PerC-Adversarial (Public): Large yet imperceptible adversarial perturbations with perceptual color distance (CVPR 2020)
- Targeted-Transfer (Public): Simple yet effective targeted transferable attack (NeurIPS 2021)
- AdvColorFilter (Public): Unrestricted adversarial images via interpretable color transformations (TIFS 2023 & BMVC 2020)
- AI-Security-and-Privacy-Events (Public): A curated list of academic events on AI Security & Privacy
- ThuCCSLab/Awesome-LM-SSP (Public): A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.)