This repo holds the data and code for the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers".
Authors: Xirui Li, Ruochen Wang, Minhao Cheng, Tianyi Zhou, Cho-Jui Hsieh
DrAttack is the first prompt-decomposing jailbreak attack. It consists of three key components: (a) 'Decomposition' of the original prompt into sub-prompts, (b) implicit 'Reconstruction' of these sub-prompts via in-context learning with a semantically similar but harmless reassembly demo, and (c) a 'Synonym Search' over the sub-prompts, which looks for synonyms that preserve the original intent while jailbreaking LLMs.
Prompt decomposition and reconstruction steps of DrAttack for jailbreaking an LLM.
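At a high level, the pipeline can be pictured with the sketch below. This is not the repo's actual API: `decompose`, `synonym_candidates`, `build_benign_demo`, `query_llm`, and `judge` are hypothetical callables standing in for the components implemented under `gpt_automation` and `experiments`.

```python
from itertools import product

def drattack_sketch(harmful_prompt, decompose, synonym_candidates,
                    build_benign_demo, query_llm, judge, max_queries=15):
    """Conceptual outline of DrAttack; all callables are hypothetical stand-ins."""
    # (a) Decomposition: split the original prompt into sub-prompts (phrases).
    sub_prompts = decompose(harmful_prompt)

    # (b) Reconstruction: build an in-context demo that reassembles semantically
    #     similar but harmless sub-prompts, so the target LLM recombines the
    #     pieces implicitly rather than seeing the full harmful prompt.
    demo = build_benign_demo(sub_prompts)

    # (c) Synonym search: substitute synonyms for sub-prompts until a
    #     reconstructed query keeps the original intent and jailbreaks the LLM.
    queries = 0
    for candidate in product(*(synonym_candidates(p) for p in sub_prompts)):
        if queries >= max_queries:
            break
        response = query_llm(demo + "\n" + " ".join(candidate))
        queries += 1
        if judge(response):  # judged as a successful jailbreak
            return candidate, response
    return None, None
```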
An extensive empirical study across multiple open-source and closed-source LLMs demonstrates that, with a significantly reduced number of queries, DrAttack obtains a substantial gain in success rate over prior SOTA prompt-only attackers. Notably, it achieves a 78.0% success rate on GPT-4 with merely 15 queries, surpassing the previous art by 33.1%.
Attack success rate (%) of black-box baselines and DrAttack assessed by human evaluations.
Attack success rate (%) of white-box baselines and DrAttack assessed by GPT evaluations.
For more details, please refer to our project webpage and our paper.
- Our paper is mentioned in a Medium blog post, "LLM Jailbreak: Red Teaming with ArtPrompt, Morse Code, and DrAttack".
We need FastChat fschat==0.2.23; please make sure to install this version. The llm-attacks package can be installed by running the following command at the root of this repository:
pip install -e .
Please follow the instructions to download Vicuna-7B and/or LLaMA-2-7B-Chat first (we use the weights converted by HuggingFace here). By default, our scripts assume the models are stored in a root directory named /DIR. To point to your own models and tokenizers, add the following lines to experiments/configs/individual_xxx.py (for individual experiments) and experiments/configs/transfer_xxx.py (for multiple-behavior or transfer experiments). An example is given as follows.
config.model_paths = [
    "/path/to/your/model",
    ... # more models
]
config.tokenizer_paths = [
    "/path/to/your/model",
    ... # more tokenizers
]
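For instance, assuming the default /DIR layout mentioned above and a local Vicuna-7B checkpoint (the folder name below is a hypothetical placeholder, not a path shipped with this repo), the config might read:

```python
# Hypothetical concrete example; point these to wherever your weights actually live.
config.model_paths = [
    "/DIR/vicuna-7b-v1.3",
]
config.tokenizer_paths = [
    "/DIR/vicuna-7b-v1.3",
]
```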
Then, for closed-source models served through an API, such as the GPT and Gemini families, please create two text files, api_keys/google_api_key.txt and api_keys/openai_key.txt, and put your API keys in them.
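For example, a one-time setup could look like the sketch below (the placeholder strings are hypothetical; paste your real keys instead):

```python
# Minimal sketch: create the key files listed above; run from the repo root.
from pathlib import Path

Path("api_keys").mkdir(exist_ok=True)
Path("api_keys/openai_key.txt").write_text("YOUR_OPENAI_API_KEY\n")
Path("api_keys/google_api_key.txt").write_text("YOUR_GOOGLE_API_KEY\n")
```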
The experiments folder contains code to reproduce the DrAttack experiments on AdvBench harmful behaviors.
- To jailbreak GPT-3.5, run the following inside experiments:
cd launch_scripts
bash run_gpt.sh gpt-3.5-turbo
- To jailbreak GPT-4, run the following inside experiments:
cd launch_scripts
bash run_gpt.sh gpt-4
- To jailbreak llama2-7b, run the following inside experiments:
cd launch_scripts
bash run_llama2.sh llama2
- To jailbreak llama2-13b, run the following inside experiments:
cd launch_scripts
bash run_llama2.sh llama2-13b
The gpt_automation folder contains code to reproduce DrAttack's prompt decomposition and reconstruction on AdvBench harmful behaviors.
- To run the joint steps that retrieve the prompt decomposition and reconstruction information, run the following inside gpt_automation:
cd script
bash joint.sh
We include a notebook, demo.ipynb, which provides an example of attacking gpt-3.5-turbo with DrAttack. You can also view this notebook on Colab. The notebook uses a minimal implementation of DrAttack, so it should only be used to get familiar with the attack algorithm.
If you find this repo useful for your research, please consider citing our paper:
@misc{li2024drattack,
title={DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers},
author={Xirui Li and Ruochen Wang and Minhao Cheng and Tianyi Zhou and Cho-Jui Hsieh},
year={2024},
eprint={2402.16914},
archivePrefix={arXiv},
primaryClass={cs.CR}
}