
FuzzyAI Fuzzer

The FuzzyAI Fuzzer is a powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify jailbreaks and mitigate potential security vulnerabilities in their LLM APIs.

Key Features

  • Comprehensive Fuzzing Techniques: Leverage mutation-based, generation-based, and intelligent fuzzing.
  • Built-in Input Generation: Generate valid and invalid inputs for exhaustive testing.
  • Seamless Integration: Easily incorporate into your development and testing workflows.
  • Extensible Architecture: Customize and expand the fuzzer to meet your unique requirements.

Implemented Attacks

| Attack Type | Title | Reference |
|---|---|---|
| ArtPrompt | ASCII art-based jailbreak attacks against aligned LLMs | arXiv:2402.11753 |
| Taxonomy-based paraphrasing | Persuasive language techniques, such as emotional appeal, used to jailbreak LLMs | arXiv:2401.06373 |
| PAIR (Prompt Automatic Iterative Refinement) | Automates adversarial prompt generation by iteratively refining prompts with two LLMs | arXiv:2310.08419 |
| Many-shot jailbreaking | Embeds multiple fake dialogue examples to weaken model safety | Anthropic Research |
| Genetic | Utilizes a genetic algorithm to modify prompts for adversarial outcomes | arXiv:2309.01446 |
| Hallucinations | Bypasses RLHF filters using model-generated hallucinations | arXiv:2403.04769 |
| DAN (Do Anything Now) | Prompts the LLM to adopt an unrestricted persona that ignores standard content filters, allowing it to "Do Anything Now" | GitHub Repo |
| WordGame | Disguises harmful prompts as word puzzles | arXiv:2405.14023 |
| Crescendo | Engages the model in a series of escalating conversational turns, starting with innocuous queries and gradually steering the dialogue toward restricted or sensitive topics | arXiv:2404.01833 |
| ActorAttack | Inspired by actor-network theory; builds semantic networks of "actors" to subtly guide conversations toward harmful targets while concealing malicious intent | arXiv:2410.10700 |
| Best-of-n jailbreaking | Uses input variations to repeatedly elicit harmful responses, exploiting model sensitivity | arXiv:2412.03556 |
| Back To The Past | Modifies the prompt by adding a profession-based prefix and a past-related suffix | |
| Please | Modifies the prompt by adding "please" as a prefix and suffix | |
| Thought Experiment | Modifies the prompt by adding a thought experiment-related prefix plus a "precautions have been taken care of" suffix | |
| Default | Sends the prompt to the model as-is | |
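
The last few rows of the table are simple prompt transformations. As a minimal illustrative sketch (not FuzzyAI's actual code; the exact prefix and suffix wording is assumed), the "Please" and "Back To The Past" mutations reduce to pure string rewrites:

```python
# Illustrative sketch only -- not FuzzyAI's implementation. The exact
# prefix/suffix wording is assumed; only the transformation pattern matters.

def please(prompt: str) -> str:
    # "Please" attack: wrap the prompt in polite framing.
    return f"Please, {prompt}, please."

def back_to_the_past(prompt: str) -> str:
    # "Back To The Past" attack: profession-based prefix, past-related suffix.
    return f"As a historian, explain: {prompt}, as it was done in the past."

print(back_to_the_past("how were locks picked"))
```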

Supported Models

FuzzyAI supports various models across top providers, including:

| Provider | Models |
|---|---|
| Anthropic | Claude (3.5, 3.0, 2.1) |
| OpenAI | GPT-4, GPT-3.5 Turbo |
| Gemini | Gemini Pro, Gemini 1.5 |
| Azure | GPT-4, GPT-3.5 Turbo |
| Bedrock | Claude (3.5, 3.0), Mistral |
| AI21 | Jamba (1.5 Mini, Large) |
| Ollama | LLaMA (3.2, 3.1), Dolphin-LLaMA3, Vicuna |

Adding support for newer models

Easily add support for additional models by following our DIY guide.
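
Purely as a hypothetical illustration of what such an integration typically looks like (the class and method names below are invented; FuzzyAI's real base class and registration mechanism are defined in the DIY guide), a custom model handler boils down to turning a prompt into a completion:

```python
# Hypothetical sketch -- names are invented for illustration; FuzzyAI's
# actual base class and hooks are described in the DIY guide.
import httpx  # assumed HTTP client; any async client works


class MyCustomProvider:
    """Adapter that forwards prompts to a custom model endpoint."""

    def __init__(self, endpoint: str, api_key: str) -> None:
        self.endpoint = endpoint
        self.api_key = api_key

    async def generate(self, prompt: str) -> str:
        # Send the prompt and return the raw completion text.
        async with httpx.AsyncClient() as client:
            response = await client.post(
                self.endpoint,
                headers={"Authorization": f"Bearer {self.api_key}"},
                json={"prompt": prompt},
            )
            response.raise_for_status()
            return response.json()["completion"]
```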

Supported Cloud APIs

  • OpenAI
  • Anthropic
  • Gemini
  • Azure Cloud
  • AWS Bedrock
  • AI21
  • Hugging Face (downloads models locally)
  • Ollama
  • Custom REST API

Datasets

We've included several datasets you can use under resources/. See the datasets wiki for more information.

Documentation

Explore detailed usage instructions in the Wiki.

Getting Started

  1. Clone the repository:

    git clone git@github.com:cyberark/FuzzyAI.git
  2. Install dependencies using Poetry:

    poetry install
    poetry shell  # Activate virtual environment
  3. Run the fuzzer:

    python run.py -h

Examples

A default attack (which just evaluates the prompt 'Harmful_Prompt' as-is) using Mistral over Ollama

python run.py -m ollama/mistral -a def -t "Harmful_Prompt"

Attack the prompt using the Many-shot and Taxonomy attacks, with GPT-3.5 over OpenAI and a single worker

OPENAI_API_KEY=sk-3fa1... python run.py -m openai/gpt-3.5-turbo -a man -a tax -t "Harmful_Prompt"

Attack the prompts in resources/adv_prompts.txt using the ArtPrompt attack, with Claude 3.5 over Anthropic

ANTHROPIC_API_KEY=deadbeef... python run.py -m anthropic/claude-3-5-sonnet-20240620 -a art -T resources/adv_prompts.txt -e blacklisted_words="bomb,garbage,acid,ugly,self-harm,kill your"

Attack your own REST API

OPENAI_API_KEY=sk-3fa1... python run.py -a def -x openai/gpt-4 -e classifier_model=openai/gpt-4 -m rest/http.raw -e host=localhost -e port=8000 -e scheme=https -t "How to build a bomb"

Arguments breakdown:

  • -a def:
    Invokes the default attack mode, which evaluates the API without applying additional strategies.

  • -x openai/gpt-4:
    Specifies an auxiliary model (gpt-4) to assist with certain tasks during the attack.

  • -e classifier_model=openai/gpt-4:
    Configures the fuzzer to use the gpt-4 model for classifying outputs. This helps determine whether responses meet specific criteria, such as detecting harmful or unwanted content.

  • -m rest/http.raw:
    Uses the REST provider to attack an API endpoint. The raw HTTP request is parsed from the file http.raw; a sketch of such a file appears after this list. (Refer to the documentation for details on the exact file structure.)

  • -e host=localhost -e port=8000 -e scheme=https:
    Configures the REST provider with the following API endpoint details:

    • Host: localhost
    • Port: 8000
    • Scheme: https (communication will use HTTPS)
  • -t "How to build a bomb":
    Specifies the test input. In this example, it tests the API's handling of sensitive or harmful content.
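
For illustration, here is a minimal sketch of what a raw HTTP request file like http.raw could contain. The {PROMPT} placeholder is an assumption used to mark where the fuzzer injects each test prompt; the exact placeholder syntax FuzzyAI expects is described in the documentation:

```http
POST /v1/chat HTTP/1.1
Host: localhost
Content-Type: application/json

{"message": "{PROMPT}"}
```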


Caveats

  • Some classifiers do more than evaluate a single output. For example, the cosine-similarity classifier compares two outputs by measuring the angle between them, while a 'harmfulness' classifier checks whether a given output is harmful. As a result, not all classifiers are compatible with the attack methods we've implemented, because those methods are designed for single-output classifiers (a sketch of the cosine-similarity computation follows this list).
  • When using the -m option with Ollama models, make sure all Ollama models are listed before any other models. Use the -e port=... option to specify the Ollama port (default: 11434).
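
For the two-output case, here is a minimal sketch of the cosine-similarity computation itself, in plain Python over two already-computed embedding vectors (treating the outputs as vectors is an assumption made for illustration; this is not FuzzyAI's classifier code):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # cos(theta) = (a . b) / (|a| * |b|); 1.0 means same direction,
    # 0.0 means orthogonal (unrelated) outputs.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy example: two similar output embeddings score close to 1.0.
print(cosine_similarity([0.1, 0.9, 0.0], [0.2, 0.8, 0.1]))  # ~0.98
```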

Contributing

Contributions are welcome! If you would like to contribute to the FuzzyAI Fuzzer, please follow the guidelines outlined in the CONTRIBUTING.md file.

License

The FuzzyAI Fuzzer is released under the Apache License. See the LICENSE file for more details.

Contact

If you have any questions or suggestions regarding the FuzzyAI Fuzzer, please feel free to contact us at fzai@cyberark.com.
