在医学领域,图像和文本数据具有自然语言数据集中难以企及的强相关性和高正确率,这主要由于技术的特殊性和专业人员的使用素养。这种差异往往与自然语言处理中普遍的恶意意图直接对齐不同。然而,医学图像、文本和偶发噪声的变异可能导致症状描述与病变插图之间的不匹配。这种错配挑战了MedMLLM的输出,会对MedMLLM生成的结果造成干扰,故而我们把Text-image之间的相关性弱界定为医学上的特异性攻击。
- 图像数据集的编译:选择多种医学图像,构建图像数据集MedMQ。
- 正常提示生成:使用视觉问答(VQA)模型,根据图像自动生成高度相关和专业的提示。
- 有害提示生成:借助GPT-4处理正常提示,得到关联度较低的非恶意有害提示,模拟潜在的临床攻击。
大多数现有方法如GCG、deltaJP和Typo等,主要笼统的通过拒绝与否来评定是否造成干扰,而不是关注相似性和关联度,因此缺乏严谨性。
- 多轮交叉优化:为平衡图像和文本的重要性,我们实施多轮交叉优化。
- 病变区域优化:由于模态不匹配主要发生在病变区域,我们特别优化具有病变的patch或者掩盖病灶区的patch生成攻击样本。
与现有方法的比较至关重要,使用新定义的指标:
- 输入关联度评分TII:输入文本和图像之间的相关性决定了响应的接受或拒绝。低关联度结果应为拒绝(空白对照),确保只有高关联度才能验证后续比较。
- 输出文本-输入文本关联度评分TT0:输出文本与输入文本之间的相关性指示攻击的成功。
- 输出文本-输入图像关联度评分TIO:此指标进一步验证攻击的成功,基于输出文本与输入图像之间的相关性。
- 综合评价指标:为指标TII,TTO,TIO分配权重,形成一个全面的新评估指标。
- 比较测试:我们的方法在我们的数据集、其他数据集、不同的MedMLLM以及普通MLLM的医学部分进行测试。
我们的dataset全,我们的针对性新方法显著提高攻击成功率,并且新关联度可以作为可靠指标进行推广
In medical contexts, image and text data possess an inherently high correlation unmatched in natural language datasets, primarily due to the technical specificity and the literacy of professional users. This differentiation often prevents a direct alignment with general malicious intents found in natural language processing. However, variations in medical images, texts, and incidental noise can lead to discrepancies between symptom descriptions and lesion illustrations. This misalignment challenges the output of MedMLLM, leading us to categorize weak text-image correlations as specific medical attacks.
- Image Dataset Compilation: A diverse set of medical images is selected to construct Image Dataset A.
- Normal Prompts Generation: Utilizing a Visual Question Answering (VQA) model, prompts that are highly relevant and professional are automatically generated based on the images.
- Generation of Harmful Prompts: With the assistance of GPT-4, we process the Normal Prompts to derive less relevant, non-malicious harmful prompts, simulating potential clinical attacks.
Most existing methodologies, such as GCG, deltaJP, and Typo, focus on acceptance or rejection based on superficial criteria rather than on similarity and relevance, thus lacking rigor.
- Multi-round Cross-optimization: To balance the significance of images and texts, we implement multi-round cross-optimization.
- Optimization of Lesion Patches: Since modal mismatches typically occur in lesion areas, we specifically optimize patches with lesions to generate attack samples.
Comparison with existing methods is crucial, using newly defined metrics:
- Association Score for Input (ASR): The relevance between the input text and image determines the acceptance or rejection of the response. A low correlation results in rejection (null control), ensuring only high correlations validate subsequent comparisons.
- Association Score for Output Text-Input Text (RR): The relevance between the output and input texts indicates the success of an attack.
- Association Score for Output Text-Input Image: This metric further validates the success of an attack based on the relevance between the output text and the input image.
- Composite Metric: Weights are assigned to metrics 0, 1, and 2 to formulate a comprehensive new evaluation metric.
- Comparative Testing: Our method is tested against previous methods on our dataset, other datasets, different MedMLLMs, and general sections of MLLMs in medicine.
Our comprehensive dataset and the novel method significantly enhance the safety and specificity of medical applications in machine learning language models.
Remember, this outline is crafted to emphasize the significant points provided without any omissions.
Major Category | Subcategory | Sub-Subcategory | Description |
---|---|---|---|
Image Understanding | Coarse-grained | Disease Classification | Diagnosing the presence or absence of disease. Identifies specific diseases from images. |
View Classification | Identifying the view or angle. Important for correct image interpretation. | ||
Fine-grained | Abnormality Detection | Locates specific abnormalities within an image. Critical for accurate diagnosis. | |
Object Detection | Identifies foreign objects. Essential for patient safety and treatment planning. | ||
Text Generation | Report Generation | Impression Generation | Summarizes the diagnostic impression. Key for conveying overall assessment. |
Findings Generation | Details findings from image analysis. Provides evidence-based diagnosis. | ||
Findings and Impressions | Anatomical Findings | Related to specific anatomical parts. Enhances localized diagnostic accuracy. | |
Impression Summary | Brief summary for specific regions. Aids in focused assessment. | ||
Question Answering | Visual QA | Open-ended | Answers open questions based on images. Encourages comprehensive analysis. |
Close-ended | Chooses correct answers from options. Tests specific understanding of image content. | ||
Text QA | Fact-based | Answers based on explicit text facts. Requires detailed text understanding. | |
Inference-based | Inferences from text to answer. Demands deeper comprehension and logic. | ||
Miscellaneous | Image-Text Matching | Matching | Determines correct image-text pairs. Crucial for accurate information presentation. |
Selection | Chooses suitable text for an image. Ensures relevance and accuracy of information. | ||
Report Evaluation | Error Identification | Identifies inaccuracies in reports. Key for quality control and correction. | |
Quality Assessment | Assesses report accuracy and completeness. Important for diagnostic integrity. | ||
Explanation and Inference | Explanation Generation | Generates explanations for diagnoses. Facilitates understanding and trust. | |
Inference Making | Determines logical report relationships. Supports clinical decision-making. |
以下是项目路径
MedicalPromptGeneration/
│
├── src/ # 源代码
│ ├── __init__.py
│ ├── main.py # 主程序入口
│ ├── model.py # 模型定义
│ ├── data_loader.py # 数据加载
│ ├── transformer_utils/ # 包含所有与transformers相关的操作
│ │ ├── __init__.py
│ │ ├── tokenizer.py
│ │ └── model_utils.py
│ └── utilities/ # 其他辅助功能
│ ├── __init__.py
│ └── utils.py
│
├── configs/ # 配置文件
│ └── model_config.json
│
├── data/ # 数据存放
│ ├── images/ # 图像数据
│ └── annotations/ # 标注数据
│
├── models/ # 预训练模型和权重
│ ├── ofa/
│ └── bert/
│
├── outputs/ # 输出和结果
│ ├── logs/ # 日志文件
│ └── predictions/ # 模型预测结果
│
└── requirements.txt # Python依赖包
针对data中的图片,利用llavamed提取attributes list,得到attribute-sentensce(AS)
python src/main.py
text2text得到文本相似性
python text2text.py
text2image得到图像文本相似性
python text2image.py
获取随机选取得到的3MAD-70K
cd dataset_generate
bash datasetgenerate.sh
统计
python demo_count.py
计算prompts的相似性分数
python demo_promptst2t.py
python demo_dealsimilarity.py
aggregated_scores.csv 18类图片和attribute相似分数 averaged_file.csv 540-540,18类prompts之间的相关性分析分数
3MAD-70K: Multimodal Medical Model Attack Dataset, A Comprehensive Assessment of ASR in Medical Models
This repo contains the source code of 3MAD-70K. This research endeavor is designed to help researchers better understand the capabilities, limitations, and potential risks associated with deploying these state-of-the-art Medical Multimodal Large Language Models (MedMLLMs). See our paper for details.
Xijie Huang, Xinyuan Wang, Hantao Zhang, Jiawen Xi,Jingkun An ,Hao Wang, Chengwei Pan.
本项目由共9大类不同介质,18类不同部位的医学图像集(CMIC-96K)和18种基于用户需求的询问指令交叉构建而成,符合临床上可能出现的unmatch attack and malicious attack的具体攻击类型
图片集(CMIC-96K)组成:
├── MRI (6728)
│ ├── Alzheimer (6400)
│ └── brain (275)
├── Fundus (45)
│ └── retinaFundus (45)
├── Mamography (24576)
│ └── breast (24576)
├── OCT (2064)
│ └── retinaOCT (2064)
├── CT (10015)
│ ├── heart (5227)
│ ├── brain (2515)
│ └── chest (1000)
├── Endoscopy (1500)
│ └── gastroent (1500)
├── dermoscopy (6000)
│ └── skin (6000)
├── Ultrasound (4642)
│ ├── carotid (1100)
│ ├── breast (470)
│ ├── ovary(54)
│ ├── brain(1334)
│ └── baby (1684)
└── X-ray (41142)
├── Skeleton(1000)
├── Dental (40005)
└── chest(137)
指令组成:
MedMQ
├── Image Understanding
│ ├── Coarse-grained
│ │ ├── Disease Classification - Diagnosing the presence or absence of disease. Identifies specific diseases from images.
│ │ └── View Classification - Identifying the view or angle. Important for correct image interpretation.
│ └── Fine-grained
│ ├── Abnormality Detection - Locates specific abnormalities within an image. Critical for accurate diagnosis.
│ └── Object Detection - Identifies foreign objects. Essential for patient safety and treatment planning.
├── Text Generation
│ ├── Report Generation
│ │ ├── Impression Generation - Summarizes the diagnostic impression. Key for conveying overall assessment.
│ │ └── Findings Generation - Details findings from image analysis. Provides evidence-based diagnosis.
│ └── Findings and Impressions
│ ├── Anatomical Findings - Related to specific anatomical parts. Enhances localized diagnostic accuracy.
│ └── Impression Summary - Brief summary for specific regions. Aids in focused assessment.
├── Question Answering
│ ├── Visual QA
│ │ ├── Open-ended - Answers open questions based on images. Encourages comprehensive analysis.
│ │ └── Close-ended - Chooses correct answers from options. Tests specific understanding of image content.
│ └── Text QA
│ ├── Fact-based - Answers based on explicit text facts. Requires detailed text understanding.
│ └── Inference-based - Inferences from text to answer. Demands deeper comprehension and logic.
└── Miscellaneous
├── Image-Text Matching
│ ├── Matching - Determines correct image-text pairs. Crucial for accurate information presentation.
│ └── Selection - Chooses suitable text for an image. Ensures relevance and accuracy of information.
├── Report Evaluation
│ ├── Error Identification - Identifies inaccuracies in reports. Key for quality control and correction.
│ └── Quality Assessment - Assesses report accuracy and completeness. Important for diagnostic integrity.
└── Explanation and Inference
├── Explanation Generation - Generates explanations for diagnoses. Facilitates understanding and trust.
└── Inference Making - Determines logical report relationships. Supports clinical decision-making.
为了有效解决图片数量不匹配的长尾问题并且预防可能产生的失衡可能性,我们将少量图片倍增,将数量较多的图片随机抽取,使得不同种类之间尽可能维持同一数量级
To evaluate using 3MAD-28K dataset, please install the 3MAD-28K package as below:
Please cite the paper as follows if you use the data or code from 3MAD-28K:
Please reach out to us if you have any questions or suggestions. You can submit an issue or pull request, or send an email to jeix782@gmail.com.
Thank you for your interest in 3MAD-28K. We hope our work will contribute to a more healthcare,expert,trustworthy, fair, and robust AI future.