
DISC-MedLLM


This is the repo of DISC-MedLLM, a medical domain-specific LLM designed for conversational healthcare scenarios, developed by the Fudan-DISC lab.

The following resources have been released:

You can check this link to try our online demo.

Overview

The DISC-MedLLM is a large-scale domain-specific model designed for conversational healthcare scenarios. It can address a variety of your needs, including medical consultations and treatment inquiries, offering you high-quality health support services.

The DISC-MedLLM effectively bridges the gap between general language models and real-world medical consultations, as evidenced by experimental results.

Owing to our goal-oriented strategy and a construction framework that keeps both the LLM and humans in the loop, building on real-world doctor-patient dialogues and knowledge graphs, DISC-MedLLM offers several features:

  • Knowledge-intensive and reliable
  • Capable of multi-turn inquiry
  • Alignment with human preferences

(Figure: data construction pipeline)

Demo

Consultation

(Demo screenshot: consultation example)

Treatment Inquiry

(Demo screenshot: treatment inquiry example)

Dataset

To train DISC-MedLLM, we construct a high-quality dataset called DISC-Med-SFT consisting of over 470k distinct examples derived from existing medical datasets. We adopt a goal-oriented strategy by selectively reconstructing the dataset using a few deliberately chosen sources. These data sources serve the purpose of assisting LLMs in acquiring medical domain knowledge, aligning behavioral patterns with human preferences, and capturing real-world online medical dialogue distributions.


| Dataset | Original Source | Size |
|---|---|---|
| Re-constructed AI Doctor-Patient Dialogue | MedDialog | 400k |
| Re-constructed AI Doctor-Patient Dialogue | cMedQA2 | 20k |
| Knowledge Graph QA pairs | CMeKG | 50k |
| Behavior Preference Dataset | Manual selection | 2k |
| Others | MedMCQA | 8k |
| Others | MOSS-SFT | 33k |
| Others | Alpaca-GPT4-zh | 1k |

Download

We have released a total of 470k training data entries, including re-constructed dialogues and knowledge graph QA pairs. You can download the dataset via the provided link.
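As a quick check after downloading, you can inspect a few entries. This is a minimal sketch, assuming the release ships JSON files; the file name disc_med_sft.json below is hypothetical, so substitute the actual file names from the release:

import json

# Hypothetical file name -- replace with the actual file from the released dataset.
with open("disc_med_sft.json", encoding="utf-8") as f:
    data = json.load(f)

print(f"Loaded {len(data)} examples")
# Print one example to see the actual schema (field names may differ across subsets).
print(json.dumps(data[0], ensure_ascii=False, indent=2))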


Deploy

The current version of DISC-MedLLM is derived from Baichuan-13B-Base. You can download our model weights directly from the HuggingFace repository, or obtain them automatically through the demo code.

First, install the requirements:

pip install -r requirements.txt
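If you prefer to pre-fetch the model weights instead of letting from_pretrained download them on first use, a minimal sketch with huggingface_hub (installed alongside transformers) looks like this:

from huggingface_hub import snapshot_download

# Download all files of the Flmc/DISC-MedLLM repo into the local Hugging Face cache;
# pass local_dir="..." if you want them in a specific directory instead.
local_path = snapshot_download(repo_id="Flmc/DISC-MedLLM")
print(local_path)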

Usage with Hugging Face Transformers

>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> from transformers.generation.utils import GenerationConfig
>>> tokenizer = AutoTokenizer.from_pretrained("Flmc/DISC-MedLLM", use_fast=False, trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("Flmc/DISC-MedLLM", device_map="auto", torch_dtype=torch.float16, trust_remote_code=True)
>>> model.generation_config = GenerationConfig.from_pretrained("Flmc/DISC-MedLLM")
>>> messages = []
>>> messages.append({"role": "user", "content": "我感觉自己颈椎非常不舒服,每天睡醒都会头痛"})  # "My neck feels very uncomfortable and I wake up with a headache every day."
>>> response = model.chat(tokenizer, messages)
>>> print(response)
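The messages list carries the multi-turn context, so a follow-up turn can simply be appended before calling model.chat again (the follow-up question below is purely illustrative):

>>> messages.append({"role": "assistant", "content": response})
>>> messages.append({"role": "user", "content": "平时睡觉需要注意什么吗?"})  # "Is there anything I should pay attention to when sleeping?"
>>> response = model.chat(tokenizer, messages)
>>> print(response)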

Run CLI Demo

python cli_demo.py

Run Web Demo

streamlit run web_demo.py --server.port 8888

Additionally, since the current version uses Baichuan as the base model, you can refer to its repo for deploying with int8 or int4 quantized inference. Note that quantized deployment results in some performance degradation.
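As a rough sketch of what 8-bit loading could look like through the generic bitsandbytes integration in transformers (this is not Baichuan's official quantization path, so refer to its repo for the supported procedure):

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Flmc/DISC-MedLLM", use_fast=False, trust_remote_code=True)
# load_in_8bit requires the bitsandbytes package and a CUDA GPU;
# expect some quality drop compared with the fp16 setup shown above.
model = AutoModelForCausalLM.from_pretrained(
    "Flmc/DISC-MedLLM",
    device_map="auto",
    load_in_8bit=True,
    trust_remote_code=True,
)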

Training

You can fine-tune our model with data that follows our data schema. Our training code is derived from Firefly, adapted to our data schema and dialogue format. We only provide the code for full-parameter fine-tuning:

deepspeed --num_gpus={num_gpus} ./train/train.py --train_args_file ./train/train_args/sft.json

Please check the setup of sft.json before you attempt to start training.
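For orientation only, here is a hypothetical illustration of the kind of fields such a training-args file usually contains; the actual keys and values in ./train/train_args/sft.json may differ, so always check the shipped file:

{
  "output_dir": "output/disc-medllm-sft",
  "model_name_or_path": "baichuan-inc/Baichuan-13B-Base",
  "train_file": "./data/train.jsonl",
  "num_train_epochs": 1,
  "per_device_train_batch_size": 8,
  "gradient_accumulation_steps": 2,
  "learning_rate": 1e-5,
  "max_seq_length": 1024,
  "logging_steps": 50,
  "save_steps": 500,
  "lr_scheduler_type": "cosine",
  "warmup_steps": 100,
  "gradient_checkpointing": true,
  "fp16": true
}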


If you want to fine-tune our model with other training code, please use the following dialogue format.

<\b><$user_token>content<$assistant_token>content<\s><$user_token>content ...

The user_token and assistant_token IDs we use are 195 and 196, respectively, the same as in Baichuan-13B-Chat.
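To make the format concrete, here is a minimal sketch of assembling the token sequence for one sample; the helper build_input_ids is ours, and mapping <\b> and <\s> to the tokenizer's BOS and EOS tokens is an assumption:

from transformers import AutoTokenizer

USER_TOKEN_ID = 195       # <$user_token>, same as Baichuan-13B-Chat
ASSISTANT_TOKEN_ID = 196  # <$assistant_token>

def build_input_ids(tokenizer, turns):
    """turns: list of (role, text) pairs alternating 'user' / 'assistant'."""
    input_ids = [tokenizer.bos_token_id]                      # <\b>
    for role, text in turns:
        input_ids.append(USER_TOKEN_ID if role == "user" else ASSISTANT_TOKEN_ID)
        input_ids.extend(tokenizer.encode(text, add_special_tokens=False))
        if role == "assistant":
            input_ids.append(tokenizer.eos_token_id)          # <\s> closes each assistant reply
    return input_ids

tokenizer = AutoTokenizer.from_pretrained("Flmc/DISC-MedLLM", use_fast=False, trust_remote_code=True)
ids = build_input_ids(tokenizer, [
    ("user", "最近总是头晕是怎么回事?"),       # "Why have I been feeling dizzy lately?"
    ("assistant", "请问头晕持续多久了?"),      # "How long has the dizziness lasted?"
])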

Evaluation

We assess the model's performance from two perspectives, checking its ability to provide accurate answers in single-turn conversations and to conduct systematic consultations in multi-turn conversations.

  • For single-turn evaluation, we construct a benchmark of multiple-choice questions collected from three public medical datasets and measure the model's accuracy.
  • For multi-turn evaluation, we first construct a small set of high-quality consultation cases, then have GPT-3.5 play the role of the patient based on each case and chat with the model. GPT-4 then scores the model's proactivity, accuracy, helpfulness, and linguistic quality.

You can find the evaluation set, the dialogues generated by each model, and the scores provided by GPT-4 in the eval/ folder.
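A minimal sketch of this multi-turn evaluation loop, assuming the openai<1.0 Python client; the prompts, model names, and rating scale below are illustrative rather than the exact setup used to produce the files in eval/:

import openai  # openai<1.0 style client

def patient_reply(case_description, history):
    """GPT-3.5 role-plays the patient described in a consultation case."""
    msgs = [{"role": "system",
             "content": f"You are a patient. Case: {case_description}. "
                        "Answer the doctor's questions briefly and stay in character."}]
    msgs += history
    out = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=msgs)
    return out.choices[0].message["content"]

def judge_dialogue(dialogue_text):
    """GPT-4 rates a finished dialogue on the four criteria."""
    prompt = ("Rate the doctor's replies from 1 to 5 on proactivity, accuracy, "
              "helpfulness and linguistic quality:\n" + dialogue_text)
    out = openai.ChatCompletion.create(model="gpt-4",
                                       messages=[{"role": "user", "content": prompt}])
    return out.choices[0].message["content"]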

Single-turn evaluation

We used the MLEC-QA and Western Medicine 306 (NEEP) multiple-choice question datasets for this evaluation.
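A minimal sketch of how multiple-choice accuracy could be computed; the question schema and the answer-extraction rule below are assumptions, not the exact evaluation script:

import re

def extract_choice(model_output):
    """Take the first standalone option letter (A-E) in the model's answer."""
    match = re.search(r"\b([A-E])\b", model_output)
    return match.group(1) if match else None

def accuracy(questions, ask_model):
    """questions: dicts with 'prompt' and 'answer' (an option letter); ask_model: str -> str."""
    correct = sum(extract_choice(ask_model(q["prompt"])) == q["answer"] for q in questions)
    return correct / len(questions)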

Few-shot

| Model | MLEC-QA Clinic | MLEC-QA CWM | MLEC-QA PublicHealth | MLEC-QA Stomatology | MLEC-QA TCM | NEEP 306 | Average |
|---|---|---|---|---|---|---|---|
| GPT-3.5 | 58.63 | 45.9 | 53.51 | 51.52 | 43.47 | 44.81 | 49.64 |
| Baichuan-13b-Chat | 31.25 | 37.69 | 28.65 | 27.27 | 29.77 | 24.81 | 29.91 |
| Huatuo(13B) | 31.85 | 25 | 32.43 | 32.95 | 26.54 | 24.44 | 28.87 |
| DISC-MedLLM | 44.64 | 41.42 | 41.62 | 38.26 | 39.48 | 33.33 | 39.79 |

Zero-shot

| Model | MLEC-QA Clinic | MLEC-QA CWM | MLEC-QA PublicHealth | MLEC-QA Stomatology | MLEC-QA TCM | NEEP 306 | Average |
|---|---|---|---|---|---|---|---|
| GPT-3.5 | 47.32 | 33.96 | 48.11 | 39.77 | 38.83 | 33.33 | 40.22 |
| Baichuan-13b-Chat | 44.05 | 43.28 | 39.92 | 31.06 | 41.42 | 32.22 | 38.66 |
| Huatuo(13B) | 27.38 | 21.64 | 25.95 | 25.76 | 24.92 | 20.37 | 24.34 |
| DISC-MedLLM | 44.64 | 37.31 | 35.68 | 34.85 | 41.75 | 31.11 | 37.56 |

Multi-turn evaluation

Our evaluation procedure draws upon three distinct datasets: the Chinese Medical Benchmark (CMB-Clin), the Chinese Medical Dialogue Dataset (CMD), and the Chinese Medical Intent Dataset (CMID). CMB-Clin simulates the real-world consultation process, while CMD and CMID evaluate from the perspectives of departmental specialties and user intentions, respectively.

Results of CMB-clin:

| Model | Proactivity | Accuracy | Helpfulness | Linguistic Quality | Average |
|---|---|---|---|---|---|
| GPT-3.5 | 4.30 | 4.53 | 4.55 | 5.00 | 4.60 |
| GPT-4 | 4.15 | 4.70 | 4.75 | 4.96 | 4.64 |
| Baichuan-13b-Chat | 4.30 | 4.58 | 4.73 | 4.95 | 4.64 |
| BianQue-2 | 3.97 | 4.36 | 4.37 | 4.81 | 4.38 |
| Huatuo(13B) | 4.40 | 4.62 | 4.74 | 4.96 | 4.68 |
| DISC-MedLLM | 4.64 | 4.47 | 4.66 | 4.99 | 4.69 |

Results of CMD

(Figure: evaluation results on CMD)

Results of CMID

(Figure: evaluation results on CMID)

Acknowledgement

This project wouldn't have been possible without the support and contributions of various individuals, teams, and organizations. Special thanks go to these repositories:

We also thank the other works that provided important assistance to this project but are not listed here for reasons of space.

Declaration

Due to the inherent limitations of language models, we cannot assure the accuracy or reliability of information generated by this model. This model is designed exclusively for research and testing by individuals and academic groups. We urge users to critically assess any information or medical advice obtained through the model's output. Blindly trusting or following such information is strongly discouraged. We disclaim responsibility for any issues, risks, or adverse consequences resulting from the model's use.

Licenses

The use of the source code in this repository complies with the Apache 2.0 License.

Citation

@misc{bao2023discmedllm,
      title={DISC-MedLLM: Bridging General Large Language Models and Real-World Medical Consultation}, 
      author={Zhijie Bao and Wei Chen and Shengze Xiao and Kuang Ren and Jiaao Wu and Cheng Zhong and Jiajie Peng and Xuanjing Huang and Zhongyu Wei},
      year={2023},
      eprint={2308.14346},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}