Medical_NLP

Summary of medical NLP evaluations/competitions, datasets, papers and pre-trained models.

Chinese version | English version

Since Cris Lee left the medical NLP field in 2021, this repo is now maintained by Xidong Wang, Ziyue Lin, Jing Tang.


1. Evaluation

1.1 Chinese Medical Benchmark Evaluation: CMB / CMExam / PromptCBLUE

1.2 English Medical Benchmark Evaluation:

  • MultiMedBench

    • Description: A large-scale multimodal biomedical benchmark covering tasks such as medical question answering, report generation, and medical image classification

2. Competitions

2.1 Ongoing Competitions

  • None at the moment. Additions are welcome.

2.2 Completed Competitions

2.2.1 English competitions

2.2.2 Chinese competitions

3. Datasets

3.1 Chinese

3.2 English

4. Open-source Models

4.1 Medical PLMs

  • BioBERT:
    • Website: https://github.com/naver/biobert-pretrained
    • Introduction: A language representation model for the biomedical domain, designed for biomedical text-mining tasks such as named entity recognition, relation extraction, and question answering.
  • BlueBERT:
    • Website: https://github.com/ncbi-nlp/BLUE_Benchmark
    • Introduction: The BLUE benchmark consists of five biomedical text-mining tasks over ten corpora. It relies on preexisting datasets that have been widely used by the BioNLP community as shared tasks. The tasks cover a diverse range of text genres (biomedical literature and clinical notes), dataset sizes, and degrees of difficulty and, more importantly, highlight common biomedical text-mining challenges.
  • BioFLAIR:
    • Website: https://github.com/flairNLP/flair
    • Introduction: Flair is a powerful NLP library that lets you apply state-of-the-art models to your text for tasks such as named entity recognition (NER), sentiment analysis, part-of-speech tagging (PoS), sense disambiguation, and classification, with special support for biomedical data and a rapidly growing number of languages. Flair is also a text-embedding library and a PyTorch NLP framework.
  • COVID-Twitter-BERT:
  • bio-lm (Biomedical and Clinical Language Models)
  • BioALBERT
    • Website: https://github.com/usmaann/BioALBERT
    • Introduction: A biomedical language representation model trained on large domain-specific (biomedical) corpora, designed for biomedical text-mining tasks.
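Benchmarks like BLUE (above) aggregate performance across heterogeneous tasks, typically as a macro average of each task's primary metric. A minimal stdlib sketch of that aggregation step; the task names and scores below are illustrative placeholders, not actual BLUE results:

```python
# Sketch: macro-averaged benchmark score across heterogeneous tasks.
# Task names and scores are illustrative placeholders, not real BLUE numbers.
def macro_average(task_scores):
    """Average one primary metric per task, weighting every task equally."""
    if not task_scores:
        raise ValueError("no tasks given")
    return sum(task_scores.values()) / len(task_scores)

scores = {
    "sentence_similarity": 0.82,       # e.g. Pearson r
    "ner": 0.85,                       # e.g. entity-level F1
    "relation_extraction": 0.74,       # e.g. micro F1
    "document_classification": 0.88,   # e.g. F1
    "inference": 0.79,                 # e.g. accuracy
}
overall = macro_average(scores)
print(round(overall, 3))  # 0.816
```

Macro averaging weights each task equally regardless of corpus size, which is why a model strong on large corpora can still trail on the overall benchmark score.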

4.2 Medical LLMs

4.2.1 Chinese Medical Large Language Models

  • BenTsao:
    • Website: https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese
    • Introduction: BenTsao is based on LLaMA-7B and has been instruct-tuned on Chinese medical instructions. The researchers built a Chinese medical instruction dataset using a medical knowledge graph and the GPT-3.5 API, then instruct-tuned LLaMA on it, improving its question-answering capabilities in the medical field.
  • BianQue:
    • Website: https://github.com/scutcyr/BianQue
    • Introduction: A large medical conversation model fine-tuned through joint training with instructions and multi-turn inquiry dialogues. It is based on ClueAI/ChatYuan-large-v2 and fine-tuned using a blended dataset of Chinese medical question and answer instructions as well as multi-turn inquiry dialogues.
  • SoulChat:
    • Website: https://github.com/scutcyr/SoulChat
    • Introduction: Initialized with ChatGLM-6B, SoulChat underwent instruct-tuning on a large-scale dataset of Chinese long-form instructions and multi-turn empathetic conversations in the field of psychological counseling. The instruct-tuning aimed to enhance the model's empathy, its ability to guide users in expressing themselves, and its capacity to provide thoughtful advice.
  • DoctorGLM:
    • Website: https://github.com/xionghonglin/DoctorGLM
    • Introduction: A large Chinese diagnostic model based on ChatGLM-6B. It has been fine-tuned on a Chinese medical conversation dataset using techniques such as LoRA and P-Tuning v2, and has been deployed for practical use.
  • HuatuoGPT:
    • Website: https://github.com/FreedomIntelligence/HuatuoGPT
    • Introduction: HuatuoGPT is a GPT-like large language model (LLM) fine-tuned on Chinese medical instructions and designed specifically for medical consultation. Its training data includes data distilled from ChatGPT and real data from doctors, and reinforcement learning from human feedback (RLHF) was incorporated during training to improve performance.
  • HuatuoGPT-II:
    • Website: https://github.com/FreedomIntelligence/HuatuoGPT-II
    • Introduction: HuatuoGPT-II employs an innovative domain-adaptation method to significantly boost its medical knowledge and dialogue proficiency. It shows state-of-the-art performance on several medical benchmarks, notably surpassing GPT-4 in expert evaluations and on recent medical licensing exams.

4.2.2 English Medical Large Language Models

  • GatorTron:
  • Codex-Med:
    • Website: https://github.com/vlievin/medical-reasoning
    • Introduction: Codex-Med investigated the effectiveness of GPT-3.5 models on medical question answering, using two multiple-choice medical exam datasets (USMLE and MedMCQA) and the medical reading-comprehension dataset PubMedQA.
  • Galactica:
    • Website: https://galactica.org/
    • Introduction: Aiming to solve information overload in science, Galactica was proposed to store, combine, and reason about scientific knowledge, including healthcare. It was trained on a large corpus of papers, reference material, and knowledge bases to potentially discover hidden connections between different lines of research and bring insights to the surface.
  • DeID-GPT:
  • ChatDoctor:
  • MedAlpaca:
    • Website: https://github.com/kbressem/medAlpaca
    • Introduction: MedAlpaca follows an open-source policy that enables on-site deployment, aiming to mitigate privacy concerns. It is built upon LLaMA models with 7 and 13 billion parameters.
  • PMC-LLaMA:
    • Website: https://github.com/chaoyi-wu/PMC-LLaMA
    • Introduction: PMC-LLaMA is an open-source language model built by tuning LLaMA-7B on a total of 4.8 million biomedical academic papers to inject medical knowledge, enhancing its capability in the medical domain.
  • Visual Med-Alpaca:
    • Website: https://github.com/cambridgeltl/visual-med-alpaca
    • Introduction: Visual Med-Alpaca is an open-source, parameter-efficient biomedical foundation model that can be integrated with medical "visual experts" for multimodal biomedical tasks. Built upon the LLaMa-7B architecture, this model is trained using an instruction set curated collaboratively by GPT-3.5-Turbo and human experts.
  • GatorTronGPT:
    • Website: https://github.com/uf-hobi-informatics-lab/GatorTronGPT
    • Introduction: GatorTronGPT is a clinical generative LLM designed with a GPT-3 architecture comprising 5 or 20 billion parameters. It utilizes a vast corpus of 277 billion words, consisting of a combination of clinical and English text.
  • MedAGI:
  • LLaVA-Med:
    • Website: https://github.com/microsoft/LLaVA-Med
    • Introduction: LLaVA-Med was initialized with the general-domain LLaVA and then trained in a curriculum-learning fashion (first biomedical concept alignment, then full instruction-tuning).
  • Med-Flamingo:
    • Website: https://github.com/snap-stanford/med-flamingo
    • Introduction: Med-Flamingo is a vision-language model specifically designed to handle interleaved multimodal data comprising both images and text. Building on the achievements of Flamingo, Med-Flamingo extends these capabilities to the medical domain by pre-training on diverse multimodal knowledge sources across various medical disciplines.
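Several of the conversation models above (e.g. BianQue, HuatuoGPT) are trained on multi-turn inquiry dialogues. The helper below is only a hypothetical illustration of how such a dialogue might be flattened into a single prompt string; the actual prompt formats differ per model and are defined in each repository:

```python
# Sketch: flattening a multi-turn medical inquiry dialogue into one prompt.
# The role tags and layout here are hypothetical, not the format of any model above.
def build_prompt(turns, system="You are a cautious medical assistant."):
    lines = [system]
    for role, text in turns:
        lines.append(f"{role}: {text}")
    lines.append("Doctor:")  # cue the model to generate the next doctor turn
    return "\n".join(lines)

dialogue = [
    ("Patient", "I've had a dry cough for two weeks."),
    ("Doctor", "Any fever or shortness of breath?"),
    ("Patient", "No fever, but I feel short of breath climbing stairs."),
]
print(build_prompt(dialogue))
```

Ending the prompt on an open "Doctor:" turn is what makes the model continue the inquiry rather than summarize the dialogue; multi-turn fine-tuning datasets are serialized in an analogous way.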

5. Relevant Papers

5.1 The Post-ChatGPT Era: Helpful Papers

  1. Large Language Models Encode Clinical Knowledge. Paper Link
  2. Performance of ChatGPT on USMLE: The Potential of Large Language Models for AI-Assisted Medical Education. Paper Link
  3. Turing Test for Medical Advice from ChatGPT. Paper Link
  4. Toolformer: Language Models Can Teach Themselves to Use Tools. Paper Link
  5. Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automatic Feedback. Paper Link
  6. Capability of GPT-4 in Medical Challenge Questions. Paper Link

5.2 Review Articles

  1. Pretrained Language Models in Biomedical Field: A Systematic Review. Paper Link
  2. A Guide to Deep Learning in Healthcare. Paper Link Published in Nature Medicine.
  3. A Survey of Large Language Models for Healthcare. Paper Link

5.3 Task-Specific Articles

Articles Related to Electronic Health Records

  1. Transfer Learning from Medical Literature for Section Prediction in Electronic Health Records. Paper Link
  2. MUFASA: Multimodal Fusion Architecture Search for Electronic Health Records. Paper Link

Medical Relation Extraction

  1. Leveraging Dependency Forest for Neural Medical Relation Extraction. Paper Link

Medical Knowledge Graph

  1. Learning a Health Knowledge Graph from Electronic Medical Records. Paper Link
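The core idea of the paper above is to estimate disease–symptom edges from co-occurrence statistics over medical records, then fit a statistical model on top. A toy stdlib-only sketch of the counting step (the records and concepts are invented; real systems first extract concepts from clinical text):

```python
from collections import Counter
from itertools import product

# Sketch: count disease-symptom co-occurrences across toy "records".
# A real pipeline extracts concepts from clinical notes and fits a
# statistical model (e.g. noisy OR) on these counts; this shows only counting.
records = [
    {"diseases": {"influenza"}, "symptoms": {"fever", "cough"}},
    {"diseases": {"influenza"}, "symptoms": {"fever", "fatigue"}},
    {"diseases": {"asthma"}, "symptoms": {"cough", "wheezing"}},
]

edges = Counter()
for rec in records:
    for disease, symptom in product(rec["diseases"], rec["symptoms"]):
        edges[(disease, symptom)] += 1

print(edges[("influenza", "fever")])  # 2
```

Thresholding or model-based scoring of these raw counts is what turns noisy co-occurrence into usable knowledge-graph edges.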

Auxiliary Diagnosis

  1. Evaluation and Accurate Diagnoses of Pediatric Diseases Using Artificial Intelligence. Paper Link

Medical Entity Linking (Normalization)

  1. Medical Entity Linking Using Triplet Network. Paper Link
  2. A Generate-and-Rank Framework with Semantic Type Regularization for Biomedical Concept Normalization. Paper Link
  3. Deep Neural Models for Medical Concept Normalization in User-Generated Texts. Paper Link
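The generate-and-rank pattern in the papers above first retrieves candidate concepts for a mention, then reranks them. A toy sketch using stdlib string similarity in place of learned embeddings (the listed systems rank with neural models such as triplet networks; the ontology entries here are invented):

```python
import difflib

# Sketch: generate-and-rank concept normalization.
# A learned ranker (triplet network, BERT scorer) would replace
# difflib's surface similarity in the papers listed above.
ONTOLOGY = {  # toy concept-ID -> preferred name mapping
    "C0027497": "nausea",
    "C0015967": "fever",
    "C0010200": "cough",
}

def normalize(mention, k=2):
    names = list(ONTOLOGY.values())
    # Generate: shortlist the k most similar concept names.
    candidates = difflib.get_close_matches(mention.lower(), names, n=k, cutoff=0.0)
    # Rank: rescore each candidate and return the best concept ID.
    def score(name):
        return difflib.SequenceMatcher(None, mention.lower(), name).ratio()
    best = max(candidates, key=score)
    return next(cui for cui, name in ONTOLOGY.items() if name == best)

print(normalize("feverish"))  # C0015967
```

Separating generation (cheap, high recall) from ranking (expensive, high precision) is what lets these systems scale to ontologies with hundreds of thousands of concepts.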

5.4 Conference Index

List of Medical-Related Papers from ACL 2020

  1. A Generate-and-Rank Framework with Semantic Type Regularization for Biomedical Concept Normalization. Paper Link
  2. Biomedical Entity Representations with Synonym Marginalization. Paper Link
  3. Document Translation vs. Query Translation for Cross-Lingual Information Retrieval in the Medical Domain. Paper Link
  4. MIE: A Medical Information Extractor towards Medical Dialogues. Paper Link
  5. Rationalizing Medical Relation Prediction from Corpus-level Statistics. Paper Link

List of Medical NLP Related Papers from AAAI 2020

  1. On the Generation of Medical Question-Answer Pairs. Paper Link
  2. LATTE: Latent Type Modeling for Biomedical Entity Linking. Paper Link
  3. Learning Conceptual-Contextual Embeddings for Medical Text. Paper Link
  4. Understanding Medical Conversations with Scattered Keyword Attention and Weak Supervision from Responses. Paper Link
  5. Simultaneously Linking Entities and Extracting Relations from Biomedical Text without Mention-level Supervision. Paper Link
  6. Can Embeddings Adequately Represent Medical Terminology? New Large-Scale Medical Term Similarity Datasets Have the Answer! Paper Link

List of Medical NLP Related Papers from EMNLP 2020

  1. Towards Medical Machine Reading Comprehension with Structural Knowledge and Plain Text. Paper Link
  2. MedDialog: Large-scale Medical Dialogue Datasets. Paper Link
  3. COMETA: A Corpus for Medical Entity Linking in the Social Media. Paper Link
  4. Biomedical Event Extraction as Sequence Labeling. Paper Link
  5. FedED: Federated Learning via Ensemble Distillation for Medical Relation Extraction. Paper Link Paper Analysis
  6. Infusing Disease Knowledge into BERT for Health Question Answering, Medical Inference and Disease Name Recognition. Paper Link
  7. A Knowledge-driven Generative Model for Multi-implication Chinese Medical Procedure Entity Normalization. Paper Link
  8. BioMegatron: Larger Biomedical Domain Language Model. Paper Link
  9. Querying Across Genres for Medical Claims in News. Paper Link

6. Open-source Toolkits

  1. Tokenization tool: PKUSEG Project Link Project Description: A multi-domain Chinese word segmentation toolkit released by Peking University, with support for the medical domain.
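Domain lexicons are what make a segmenter useful on medical text. A stdlib sketch of forward maximum matching, the classic dictionary-based baseline for Chinese segmentation (PKUSEG itself uses a trained sequence model; the lexicon here is a toy):

```python
# Sketch: forward maximum matching (FMM) with a toy medical lexicon.
# PKUSEG uses a learned model; FMM is the classic dictionary baseline.
LEXICON = {"高血压", "患者", "服用", "降压药"}  # hypertension, patient, take, antihypertensive
MAX_LEN = max(map(len, LEXICON))

def fmm_segment(text):
    tokens, i = [], 0
    while i < len(text):
        # Try the longest dictionary match starting at i, falling back to one char.
        for j in range(min(len(text), i + MAX_LEN), i, -1):
            if text[i:j] in LEXICON or j == i + 1:
                tokens.append(text[i:j])
                i = j
                break
    return tokens

print(fmm_segment("高血压患者服用降压药"))  # ['高血压', '患者', '服用', '降压药']
```

Greedy longest-match fails on genuinely ambiguous spans, which is why learned segmenters with domain-adapted training data outperform pure dictionary matching on clinical text.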

7. Industrial Solutions

  1. Lingyi Wisdom
  2. Left Hand Doctor
  3. Yidu Cloud Research Institute - Medical Natural Language Processing
  4. Baidu - Medical Text Structuring
  5. Alibaba Cloud - Medical Natural Language Processing

8. Blog Sharing

  1. Alpaca: A Powerful Open Source Instruction Following Model
  2. Lessons Learned from Building Natural Language Processing Systems in the Medical Field
  3. Introduction to Medical Public Databases and Data Mining Techniques in the Big Data Era
  4. Looking at the Development of NLP Application in the Medical Field from ACL 2021, with Resource Download

9. Friendly Links

  1. awesome_Chinese_medical_NLP
  2. Chinese NLP Dataset Search
  3. medical-data(Large Amount of Medical Related Data)
  4. Tianchi Dataset (Includes Multiple Medical NLP Datasets)

10. Reference

@misc{medical_NLP_github,
  author = {Xidong Wang and Ziyue Lin and Jing Tang},
  title = {Medical NLP},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/FreedomIntelligence/Medical_NLP}}
}