HDRUK/MedCAT

Medical Concept Annotation Tool

MedCAT can be used to extract information from Electronic Health Records (EHRs) and link it to biomedical ontologies like SNOMED-CT and UMLS. A preprint is available on arXiv.

SNOMED Demo

A demo application is available at MedCAT. Please note that this was trained on MedMentions and uses SNOMED for the CDB.

Interest Group, Q&A

Please use Discussions as an interest group and a place to ask questions or make suggestions without opening an Issue.

Tutorial

A guide on how to use MedCAT is available in the tutorial folder. Read more about MedCAT on Towards Data Science.

Papers that use MedCAT

Related Projects

  • MedCATtrainer - an interface for building, improving and customising a given Named Entity Recognition and Linking (NER+L) model (MedCAT) for biomedical domain text.
  • MedCATservice - implements the MedCAT NLP application as a service behind a REST API.
  • iCAT - A docker container for CogStack/MedCAT/HuggingFace development in isolated environments.

Install using PIP (Requires Python 3.6.1+)

  1. Install MedCAT

pip install --upgrade medcat

  2. Get the scispacy models:

pip install https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.2.4/en_core_sci_md-0.2.4.tar.gz

  3. Download the Vocabulary and CDB from the Models section below

  4. Quickstart:

from medcat.cat import CAT
from medcat.utils.vocab import Vocab
from medcat.cdb import CDB 

vocab = Vocab()
# Load the vocab model you downloaded
vocab.load_dict('<path to the vocab file>')

# Load the cdb model you downloaded
cdb = CDB()
cdb.load_dict('<path to the cdb file>') 

# create cat
cat = CAT(cdb=cdb, vocab=vocab)

# Test it
text = "My simple document with kidney failure"
doc_spacy = cat(text)
# Print detected entities
print(doc_spacy.ents)

# Or get an array of entities; this returns much more information
# and is usually easier to use unless you know a lot about spaCy
doc = cat.get_entities(text)
print(doc)
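
The output of `get_entities` can be post-processed with plain Python. A minimal sketch of filtering low-confidence annotations, assuming a simplified per-entity shape with `cui`, `source_value`, and `acc` keys (the exact keys can differ between MedCAT versions, and the sample data below is illustrative, not real model output):

```python
# Illustrative entity list mimicking, in simplified form, what
# cat.get_entities(text) might return. Treat the keys as an
# assumption; check the real output of your MedCAT version.
sample_entities = [
    {"cui": "C1565489", "source_value": "kidney failure", "acc": 0.92},
    {"cui": "C0012634", "source_value": "document", "acc": 0.31},
]

def filter_by_accuracy(entities, min_acc=0.5):
    """Keep only entities whose linking accuracy meets the threshold."""
    return [ent for ent in entities if ent["acc"] >= min_acc]

confident = filter_by_accuracy(sample_entities)
print([ent["source_value"] for ent in confident])  # ['kidney failure']
```

Thresholding on the accuracy score is a common way to trade recall for precision when downstream use cases only want confident annotations.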

Models

A basic trained model is made public for the vocabulary and CDB. It is trained on the ~35K concepts available in MedMentions. It is quite limited, so performance might not be the best.

Vocabulary Download - Built from Wiktionary

CDB Download - Built from MedMentions

(Note: this was compiled from MedMentions and does not include any data from the NLM, as that data is not publicly available.)

SNOMED-CT and UMLS

If you have access to UMLS or SNOMED-CT and can provide some proof (a screenshot of the UMLS profile page is perfect, feel free to redact all information you do not want to share), contact us - we are happy to share the pre-built CDB and Vocab for those databases.

Alternatively, you can build the CDBs from scratch from source data. We used the steps below to build UMLS and SNOMED-CT (UK) CDBs for our experiments.

Building Concept Databases from Scratch

We provide details to build both UMLS and SNOMED-CT concept databases. In both cases, CSV files containing the source data must have the required columns (column descriptions are provided in the tutorial). Given the CSV files, the prepare_cdb.py script can be used to build a CDB.
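
As a rough illustration of the expected input, the snippet below writes a tiny concept CSV using two hypothetical columns, `cui` and `str` (a concept ID and one name per row); the authoritative column list and descriptions are in the tutorial:

```python
import csv

# Hypothetical minimal concept rows: a concept ID and one name per row.
# The real required columns are described in the MedCAT tutorial.
concepts = [
    ("C0022661", "kidney failure"),
    ("C0022661", "renal failure"),
]

with open("concepts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["cui", "str"])  # header with the assumed column names
    writer.writerows(concepts)
```

Multiple rows can share a CUI, one per synonym, which is how a concept accumulates alternative names in the CDB.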

Building a UMLS Concept Database

The UMLS can be downloaded from https://www.nlm.nih.gov/research/umls/index.html in the Rich Release Format (RRF). To make subsetting and filtering easier, we import the UMLS RRF files into a PostgreSQL database (scripts are available here).

Once the data is in the database, we can use the following SQL query to select all the concepts that will form our CDB, and export the result as CSV files.

-- Select concepts from all of the ontologies that are used
SELECT DISTINCT umls.mrconso.cui, str, mrconso.sab, mrconso.tty, tui, sty, def
FROM umls.mrconso
    LEFT OUTER JOIN umls.mrsty ON umls.mrsty.cui = umls.mrconso.cui
    LEFT OUTER JOIN umls.mrdef ON umls.mrconso.cui = umls.mrdef.cui
WHERE lat = 'ENG';
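
A sketch of the CSV export step for the query above, with the database rows mocked as plain tuples (in practice they would be fetched from the umls schema with a PostgreSQL client such as psycopg2, or exported directly with psql):

```python
import csv

# Column order matching the SELECT above.
columns = ["cui", "str", "sab", "tty", "tui", "sty", "def"]

# Mocked result rows -- in practice, fetched from the PostgreSQL
# database rather than hard-coded. Values here are illustrative.
rows = [
    ("C0022661", "Kidney Failure", "SNOMEDCT_US", "PT",
     "T047", "Disease or Syndrome", "Loss of kidney function"),
]

with open("umls_concepts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns)
    writer.writerows(rows)
```

The resulting CSV can then be fed to prepare_cdb.py as described above.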
Building a SNOMED-CT Concept Database

We use the SNOMED-CT data provided by the NHS TRUD service https://isd.digital.nhs.uk/trud3/user/guest/group/0/pack/26. This release combines the International and UK-specific concepts into a set of assets that can be parsed and loaded into a MedCAT CDB. We provide scripts for parsing the various release files and loading them into a MedCAT CDB instance, as well as further scripts to load the accompanying SNOMED-CT Drug extension and clinical coding data (ICD / OPCS terminologies), also from the NHS TRUD service. Scripts are available at: https://github.com/tomolopolis/SNOMED-CT_Analysis

Acknowledgement

Entity extraction was trained on MedMentions. In total, it covers ~35K entities from UMLS.

The vocabulary was compiled from Wiktionary. In total, it contains ~800K unique words.

Powered By

A big thank you goes to spaCy and Hugging Face - who made life a million times easier.

Citation

@misc{kraljevic2020multidomain,
      title={Multi-domain Clinical Natural Language Processing with MedCAT: the Medical Concept Annotation Toolkit}, 
      author={Zeljko Kraljevic and Thomas Searle and Anthony Shek and Lukasz Roguski and Kawsar Noor and Daniel Bean and Aurelie Mascio and Leilei Zhu and Amos A Folarin and Angus Roberts and Rebecca Bendayan and Mark P Richardson and Robert Stewart and Anoop D Shah and Wai Keong Wong and Zina Ibrahim and James T Teo and Richard JB Dobson},
      year={2020},
      eprint={2010.01165},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
