chestViewSplit

This repo provides a trained model to separate chest x-rays into two views: the front view (PA, or posterior-anterior) and the side view (LL, or latero-lateral).

Sometimes when you download chest x-rays from the internet, there is no text description or metadata associated with the images. If you want to work only with images from a single view, you have to separate the images manually, which is tedious. Here I trained a ResNet-50-based model to automate the process; hopefully it can help people facing the same issue.

Prerequisites

  • Linux or OSX
  • NVIDIA GPU
  • Python 3.6.1
  • PyTorch v0.3.1
  • Numpy

Getting Started

Installation

  • Install PyTorch and the other dependencies

  • Download the trained model from this link and put it into the folder ./models (mkdir ./models)
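After downloading, a quick way to confirm the checkpoint is usable is to load it into a ResNet-50 with a two-class head. This is a minimal sketch assuming a recent PyTorch/torchvision and a state_dict checkpoint; the file name chestViewSplit.pth is hypothetical and depends on what you downloaded.

```python
# Hypothetical sanity check that the downloaded checkpoint loads.
# Assumptions: the file is saved as ./models/chestViewSplit.pth and contains
# a ResNet-50 state_dict with a 2-class final layer; an older checkpoint may
# instead hold the whole model object, in which case torch.load alone is enough.
import torch
import torchvision.models as models

model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # front vs. side
state = torch.load('./models/chestViewSplit.pth', map_location='cpu')
model.load_state_dict(state)
model.eval()
print('checkpoint loaded')
```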

Classification

  • The folder chest_xray_new contains images in a different format (uint16, 2140 x 1760) than chest_xray_orig, so they need a custom loader (SJ_loader).
  • Make sure all your chest x-rays reside in the same folder, e.g. ./chest_xray_new, and then run the following command, where -i specifies the input folder and -o the output folder:
python xray_split.py -i chest_xray_new/ -o chest_xray/
  • After running the above script, the images from the two views will be saved in ./chest_xray/front and ./chest_xray/side respectively. (A rough Python sketch of the same pipeline is shown after this list.)
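If you prefer to call the classifier from your own code instead of the script, the sketch below shows the general idea, assuming a recent PyTorch/torchvision (the repo itself targets v0.3.1). The checkpoint file name, the ImageNet-style preprocessing, and the class order (0 = front/PA, 1 = side/LL) are assumptions for illustration; xray_split.py remains the authoritative implementation.

```python
# Rough sketch of the front/side split performed by xray_split.py.
# Checkpoint name, preprocessing, and class order are assumptions.
import os
import shutil

import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

# Assumed ImageNet-style preprocessing; the actual transforms may differ.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load('./models/chestViewSplit.pth', map_location='cpu'))
model.eval()

in_dir, out_dir = './chest_xray_new/', './chest_xray/'
views = ['front', 'side']  # assumed class order: 0 = front (PA), 1 = side (LL)
for v in views:
    os.makedirs(os.path.join(out_dir, v), exist_ok=True)

for name in os.listdir(in_dir):
    # uint16 images (e.g. chest_xray_new) may need a custom loader such as
    # SJ_loader before this convert; standard 8-bit images are assumed here.
    img = Image.open(os.path.join(in_dir, name)).convert('RGB')
    with torch.no_grad():
        view_idx = model(preprocess(img).unsqueeze(0)).argmax(dim=1).item()
    shutil.copy(os.path.join(in_dir, name), os.path.join(out_dir, views[view_idx], name))
```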

Acknowledgements

Part of the code is borrowed from fine-tuning.pytorch.