
Deep Learning-based Freehand 3D Ultrasound Reconstruction with External Camera

Introduction

3D ultrasound (US) has the potential to improve the accuracy and speed of diagnosis by providing volumetric perception. Compared to native 3D US, freehand 3D US reconstruction offers advantages in flexibility and light weight. Recent methodologies and systems primarily attempt to perform the reconstruction purely from 2D US image sequences, using convolutional neural networks (CNNs) to estimate the sequence poses. However, extracting out-of-plane motion solely from 2D US images has proved challenging and error-prone. In particular, most existing systems cannot perceive global motion, which limits their application to certain predefined scanning patterns. In this paper, we propose a deep learning-based freehand 3D ultrasound reconstruction approach combined with an external camera to capture the tendency of large-scale motion. Features from the US images are used to determine fine local transformations, while RGB images from an external camera attached to the US probe provide global motion awareness for the 3D reconstruction network. The mounted camera is positioned to view the scanning target from an exterior perspective. To better represent the probe's moving tendency, optical flow is computed from the RGB images. Two consecutive US images, together with one optical flow image from the external camera, are concatenated, and a sequence of such combinations is fed into a 3D-CNN to obtain continuous pose transformations. As the scan progresses, all transformations are composed to derive the global trajectory. Notably, our work is the first to fuse the optical flow of an external camera with US images for US sequence motion estimation. Experimental results demonstrate that the proposed method outperforms the baseline method.
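The trajectory-accumulation step can be illustrated with a minimal sketch, assuming each network output is a 4x4 homogeneous transform between consecutive frames (the actual pose parameterization used by the network may differ):

import numpy as np

def accumulate_poses(relative_transforms):
    # Compose per-step relative transforms into global poses, mirroring how
    # the reconstruction concatenates local transformations into a trajectory.
    # Assumes each element is a 4x4 homogeneous transform from frame i to i+1.
    pose = np.eye(4)
    trajectory = [pose.copy()]
    for T in relative_transforms:
        pose = pose @ T  # chain the next local transform onto the global pose
        trajectory.append(pose.copy())
    return trajectory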

Environment

Set up your environment with Anaconda, then use the command below to install the required packages.
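For example, a fresh environment can be created first (a minimal sketch; the environment name rfur and the Python version are assumptions, not specified by the repository):

conda create -n rfur python=3.8
conda activate rfur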

pip install -r ./requirements.txt

Required Data

The datasets are structured as follows. Put your optical flow images and ultrasound images into the flow and us folders, respectively (a minimal loading sketch follows the tree).

├── datasets
    ├── image
        ├── train
            ├── case0000
                ├── flow
                ├── us
            ...
            ├── caseN
                ├── flow
                ├── us
        ├── val
            ├── case0000
                ├── flow
                ├── us
            ...
            ├── caseN
                ├── flow
                ├── us
        ├── test
            ├── case0000
                ├── flow
                ├── us
            ...
            ├── caseN
                ├── flow
                ├── us
    ├── pose
        ├── case0000.txt
        ...
        ├── caseN.txt
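For illustration, the directory layout above can be traversed as in the following sketch; pairing US frames with flow images by sorted filename order, and the helper name load_case, are assumptions rather than part of the repository:

import os
from glob import glob

def load_case(root, split, case):
    # Collect the ultrasound frames, optical-flow images, and pose file
    # for one case, following the directory layout shown above.
    us = sorted(glob(os.path.join(root, "image", split, case, "us", "*")))
    flow = sorted(glob(os.path.join(root, "image", split, case, "flow", "*")))
    pose_file = os.path.join(root, "pose", case + ".txt")
    return us, flow, pose_file

# Example: frames for the first training case.
us_imgs, flow_imgs, poses = load_case("datasets", "train", "case0000")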

Running

Modify train.sh and test.sh as needed, then run training and testing with

./train.sh
./test.sh
