Medical Image Segmentation Foundation Model



This repository provides the official implementation of "MIS-FM: Medical Image Segmentation Foundation Model Pretrained with Large-Scale Unannotated 3D Images using Volume Fusion".

Key Features

  • A new self-supervised learning method based on Volume Fusion, a segmentation-based pretext task.
  • A new network architecture, PCT-Net, that combines the advantages of CNNs and Transformers.
  • A foundation model trained on 100k unannotated 3D CT scans.

Details

The following figure shows an overview of our proposed method for pretraining with unannotated 3D medical images. We introduce a pretext task based on pseudo-segmentation, where Volume Fusion is used to generate paired images and segmentation labels to pretrain the 3D segmentation model. This matches the downstream segmentation task better than existing Self-Supervised Learning (SSL) methods.
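
As a minimal, illustrative sketch of the idea (not the official implementation), the snippet below fuses two unannotated sub-volumes using a patch-wise discrete coefficient map and reuses that map as the pseudo-segmentation label; the function name, patch size, and number of fusion classes are assumptions chosen for clarity.

import numpy as np

def volume_fusion(vol_a, vol_b, num_classes=4, patch_size=(8, 8, 8)):
    """Fuse two unannotated sub-volumes; return (fused image, pseudo-label).

    A block-wise discrete coefficient map with values in 0..num_classes-1
    controls how strongly each patch of vol_a is blended into vol_b, and the
    same integer map serves as the dense segmentation target of the pretext task.
    """
    assert vol_a.shape == vol_b.shape, "sub-volumes must have the same shape"
    d, h, w = vol_a.shape
    pd, ph, pw = patch_size
    assert d % pd == 0 and h % ph == 0 and w % pw == 0

    # one random fusion level per patch
    grid = np.random.randint(0, num_classes, size=(d // pd, h // ph, w // pw))
    # upsample the patch-wise map to voxel resolution
    label = np.repeat(np.repeat(np.repeat(grid, pd, axis=0), ph, axis=1), pw, axis=2)
    alpha = label.astype(np.float32) / (num_classes - 1)

    fused = alpha * vol_a + (1.0 - alpha) * vol_b
    return fused, label

The fused volume is the network input and the label map is the dense target, so the pretext task can be optimized with the same segmentation losses used downstream.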

The pretraining strategy is combined with our proposed PCT-Net to obtain a pretrained model that can be applied to the segmentation of different objects from 3D medical images after fine-tuning with a small set of labeled data.
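
For illustration, the snippet below shows one way to load pretrained weights into a PCT-Net instance before fine-tuning; the checkpoint key name is a hypothetical placeholder, and in practice the demo configuration files handle weight loading.

import torch

def load_pretrained(model: torch.nn.Module, ckpt_path: str) -> torch.nn.Module:
    # `model` is assumed to be a PCT-Net instance created elsewhere (e.g. by
    # the PyMIC pipeline from the demo configuration files)
    checkpoint = torch.load(ckpt_path, map_location="cpu")
    # checkpoints are often wrapped in a dict; fall back to the raw state dict
    state_dict = checkpoint.get("model_state_dict", checkpoint)
    # strict=False reuses all layers whose names and shapes match and keeps a
    # randomly initialized prediction head for the new downstream classes
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print("missing keys:", missing)
    print("unexpected keys:", unexpected)
    return model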

Datasets

We used 10k CT volumes from public datasets and 98k private CT volumes for pretraining.

Get Started

Main Requirements

torch==1.10.2
PyMIC

To use PyMIC, please download the latest code from the master branch and add the path of the PyMIC source code to the PYTHONPATH environment variable. See bash.sh for an example.

Demo Data

In this demo, we use PCT-Net for left atrial segmentation. The dataset can be downloaded from PYMIC_data.

The dataset, network, and training/testing settings can be found in the configuration files demo/pctnet_scratch.cfg and demo/pctnet_pretrain.cfg, for training from scratch and for training with the pretrained weights, respectively.

After downloading the data, edit the value of root_dir in the configuration files, and make sure the paths to the images are correct.
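
To verify the edited path, the demo .cfg files can be parsed as ini-style files, for example with the short check below; the [dataset] section name is an assumption here, so compare it with the actual contents of the demo files.

import configparser
import os

config = configparser.ConfigParser(interpolation=None)
config.read("demo/pctnet_scratch.cfg")
root_dir = config["dataset"]["root_dir"]   # assumed section/key layout
print("root_dir =", root_dir)
print("exists:", os.path.isdir(root_dir))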

Training

python train.py demo/pctnet_scratch.cfg

or

python train.py demo/pctnet_pretrain.cfg

Inference

python predict.py demo/pctnet_scratch.cfg

or

python predict.py demo/pctnet_pretrain.cfg

Evaluation

python $PyMIC_path/pymic/util/evaluation_seg.py -cfg demo/evaluation.cfg

You may need to edit demo/evaluation.cfg to specify the path of the segmentation results before evaluating the performance.
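
For reference, the Dice score commonly reported by such segmentation evaluation can be computed as in the standalone sketch below; this is an illustration, not the PyMIC implementation.

import numpy as np

def binary_dice(seg, gt, eps=1e-6):
    """Dice similarity coefficient between a binary prediction and ground truth."""
    seg = np.asarray(seg, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(seg, gt).sum()
    return float((2.0 * intersection + eps) / (seg.sum() + gt.sum() + eps))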

🙋‍♀️ Feedback and Contact

  • Email
  • Webpage
  • Social media

🛡️ License

This project is under the CC-BY-NC 4.0 license. See LICENSE for details.

🙏 Acknowledgement

Part of the code is adapted from MONAI.

📝 Citation

If you find this repository useful, please consider citing this paper:

@article{John2023,
  title={MIS-FM: Medical Image Segmentation Foundation Model Pretrained with Large-Scale Unannotated 3D Images using Volume Fusion},
  author={John},
  journal={arXiv preprint arXiv:},
  year={2023}
}
