The document describes a presentation on label-efficient generalizable deep learning for medical image segmentation. The presentation will discuss challenges with domain shift and label scarcity in medical image segmentation using deep learning. It will present methods that use dual cycle alignment, dual domain knowledge transfer, dual self-ensembling adversarial learning, and gradient-based meta-hallucination learning to overcome these challenges and enable label-efficient generalizable segmentation across domains. Evaluation is done on the Multi Modality Whole Heart Segmentation dataset, where the methods outperform existing unsupervised domain adaptation approaches with limited source labels.
[IAIM 2023 - Poster] Label-efficient Generalizable Deep Learning for Medical Image Segmentation
International Conference on AI in Medicine
August 5–7, 2023, Singapore
Label-efficient Generalizable Deep Learning
for Medical Image Segmentation
Ziyuan Zhao
Institute for Infocomm Research (I2R), A*STAR, Singapore
School of Computer Science and Engineering (SCSE), Nanyang Technological University, Singapore
Paper ID: 22
Accurate segmentation in multi-modality medical images is important
for disease diagnosis and treatment.
While deep learning methods have achieved considerable success in
medical image segmentation, they are still hampered by:
Domain Shift: Multi-modality medical images (e.g., MRI and CT) have
different visual appearances and distributions (Unsupervised Domain
Adaptation).
Label Scarcity: Annotating medical images is laborious, expensive,
and requires human expertise (semi-supervised, self-supervised learning, etc.).
Our methods address both issues jointly, enabling label-efficient
generalizable segmentation.
Dual Cycle Alignment Module
Bridges the appearance gap across domains by synthesizing source-like
and target-like images via adversarial learning.
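The cycle-alignment idea can be sketched with a CycleGAN-style cycle-consistency loss. The scalar "translators" G and F below are toy assumptions standing in for the actual generator networks:

```python
# Sketch of the cycle-consistency idea behind the Dual Cycle Alignment
# Module. G and F are toy, perfectly-inverse affine maps standing in
# for the source->target and target->source generators.

def l1_loss(a, b):
    """Mean absolute error between two equal-length vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(G, F, source_batch, target_batch):
    """L_cyc = |F(G(x)) - x| + |G(F(y)) - y|, averaged over both domains."""
    loss = 0.0
    for x in source_batch:              # source -> target -> source
        loss += l1_loss(F(G(x)), x)
    for y in target_batch:              # target -> source -> target
        loss += l1_loss(G(F(y)), y)
    return loss / (len(source_batch) + len(target_batch))

# Toy translators: exact inverses, so the cycle loss is (numerically) zero.
G = lambda img: [2.0 * v + 1.0 for v in img]    # "MR -> CT-like"
F = lambda img: [(v - 1.0) / 2.0 for v in img]  # "CT -> MR-like"

src = [[0.1, 0.5, 0.9]]
tgt = [[0.3, 0.7, 0.2]]
print(cycle_consistency_loss(G, F, src, tgt))   # close to 0 (inverse maps)
```

In the real module, G and F are trained jointly with adversarial discriminators; the cycle term keeps translated images anatomically faithful to their inputs.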
Dual Domain Knowledge Transfer
Intra-domain Teacher
Synthetic and real images from the same domain maintain a similar visual
appearance (Appearance Consistency).
Inter-domain Teacher
Transformed images should carry the same structural information as the
original ones (Structural Consistency).
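As a rough sketch (assuming a mean-teacher style update, with plain lists of floats in place of network weights), the teacher-student mechanism behind both teachers looks like:

```python
# Minimal sketch of the self-ensembling teacher-student mechanism used
# for dual-domain knowledge transfer: each teacher is an exponential
# moving average (EMA) of the student's weights, and a consistency loss
# pulls student predictions toward teacher predictions.

def ema_update(teacher_w, student_w, alpha=0.99):
    """teacher <- alpha * teacher + (1 - alpha) * student, per weight."""
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher_w, student_w)]

def consistency_loss(student_pred, teacher_pred):
    """Mean squared error between student and teacher predictions."""
    n = len(student_pred)
    return sum((s - t) ** 2 for s, t in zip(student_pred, teacher_pred)) / n

# One toy iteration: the teacher drifts slowly toward the student.
student = [0.50, -0.20, 0.10]
teacher = [0.40, -0.10, 0.00]
teacher = ema_update(teacher, student)

# Consistency between student and teacher predictions on the same input.
loss = consistency_loss([0.8, 0.1], [0.7, 0.2])
print(teacher, loss)
```

The intra-domain teacher enforces this consistency on same-appearance pairs, while the inter-domain teacher enforces it across translated/original pairs.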
Dual Self-ensembling Adversarial Learning
Integrate adversarial learning into our self-ensembling teacher-student
network in a mutually beneficial manner.
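The adversarial part can be illustrated with the standard binary cross-entropy objectives for the discriminator and the segmenter; the probabilities below are made-up stand-ins for discriminator outputs:

```python
# Illustrative sketch of the adversarial objective: a discriminator D is
# trained to tell source-branch outputs from target-branch outputs,
# while the segmenter is trained to fool it. Gradient machinery is
# omitted; only the two losses are shown.
import math

def bce(p, label):
    """Binary cross-entropy for one discriminator output p in (0, 1)."""
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

def discriminator_loss(d_src, d_tgt):
    """D should output 1 on source predictions and 0 on target predictions."""
    return bce(d_src, 1) + bce(d_tgt, 0)

def adversarial_loss(d_tgt):
    """The segmenter tries to make target outputs look like source (label 1)."""
    return bce(d_tgt, 1)

print(discriminator_loss(0.9, 0.1))  # small: D separates the domains well
print(adversarial_loss(0.1))         # large: the segmenter is not yet fooling D
```

Alternating these two updates with the teacher-student consistency terms is what makes the combination mutually beneficial.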
Gradient-based Meta-hallucination Learning
We introduce a “hallucinator” that augments the training set, narrowing the
domain gap at the image level and generating useful samples that boost
segmentation performance.
Hallucination-consistent Self-ensembling Learning
We impose the hallucination-consistency loss in the meta-test step, since we
expect such regularization on unseen data to yield robust adaptation.
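This regularizer can be sketched as a simple consistency penalty between predictions on a real sample and its hallucinated counterpart; the mixing-based hallucinator and the two predictors below are toy assumptions:

```python
# Sketch of the hallucination-consistency regularizer: predictions on a
# real sample and on its hallucinated (augmented) counterpart should agree.

def hallucinate(x, ref, lam=0.7):
    """Toy hallucinator: mix a sample toward a reference-domain sample."""
    return [lam * a + (1.0 - lam) * b for a, b in zip(x, ref)]

def hallucination_consistency(pred_fn, x, x_hall):
    """Mean squared error between predictions on real and hallucinated inputs."""
    p, q = pred_fn(x), pred_fn(x_hall)
    return sum((a - b) ** 2 for a, b in zip(p, q)) / len(p)

x = [0.2, 0.4]
x_h = hallucinate(x, [0.9, 0.1])          # x_h ~= [0.41, 0.31]

pred_const = lambda img: [1.0]            # appearance-invariant predictor
pred_first = lambda img: [img[0]]         # appearance-sensitive predictor

loss_invariant = hallucination_consistency(pred_const, x, x_h)   # 0.0
loss_sensitive = hallucination_consistency(pred_first, x, x_h)   # ~= 0.0441
print(loss_invariant, loss_sensitive)
```

A model that ignores the hallucinated appearance shift pays no penalty, which is exactly the invariance the regularizer encourages on unseen data.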
Dataset
We employ the publicly available Multi-Modality Whole Heart
Segmentation (MM-WHS) 2017 dataset, which contains 20 unpaired
MR scans and 20 CT scans. For UDA, MR and CT are employed as the
source and target domains, respectively.
Comparison with Other Methods
Our methods outperform existing UDA approaches under source label
scarcity (1/4 of source labels) on the MM-WHS dataset.
We study an underexplored but valuable UDA setting and introduce two
innovative frameworks, LE-UDA and meta-hallucination, addressing domain
shift and source label scarcity in medical image segmentation.
Our proposed methods can be integrated with different models and easily
extended to various segmentation tasks and wider applications beyond
segmentation, such as PPI prediction [5] and 3D point cloud detection [4].
Acknowledgments
This work was supported by I2R. The author would like to thank
Prof. Guan Cuntai and Prof. S. Kevin Zhou for their generous guidance.
Poster sections: Introduction; Label-efficient UDA (Image Adaptation + Dual Teacher Learning + Adversarial Learning); Meta-hallucination; Results; Conclusion.
[Figures: segmentation performance of different approaches; example outputs of our translation; visual comparisons on the MM-WHS dataset]
References
[1] Zhao, Z., Zhou, F., Xu, K., Zeng, Z., Guan, C., & Zhou, S. K. LE-UDA: Label-efficient unsupervised domain adaptation for medical image segmentation. IEEE Transactions on Medical Imaging, 2023.
[2] Zhao, Z., Zhou, F., Zeng, Z., Guan, C., & Zhou, S. K. Meta-hallucinator: Towards few-shot cross-modality cardiac image segmentation. MICCAI 2022.
[3] Zhao, Z., Xu, K., Li, S., Zeng, Z., & Guan, C. MT-UDA: Towards unsupervised cross-modality medical image segmentation with limited source labels. MICCAI 2021.
[4] Zhao, Z., Xu, M., Qian, P., Pahwa, R. S., & Chang, R. DA-CIL: Towards Domain Adaptive Class-Incremental 3D Object Detection. BMVC 2022.
[5] Zhao, Z., Qian, P., Yang, X., Zeng, Z., Guan, C., Tam, W. L., & Li, X. SemiGNN-PPI: Self-Ensembling Multi-Graph Neural Network for Efficient and Generalizable Protein-Protein Interaction Prediction. IJCAI 2023.
Contact
For more information, please contact zhaoz@i2r.a-star.edu.sg,
or connect with me on LinkedIn or ResearchGate.
https://jacobzhaoziyuan.github.io/