High-quality ultrasound (US) imaging demands a large number of measurements, which increases cost, size, and power requirements. Low-powered, portable, and 3D ultrasound imaging systems therefore require reconstruction algorithms that can produce high-quality images from fewer receive measurements. A number of model-specific methods have been proposed, but they do not work well under perturbation. For instance, compressive deconvolution ultrasound provides reasonable quality with limited measurements; however, it has its own downsides, such as high computational cost and the need for accurate estimation of the point spread function (PSF). Another major limitation of conventional methods is that they require RF or baseband signals, which are difficult to obtain from portable US systems. To address these issues, in this study we design a novel deep deconvolution model for image-domain deconvolution. The proposed deep deconvolution (DeepDeconv) model can be trained in an unsupervised fashion, alleviating the need for paired high- and low-quality images. The model was evaluated on both phantom and in-vivo scans for various sampling configurations. The proposed DeepDeconv significantly enhances the details of anatomical structures and, using unsupervised learning, achieved average gains of 2.14 dB, 4.96 dB, and 0.01 units in CR, PSNR, and SSIM values, respectively, which are comparable to the supervised method.
Unsupervised Deconvolution Neural Network for High Quality Ultrasound Imaging
When you do take the home pregnancy test, it doesn't quite seem real.
But when you see the baby and the heartbeat on the ultrasound, it's so incredible.
- Danica McKellar (American Actress)
Shujaat Khan, Jaeyoung Huh, Jong Chul Ye
Department of Bio and Brain Engineering, KAIST, Daejeon, Republic of Korea
This work was supported by the National Research Foundation of Korea under Grant NRF-2020R1A2B5B03001980.
Introduction
Bio Imaging, Signal Processing and Learning (BiSPL), KAIST
Application Needs
- Reduced number of receive channels for:
  - Ultra-fast US
  - Portable US
  - 3D US
- High-quality imaging
Adaptive Deconvolution Beamformer
Deconvolution Model
The RF image is modeled as the convolution of the point spread function (PSF) with the tissue reflectivity function (TRF).

Deconvolution Ultrasound (Limitations)
- Beamformer and deconvolution filter matrices should be spatio-temporally varying.
- Exact calculation requires high runtime.
- Precalculating the nonlinear mapping τ requires huge memory to store.

Adaptive Deconvolution Beamformer: a deconvolution filter combined with adaptive beamformer weights, applied to the RF data.
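The convolution model sketched above can be written compactly (our notation, assuming a standard additive-noise linear model):

```latex
y(\mathbf{r}) \;=\; (h \ast x)(\mathbf{r}) + n(\mathbf{r})
```

where $y$ is the RF image, $h$ the PSF, $x$ the TRF, and $n$ measurement noise; deconvolution aims to recover $x$ from $y$.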
Deep Deconvolution Beamformer
Adaptive deconvolution beamformer vs. encoder-decoder CNN [1][2]
The analysis basis corresponds to the encoder and the synthesis basis to the decoder. The similarity between the two equations implies that deconvolution beamforming can be learned using an encoder-decoder CNN.
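One way to make the stated similarity explicit (our notation, following the framing of [2]): an encoder-decoder CNN computes a synthesis of nonlinearly selected analysis coefficients,

```latex
f_{\Theta}(y) \;=\; \tilde{B}\,\sigma\!\left(B^{\top} y\right),
```

where $B^{\top}$ plays the role of the analysis basis (encoder), $\tilde{B}$ the synthesis basis (decoder), and $\sigma$ the nonlinearity. A deconvolution beamformer likewise applies a synthesis-like deconvolution filter to analysis-like adaptively beamformed RF data, so the spatio-temporally varying filter can be absorbed into learned encoder-decoder weights.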
Proposed Unsupervised Deconvolution Neural Network
Loss function (cycle-consistency diagram): generator G_AB maps Input(A) to Output(B), and G_BA maps Target(B) to Output(A). An LS-GAN loss ℓ_GAN is applied to each generator's output, and cycle-consistency losses ℓ_cycle compare Input(A) with Recon(A) = G_BA(G_AB(A)) and Target(B) with Recon(B) = G_AB(G_BA(B)).
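A rough sketch of how these losses combine (function names and the cycle weight `lam` are illustrative, not the paper's implementation):

```python
import numpy as np

def lsgan_loss(d_out, target):
    """Least-squares GAN loss: mean squared distance to the label."""
    return np.mean((d_out - target) ** 2)

def cycle_loss(x, x_recon):
    """L1 cycle-consistency loss between an image and its reconstruction."""
    return np.mean(np.abs(x - x_recon))

def generator_loss(d_fake_B, d_fake_A, A, recon_A, B, recon_B, lam=10.0):
    """Total generator objective: adversarial terms plus weighted cycle terms.

    d_fake_B : discriminator scores for G_AB(A); pushed toward the real label 1
    d_fake_A : discriminator scores for G_BA(B)
    recon_A  : G_BA(G_AB(A));  recon_B : G_AB(G_BA(B))
    lam      : cycle-consistency weight (illustrative value)
    """
    adv = lsgan_loss(d_fake_B, 1.0) + lsgan_loss(d_fake_A, 1.0)
    cyc = cycle_loss(A, recon_A) + cycle_loss(B, recon_B)
    return adv + lam * cyc
```

When the generators fool both discriminators and the cycle reconstructions are exact, every term vanishes, which is the unsupervised training target.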
(a) Generator network: a U-Net-style encoder-decoder with feature widths Input-64-128-256-512-1024-512-256-128-64-Output. Encoder blocks use 3x3 Conv2D (stride 1, ReLU) with batch normalization followed by 2x2 max-pooling; decoder blocks use 2x2 up-sampling with concatenation skip-connections; a 1x1 Conv2D (ReLU) produces the output.
(b) Discriminator network: 3x3 Conv2D (stride 2, ReLU) layers with batch normalization and feature widths Input-256-512-1024, followed by a 1x1 Conv2D output layer.
Figure: Proposed network architecture.
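The 2x2 pooling/up-sampling symmetry is what keeps encoder and decoder feature maps shape-compatible for the concatenation skips; a minimal NumPy illustration (helper names are ours, not from the paper):

```python
import numpy as np

def maxpool2x2(x):
    """2x2 max-pooling: halves each spatial dimension (assumes even sizes)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2x2(x):
    """2x2 nearest-neighbour up-sampling: doubles each spatial dimension."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

# After pool -> upsample, a feature map returns to the encoder's spatial
# size, so encoder and decoder maps can be concatenated channel-wise.
x = np.arange(16, dtype=float).reshape(4, 4)
restored = upsample2x2(maxpool2x2(x))
```

The same shape argument applies at every level of the 64-to-1024-channel pyramid, which is why each skip connection lines up with exactly one decoder stage.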
- No PSF estimation required.
- No paired high-quality data required.
- A universal model for variable sub-sampling rates provides better generalization.
- Cycle-consistency and LS-GAN loss functions alleviate the need for a paired dataset, making the model easy to deploy.
- The proposed model can generate high-quality B-mode images from low-quality DAS images formed from sub-sampled RF signals.
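The low-quality inputs come from delay-and-sum (DAS) over a reduced receive aperture; a minimal sketch (hypothetical helper; real DAS also uses apodization and fractional-sample delays):

```python
import numpy as np

def das_beamform(rf, delays, active):
    """Delay-and-sum one focal point over a subset of receive channels.

    rf     : (n_channels, n_samples) array of RF traces
    delays : per-channel integer sample delays for this focal point
    active : indices of the sub-sampled receive channels
    """
    return sum(rf[c, delays[c]] for c in active) / len(active)

# Synthetic traces: each channel has a unit echo exactly at its delay, so a
# correctly delayed sum recovers the reflector regardless of aperture size.
rf = np.zeros((8, 32))
delays = np.array([3, 5, 6, 7, 7, 6, 5, 3])  # illustrative focal delays
for c in range(8):
    rf[c, delays[c]] = 1.0

full = das_beamform(rf, delays, range(8))     # full-aperture analogue
sub = das_beamform(rf, delays, [0, 2, 4, 6])  # sub-sampled aperture
```

On real (noisy, speckled) data the sub-sampled image degrades in contrast and resolution, which is exactly the degradation the proposed network is trained to undo.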
Results
Figure: B-mode ultrasound images (axial depth 0-45 mm, lateral length 0-38.2 mm, dynamic range 0 to -60 dB) comparing DAS (input), DeepBF (beamformer), DeepDeconv (supervised), and DeepDeconv (unsupervised) on anechoic phantom, trachea, right lobe, and carotid artery scans at 64-, 8-, and 4-channel configurations: (a) fully-sampled channel data, (b) sub-sampled channel data, (c) performance statistics.
The proposed model enhances image quality by improving the contrast and resolution of low quality DAS images.
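The reported CR and PSNR gains can be reproduced with standard definitions; a sketch using one common convention (not necessarily the exact formulas used in the study; SSIM is typically computed with a library such as scikit-image's `structural_similarity`):

```python
import numpy as np

def contrast_ratio_db(inside, outside):
    """CR in dB between mean intensities of two regions (e.g. cyst vs. background)."""
    return 20.0 * np.log10(np.mean(outside) / np.mean(inside))

def psnr_db(ref, img, peak=1.0):
    """Peak signal-to-noise ratio of an image against a reference."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

With these conventions, a 2.14 dB CR gain means the anechoic region got roughly 28% darker relative to the background after deconvolution.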
Conclusion
An unsupervised image-to-image learning-based deep deconvolution model is proposed for medical ultrasound imaging.
The proposed model enhances image quality by improving the contrast and resolution of low-quality DAS images.
We show that a universal model can improve the visualization quality of DAS images acquired at various sub-sampling rates.
An unsupervised image-domain learning strategy alleviates the need for a paired dataset, making the model easy to deploy.
References
[1] S. Khan, J. Huh, and J. C. Ye, "Adaptive and Compressive Beamforming Using Deep Learning for Medical Ultrasound," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 67, no. 8, pp. 1558-1572, Aug. 2020, doi: 10.1109/TUFFC.2020.2977202.
[2] J. C. Ye and W. K. Sung, "Understanding Geometry of Encoder-Decoder CNNs," in Proceedings of the 36th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, K. Chaudhuri and R. Salakhutdinov, Eds., vol. 97. Long Beach, California, USA: PMLR, Jun. 2019, pp. 7064-7073.
[3] J. Duan, H. Zhong, B. Jing, S. Zhang, and M. Wan, "Increasing Axial Resolution of Ultrasonic Imaging With a Joint Sparse Representation Model," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 63, no. 12, pp. 2045-2056, Dec. 2016, doi: 10.1109/TUFFC.2016.2609141.