AGENDA
1. Introduction.
2. Problem Description.
3. Literature Reviews.
4. Objectives.
5. Contributions.
6. Future Works.
7. Research Schedule.
1- Introduction
1. Introduction
A brain tumor is a growth of cancerous cells inside or around the brain. Many different categories of brain tumors exist. A few originate in the brain itself, in which case they are characterized as primary. Others spread to this location from elsewhere in the body through metastasis and are characterized as secondary.
• Glioblastoma (malignant brain tumor) cells have irregular shapes that can spread into the brain, and glioblastoma is among the leading causes of cancer death.
• Brain tumor segmentation seeks to separate healthy tissue from tumorous regions such as the enhancing tumor, necrosis, and surrounding edema.
• Magnetic resonance imaging (MRI) provides detailed images of the brain and is one of the most common tests used to diagnose brain tumors, but manual segmentation of the large quantity of data produced by MRI is time-consuming, so automatic segmentation techniques are needed.
The first four images from left to right show the MRI modalities used as input; the fifth image shows the ground truth labels, where green = edema, yellow = enhancing tumor, and red = necrosis and non-enhancing tumor.
Manual brain tumor segmentation is a challenging and tedious task for human experts due to the variability of tumor appearance, the unclear borders of the tumor, and the need to evaluate multiple MR images with different contrasts simultaneously. In addition, manual segmentation is often prone to significant intra- and inter-rater variability. Automatic or semi-automatic methods are therefore necessary.
2- Problem Description
2. Problem Description
• Gliomas and glioblastomas are much more difficult to localize: these tumors are often diffuse, poorly contrasted, and extend into the surrounding tissue.
• Another fundamental difficulty with segmenting brain tumors is that they can appear anywhere in the brain, in almost any shape and size.
• The brain tumor segmentation problem exhibits severe class imbalance: healthy voxels comprise 98% of total voxels, 0.18% belong to necrosis, 1.1% to edema and non-enhancing tumor, and 0.38% to enhancing tumor.
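To illustrate how severe this imbalance is, one common mitigation is inverse-frequency class weighting. The sketch below uses the voxel fractions quoted above; the weighting scheme itself is a standard illustration, not the specific method proposed in this work:

```python
# Voxel fractions quoted above for the BraTS data (approximate).
fractions = {
    "healthy": 0.98,
    "necrosis": 0.0018,
    "edema_non_enhancing": 0.011,
    "enhancing": 0.0038,
}

# Inverse-frequency weights, normalized to sum to 1: rare classes
# (necrosis, enhancing tumor) receive far larger weights than healthy tissue.
inv = {label: 1.0 / frac for label, frac in fractions.items()}
total = sum(inv.values())
weights = {label: w / total for label, w in inv.items()}
```

With these fractions, the necrosis class receives a weight several hundred times larger than the healthy class, which is why unweighted losses tend to collapse toward predicting healthy tissue everywhere.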
• In addition, brain tumor MRI data obtained from clinical scans or synthetic databases are inherently complex. The MRI devices and protocols used for acquisition can vary dramatically from scan to scan, imposing intensity biases and other variations on each image slice in the dataset. The need for several modalities to effectively segment tumor sub-regions adds further complexity.
Main drawbacks or shortcomings of recent studies
• One limitation in brain tumor segmentation is overfitting, which refers to a model that performs well on the training dataset but does not perform well on new data.
• The complexity of a neural network model is defined by both its structure and its parameters. We can therefore reduce the complexity of the network architecture by reducing the layers or parameters, or instead focus on methods that artificially increase the amount of training data without changing the network architecture.
• In addition, existing models for automatic brain segmentation using CNNs face class imbalance in the labeled data. For example, in segmentation of a brain tumor or a white matter lesion, the normal brain region is much larger than the abnormal region.
• Recent studies on brain tumor segmentation still suffer from limited accuracy.
• Most of the proposed methods emphasized the shortcomings of working with deep CNN models. First, there is the computational requirement: analysing, manipulating, and processing each voxel in a volume is computationally expensive.
3- Literature Reviews
3. Literature Reviews
A recent study of deep neural network architectures and their applications to medical image analysis was presented in [15]:
1. Pereira et al. [1] suggested a 2D CNN network with a small kernel size (i.e. 3×3). They train two distinct models, one for HGG and another for LGG. They also use a max-pooling layer of stride 2 and apply dropout to the dense layers only. The model utilizes the Leaky rectified linear unit (LeakyReLU) activation function [2].
2. Havaei et al. [3] constructed a dual-path 2D CNN brain tumor segmentation network, which contains local and global paths that employ different convolution kernel sizes to extract different contextual feature information. However, patch-wise architectures lack spatial continuity and need large storage space, leading to low efficiency.
3. Ronneberger et al. [4] designed a symmetric fully convolutional network called U-Net, which consists of a contracting path that extracts image spatial features and an expanding path that generates a segmentation map from the encoded features. U-Net has been widely used in a variety of medical image segmentation tasks.
4. Cahall et al. [5] proposed a new image segmentation framework using Inception modules and the U-Net image segmentation architecture; their framework includes two learning regions, intra-tumoral structures and glioma sub-regions. To further improve performance, a multi-Inception block in each block increases the network's learning capacity, and up-skip connections are also utilized to optimize the segmentation results [6].
5. The MultiResUNet, recently proposed by Ibtehaz and Rahman [7], combined a U-Net with residual Inception modules for multi-scale feature extraction; the authors applied their architecture to several multimodal medical imaging datasets.
6. Cheng et al. [8] presented a novel memory-efficient cascade 3D U-Net, which achieved comparable segmentation accuracy with less memory and computation consumption.
4- Objectives
4. Objectives
The research will focus on the following key objectives:
1. Building a Recurrent Convolutional Neural Network (RCNN) based on U-Net as well as a Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net. The proposed models utilize the power of the U-Net and residual networks.
2. Proposing a hybrid two-track U-Net, in which each track has a different number of layers and utilizes a different kernel size, with a hybrid loss function for brain tumor segmentation that can address the class-imbalance problem.
3. Proposing a novel Multi Inception Residual Nested U-Net that integrates residual and Inception modules; the encoder and decoder are connected via a sequence of nested pathways to enhance brain tumor segmentation and reduce the number of parameters.
4. Proposing a novel hybrid densely connected U-Net (H-DenseUNet), which consists of a 2D DenseUNet for efficiently extracting intra-slice features and a 3D counterpart for aggregating volumetric context.
5. Exploring further studies in related areas and incorporating them into the current research problem to obtain better results.
5- Contributions
• Current studies and our experimental results.
5. Contributions
1. HTTU-Net: Hybrid Two Track U-Net for Automatic Brain Tumor
Segmentation.
2. MIResU-Net++: Multi Inception Residual Nested U-Net to Enhance Brain Tumor Segmentation.
The contributions of this work can be summarized as follows.
• HTTU-Net not only extracts more semantic information but also gives more consideration to the information of small-scale brain tumors, which improves the segmentation of brain tumors.
• HTTU-Net also updates the U-Net network by adding batch normalization at the end of each block. In our architecture, the first track focuses on the tumor's form and size while the second track captures the contextual information. Each track consists of a different number of convolution blocks and uses a different kernel size to handle the different tumor sizes.
• We have introduced a new hybrid loss function, combining the Focal Loss and Generalized Dice Loss functions, to mitigate the class imbalance.
• We demonstrate that the proposed strategy improves the precision of the original U-Net and also alleviates the issue of overfitting. We experiment with the BraTS 2018 dataset, and our architecture shows superior performance.
Fig.The proposed HTTU-Net architecture. The first and second tracks are shown respectively in the colors blue and red.
TWO-TRACK U-NET ARCHITECTURE
THE FIRST TRACK
The first track's contracting part consists of 5 convolutional blocks. For all convolutional layers, this track utilizes 3×3 kernels. The number of filters for the first, second, third, fourth, and fifth blocks is 64, 128, 256, 512, and 1024.
THE SECOND TRACK
The second track's contracting part consists of 4 convolutional blocks. Each block has two convolutional layers, with 5×5 kernels for all layers in this track. The number of filters for the four blocks is 64, 128, 256, and 512.
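As a rough illustration of how kernel size and depth trade off, the parameter count of each track's contracting part can be estimated. This Python sketch assumes two convolutional layers per block for the first track (the slide states this only for the second track), four input channels for the four MRI modalities, and biases included; it is an estimate, not the authors' exact count:

```python
def conv_params(k, c_in, c_out):
    # k x k convolution: weight tensor plus one bias per output channel
    return k * k * c_in * c_out + c_out

def contracting_params(kernel, filters, in_ch=4):
    # Two convolutional layers per block (stated for the second track,
    # assumed here for the first); in_ch=4 for the four MRI modalities.
    total, c = 0, in_ch
    for f in filters:
        total += conv_params(kernel, c, f) + conv_params(kernel, f, f)
        c = f
    return total

track1 = contracting_params(3, [64, 128, 256, 512, 1024])  # 3x3 kernels
track2 = contracting_params(5, [64, 128, 256, 512])        # 5x5 kernels
```

Under these assumptions the deeper first track, despite its smaller 3×3 kernels, carries more encoder parameters than the shallower 5×5 second track, because the 512-to-1024 block dominates the count.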
HYBRID LOSS
• The selection of the loss function becomes especially important in severely imbalanced problems such as brain tumor segmentation.
• We apply the sum of the focal loss function and the Generalized Dice Loss (GDL) to approach this issue:
HL = GDL + FL
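A minimal NumPy sketch of this hybrid loss, assuming softmax probabilities of shape (voxels, classes) and one-hot ground truth. The focal term with γ = 2 and the squared-inverse-volume class weights follow the cited formulations [9], [10], but the exact hyperparameters here are illustrative:

```python
import numpy as np

def focal_loss(probs, onehot, gamma=2.0, eps=1e-7):
    # Down-weights easy voxels: pt is the predicted probability of the true class.
    pt = np.clip((probs * onehot).sum(axis=1), eps, 1.0)
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

def generalized_dice_loss(probs, onehot, eps=1e-7):
    # Class weights proportional to 1 / (class volume)^2 counteract imbalance.
    w = 1.0 / (onehot.sum(axis=0) ** 2 + eps)
    intersect = (w * (probs * onehot).sum(axis=0)).sum()
    union = (w * (probs + onehot).sum(axis=0)).sum()
    return float(1.0 - 2.0 * intersect / (union + eps))

def hybrid_loss(probs, onehot, gamma=2.0):
    # HL = GDL + FL, as on this slide
    return generalized_dice_loss(probs, onehot) + focal_loss(probs, onehot, gamma)
```

A perfect prediction drives both terms toward zero, while a prediction that misses the rare classes is penalized by both the focal term (hard voxels) and the GDL term (small-volume classes).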
Model parameter setting
Experimental Results for this work
EVALUATION METRICS
DSC = 2TP / (FP + 2TP + FN) (4)
Sensitivity = TP / (TP + FN) (5)
Specificity = TN / (TN + FP) (6)
h(A, B) = max_{a∈A} min_{b∈B} d(a, b) (7)
where a and b are points in the sets A and B, respectively, and d(a, b) is the Euclidean distance between these points.
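These metrics can be sketched directly from their definitions in NumPy for binary masks. The directed Hausdorff distance below is the one-sided h(A, B); note that BraTS reports the 95th-percentile variant, which replaces the max with a percentile, and that specificity is computed with the standard TN / (TN + FP) definition:

```python
import numpy as np

def confusion(pred, gt):
    # Binary masks -> (TP, FP, FN, TN) voxel counts
    tp = int(np.sum((pred == 1) & (gt == 1)))
    fp = int(np.sum((pred == 1) & (gt == 0)))
    fn = int(np.sum((pred == 0) & (gt == 1)))
    tn = int(np.sum((pred == 0) & (gt == 0)))
    return tp, fp, fn, tn

def dsc(pred, gt):
    tp, fp, fn, _ = confusion(pred, gt)
    return 2 * tp / (fp + 2 * tp + fn)      # Eq. (4)

def sensitivity(pred, gt):
    tp, _, fn, _ = confusion(pred, gt)
    return tp / (tp + fn)                   # Eq. (5)

def specificity(pred, gt):
    _, fp, _, tn = confusion(pred, gt)
    return tn / (tn + fp)                   # standard TN / (TN + FP)

def directed_hausdorff(A, B):
    # h(A, B) = max over a in A of (min over b in B of d(a, b))
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return float(d.min(axis=1).max())
```

The pairwise-distance matrix makes this Hausdorff sketch O(|A|·|B|) in memory, so real evaluations use surface voxels only or a library routine.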
Performance on BraTS'2018 Training Dataset
In our experiments, 160 subjects from the BraTS training dataset are used for training and 40 subjects for validation. We extract 25,000 multimodal patches from each case to form a 4,000,000-patch training set.
Fig. Sample segmentation results of four HGG cases from the BraTS'2018 training dataset. Labels are shown in different colors: green for edema, yellow for enhancing tumor, and red for necrosis and non-enhancing tumor.
Table 1. Quantitative results of segmentation on the BraTS'2018 training dataset using Dice and Sensitivity metrics

             Dice                    Sensitivity
             ET     WT     TC        ET     WT     TC
Mean         0.741  0.852  0.812     0.768  0.885  0.820
Std. Dev.    0.282  0.110  0.252     0.269  0.150  0.269
Median       0.801  0.879  0.878     0.859  0.908  0.874
25 quantile  0.743  0.840  0.784     0.752  0.868  0.803
75 quantile  0.846  0.905  0.897     0.951  0.954  0.948

Table 2. Quantitative results of segmentation on the BraTS'2018 training dataset using Specificity and Hausdorff distance metrics

             Specificity             Hausdorff95
             ET     WT     TC        ET     WT     TC
Mean         0.999  0.998  0.997     3.304  8.250  3.301
Std. Dev.    0.042  0.023  0.042     1.824  3.315  1.872
Median       0.999  0.999  1         3.211  8.401  3.251
25 quantile  0.999  0.998  1         2.503  6.172  4.103
75 quantile  0.999  0.999  1         4.924  10.67  4.254
BraTS’2018 Testing Performance
Table 3. Quantitative segmentation results for testing on BraTS 2018 using Dice and Sensitivity metrics

             Dice                    Sensitivity
             ET     WT     TC        ET     WT     TC
Mean         0.745  0.865  0.808     0.78   0.883  0.80
Std. Dev.    0.211  0.103  0.223     0.286  0.151  0.263
Median       0.815  0.883  0.895     0.868  0.894  0.882
25 quantile  0.76   0.858  0.77      0.703  0.87   0.77
75 quantile  0.887  0.915  0.923     0.943  0.941  0.972

Table 4. Quantitative segmentation results for testing on BraTS 2018 using Specificity and Hausdorff distance metrics

             Specificity             Hausdorff95
             ET     WT     TC        ET     WT     TC
Mean         0.999  0.999  0.998     4.43   7.53   8.811
Std. Dev.    0.053  0.031  0.032     2.441  3.461  2.961
Median       0.999  0.999  0.999     4.29   5.871  7.12
25 quantile  0.999  0.998  0.998     3.20   3.76   5.55
75 quantile  1      0.999  0.999     5.09   5.09   9.05
Fig. Boxplots of DSC, Sensitivity, Specificity, and Hausdorff distance obtained from BraTS'2018. The 'x' marks the mean score; '●' marks outliers.
Table. Comparison of our proposed model with one-pathway models (Dice)

Methods          ET     WT     TC
Original U-Net   0.69   0.852  0.794
First path       0.739  0.850  0.80
Second path      0.732  0.859  0.792
Two-pathways     0.745  0.865  0.808
Conclusion
• In this paper, we introduced an automatic approach for brain tumor segmentation using the 2D HTTU-Net architecture. The proposed technique has been quantitatively evaluated on the BraTS'2018 dataset. It contains two tracks; each one consists of a different number of convolution blocks and uses a different kernel size to handle the different tumor sizes.
• We also developed a new hybrid loss function to alleviate the class imbalance problem by combining the focal loss and Generalized Dice Loss functions. Higher performance is achieved through the HTTU-Net architecture, which handles brain tumors that can appear anywhere in the brain, in almost any shape and size.
• The evaluation of the proposed approach verifies that our results are very comparable to those obtained manually by experts.
The major contributions of this work are:
1. We propose an end-to-end MIResU-Net++ model for the brain tumor segmentation task. MIResU-Net++ extracts richer semantic information and, in addition, extracts information about small-scale brain tumors, which improves segmentation accuracy.
2. MIResU-Net++ integrates residual modules and Inception modules with the U-Net architecture to make the proposed network deeper and wider; in MIResU-Net++ the encoder and decoder sub-networks are connected through a series of nested pathways.
3. We report experimental results on two brain tumor segmentation datasets, BraTS 2019 and BraTS 2020. The results show that our models with Nested U-Net (U-Net++), Multi Inception Residual U-Net (MIResU-Net), and Multi Inception Residual Nested U-Net (MIResU-Net++) outperform the U-Net baseline. In addition, the proposed network is efficient and balances the tradeoff between the number of parameters and segmentation accuracy.
Material and Methods
Multi Inception Residual U-Net (MIResU-Net)
We modify the U-Net architecture with an Inception module. Moreover, we also add a residual connection due to its effectiveness in the segmentation of biomedical images [37]. Fig. 1(c) shows the Inception-Res block. The Inception-Res module implemented in our network includes multiple sets of 1×1 convolutions, 7×1 convolutions, and 1×7 convolutions.
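Factorizing a full 7×7 convolution into a 7×1 followed by a 1×7 convolution, as in the Inception-Res block above, cuts the weight count from k² to 2k per channel pair. A small sketch of the arithmetic (the channel count is illustrative):

```python
def full_conv_weights(k, c_in, c_out):
    # One full k x k convolution
    return k * k * c_in * c_out

def factorized_weights(k, c_in, c_out):
    # k x 1 followed by 1 x k, keeping c_out channels in between
    return k * c_in * c_out + k * c_out * c_out

k, c = 7, 64
full = full_conv_weights(k, c, c)   # 49 * 64 * 64 weights
fact = factorized_weights(k, c, c)  # 14 * 64 * 64 weights
ratio = fact / full                 # 2/k when c_in == c_out
```

For k = 7 the factorized pair needs only 2/7 of the weights of the full convolution, which is one reason the network can afford large receptive fields in every block.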
Nested skip pathways (U-Net++)
We re-designed the skip pathways to transform the connectivity of the encoder and decoder sub-networks, and we use a dense Inception-Res block to facilitate the model's capacity for accurate brain tumor segmentation. As illustrated in Fig. 4.1(b), the skip pathway between InceptionRes Block(0,0) and InceptionRes Block(0,4) consists of a dense convolution block with three Inception-Res blocks.
Multi Inception Residual Nested U-Net (MIResU-Net++)
In the MIResU-Net++ model, as shown in Fig. 4.1(a), we replace the sequence of two convolutional layers with the proposed Multi Inception Res block.
We compute the value of W as follows:
W = α × F (3)
Here, F is the number of filters in the corresponding layer of U-Net and α is a scalar coefficient. In our model we assign α = 1.8. Inside a MultiInceptionRes block, we assign [W/12], [W/6], [W/4], and [W/2] filters to the successive convolutional layers, respectively, similar to the U-Net architecture.
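A sketch of this allocation rule in Python; the floor rounding of the bracketed terms and the concrete U-Net filter list are assumptions made for illustration:

```python
import math

ALPHA = 1.8  # scalar coefficient from Eq. (3)

def multi_inception_filters(F, alpha=ALPHA):
    # W = alpha * F; the block's successive convolutional layers receive
    # [W/12], [W/6], [W/4], [W/2] filters (floor rounding assumed here).
    W = alpha * F
    return [math.floor(W / d) for d in (12, 6, 4, 2)]

# Illustrative U-Net encoder filter counts per level
filters_per_block = {F: multi_inception_filters(F) for F in (32, 64, 128, 256)}
```

For F = 64 this gives W = 115.2 and layer widths [9, 19, 28, 57], so the block's total width stays close to the original U-Net level while spreading filters across the multi-scale branches.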
Multi Inception Residual Nested U-Net (MIResU-Net++)
MultiInceptionRes Nested U-Net architectural details
Experimental Results for this work
DATASETS
• In this work, we mainly adopt the BraTS 2019 and BraTS 2020 brain tumor MRI datasets for performance evaluation.
• Evaluation results on the BraTS 2019 training dataset and the BraTS 2019 validation dataset are disseminated on the challenge leaderboard website.
Evaluation Results on BraTS 2019 Training Dataset
Table 4.2. Evaluation results on the BraTS 2019 training dataset

             DSC                        Sensitivity                Specificity
             Whole  Core   Enhancing    Whole  Core   Enhancing    Whole  Core   Enhancing
Mean         0.888  0.876  0.819        0.883  0.869  0.857        0.994  0.998  0.997
Std. Dev.    0.091  0.154  0.171        0.130  0.169  0.152        0.005  0.003  0.005
Median       0.92   0.918  0.869        0.926  0.920  0.903        0.997  0.999  0.998
25 quantile  0.877  0.874  0.788        0.864  0.861  0.814        0.994  0.997  0.997
75 quantile  0.947  0.939  0.911        0.960  0.957  0.943        0.999  0.998  0.963

Table 4.3. Segmentation results compared with baselines on the BraTS 2019 training dataset (DSC)

Methods             Whole  Core   Enhancing
U-Net*              0.87   0.762  0.703
U-Net++ (our)       0.876  0.833  0.742
MIResU-Net (our)    0.882  0.809  0.77
MIResU-Net++ (our)  0.888  0.876  0.82
Table 4.4. Segmentation results compared with typical methods on the BraTS 2019 training dataset

                    DSC                      Sensitivity              Specificity
Methods             Whole  Core   Enhancing  Whole  Core   Enhancing  Whole  Core   Enhancing
Cheng et al. [8]    0.89   0.811  0.765      0.898  0.816  0.769      0.994  0.994  0.997
Li et al. [11]      0.89   0.733  0.726      0.895  0.75   0.743      -      -      -
K. Hu et al. [12]   0.89   0.82   0.777      0.90   0.84   0.86       -      -      -
Zhang et al. [13]   0.876  0.772  0.72       -      -      -          -      -      -
Chen et al. [14]    0.888  0.844  0.739      0.880  0.832  0.786      0.994  0.997  0.997
Zhao et al. [15]    0.88   0.84   0.77       0.86   0.82   0.80       -      -      -
MIResUNet++ (our)   0.888  0.867  0.819      0.883  0.869  0.857      0.994  0.998  0.997
Experiments on BraTS 2019 Validation Dataset
Table 4.5. Evaluation results on the BraTS 2019 validation dataset

             DSC                        Sensitivity                Specificity
             Whole  Core   Enhancing    Whole  Core   Enhancing    Whole  Core   Enhancing
Mean         0.865  0.864  0.806        0.885  0.884  0.842        0.992  0.996  0.996
Std. Dev.    0.104  0.119  0.132        0.101  0.093  0.105        0.009  0.007  0.008
Median       0.897  0.907  0.842        0.923  0.906  0.855        0.994  0.999  0.998
25 quantile  0.865  0.843  0.772        0.851  0.849  0.80         0.991  0.997  0.997
75 quantile  0.922  0.934  0.89         0.951  0.95   0.917        0.997  0.999  0.999

Table 4.6. Segmentation results compared with baselines on the BraTS 2019 validation dataset (DSC)

Methods             Whole  Core   Enhancing
U-Net               0.864  0.746  0.694
U-Net++ (our)       0.862  0.821  0.741
MIResU-Net (our)    0.865  0.803  0.763
MIResU-Net++ (our)  0.865  0.864  0.806
Table 4.7. Segmentation results compared with typical methods on the BraTS 2019 validation dataset

                      DSC                      Sensitivity              Specificity
Methods               Whole  Core   Enhancing  Whole  Core   Enhancing  Whole  Core   Enhancing
K. Hu et al. [12]     0.882  0.748  0.718      0.907  0.76   0.868      0.991  0.996  0.994
Zhang et al. [13]     0.865  0.80   0.745      -      -      -          -      -      -
Hu et al. [16]        0.850  0.70   0.65       0.83   0.79   0.65       -      -      -
Aboelenein et al.[17] 0.865  0.80   0.745      0.883  0.80   0.78       0.999  0.998  0.999
Chandra et al. [18]   0.872  0.795  0.741      0.829  0.788  0.795      0.994  0.997  0.998
MIResUNet++ (our)     0.865  0.864  0.806      0.885  0.884  0.842      0.992  0.966  0.966
FIGURE 2. Examples of segmentation results on the BraTS 2019 training dataset. From left to right: Flair image, Ground Truth, U-Net, and MIResU-Net++. Each color represents a tumor class: red = necrosis and non-enhancing, green = edema, yellow = enhancing tumor.
Fig. 3. Boxplots of DSC, Sensitivity, and Specificity obtained from the BraTS 2019 validation data. The 'x' marks the mean score; '●' marks outliers.
Experiments on BraTS 2020 Test Dataset
Methods             Parameters
U-Net (baseline)    7.76M
Zhou et al. [19]    9.04M
Ibtehaz et al. [7]  7.26M
Kermi et al. [20]   10.15M
Zhou et al. [21]    13.81M
Lin et al. [22]     24.62M
MIResUNet++ (our)   5.91M
Conclusion
• In this paper, we presented a novel MIResU-Net++ model for the MRI brain tumor segmentation task by modifying the U-Net architecture. First, we embedded an Inception module and residual units into each block of U-Net to help our network improve the segmentation performance for brain tumors. Then the encoder and decoder sub-networks are connected through a series of nested pathways.
• The proposed method was evaluated on the BraTS 2019 and BraTS 2020 datasets. Experimental results demonstrated that MIResU-Net++ outperformed U-Net and other typical brain tumor segmentation methods by a large margin.
• MIResU-Net++ can achieve comparable segmentation accuracy with a smaller number of parameters.
6- Future works
6. Future Works
1. A novel hybrid densely connected U-Net (H-DenseUNet) for tumor segmentation.
2. A convolutional neural network to segment tumors, with radiomics features for survival prediction.
3. An Attention Residual Nested U-Net for brain tumor segmentation and survival prediction.
1. A novel hybrid densely connected U-Net (H-DenseUNet) for tumor segmentation
In this work, we will propose a novel hybrid densely connected U-Net (H-DenseUNet), which consists of a 2D DenseUNet for efficiently extracting intra-slice features and a 3D counterpart for hierarchically aggregating volumetric contexts, in the spirit of the auto-context algorithm, for tumor segmentation.
2. A convolutional neural network to segment tumors, with radiomics features for survival prediction
In this work, we will propose a convolutional neural network, trained on high-contrast images, that can transform the intensity distribution of brain lesions in their internal subregions. Specifically, a generative adversarial network (GAN) is extended to synthesize high-contrast images, followed by survival regression and classification using these abnormal tumor tissue segments and other relevant clinical features. The survival prediction step includes two representative survival prediction pipelines that combine different feature selection and regression approaches.
3. Attention Residual Nested U-Net for brain tumor segmentation and survival prediction
In this work, we will first explore the effectiveness of a recent attention module, the attention gate, for the brain tumor segmentation task; then we will replace the skip connections with nested paths. Finally, a random forest model is trained to predict the overall survival of patients.
References
1. S. Pereira, A. Pinto, V. Alves, and C. A. Silva, “Brain Tumor Segmentation Using
Convolutional Neural Networks in MRI Images,” IEEE Trans. Med. Imaging, vol. 35, no. 5,
pp. 1240–1251, 2016.
2. A. L. Maas, A. Y. Hannun, and A. Y. Ng, "Rectifier nonlinearities improve neural network acoustic models," in Proc. ICML Workshop on Deep Learning for Audio, Speech and Language Processing, vol. 28, p. 3, Jun. 2013.
3. M. Havaei et al., “Brain tumor segmentation with Deep Neural Networks,” Med. Image
Anal., vol. 35, pp. 18–31, 2017.
4. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical
image segmentation,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell.
Lect. Notes Bioinformatics), vol. 9351, pp. 234–241, 2015.
5. D. E. Cahall, G. Rasool, and N. C. Bouaynaya, “Inception Modules Enhance Brain Tumor
Segmentation,” vol. 13, no. July, pp. 1–8, 2019.
6. H. Li, A. Li, and M. Wang, “A novel end-to-end brain tumor segmentation method using
improved fully convolutional networks,” vol. 108, no. August 2018, pp. 150–160, 2019.
7. N. Ibtehaz and M. S. Rahman, “MultiResUNet: Rethinking the U-Net architecture for
multimodal biomedical image segmentation,” Neural Networks, vol. 121, pp. 74–87,
2020.
8. X. Cheng, Z. Jiang, Q. Sun, and J. Zhang, “Memory-efficient cascade 3d u-net for brain
tumor segmentation,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell.
Lect. Notes Bioinformatics), vol. 11992 LNCS, no. December 2019, pp. 242–253, 2020.
9. T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, "Focal loss for dense object detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 2, pp. 318–327, Feb. 2020, doi: 10.1109/TPAMI.2018.2858826.
10. C. H. Sudre, W. Li, T. Vercauteren, S. Ourselin, and M. J. Cardoso, "Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations," in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (Lecture Notes in Computer Science), vol. 10553. Springer, 2017, pp. 240–248.
11.H. Li, A. Li, and M. Wang, “A novel end-to-end brain tumor segmentation method using
improved fully convolutional networks,” vol. 108, no. August 2018, pp. 150–160, 2019.
12. K. Hu et al., "Brain Tumor Segmentation Using Multi-cascaded Convolutional Neural Networks and Conditional Random Field," IEEE Access, vol. PP, p. 1, 2019.
13. J. Zhang, Z. Jiang, J. Dong, and Y. Hou, “Attention Gate ResU-Net for automatic MRI brain
tumor segmentation,” vol. 8, pp. 1–13, 2020.
14. W. Chen, B. Liu, S. Peng, J. Sun, and X. Qiao, “S3D-UNET: Separable 3D U-Net for brain
tumor segmentation,” in Lecture Notes in Computer Science (including subseries Lecture
Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2019.
15. X. Zhao, Y. Wu, G. Song, Z. Li, Y. Zhang, and Y. Fan, “A deep learning model integrating
FCNNs and CRFs for brain tumor segmentation,” Med. Image Anal., vol. 43, pp. 98–111,
2018.
16. Yan Hu and Yong Xia, “3D Deep Neural Network-Based Brain Tumor Segmentation
Using Multimodality Magnetic Resonance Sequences,” Int. MICCAI Brainlesion Work.,
vol. 10670, no. December 2017, pp. 423–434, 2017.
17. N. M. Aboelenein, P. Songhao, A. Koubaa, A. Noor, et al., "HTTU-Net: Hybrid Two Track U-Net for Automatic Brain Tumor Segmentation," IEEE Access, 2020.
18. M. V. S. Chandra, "Context-aware 3D CNNs for brain tumor segmentation," in International MICCAI Brainlesion Workshop, 2018, vol. 2, pp. 299–310.
19. Z. Zhou, M. M. Rahman Siddiquee, N. Tajbakhsh, and J. Liang, “Unet++: A nested u-net
architecture for medical image segmentation,” Lect. Notes Comput. Sci. (including
Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 11045 LNCS, pp. 3–11,
2018.
20. A. Kermi et al., "Brain Tumor Segmentation in Multimodal 3D-MRI of BraTS'2018 Datasets using Deep Convolutional Neural Networks," in Pre-Conference Proceedings of the 2018 International MICCAI BraTS Challenge, 2018.
21. C. Zhou, C. Ding, X. Wang, Z. Lu, and D. Tao, “One-Pass Multi-Task Networks With Cross-
Task Guided Attention for Brain Tumor Segmentation,” IEEE Trans. Image Process., vol.
29, pp. 4516–4529, 2020.
22. F. Lin, Q. Wu, J. Liu, D. Wang, and X. Kong, “Path aggregation U-Net model for brain
tumor segmentation,” Multimed. Tools Appl., 2020.
THANKS
for your attention

 
BRAIN TUMOR DETECTION USING CNN & ML TECHNIQUES
BRAIN TUMOR DETECTION USING CNN & ML TECHNIQUESBRAIN TUMOR DETECTION USING CNN & ML TECHNIQUES
BRAIN TUMOR DETECTION USING CNN & ML TECHNIQUES
 
BATA-UNET: DEEP LEARNING MODEL FOR LIVER SEGMENTATION
BATA-UNET: DEEP LEARNING MODEL FOR LIVER SEGMENTATIONBATA-UNET: DEEP LEARNING MODEL FOR LIVER SEGMENTATION
BATA-UNET: DEEP LEARNING MODEL FOR LIVER SEGMENTATION
 
Bata-Unet: Deep Learning Model for Liver Segmentation
Bata-Unet: Deep Learning Model for Liver SegmentationBata-Unet: Deep Learning Model for Liver Segmentation
Bata-Unet: Deep Learning Model for Liver Segmentation
 
MINI PROJECT (1).pptx
MINI PROJECT (1).pptxMINI PROJECT (1).pptx
MINI PROJECT (1).pptx
 
Brain tumor detection using cnn
Brain tumor detection using cnnBrain tumor detection using cnn
Brain tumor detection using cnn
 

More from nagwaAboElenein

Chapter 1: Computer Vision Introduction.pptx
Chapter 1: Computer Vision Introduction.pptxChapter 1: Computer Vision Introduction.pptx
Chapter 1: Computer Vision Introduction.pptxnagwaAboElenein
 
Chapter 1: Computer Vision Introduction.pptx
Chapter 1: Computer Vision Introduction.pptxChapter 1: Computer Vision Introduction.pptx
Chapter 1: Computer Vision Introduction.pptxnagwaAboElenein
 
security Symmetric Key Cryptography Substitution Cipher, Transposition Cipher.
security Symmetric Key Cryptography Substitution Cipher, Transposition Cipher.security Symmetric Key Cryptography Substitution Cipher, Transposition Cipher.
security Symmetric Key Cryptography Substitution Cipher, Transposition Cipher.nagwaAboElenein
 
研究生学位论文在线提交Electronic thesis online submission(20210527).ppt
研究生学位论文在线提交Electronic thesis online submission(20210527).ppt研究生学位论文在线提交Electronic thesis online submission(20210527).ppt
研究生学位论文在线提交Electronic thesis online submission(20210527).pptnagwaAboElenein
 
security introduction and overview lecture1 .pptx
security introduction and overview lecture1 .pptxsecurity introduction and overview lecture1 .pptx
security introduction and overview lecture1 .pptxnagwaAboElenein
 
Lec_9_ Morphological ImageProcessing .pdf
Lec_9_ Morphological ImageProcessing .pdfLec_9_ Morphological ImageProcessing .pdf
Lec_9_ Morphological ImageProcessing .pdfnagwaAboElenein
 
Lec_8_Image Compression.pdf
Lec_8_Image Compression.pdfLec_8_Image Compression.pdf
Lec_8_Image Compression.pdfnagwaAboElenein
 
Semantic Segmentation.pdf
Semantic Segmentation.pdfSemantic Segmentation.pdf
Semantic Segmentation.pdfnagwaAboElenein
 
Lec_4_Frequency Domain Filtering-I.pdf
Lec_4_Frequency Domain Filtering-I.pdfLec_4_Frequency Domain Filtering-I.pdf
Lec_4_Frequency Domain Filtering-I.pdfnagwaAboElenein
 
Lec_3_Image Enhancement_spatial Domain.pdf
Lec_3_Image Enhancement_spatial Domain.pdfLec_3_Image Enhancement_spatial Domain.pdf
Lec_3_Image Enhancement_spatial Domain.pdfnagwaAboElenein
 
Lec_2_Digital Image Fundamentals.pdf
Lec_2_Digital Image Fundamentals.pdfLec_2_Digital Image Fundamentals.pdf
Lec_2_Digital Image Fundamentals.pdfnagwaAboElenein
 
Image Segmentation Techniques for Remote Sensing Satellite Images.pdf
Image Segmentation Techniques for Remote Sensing Satellite Images.pdfImage Segmentation Techniques for Remote Sensing Satellite Images.pdf
Image Segmentation Techniques for Remote Sensing Satellite Images.pdfnagwaAboElenein
 
Fundamentals_of_Digital image processing_A practicle approach with MatLab.pdf
Fundamentals_of_Digital image processing_A practicle approach with MatLab.pdfFundamentals_of_Digital image processing_A practicle approach with MatLab.pdf
Fundamentals_of_Digital image processing_A practicle approach with MatLab.pdfnagwaAboElenein
 

More from nagwaAboElenein (17)

Chapter 1: Computer Vision Introduction.pptx
Chapter 1: Computer Vision Introduction.pptxChapter 1: Computer Vision Introduction.pptx
Chapter 1: Computer Vision Introduction.pptx
 
Chapter 1: Computer Vision Introduction.pptx
Chapter 1: Computer Vision Introduction.pptxChapter 1: Computer Vision Introduction.pptx
Chapter 1: Computer Vision Introduction.pptx
 
security Symmetric Key Cryptography Substitution Cipher, Transposition Cipher.
security Symmetric Key Cryptography Substitution Cipher, Transposition Cipher.security Symmetric Key Cryptography Substitution Cipher, Transposition Cipher.
security Symmetric Key Cryptography Substitution Cipher, Transposition Cipher.
 
研究生学位论文在线提交Electronic thesis online submission(20210527).ppt
研究生学位论文在线提交Electronic thesis online submission(20210527).ppt研究生学位论文在线提交Electronic thesis online submission(20210527).ppt
研究生学位论文在线提交Electronic thesis online submission(20210527).ppt
 
security introduction and overview lecture1 .pptx
security introduction and overview lecture1 .pptxsecurity introduction and overview lecture1 .pptx
security introduction and overview lecture1 .pptx
 
Lec_9_ Morphological ImageProcessing .pdf
Lec_9_ Morphological ImageProcessing .pdfLec_9_ Morphological ImageProcessing .pdf
Lec_9_ Morphological ImageProcessing .pdf
 
Lec_8_Image Compression.pdf
Lec_8_Image Compression.pdfLec_8_Image Compression.pdf
Lec_8_Image Compression.pdf
 
Semantic Segmentation.pdf
Semantic Segmentation.pdfSemantic Segmentation.pdf
Semantic Segmentation.pdf
 
lecture1.pptx
lecture1.pptxlecture1.pptx
lecture1.pptx
 
Lec_4_Frequency Domain Filtering-I.pdf
Lec_4_Frequency Domain Filtering-I.pdfLec_4_Frequency Domain Filtering-I.pdf
Lec_4_Frequency Domain Filtering-I.pdf
 
Lec_3_Image Enhancement_spatial Domain.pdf
Lec_3_Image Enhancement_spatial Domain.pdfLec_3_Image Enhancement_spatial Domain.pdf
Lec_3_Image Enhancement_spatial Domain.pdf
 
Lec_2_Digital Image Fundamentals.pdf
Lec_2_Digital Image Fundamentals.pdfLec_2_Digital Image Fundamentals.pdf
Lec_2_Digital Image Fundamentals.pdf
 
Lec_1_Introduction.pdf
Lec_1_Introduction.pdfLec_1_Introduction.pdf
Lec_1_Introduction.pdf
 
Lecture3.pptx
Lecture3.pptxLecture3.pptx
Lecture3.pptx
 
Image Segmentation Techniques for Remote Sensing Satellite Images.pdf
Image Segmentation Techniques for Remote Sensing Satellite Images.pdfImage Segmentation Techniques for Remote Sensing Satellite Images.pdf
Image Segmentation Techniques for Remote Sensing Satellite Images.pdf
 
Fundamentals_of_Digital image processing_A practicle approach with MatLab.pdf
Fundamentals_of_Digital image processing_A practicle approach with MatLab.pdfFundamentals_of_Digital image processing_A practicle approach with MatLab.pdf
Fundamentals_of_Digital image processing_A practicle approach with MatLab.pdf
 
Lec_1_Introduction.pdf
Lec_1_Introduction.pdfLec_1_Introduction.pdf
Lec_1_Introduction.pdf
 

Recently uploaded

Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Dr.Costas Sachpazis
 
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSHARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSRajkumarAkumalla
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSKurinjimalarL3
 
Introduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxIntroduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxupamatechverse
 
Processing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxProcessing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxpranjaldaimarysona
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxAsutosh Ranjan
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝soniya singh
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escortsranjana rawat
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSSIVASHANKAR N
 
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Serviceranjana rawat
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )Tsuyoshi Horigome
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINESIVASHANKAR N
 
chaitra-1.pptx fake news detection using machine learning
chaitra-1.pptx  fake news detection using machine learningchaitra-1.pptx  fake news detection using machine learning
chaitra-1.pptx fake news detection using machine learningmisbanausheenparvam
 

Recently uploaded (20)

Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
 
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCRCall Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
 
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSHARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
 
Introduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxIntroduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptx
 
Processing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxProcessing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptx
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptx
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile service
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
 
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
 
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
 
chaitra-1.pptx fake news detection using machine learning
chaitra-1.pptx  fake news detection using machine learningchaitra-1.pptx  fake news detection using machine learning
chaitra-1.pptx fake news detection using machine learning
 

brain tumor.pptx

  • 1. 1. Introduction. 2. Problem Description. 3. Literature Reviews. 4. Objectives. 5. Contributions. 6. Future Works. 7. Research Schedule. AGENDA
  • 3. 3 1. Introduction A brain tumor is a growth of cancerous cells inside or around the brain. Many different categories of brain tumors exist. Some originate in the brain itself, in which case they are classified as primary; others spread there from elsewhere in the body through metastasis and are classified as secondary.
  • 4. 4 1. Introduction • Glioblastoma, a malignant brain tumor, has irregularly shaped cells that can infiltrate the brain, and it is among the leading causes of cancer death. • Brain tumor segmentation seeks to separate healthy tissue from tumorous regions such as the enhancing tumor, necrosis, and the surrounding edema. • Magnetic resonance imaging (MRI) provides detailed images of the brain and is one of the most common tests used to diagnose brain tumors, but manually segmenting the large volume of data an MRI scan produces is time-consuming, so automatic segmentation techniques are needed. The first four images, from left to right, show the MRI modalities used as input; the fifth image shows the ground-truth labels, where green = edema, yellow = enhancing tumor, and red = necrosis and non-enhancing tumor.
  • 5. 5 1. Introduction Manual brain tumor segmentation is a challenging and tedious task for human experts due to the variability of tumor appearance, the unclear borders of the tumor, and the need to evaluate multiple MR images with different contrasts simultaneously. In addition, manual segmentation is often prone to significant intra- and inter-rater variability, so automatic or semi-automatic methods are necessary.
  • 7. 7 2. Problem Description • Gliomas and glioblastomas are much more difficult to localize: these tumors are often diffuse, poorly contrasted, and extend into the surrounding tissue. • Another fundamental difficulty with segmenting brain tumors is that they can appear anywhere in the brain, in almost any shape and size. • The brain tumor segmentation problem exhibits severe class imbalance: healthy voxels comprise 98% of all voxels, while 0.18% belong to necrosis, 1.1% to edema and non-enhancing tumor, and 0.38% to enhancing tumor.
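One common way to make those imbalance figures actionable is to derive per-class loss weights from the voxel fractions quoted above. The sketch below uses median-frequency balancing as an illustration; the helper name `inverse_frequency_weights` and the choice of median-frequency balancing are assumptions for illustration, not the method used in this work.

```python
import numpy as np

# Approximate voxel fractions quoted on the slide (BraTS-style label distribution).
fractions = {
    "healthy": 0.98,
    "necrosis": 0.0018,
    "edema_non_enhancing": 0.011,
    "enhancing": 0.0038,
}

def inverse_frequency_weights(fracs):
    """Median-frequency balancing: weight = median(freq) / freq per class."""
    freqs = np.array(list(fracs.values()))
    weights = np.median(freqs) / freqs
    return dict(zip(fracs.keys(), weights))

weights = inverse_frequency_weights(fractions)
# Rare classes (necrosis, enhancing tumor) receive far larger weights
# than the dominant healthy class, counteracting the 98% imbalance.
```

Any weighting scheme of this shape can then be plugged into a weighted cross-entropy or Dice-style loss.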
  • 8. 8 2. Problem Description • In addition, brain tumor MRI data obtained from clinical scans or synthetic databases are inherently complex. The MRI devices and acquisition protocols can vary dramatically from scan to scan, imposing intensity biases and other variations on each image slice in the dataset. The need for several modalities to effectively segment the tumor sub-regions adds further complexity.
  • 9. 9 2. Problem Description Main drawbacks or shortcomings of recent studies • One limitation in brain tumor segmentation is overfitting, which refers to a model that performs well on the training dataset but does not generalize to new data. • The complexity of a neural network model is determined by both its structure and its parameters. We can therefore reduce the complexity of the network architecture by reducing the number of layers or parameters, or focus on methods that artificially increase the amount of training data instead of changing the network architecture.
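A minimal sketch of the second option — artificially increasing the training data without touching the architecture — is random flips and rotations applied identically to an MRI slice and its label mask. The function name `augment_slice` and the specific transforms are illustrative assumptions, not the augmentation pipeline of any cited paper.

```python
import numpy as np

def augment_slice(img, mask, rng):
    """Randomly flip/rotate an MRI slice and its label mask identically,
    enlarging the effective training set without changing the network."""
    if rng.random() < 0.5:                       # horizontal flip
        img, mask = np.flip(img, axis=1), np.flip(mask, axis=1)
    k = rng.integers(0, 4)                       # 0/90/180/270 degree rotation
    return np.rot90(img, k), np.rot90(mask, k)

rng = np.random.default_rng(0)
img = np.arange(16.0).reshape(4, 4)              # toy "slice"
mask = (img > 7).astype(np.uint8)                # toy "tumor" mask
aug_img, aug_mask = augment_slice(img, mask, rng)
# The image/mask correspondence is preserved because both receive
# exactly the same geometric transform.
```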
  • 10. 10 2. Problem Description Main drawbacks or shortcomings of recent studies • In addition, existing CNN models for automatic brain segmentation face class imbalance in the labeled data: for segmentation of a brain tumor or of a white-matter lesion, the normal brain region is far larger than the abnormal region. • Recent studies on brain tumor segmentation still suffer from limited accuracy. • Most of the proposed methods emphasised the shortcomings of working with deep CNN models, first among them the computational requirement: analysing, manipulating and processing each voxel in a volume is computationally expensive.
  • 12. 12 3. Literature Reviews A recent survey of deep neural network architectures and their applications to medical image analysis was presented in [15]: 1. Pereira et al. [1] suggested a 2D CNN network with a small kernel size (3×3). They train two distinct models, one for HGG and another for LGG. They also use a max-pooling layer of stride 2 and apply dropout to the dense layers only. The model uses the Leaky rectified linear unit (LeakyReLU) activation function [2]. 2. Havaei et al. [3] construct a dual-path 2D CNN brain tumor segmentation network containing local and global paths, employing convolution kernels of different sizes to extract different contextual features. However, patch-wise architectures lack spatial continuity and need large storage space, leading to low efficiency.
  • 13. 13 3. Literature Reviews 3. Ronneberger et al. [4] designed a symmetric fully convolutional network called U-Net, which consists of a contracting path that extracts spatial image features and an expanding path that generates a segmentation map from the encoded features. U-Net has been widely used for a variety of medical image segmentation tasks. 4. Cahall et al. [5] proposed a new image segmentation framework combining Inception modules with the U-Net architecture; their framework includes two learning regimes, intra-tumoral structures and glioma sub-regions. To further improve performance, a Multi-Inception block is used in each block to increase the network's learning capacity, and up-skip connections are also utilized to optimize the segmentation results [6].
  • 14. 14 3. Literature Reviews 5. The MultiResUNet, recently proposed by Ibtehaz and Rahman [7], combined a U-Net with residual Inception modules for multi-scale feature extraction; the authors applied their architecture to several multimodal medical imaging datasets. 6. Cheng et al. [8] presented a novel memory-efficient cascade 3D U-Net which achieved comparable segmentation accuracy with less memory and computation consumption.
  • 16. 16 4. Objectives The research will focus on the following key objectives: 1. Building a Recurrent Convolutional Neural Network (RCNN) based on U-Net, as well as a Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net. The proposed models utilize the power of U-Net and residual networks. 2. Proposing a hybrid two-track U-Net, where each track has a different number of layers and uses a different kernel size, with a hybrid loss function for brain tumor segmentation that can address the class-imbalance problem. 3. Proposing a novel Multi Inception Residual Nested U-Net that integrates residual and inception modules; the encoder and decoder are connected via a sequence of nested pathways to enhance brain tumor segmentation and reduce the number of parameters. 4. Proposing a novel hybrid densely connected U-Net (H-DenseUNet), which consists of a 2D DenseUNet for efficiently extracting intra-slice features and a 3D counterpart for 3D convolutions. 5. Exploring further studies in the related area and incorporating them into the current research problem to obtain better results.
  • 17. 17 5. Contributions • Current studies and our experimental results.
  • 18. 18 5. Contributions 1. HTTU-Net: Hybrid Two Track U-Net for Automatic Brain Tumor Segmentation. 2. MIResU-Net++: Multi Inception Residual Nested U-Net to Enhance Brain Tumor Segmentation.
  • 19. 19 5. Contributions The contributions of this work can be summarized as follows: • HTTU-Net not only extracts more semantic information but also gives more consideration to small-scale brain tumors, which improves their segmentation. • HTTU-Net also updates the U-Net network by adding batch normalization at the end of each block. In our architecture, the first track focuses on the tumor's shape and size while the second track captures the contextual information. Each track consists of a different number of convolution blocks and uses a different kernel size to handle the different tumor sizes. • We introduce a new hybrid loss function, combining the Focal Loss and Generalized Dice Loss functions, to mitigate the class imbalance. • We demonstrate that the proposed strategy improves the precision of the original U-Net and also alleviates the issue of overfitting. We experiment with the BraTS 2018 dataset, and our architecture shows superior performance.
  • 20. 20 5. Contributions Fig. The proposed HTTU-Net architecture. The first and second tracks are shown in blue and red, respectively.
  • 21. 21 5. Contributions TWO-TRACK U-NET ARCHITECTURE THE FIRST TRACK The first track's contracting part consists of 5 convolutional blocks. All convolutional layers in this track use 3×3 kernels. The number of filters for the first, second, third, fourth, and fifth blocks is 64, 128, 256, 512, and 1024. THE SECOND TRACK The second track's contracting part consists of 4 convolutional blocks. Each block has two convolutional layers, all using 5×5 kernels. The number of filters for the four blocks is 64, 128, 256, and 512.
  • 22. 22 5. Contributions HYBRID LOSS • The choice of loss function becomes especially important in severe class-imbalance settings such as brain tumor segmentation. • We apply the sum of the focal loss (FL) and the Generalized Dice Loss (GDL) to address this issue: HL = GDL + FL
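The hybrid loss HL = GDL + FL can be sketched in NumPy as below. This is a minimal sketch of one common formulation (focal loss with γ = 2 and GDL with per-class weights 1/volume², following Sudre et al.); the exact γ, weighting, and any per-term scaling used in this work may differ, and the function names are illustrative.

```python
import numpy as np

def focal_loss(probs, onehot, gamma=2.0, eps=1e-7):
    """Focal loss: down-weights easy voxels via the (1 - p_t)^gamma factor.
    `probs` are softmax outputs, `onehot` the one-hot labels (classes last)."""
    pt = np.clip((probs * onehot).sum(axis=-1), eps, 1.0)
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

def generalized_dice_loss(probs, onehot, eps=1e-7):
    """GDL with per-class weights 1 / (class volume)^2."""
    axes = tuple(range(probs.ndim - 1))          # sum over all voxel axes
    w = 1.0 / (onehot.sum(axis=axes) ** 2 + eps)
    inter = (w * (probs * onehot).sum(axis=axes)).sum()
    union = (w * (probs + onehot).sum(axis=axes)).sum()
    return float(1.0 - 2.0 * inter / (union + eps))

def hybrid_loss(probs, onehot):
    """HL = GDL + FL, as on the slide."""
    return generalized_dice_loss(probs, onehot) + focal_loss(probs, onehot)

# Sanity check on a toy 4x4 two-class map: a perfect prediction
# should yield a loss near zero, a swapped prediction a large one.
gt = np.zeros((4, 4, 2))
gt[..., 0] = 1.0
gt[1:3, 1:3] = (0.0, 1.0)
perfect = hybrid_loss(gt, gt)
```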
  • 24. 24 5. Contributions Experimental Results for this work EVALUATION METRICS
DSC = 2TP / (FP + 2TP + FN) (4)
Sensitivity = TP / (TP + FN) (5)
Specificity = TN / (TN + FP) (6)
h(A, B) = max_{a∈A} min_{b∈B} d(a, b) (7)
where a and b range over the sets of points A and B, respectively, and d(a, b) is the Euclidean distance between these points.
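These metrics can be sketched directly from their definitions. The snippet below follows the standard formulations (note that specificity is conventionally TN / (TN + FP)) and implements the directed Hausdorff distance h(A, B) literally as a max-over-min of Euclidean distances; the function names are illustrative assumptions, and BraTS tooling computes the 95th-percentile variant rather than this plain maximum.

```python
import numpy as np

def dice(pred, gt):
    """DSC = 2TP / (FP + 2TP + FN) for boolean masks."""
    tp = np.sum(pred & gt); fp = np.sum(pred & ~gt); fn = np.sum(~pred & gt)
    return 2 * tp / (fp + 2 * tp + fn)

def sensitivity(pred, gt):
    tp = np.sum(pred & gt); fn = np.sum(~pred & gt)
    return tp / (tp + fn)

def specificity(pred, gt):
    tn = np.sum(~pred & ~gt); fp = np.sum(pred & ~gt)
    return tn / (tn + fp)

def directed_hausdorff(A, B):
    """h(A, B) = max over a in A of min over b in B of ||a - b||."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min(axis=1).max()

# Toy 3x3 masks: 2 true positives, 1 false positive, 1 false negative.
gt = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]], dtype=bool)
pred = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=bool)

A = np.array([[0.0, 0.0], [0.0, 1.0]])
B = np.array([[0.0, 0.0], [3.0, 0.0]])
h_val = directed_hausdorff(A, B)
```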
  • 25. 25 5. Contributions Performance on BraTS 2018 Training Dataset In our experiments, 160 subjects from the BraTS training dataset are used for training and 40 subjects for validation. We extract 25,000 multimodal patches from each case to form a training set of 4,000,000 patches. Fig. Sample segmentation results of four HGG cases from the BraTS 2018 training dataset. Labels are shown in different colors: green for edema, yellow for enhancing tumor, and red for necrosis and non-enhancing tumor.
  • 26. 26 5. Contributions Performance on BraTS 2018 Training Dataset

Table 1. Quantitative segmentation results on the BraTS 2018 training dataset using the Dice and Sensitivity metrics.

| | Dice ET | Dice WT | Dice TC | Sensitivity ET | Sensitivity WT | Sensitivity TC |
|---|---|---|---|---|---|---|
| Mean | 0.741 | 0.852 | 0.812 | 0.768 | 0.885 | 0.820 |
| Std. Dev. | 0.282 | 0.110 | 0.252 | 0.269 | 0.150 | 0.269 |
| Median | 0.801 | 0.879 | 0.878 | 0.859 | 0.908 | 0.874 |
| 25 quantile | 0.743 | 0.840 | 0.784 | 0.752 | 0.868 | 0.803 |
| 75 quantile | 0.846 | 0.905 | 0.897 | 0.951 | 0.954 | 0.948 |

Table 2. Quantitative segmentation results on the BraTS 2018 training dataset using the Specificity and Hausdorff distance metrics.

| | Specificity ET | Specificity WT | Specificity TC | Hausdorff95 ET | Hausdorff95 WT | Hausdorff95 TC |
|---|---|---|---|---|---|---|
| Mean | 0.999 | 0.998 | 0.997 | 3.304 | 8.250 | 3.301 |
| Std. Dev. | 0.042 | 0.023 | 0.042 | 1.824 | 3.315 | 1.872 |
| Median | 0.999 | 0.999 | 1 | 3.211 | 8.401 | 3.251 |
| 25 quantile | 0.999 | 0.998 | 1 | 2.503 | 6.172 | 4.103 |
| 75 quantile | 0.999 | 0.999 | 1 | 4.924 | 10.67 | 4.254 |
  • 27. 27 5. Contributions BraTS 2018 Testing Performance

Table 3. Quantitative segmentation results for testing on BraTS 2018 using the Dice and Sensitivity metrics.

| | Dice ET | Dice WT | Dice TC | Sensitivity ET | Sensitivity WT | Sensitivity TC |
|---|---|---|---|---|---|---|
| Mean | 0.745 | 0.865 | 0.808 | 0.78 | 0.883 | 0.80 |
| Std. Dev. | 0.211 | 0.103 | 0.223 | 0.286 | 0.151 | 0.263 |
| Median | 0.815 | 0.883 | 0.895 | 0.868 | 0.894 | 0.882 |
| 25 quantile | 0.76 | 0.858 | 0.77 | 0.703 | 0.87 | 0.77 |
| 75 quantile | 0.887 | 0.915 | 0.923 | 0.943 | 0.941 | 0.972 |

Table 4. Quantitative segmentation results for testing on BraTS 2018 using the Specificity and Hausdorff distance metrics.

| | Specificity ET | Specificity WT | Specificity TC | Hausdorff95 ET | Hausdorff95 WT | Hausdorff95 TC |
|---|---|---|---|---|---|---|
| Mean | 0.999 | 0.999 | 0.998 | 4.43 | 7.53 | 8.811 |
| Std. Dev. | 0.053 | 0.031 | 0.032 | 2.441 | 3.461 | 2.961 |
| Median | 0.999 | 0.999 | 0.999 | 4.29 | 5.871 | 7.12 |
| 25 quantile | 0.999 | 0.998 | 0.998 | 3.20 | 3.76 | 5.55 |
| 75 quantile | 1 | 0.999 | 0.999 | 5.09 | 5.09 | 9.05 |
  • 28. 28 5. Contributions BraTS 2018 Testing Performance Fig. Boxplots of DSC, Sensitivity, Specificity and Hausdorff distance obtained on BraTS 2018. The 'x' marks the mean score; '●' marks outliers.
  • 29. 29 5. Contributions BraTS 2018 Testing Performance

Table. Comparison of our proposed model with the one-pathway models.

| Methods | ET | WT | TC |
|---|---|---|---|
| Original U-Net | 0.69 | 0.852 | 0.794 |
| First path | 0.739 | 0.850 | 0.80 |
| Second path | 0.732 | 0.859 | 0.792 |
| Two-pathways | 0.745 | 0.865 | 0.808 |
  • 30. 5. Contributions Conclusion • In this work, we introduced an automatic approach for brain tumor segmentation using the 2D HTTU-Net architecture, quantitatively evaluated on the BraTS 2018 dataset. It contains two tracks; each consists of a different number of convolution blocks and uses a different kernel size to handle the different tumor sizes. • We also developed a new hybrid loss function that alleviates the class-imbalance problem by combining the focal loss and Generalized Dice Loss functions. The HTTU-Net architecture achieves higher performance and handles brain tumors that can appear anywhere in the brain, in almost any type and size. • The evaluation of the proposed approach verifies that our results are very comparable to those obtained manually by experts.
  • 31. 31 5. Contributions The major contributions of this work are: 1. We propose an end-to-end MIResU-Net++ model for the brain tumor segmentation task. MIResU-Net++ extracts richer semantic information as well as information about small-scale brain tumors, which improves segmentation accuracy. 2. MIResU-Net++ integrates residual modules and Inception modules with the U-Net architecture to make the proposed network deeper and wider; in MIResU-Net++ the encoder and decoder sub-networks are connected through a series of nested pathways.
  • 32. 32 5. Contributions The major contributions of this work are: 3. We report experimental results on two brain tumor segmentation datasets, BraTS 2019 and BraTS 2020. The results show that our models, the Nested U-Net (U-Net++), the Multi Inception Residual U-Net (MIResU-Net), and the Multi Inception Residual Nested U-Net (MIResU-Net++), outperform the U-Net baseline. In addition, the proposed network is efficient and balances the trade-off between the number of parameters and segmentation accuracy.
  • 34. 34 5. Contributions Multi Inception Residual U-Net (MIResU-Net): we modify the U-Net architecture with an Inception module. Moreover, we also add a residual connection due to its effectiveness in the segmentation of biomedical images [37]. Fig. 1(c) shows the Inception-Res block. The Inception-Res module implemented in our network includes multiple sets of 1×1 convolutions, 7×1 convolutions and 1×7 convolutions.
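The Inception-Res block described above can be sketched in PyTorch as below. Only the parallel 1×1, 7×1 and 1×7 convolutions and the residual shortcut come from the slide; the per-branch widths, the 1×1 fusion layer, and the activation placement are our assumptions.

```python
import torch
import torch.nn as nn

class InceptionResBlock(nn.Module):
    """Sketch: parallel 1x1, 7x1 and 1x7 convolution branches,
    concatenated, fused by a 1x1 conv, and added to a 1x1 residual
    shortcut. Branch widths are illustrative, not the paper's."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        b = out_ch // 4  # hypothetical per-branch width
        self.branch1 = nn.Conv2d(in_ch, b, kernel_size=1)
        self.branch2 = nn.Sequential(
            nn.Conv2d(in_ch, b, kernel_size=1),
            nn.Conv2d(b, b, kernel_size=(7, 1), padding=(3, 0)))
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, b, kernel_size=1),
            nn.Conv2d(b, b, kernel_size=(1, 7), padding=(0, 3)))
        self.fuse = nn.Conv2d(3 * b, out_ch, kernel_size=1)
        self.shortcut = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([self.branch1(x), self.branch2(x), self.branch3(x)], dim=1)
        return self.act(self.fuse(y) + self.shortcut(x))
```

The asymmetric 7×1 / 1×7 pair covers a large receptive field with far fewer parameters than a full 7×7 kernel, which is the usual motivation for this factorization.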
  • 35. 35 5. Contributions Nested skip pathways (U-Net++): we re-designed the skip pathways to transform the connectivity of the encoder and decoder sub-networks, and we use a dense InceptionRes block to improve the model's capacity for accurate brain tumor segmentation. As illustrated in Fig. 4.1(b), the skip pathway between InceptionResBlock(0,0) and InceptionResBlock(0,4) consists of a dense convolution block with three InceptionRes blocks.
  • 36. 36 5. Contributions Multi Inception Residual Nested U-Net (MIResU-Net++): in the MIResU-Net++ model, as shown in Fig. 4.1(a), we replace the sequence of two convolutional layers with the proposed Multi Inception Res block. We compute the value of W as follows: W = α × F (3). Here, F is the number of filters in the corresponding layer of U-Net and α is a scalar coefficient; in our model we assign α = 1.8. Inside a MultiInceptionRes block, we assign ⌊W/12⌋, ⌊W/6⌋, ⌊W/4⌋ and ⌊W/2⌋ filters to the convolutional layers respectively, which keeps the filter counts similar to the U-Net architecture.
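The filter-width rule W = α × F above can be sketched in a few lines of Python. Note the ⌊W/12⌋, ⌊W/6⌋, ⌊W/4⌋, ⌊W/2⌋ split is our reading of the garbled slide text and is an assumption; the helper name is ours.

```python
def multires_widths(f, alpha=1.8):
    """Filter counts inside one MultiInceptionRes block.

    f     : filter count of the corresponding U-Net layer.
    alpha : scalar coefficient (1.8 in the slides).
    Returns the per-layer widths [W//12, W//6, W//4, W//2]
    (this split is an assumption recovered from the slide).
    """
    w = int(alpha * f)
    return [w // 12, w // 6, w // 4, w // 2]
```

For example, the U-Net layer with F = 32 gives W = 57, split as [4, 9, 14, 28]; the floor divisions keep the block's total width slightly below W, which is how MultiResUNet-style blocks stay comparable to plain U-Net in parameter count.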
  • 37. 37 5. Contributions Multi Inception Residual Nested U-Net (MIResU-Net++): MultiInceptionRes Nested U-Net architectural details.
  • 38. 38 5. Contributions Experimental Results for this work DATASETS • In this work, we mainly adopt the BraTS 2019 and BraTS 2020 brain tumor MRI datasets for the performance evaluation. • Evaluation results on the BraTS 2019 training dataset and the BraTS 2019 validation dataset are disseminated on the challenge leaderboard web site.
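The DSC, sensitivity and specificity reported in the tables that follow can be computed from binary masks as in this minimal NumPy sketch (evaluating each BraTS region, whole/core/enhancing, as its own binary mask is assumed; the function name is ours).

```python
import numpy as np

def seg_metrics(pred, gt):
    """Dice similarity coefficient, sensitivity and specificity
    for one binary segmentation mask versus the ground truth."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.sum(pred & gt)    # true positives
    fp = np.sum(pred & ~gt)   # false positives
    fn = np.sum(~pred & gt)   # false negatives
    tn = np.sum(~pred & ~gt)  # true negatives
    dsc = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dsc, sensitivity, specificity
```

Specificity is near 1 in all the tables because the background dominates a brain volume, so DSC and sensitivity are the more discriminative columns.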
  • 39. 39 5. Contributions Evaluation Results on BraTS 2019 Training Dataset

Table 4.2. Evaluation results on the BraTS 2019 Training Dataset.

|             | DSC                     | Sensitivity             | Specificity             |
|             | Whole  Core   Enhancing | Whole  Core   Enhancing | Whole  Core   Enhancing |
| Mean        | 0.888  0.876  0.819     | 0.883  0.869  0.857     | 0.994  0.998  0.997     |
| Std. Dev.   | 0.091  0.154  0.171     | 0.130  0.169  0.152     | 0.005  0.003  0.005     |
| Median      | 0.92   0.918  0.869     | 0.926  0.920  0.903     | 0.997  0.999  0.998     |
| 25 quantile | 0.877  0.874  0.788     | 0.864  0.861  0.814     | 0.994  0.997  0.997     |
| 75 quantile | 0.947  0.939  0.911     | 0.960  0.957  0.943     | 0.999  0.998  0.963     |

Table 4.3. Segmentation results compared with baselines on the BraTS 2019 Training Dataset.

| Methods            | Whole | Core  | Enhancing |
| U-Net*             | 0.87  | 0.762 | 0.703     |
| U-Net++ (our)      | 0.876 | 0.833 | 0.742     |
| MIResU-Net (our)   | 0.882 | 0.809 | 0.77      |
| MIResU-Net++ (our) | 0.888 | 0.876 | 0.82      |
  • 40. 40 5. Contributions Evaluation Results on BraTS 2019 Training Dataset

Table 4.4. Segmentation results compared with typical methods on the BraTS 2019 Training Dataset (Whole / Core / Enhancing).

| Methods            | DSC                 | Sensitivity         | Specificity         |
| Cheng et al. [8]   | 0.89  0.811  0.765  | 0.898  0.816  0.769 | 0.994  0.994  0.997 |
| Li et al. [11]     | 0.89  0.733  0.726  | 0.895  0.75   0.743 | -      -      -     |
| K. Hu et al. [12]  | 0.89  0.82   0.777  | 0.90   0.84   0.86  | -      -      -     |
| Zhang et al. [13]  | 0.876 0.772  0.72   | -      -      -     | -      -      -     |
| Chen et al. [14]   | 0.888 0.844  0.739  | 0.880  0.832  0.786 | 0.994  0.997  0.997 |
| Zhao et al. [15]   | 0.88  0.84   0.77   | 0.86   0.82   0.80  | -      -      -     |
| MIResU-Net++ (our) | 0.888 0.867  0.819  | 0.883  0.869  0.857 | 0.994  0.998  0.997 |
  • 41. 41 5. Contributions Experiments on BraTS 2019 Validation Dataset

Table 4.5. Evaluation results on the BraTS 2019 Validation Dataset.

|             | DSC                     | Sensitivity             | Specificity             |
|             | Whole  Core   Enhancing | Whole  Core   Enhancing | Whole  Core   Enhancing |
| Mean        | 0.865  0.864  0.806     | 0.885  0.884  0.842     | 0.992  0.996  0.996     |
| Std. Dev.   | 0.104  0.119  0.132     | 0.101  0.093  0.105     | 0.009  0.007  0.008     |
| Median      | 0.897  0.907  0.842     | 0.923  0.906  0.855     | 0.994  0.999  0.998     |
| 25 quantile | 0.865  0.843  0.772     | 0.851  0.849  0.80      | 0.991  0.997  0.997     |
| 75 quantile | 0.922  0.934  0.89      | 0.951  0.95   0.917     | 0.997  0.999  0.999     |

Table 4.6. Segmentation results compared with baselines on the BraTS 2019 Validation Dataset.

| Methods            | Whole | Core  | Enhancing |
| U-Net              | 0.864 | 0.746 | 0.694     |
| U-Net++ (our)      | 0.862 | 0.821 | 0.741     |
| MIResU-Net (our)   | 0.865 | 0.803 | 0.763     |
| MIResU-Net++ (our) | 0.865 | 0.864 | 0.806     |
  • 42. 42 5. Contributions Experiments on BraTS 2019 Validation Dataset

Table 4.7. Segmentation results compared with typical methods on the BraTS 2019 Validation Dataset (Whole / Core / Enhancing).

| Methods                | DSC                 | Sensitivity         | Specificity         |
| K. Hu et al. [12]      | 0.882  0.748  0.718 | 0.907  0.76   0.868 | 0.991  0.996  0.994 |
| Zhang et al. [13]      | 0.865  0.80   0.745 | -      -      -     | -      -      -     |
| Hu et al. [16]         | 0.850  0.70   0.65  | 0.83   0.79   0.65  | -      -      -     |
| Abouelenein et al. [17]| 0.865  0.80   0.745 | 0.883  0.80   0.78  | 0.999  0.998  0.999 |
| Chandra et al. [18]    | 0.872  0.795  0.741 | 0.829  0.788  0.795 | 0.994  0.997  0.998 |
| MIResU-Net++ (our)     | 0.865  0.864  0.806 | 0.885  0.884  0.842 | 0.992  0.996  0.996 |
  • 43. 43 5. Contributions Experiments on BraTS 2019 Validation Dataset. Figure 2. Examples of segmentation results on the BraTS 2019 training dataset. From left to right: Flair image, ground truth, U-Net, and MIResU-Net++. Each color represents a tumor class: red = necrosis and non-enhancing, green = edema, yellow = enhancing tumor.
  • 44. 44 5. Contributions Experiments on BraTS 2019 Validation Dataset. Fig. 3. Boxplots of DSC, Sensitivity and Specificity obtained on the BraTS'2019 validation data. The 'x' marks the mean score; outliers are marked separately.
  • 45. 45 5. Contributions Experiments on BraTS 2020 Test Dataset
  • 46. 46 5. Contributions Comparison of the number of parameters:

| Methods            | Parameters |
| U-Net (baseline)   | 7.76M  |
| Zhou et al. [19]   | 9.04M  |
| Ibtehaz et al. [7] | 7.26M  |
| Kermi et al. [20]  | 10.15M |
| Zhou et al. [21]   | 13.81M |
| Lin et al. [22]    | 24.62M |
| MIResU-Net++ (our) | 5.91M  |
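Parameter counts like those in the table above come from summing the weights of every layer; for a 2-D convolution the count is k_h × k_w × C_in × C_out plus C_out biases. A small sketch (the helper name is ours):

```python
def conv2d_params(kh, kw, c_in, c_out, bias=True):
    """Number of learnable parameters in one 2-D convolution layer:
    one (kh x kw x c_in) kernel per output channel, plus biases."""
    return kh * kw * c_in * c_out + (c_out if bias else 0)

# Illustrative only: a plain U-Net-style double 3x3 conv at 64 channels
# from a 4-channel multimodal MRI input.
double_conv = conv2d_params(3, 3, 4, 64) + conv2d_params(3, 3, 64, 64)
```

Summing such terms over all blocks is how totals like 7.76M (U-Net) or 5.91M (MIResU-Net++) are obtained; narrow factorized branches are what let the proposed model stay the smallest in the table.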
  • 47. 5. Contributions Conclusion • In this paper, we presented a novel MIResU-Net++ model for the MRI brain tumor segmentation task by modifying the U-Net architecture. First, we embedded an Inception module and residual units into each block of U-Net to improve the segmentation performance. Then the encoder and decoder sub-networks were connected through a series of nested pathways. • The proposed method was evaluated on the BraTS 2019 and BraTS 2020 datasets. Experimental results demonstrated that MIResU-Net++ outperformed U-Net and other typical brain tumor segmentation methods by a large margin. • MIResU-Net++ achieves comparable segmentation accuracy with fewer parameters.
  • 49. 49 6. Future Works 1. A novel hybrid densely connected U-Net (H-DenseUNet) for tumor segmentation. 2. A convolutional neural network to segment tumors, with radiomics features for survival prediction. 3. Attention Residual Nested U-Net for brain tumor segmentation and survival prediction.
  • 50. 50 6. Future Works 1. A novel hybrid densely connected U-Net (H-DenseUNet) for tumor segmentation.
  • 51. 51 6. Future Works In this work, we will propose a novel hybrid densely connected U-Net (H-DenseUNet), which consists of a 2D DenseUNet for efficiently extracting intra-slice features and a 3D counterpart for hierarchically aggregating volumetric contexts, in the spirit of the auto-context algorithm, for tumor segmentation.
  • 52. 52 6. Future Works 2. A convolutional neural network to segment tumors, with radiomics features for survival prediction.
  • 53. 53 6. Future Works In this work, we will propose a convolutional neural network, trained on high-contrast images, that can transform the intensity distribution of brain lesions in its internal subregions. Specifically, a generative adversarial network (GAN) is extended to synthesize high-contrast images, followed by survival regression and classification using these abnormal tumor tissue segments and other relevant clinical features. The survival prediction step includes two representative survival prediction pipelines that combine different feature selection and regression approaches.
  • 54. 54 6. Future Works 3. Attention Residual Nested U-Net for brain tumor segmentation and survival prediction.
  • 55. 55 6. Future Works In this work, we will first explore the effectiveness of a recent attention module, the attention gate, for the brain tumor segmentation task; then we will replace the skip connections with nested paths. Finally, a random forest model is trained to predict the overall survival of patients.
  • 56. 56 References 1. S. Pereira, A. Pinto, V. Alves, and C. A. Silva, "Brain tumor segmentation using convolutional neural networks in MRI images," IEEE Trans. Med. Imaging, vol. 35, no. 5, pp. 1240–1251, 2016. 2. A. L. Maas, A. Y. Hannun, and A. Y. Ng, "Rectifier nonlinearities improve neural network acoustic models," in Proc. ICML Workshop on Deep Learning for Audio, Speech and Language Processing, vol. 28, p. 3, Jun. 2013. 3. M. Havaei et al., "Brain tumor segmentation with deep neural networks," Med. Image Anal., vol. 35, pp. 18–31, 2017. 4. O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," Lect. Notes Comput. Sci., vol. 9351, pp. 234–241, 2015. 5. D. E. Cahall, G. Rasool, and N. C. Bouaynaya, "Inception modules enhance brain tumor segmentation," vol. 13, pp. 1–8, Jul. 2019. 6. H. Li, A. Li, and M. Wang, "A novel end-to-end brain tumor segmentation method using improved fully convolutional networks," vol. 108, pp. 150–160, 2019. 7. N. Ibtehaz and M. S. Rahman, "MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation," Neural Networks, vol. 121, pp. 74–87, 2020.
  • 57. 57 References 8. X. Cheng, Z. Jiang, Q. Sun, and J. Zhang, "Memory-efficient cascade 3D U-Net for brain tumor segmentation," Lect. Notes Comput. Sci., vol. 11992, pp. 242–253, 2020. 9. T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, "Focal loss for dense object detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 2, pp. 318–327, Feb. 2020, doi: 10.1109/TPAMI.2018.2858826. 10. C. H. Sudre, W. Li, T. Vercauteren, S. Ourselin, and M. J. Cardoso, "Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations," in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (Lecture Notes in Computer Science), vol. 10553, Springer, 2017, pp. 240–248. 11. H. Li, A. Li, and M. Wang, "A novel end-to-end brain tumor segmentation method using improved fully convolutional networks," vol. 108, pp. 150–160, 2019. 12. K. Hu et al., "Brain tumor segmentation using multi-cascaded convolutional neural networks and conditional random field," IEEE Access, 2019. 13. J. Zhang, Z. Jiang, J. Dong, and Y. Hou, "Attention Gate ResU-Net for automatic MRI brain tumor segmentation," vol. 8, pp. 1–13, 2020. 14. W. Chen, B. Liu, S. Peng, J. Sun, and X. Qiao, "S3D-UNet: Separable 3D U-Net for brain tumor segmentation," in Lect. Notes Comput. Sci., 2019.
  • 58. 58 References 15. X. Zhao, Y. Wu, G. Song, Z. Li, Y. Zhang, and Y. Fan, "A deep learning model integrating FCNNs and CRFs for brain tumor segmentation," Med. Image Anal., vol. 43, pp. 98–111, 2018. 16. Y. Hu and Y. Xia, "3D deep neural network-based brain tumor segmentation using multimodality magnetic resonance sequences," in Int. MICCAI Brainlesion Workshop, vol. 10670, pp. 423–434, 2017. 17. N. M. Aboelenein, P. Songhao, A. Koubaa, A. Noor, and A. Afifi, "HTTU-Net: Hybrid Two Track U-Net for automatic brain tumor segmentation," IEEE Access, 2020. 18. M. V. S. Chandra, "Context-aware 3D CNNs for brain tumor segmentation," in Int. MICCAI Brainlesion Workshop, 2018, vol. 2, pp. 299–310. 19. Z. Zhou, M. M. Rahman Siddiquee, N. Tajbakhsh, and J. Liang, "UNet++: A nested U-Net architecture for medical image segmentation," Lect. Notes Comput. Sci., vol. 11045, pp. 3–11, 2018. 20. A. Kermi et al., "Brain tumor segmentation in multimodal 3D-MRI of BraTS'2018 datasets using deep convolutional neural networks," in Pre-conference Proceedings of the 2018 International MICCAI BraTS Challenge, 2018.
  • 59. 59 References 21. C. Zhou, C. Ding, X. Wang, Z. Lu, and D. Tao, "One-pass multi-task networks with cross-task guided attention for brain tumor segmentation," IEEE Trans. Image Process., vol. 29, pp. 4516–4529, 2020. 22. F. Lin, Q. Wu, J. Liu, D. Wang, and X. Kong, "Path aggregation U-Net model for brain tumor segmentation," Multimed. Tools Appl., 2020.