1. Introduction
A brain tumor is a growth of cancerous cells inside or around the brain. Many different categories of brain tumors exist. A few originate in the brain itself, in which case they are characterized as primary. Others spread to this location from elsewhere in the body through metastasis and are characterized as secondary.
• Glioblastoma (malignant brain tumor) cells have irregular shapes and can spread into the brain; glioblastoma is a leading cause of cancer death.
• Brain tumor segmentation seeks to separate healthy tissue from tumorous regions such as the enhancing tumor, necrosis, and the surrounding edema.
• Magnetic resonance imaging (MRI) provides detailed images of the brain and is one of the most common tests used to diagnose brain tumors, but manually segmenting the large quantity of data MRI produces is time-consuming. Automatic segmentation techniques are therefore needed.
The first four images from left to right show the MRI modalities used as input; the fifth image shows the ground truth labels, where green = edema, yellow = enhancing tumor, and red = necrosis and non-enhancing tumor.
Manual brain tumor segmentation is a challenging and tedious task for human experts due to the variability of tumor appearance, the unclear borders of the tumor, and the need to evaluate multiple MR images with different contrasts simultaneously. In addition, manual segmentation is often prone to significant intra- and inter-rater variability. Automatic or semi-automatic methods are therefore necessary.
2. Problem Description
• Gliomas and glioblastomas are much more difficult to localize: these tumors are often diffuse, poorly contrasted, and extended.
• Another fundamental difficulty with segmenting brain tumors is that they can appear anywhere in the brain, in almost any shape and size.
• The brain tumor segmentation problem exhibits severe class imbalance: healthy voxels comprise 98% of the total, 0.18% belong to necrosis, 1.1% to edema and non-enhancing tumor, and 0.38% to enhancing tumor.
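For illustration, these proportions can be converted into inverse-frequency class weights, one common remedy for such imbalance; the normalization so the rarest class gets weight 1.0 is our choice, not from the source:

```python
# Approximate voxel-class frequencies quoted above (BraTS-style data).
freqs = {
    "healthy": 0.98,
    "necrosis": 0.0018,
    "edema_non_enhancing": 0.011,
    "enhancing": 0.0038,
}

# Inverse-frequency weights, normalized so the rarest class gets weight 1.0.
inverse = {cls: 1.0 / f for cls, f in freqs.items()}
scale = max(inverse.values())
weights = {cls: w / scale for cls, w in inverse.items()}

for cls, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{cls}: {w:.4f}")
```

With these numbers, necrosis (the rarest class) is weighted roughly 500 times more heavily than healthy tissue, which is why loss functions such as the Generalized Dice Loss weight classes by inverse volume.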
• In addition, brain tumor MRI data obtained from clinical scans or synthetic databases are inherently complex. The MRI devices and acquisition protocols can vary dramatically from scan to scan, imposing intensity biases and other variations on each slice in the dataset. The need for several modalities to effectively segment tumor sub-regions adds further to this complexity.
Main drawbacks or shortcomings of recent studies
• One limitation in brain tumor segmentation is overfitting, which refers to a model that performs well on the training dataset but does not perform well on new data.
• The complexity of a neural network model is defined by both its structure and its parameters. We can therefore reduce the complexity of the network architecture by reducing the number of layers or parameters, or focus on methods that artificially increase the amount of training data instead of changing the network architecture.
• In addition, existing models for automatic brain segmentation using CNNs face a class imbalance in the labeled data. For example, in segmentation of a brain tumor or of a white-matter lesion, the normal brain region is much larger than the abnormal region.
• Recent studies on brain tumor segmentation still suffer from limited accuracy.
• Most of the proposed methods emphasized the shortcomings of working with deep CNN models. First, there is the computational requirement: analyzing, manipulating, and processing each voxel in a volume is computationally expensive.
3. Literature Reviews
A recent study on deep neural network architectures and their applications to medical image analysis was presented in [15]:
1. Pereira et al. [1] suggested a 2D CNN network with a small kernel size (3×3). They trained two distinct models, one for HGG and another for LGG. They also used a max-pooling layer of stride 2 and applied dropout to the dense layers only. The model uses the Leaky Rectified Linear Unit (LeakyReLU) activation function [2].
2. Havaei et al. [3] constructed a dual-path 2D CNN brain tumor segmentation network containing local and global paths that employ different convolution kernel sizes to extract different contextual feature information. However, patch-wise architectures lack spatial continuity and need large storage space, leading to low efficiency.
10/22/2023
3. Ronneberger et al. [4] designed a symmetric fully convolutional network called U-Net, which consists of a contracting path that extracts spatial image features and an expanding path that generates a segmentation map from the encoded features. U-Net has been widely used in various medical image segmentation tasks.
4. Cahall et al. [5] proposed a new image segmentation framework using Inception modules and the U-Net image segmentation architecture. Their framework includes two learning regions: intra-tumoral structures and glioma sub-regions. To achieve further improvement in performance, a Multi Inception block is used in each block to increase the network's learning capacity, and up-skip connections are also utilized to optimize the segmentation results [6].
5. The MultiResUNet, recently proposed by Ibtehaz and Rahman [7], combined a U-Net with residual Inception modules for multi-scale feature extraction; the authors applied their architecture to several multimodal medical imaging datasets.
6. Cheng et al. [8] presented a novel memory-efficient cascade 3D U-Net which achieved comparable segmentation accuracy with less memory and computation consumption.
4. Objectives
The research will focus on the following key objectives:
1. Building a Recurrent Convolutional Neural Network (RCNN) based on U-Net, as well as a Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net. The proposed models utilize the power of U-Net and residual networks.
2. Proposing a hybrid two-track U-Net, where each track has a different number of layers and utilizes a different kernel size, with a hybrid loss function for brain tumor segmentation that can solve the class-imbalanced data problem.
3. Proposing a novel Multi Inception Residual Nested U-Net that integrates residual and Inception modules; the encoder and decoder are connected via a sequence of nested pathways to enhance brain tumor segmentation and reduce the number of parameters.
4. Proposing a novel hybrid densely connected UNet (H-DenseUNet), which consists of a 2D DenseUNet for efficiently extracting intra-slice features and a 3D counterpart for aggregating volumetric contexts.
5. Exploring further studies in related areas and incorporating them into the current research problem to obtain better results.
5. Contributions
The contributions of this work can be summarized as follows.
• HTTU-Net not only extracts more semantic information but also gives more consideration to the information of small-scale brain tumors, which improves the segmentation of brain tumors.
• HTTU-Net also updates the U-Net network by adding batch normalization at the end of each block. In our architecture, the first track focuses on the tumor's form and size while the second track captures the contextual information. Each track consists of a different number of convolution blocks and uses a different kernel size to handle the different tumor sizes.
• We introduce a new hybrid loss function, combining the Focal Loss and Generalized Dice Loss functions, to mitigate the class imbalance.
• We demonstrate that the proposed strategy improves the precision of the original U-Net and also alleviates the issue of overfitting. We experiment with the BraTS 2018 dataset, and our architecture shows superior performance.
TWO-TRACK U-NET ARCHITECTURE
THE FIRST TRACK
The first track's contracting part consists of 5 convolutional blocks. For all convolutional layers, this track utilizes 3×3 kernels. The numbers of filters for the first, second, third, fourth, and fifth blocks are 64, 128, 256, 512, and 1024.
THE SECOND TRACK
The second track's contracting part consists of 4 convolutional blocks. Each block has two convolutional layers. 5×5 kernels are used for all layers in this track. The numbers of filters for the four blocks are 64, 128, 256, and 512.
HYBRID LOSS
• The selection of loss functions becomes more important, especially in the case of severe class imbalance in brain tumor segmentation problems.
• We apply the sum of the focal loss (FL) and the Generalized Dice Loss (GDL) to approach this issue:
HL = GDL + FL
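A minimal, framework-free sketch of this hybrid loss for a binary (foreground/background) case is given below. The γ = 2 focusing parameter and the inverse-square-volume GDL weights follow the cited definitions [9], [10]; the toy inputs are illustrative only:

```python
import math

def focal_loss(probs, labels, gamma=2.0):
    """Binary focal loss [9], averaged over voxels; probs = predicted
    foreground probabilities, labels = 0/1 ground truth."""
    eps = 1e-7
    total = 0.0
    for p, y in zip(probs, labels):
        pt = p if y == 1 else 1.0 - p            # probability of the true class
        total += -((1.0 - pt) ** gamma) * math.log(max(pt, eps))
    return total / len(probs)

def generalized_dice_loss(probs, labels):
    """Two-class GDL [10] with inverse-square-volume class weights."""
    eps = 1e-7
    num = den = 0.0
    for cls in (0, 1):
        ref = [1.0 if y == cls else 0.0 for y in labels]
        prd = [p if cls == 1 else 1.0 - p for p in probs]
        w = 1.0 / (sum(ref) + eps) ** 2          # rare classes get large weights
        num += w * sum(r * q for r, q in zip(ref, prd))
        den += w * sum(r + q for r, q in zip(ref, prd))
    return 1.0 - 2.0 * num / (den + eps)

def hybrid_loss(probs, labels):
    # HL = GDL + FL, as on the slide
    return generalized_dice_loss(probs, labels) + focal_loss(probs, labels)

print(round(hybrid_loss([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]), 4))
```

The focal term down-weights easy, well-classified voxels, while the GDL term re-balances the per-class contributions, which is why their sum targets the imbalance described earlier.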
Experimental Results for this work
EVALUATION METRICS
DSC = 2TP / (2TP + FP + FN)    (4)
Sensitivity = TP / (TP + FN)    (5)
Specificity = TN / (TN + FP)    (6)
h(A, B) = max_{a∈A} { min_{b∈B} { d(a, b) } }    (7)
where a and b are the points in sets A and B, respectively, and d(a, b) is the Euclidean distance between these points.
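These metrics can be sketched directly from the formulas above (using the standard TN-based definition of specificity); the toy masks and point sets are illustrative only:

```python
import math

def confusion_counts(pred, truth):
    """pred, truth: flattened binary masks (lists of 0/1)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    return tp, fp, fn, tn

def dsc(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)           # Eq. (4)

def sensitivity(tp, fn):
    return tp / (tp + fn)                        # Eq. (5)

def specificity(tn, fp):
    return tn / (tn + fp)                        # Eq. (6), TN-based form

def directed_hausdorff(A, B):
    """Eq. (7): h(A, B) = max over a in A of min over b in B of d(a, b)."""
    return max(min(math.dist(a, b) for b in B) for a in A)

pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
tp, fp, fn, tn = confusion_counts(pred, truth)
print(dsc(tp, fp, fn), sensitivity(tp, fn), specificity(tn, fp))
print(directed_hausdorff([(0, 0), (1, 0)], [(0, 0), (0, 2)]))
```

Note that h(A, B) is the directed Hausdorff distance; the symmetric Hausdorff-95 reported in the tables additionally takes the 95th percentile over both directions.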
Performance on BraTS’2018 Training Dataset
In our experiments, 160 subjects from the BraTS training dataset are used for training and 40 subjects for validation. We extract 25,000 multimodal patches from each case to form a 4,000,000-patch training set.
Fig. Sample segmentation results of four HGG cases from the BraTS’2018 training dataset. Labels are shown in different colors: green for edema, yellow for enhancing tumor, and red for necrosis and non-enhancing tumor.
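The patch counts above are consistent, as a trivial check confirms:

```python
# 160 training subjects x 25,000 patches per case
subjects = 160
patches_per_case = 25_000
total_patches = subjects * patches_per_case
print(f"{total_patches:,}")  # prints 4,000,000
```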
Table 1. Quantitative segmentation results on the BraTS’2018 training dataset using the Dice and Sensitivity metrics.

                 Dice                     Sensitivity
                 ET      WT      TC       ET      WT      TC
Mean             0.741   0.852   0.812    0.768   0.885   0.820
Std. Dev.        0.282   0.110   0.252    0.269   0.150   0.269
Median           0.801   0.879   0.878    0.859   0.908   0.874
25th quantile    0.743   0.840   0.784    0.752   0.868   0.803
75th quantile    0.846   0.905   0.897    0.951   0.954   0.948
Table 2. Quantitative segmentation results on the BraTS’2018 training dataset using the Specificity and Hausdorff distance metrics.

                 Specificity              Hausdorff95
                 ET      WT      TC       ET      WT      TC
Mean             0.999   0.998   0.997    3.304   8.250   3.301
Std. Dev.        0.042   0.023   0.042    1.824   3.315   1.872
Median           0.999   0.999   1        3.211   8.401   3.251
25th quantile    0.999   0.998   1        2.503   6.172   4.103
75th quantile    0.999   0.999   1        4.924   10.67   4.254
BraTS’2018 Testing Performance
Table 3. Quantitative segmentation results for testing on BraTS 2018 using the Dice and Sensitivity metrics.

                 Dice                     Sensitivity
                 ET      WT      TC       ET      WT      TC
Mean             0.745   0.865   0.808    0.78    0.883   0.80
Std. Dev.        0.211   0.103   0.223    0.286   0.151   0.263
Median           0.815   0.883   0.895    0.868   0.894   0.882
25th quantile    0.76    0.858   0.77     0.703   0.87    0.77
75th quantile    0.887   0.915   0.923    0.943   0.941   0.972

Table 4. Quantitative segmentation results for testing on BraTS 2018 using the Specificity and Hausdorff distance metrics.

                 Specificity              Hausdorff95
                 ET      WT      TC       ET      WT      TC
Mean             0.999   0.999   0.998    4.43    7.53    8.811
Std. Dev.        0.053   0.031   0.032    2.441   3.461   2.961
Median           0.999   0.999   0.999    4.29    5.871   7.12
25th quantile    0.999   0.998   0.998    3.20    3.76    5.55
75th quantile    1       0.999   0.999    5.09    5.09    9.05
Fig. Boxplots of DSC, Sensitivity, Specificity, and Hausdorff distance obtained on BraTS’2018. The ‘x’ marks the mean score; ‘●’ marks outliers.
Table. Comparison of our proposed model with the one-pathway models on the BraTS’2018 testing data.

Methods          ET      WT      TC
Original U-Net   0.69    0.852   0.794
First path       0.739   0.850   0.80
Second path      0.732   0.859   0.792
Two-pathways     0.745   0.865   0.808
Conclusion
• In this paper, we introduced an automatic approach for brain tumor segmentation using the 2D HTTU-Net architecture. The proposed technique has been quantitatively evaluated on the BraTS'2018 dataset. It contains two tracks; each one consists of a different number of convolution blocks and uses a different kernel size to handle the different tumor sizes.
• We also developed a new hybrid loss function to alleviate the class imbalance problem by combining the focal loss and Generalized Dice Loss functions. Higher performance is achieved through the HTTU-Net architecture, which handles brain tumors that can appear anywhere in the brain, in almost any shape and size.
• The evaluation of the proposed approach verifies that our results are very comparable to those obtained manually by experts.
The major contributions of this work are:
1. We propose an end-to-end MIResU-Net++ model for the brain tumor segmentation task. MIResU-Net++ extracts more abundant semantic information and, in addition, extracts information about small-scale brain tumors, which improves segmentation accuracy.
2. MIResU-Net++ integrates residual modules and Inception modules with the U-Net architecture to make the proposed network deeper and wider; in MIResU-Net++ the encoder and decoder sub-networks are connected through a series of nested pathways.
3. Experimental results are reported on two brain tumor segmentation datasets, BraTS 2019 and BraTS 2020. The results show that our models with Nested U-Net (U-Net++), Multi Inception Residual U-Net (MIResU-Net), and Multi Inception Residual Nested U-Net (MIResU-Net++) outperform the U-Net baseline. In addition, the proposed network is efficient and balances the trade-off between the number of parameters and segmentation accuracy.
Multi Inception Residual U-Net (MIResU-Net)
We modify the U-Net architecture with an Inception module. Moreover, we also add a residual connection due to its effectiveness in the segmentation of biomedical images [37]. Fig. 1(c) shows the Inception-Res block. The Inception-Res module implemented in our network includes multiple sets of 1×1, 7×1, and 1×7 convolutions.
Nested skip pathways (U-Net++)
We re-designed the skip pathways to transform the connectivity of the encoder and decoder sub-networks, and we use a dense InceptionRes block to increase the model's capacity for accurate brain tumor segmentation. As illustrated in Fig. 4.1(b), the skip pathway between InceptionRes Block(0,0) and InceptionRes Block(0,4) consists of a dense convolution block with three InceptionRes blocks.
Multi Inception Residual Nested U-Net (MIResU-Net++)
In the MIResU-Net++ model, as shown in Fig. 4.1(a), we replace the sequence of two convolutional layers with the proposed Multi Inception Res block.
We compute the value of W as follows:
W = α × F    (3)
Here, F is the number of filters in the corresponding layer of U-Net and α is a scalar coefficient. In our model we assign α = 1.8. Inside a MultiInceptionRes block, we assign [W/12], [W/6], [W/4], and [W/2] filters to the successive convolutional layers, similar to the U-Net architecture.
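A small sketch of this width allocation follows; reading the bracketed terms as floor division is our assumption:

```python
# Filter-width allocation inside a MultiInceptionRes block.
# Assumption: the bracketed terms [W/12], [W/6], [W/4], [W/2] denote
# integer (floor) division.
ALPHA = 1.8

def block_widths(F, alpha=ALPHA):
    """F: number of filters in the corresponding U-Net layer."""
    W = alpha * F                        # Eq. (3): W = alpha * F
    return [int(W / d) for d in (12, 6, 4, 2)]

for F in (32, 64, 128, 256, 512):
    print(F, block_widths(F))
```

Because the four fractions sum to W/12 + W/6 + W/4 + W/2 = W (up to rounding), the block's total width stays close to α × F, which is how the design keeps the parameter count comparable to U-Net.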
Experimental Results for this work
DATASETS
• In this work, we mainly adopt the BraTS 2019 and BraTS 2020 brain tumor MRI datasets for the performance evaluation.
• Evaluation results on the BraTS 2019 training and validation datasets are disseminated on the challenge leaderboard website.
Experiments on BraTS 2019 Validation Dataset
Table 4.7. Comparison of segmentation results with typical methods on the BraTS 2019 validation dataset.

                          DSC                      Sensitivity              Specificity
Methods                   Whole   Core    Enh.     Whole   Core    Enh.     Whole   Core    Enh.
K. Hu et al. [12]         0.882   0.748   0.718    0.907   0.76    0.868    0.991   0.996   0.994
Zhang et al. [13]         0.865   0.80    0.745    -       -       -        -       -       -
Hu et al. [16]            0.850   0.70    0.65     0.83    0.79    0.65     -       -       -
Abouelenien et al. [17]   0.865   0.80    0.745    0.883   0.80    0.78     0.999   0.998   0.999
Chandra et al. [18]       0.872   0.795   0.741    0.829   0.788   0.795    0.994   0.997   0.998
MIResUNet++ (ours)        0.865   0.864   0.806    0.885   0.884   0.842    0.992   0.966   0.966
FIGURE 2. Examples of segmentation results on the BraTS 2019 training dataset. From left to right: Flair image, ground truth, U-Net, and MIResU-Net++. Each color represents a tumor class: red, necrosis and non-enhancing; green, edema; and yellow, enhancing tumor.
Fig. 3. Boxplots of DSC, Sensitivity, and Specificity obtained on the BraTS’2019 validation data. The ‘x’ marks the mean score; ‘●’ marks outliers.
Conclusion
• In this paper, we presented a novel MIResU-Net++ model for the MRI brain tumor segmentation task by modifying the U-Net architecture. First, we embedded an Inception module and residual units into each block of U-Net to improve the segmentation performance of brain tumors. Then the encoder and decoder sub-networks are connected through a series of nested pathways.
• The proposed method was evaluated on the BraTS 2019 and BraTS 2020 datasets. Experimental results demonstrated that MIResU-Net++ outperformed U-Net and other typical brain tumor segmentation methods by a large margin.
• MIResU-Net++ can achieve comparable segmentation accuracy with fewer parameters.
6. Future Works
1. A novel hybrid densely connected UNet (H-DenseUNet) for tumor segmentation.
2. Convolutional neural network to segment tumors, with radiomics features for survival prediction.
3. Attention Residual Nested U-Net for brain tumor segmentation and survival prediction.
1. A novel hybrid densely connected UNet (H-DenseUNet) for tumor segmentation
In this work, we will propose a novel hybrid densely connected UNet (H-DenseUNet), which consists of a 2D DenseUNet for efficiently extracting intra-slice features and a 3D counterpart for hierarchically aggregating volumetric contexts in the spirit of the auto-context algorithm for tumor segmentation.
2. Convolutional neural network to segment tumors, with radiomics features for survival prediction
In this work, we will propose a convolutional neural network, trained on high-contrast images, that can transform the intensity distribution of brain lesions in their internal subregions. Specifically, a generative adversarial network (GAN) is extended to synthesize high-contrast images, followed by survival regression and classification using these abnormal tumor-tissue segments and other relevant clinical features. The survival prediction step includes two representative survival prediction pipelines that combine different feature selection and regression approaches.
In this work, we will first explore the effectiveness of a recent attention module, the attention gate, for the brain tumor segmentation task; then we will replace the skip connections with nested paths. Finally, a random forest model is trained to predict the overall survival of patients.
References
1. S. Pereira, A. Pinto, V. Alves, and C. A. Silva, “Brain Tumor Segmentation Using
Convolutional Neural Networks in MRI Images,” IEEE Trans. Med. Imaging, vol. 35, no. 5,
pp. 1240–1251, 2016.
2. A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” in Proc. ICML Workshop on Deep Learning for Audio, Speech and Language Processing, vol. 28, p. 3, Jun. 2013.
3. M. Havaei et al., “Brain tumor segmentation with Deep Neural Networks,” Med. Image
Anal., vol. 35, pp. 18–31, 2017.
4. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical
image segmentation,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell.
Lect. Notes Bioinformatics), vol. 9351, pp. 234–241, 2015.
5. D. E. Cahall, G. Rasool, and N. C. Bouaynaya, “Inception Modules Enhance Brain Tumor
Segmentation,” vol. 13, no. July, pp. 1–8, 2019.
6. H. Li, A. Li, and M. Wang, “A novel end-to-end brain tumor segmentation method using
improved fully convolutional networks,” vol. 108, no. August 2018, pp. 150–160, 2019.
7. N. Ibtehaz and M. S. Rahman, “MultiResUNet: Rethinking the U-Net architecture for
multimodal biomedical image segmentation,” Neural Networks, vol. 121, pp. 74–87,
2020.
8. X. Cheng, Z. Jiang, Q. Sun, and J. Zhang, “Memory-efficient cascade 3d u-net for brain
tumor segmentation,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell.
Lect. Notes Bioinformatics), vol. 11992 LNCS, no. December 2019, pp. 242–253, 2020.
9. T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 2, pp. 318–327, Feb. 2020, doi: 10.1109/TPAMI.2018.2858826.
10. C. H. Sudre, W. Li, T. Vercauteren, S. Ourselin, and M. J. Cardoso, “Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations,” in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (Lecture Notes in Computer Science), vol. 10553. Springer, 2017, pp. 240–248.
11.H. Li, A. Li, and M. Wang, “A novel end-to-end brain tumor segmentation method using
improved fully convolutional networks,” vol. 108, no. August 2018, pp. 150–160, 2019.
12. K. Hu et al., “Brain tumor segmentation using multi-cascaded convolutional neural networks and conditional random field,” IEEE Access, vol. 7, 2019.
13. J. Zhang, Z. Jiang, J. Dong, and Y. Hou, “Attention Gate ResU-Net for automatic MRI brain tumor segmentation,” IEEE Access, vol. 8, pp. 1–13, 2020.
14. W. Chen, B. Liu, S. Peng, J. Sun, and X. Qiao, “S3D-UNET: Separable 3D U-Net for brain
tumor segmentation,” in Lecture Notes in Computer Science (including subseries Lecture
Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2019.
15. X. Zhao, Y. Wu, G. Song, Z. Li, Y. Zhang, and Y. Fan, “A deep learning model integrating
FCNNs and CRFs for brain tumor segmentation,” Med. Image Anal., vol. 43, pp. 98–111,
2018.
16. Y. Hu and Y. Xia, “3D deep neural network-based brain tumor segmentation using multimodality magnetic resonance sequences,” in Int. MICCAI Brainlesion Workshop, vol. 10670, Dec. 2017, pp. 423–434.
17. N. M. Aboelenein, P. Songhao, A. Koubaa, A. Noor, and A. Afifi, “HTTU-Net: Hybrid Two Track U-Net for automatic brain tumor segmentation,” IEEE Access, vol. 8, 2020.
18. M. V. S. Chandra, “Context aware 3d cnns for brain tumor segmentation,” in
International MICCAI Brainlesion Workshop, 2018, vol. 2, pp. 299–310.
19. Z. Zhou, M. M. Rahman Siddiquee, N. Tajbakhsh, and J. Liang, “Unet++: A nested u-net
architecture for medical image segmentation,” Lect. Notes Comput. Sci. (including
Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 11045 LNCS, pp. 3–11,
2018.
20. A. Kermi et al., “Brain tumor segmentation in multimodal 3D-MRI of BraTS’2018 datasets using deep convolutional neural networks,” in Pre-Conference Proceedings of the 2018 International MICCAI BraTS Challenge, 2018.
21. C. Zhou, C. Ding, X. Wang, Z. Lu, and D. Tao, “One-Pass Multi-Task Networks With Cross-
Task Guided Attention for Brain Tumor Segmentation,” IEEE Trans. Image Process., vol.
29, pp. 4516–4529, 2020.
22. F. Lin, Q. Wu, J. Liu, D. Wang, and X. Kong, “Path aggregation U-Net model for brain
tumor segmentation,” Multimed. Tools Appl., 2020.