TELKOMNIKA Telecommunication, Computing, Electronics and Control
Vol. 18, No. 3, June 2020, pp. 1310~1318
ISSN: 1693-6930, accredited First Grade by Kemenristekdikti, Decree No: 21/E/KPT/2018
DOI: 10.12928/TELKOMNIKA.v18i3.14753
Journal homepage: http://journal.uad.ac.id/index.php/TELKOMNIKA
UNet-VGG16 with transfer learning for MRI-based brain tumor segmentation
Anindya Apriliyanti Pravitasari1, Nur Iriawan2, Mawanda Almuhayar3, Taufik Azmi4, Irhamah5, Kartika Fithriasari6, Santi Wulan Purnami7, Widiana Ferriastuti8
1,2,3,4,5,6,7Department of Statistics, Institut Teknologi Sepuluh Nopember, Indonesia
1Department of Statistics, Universitas Padjajaran, Indonesia
8Department of Radiology, Universitas Airlangga, Indonesia
Article Info

Article history:
Received Aug 15, 2019
Revised Jan 29, 2020
Accepted Feb 26, 2020

Keywords:
Fully convolutional network
Image segmentation
Transfer learning
U-Net
VGG16

ABSTRACT

A brain tumor is a deadly disease whose surgical treatment demands high accuracy. Brain tumors can be detected through magnetic resonance imaging (MRI). Image segmentation of the MRI brain tumor aims to separate the tumor area (the region of interest, or ROI) from the healthy brain and to provide a clear boundary of the tumor. This study classifies ROI and non-ROI pixels using a fully convolutional network with a new architecture, namely UNet-VGG16. This architecture is a hybrid of U-Net and VGG16 with transfer learning, which simplifies the U-Net architecture. The method reaches a high accuracy of about 96.1% on the learning dataset. Validation is done by calculating the correct classification ratio (CCR), which compares the segmentation result with the ground truth. The CCR values show that UNet-VGG16 can recognize the brain tumor area with a mean CCR of about 95.69%.

This is an open access article under the CC BY-SA license.
Corresponding Author:
Nur Iriawan,
Department of Statistics,
Institut Teknologi Sepuluh Nopember,
Kampus ITS Sukolilo-Surabaya 60111, Indonesia.
Email: nur_i@statistika.its.ac.id
1. INTRODUCTION
A brain tumor is the 15th deadliest disease in Indonesia among all types of cancer. According to
the WHO, there were 5,323 cases of brain and nervous system tumors in Indonesia, with 4,229 deaths, during 2018 [1]. For this reason, brain tumors are an important research topic. Brain tumors can be detected with the medical equipment called magnetic resonance imaging (MRI). General Hospital (RSUD) Dr. Soetomo Surabaya provides radiological examination services using 1.5 Tesla and 3 Tesla MRI machines. The higher the Tesla number, the better the image quality, but also the higher the cost. The 1.5 Tesla MRI is the preferred service, since it has the lowest cost and is also covered by the Indonesian government's social security (BPJS). However, medical treatment requires better image quality. Image segmentation is one methodology that can provide a better view of brain tumors by separating the tumor area (the region of interest, or ROI) from the healthy brain and providing a clear boundary of the tumor. A clear boundary helps medical treatment, especially surgery, to proceed safely without damaging healthy parts of the brain. This study segments MRI brain tumors to give a better view of the images produced by a 1.5 Tesla machine.
Much research has been developed for image segmentation. Several methods use clustering
as the basis of modeling, while others use classification. The aim is to obtain the best model that can recognize
the tumor area more precisely. Previous studies that use clustering as the basis of segmentation were performed by [2], which uses a genetic algorithm, and [3], which employs fuzzy clustering, the Otsu method, and K-means clustering to segment vehicle images. Model-based clustering is performed by [4-6] in the form of a finite mixture model to segment MRI brain tumor images. Different from these previous studies, this study uses a classification method with a neural network. A neural network is used because it can mimic the way human neurons recognize images. Previous studies under the NN approach were performed by [7-9], which used convolutional neural networks (CNN) and deep learning for image segmentation. This study uses the fully convolutional network (FCN), since it performs very well for semantic segmentation [10, 11].
The FCN uses the U-Net architecture, following the recommendation of [12]. The U-Net model can achieve very good performance on very different biomedical segmentation applications. This paper uses a real dataset from General Hospital (RSUD) Dr. Soetomo. The limited number of brain tumor patients who come to Dr. Soetomo also limits the size of the dataset analyzed. The U-Net is hybridized with the VGG16 architecture as its encoder (contracting) layer [13] to simplify the architecture and overcome the problem of classification with a small amount of data. In addition, the complexity of U-Net often makes its execution time-consuming, and performance is greatly affected when the computer specifications are inadequate. In view of these shortcomings, and as the contribution of this paper, we apply transfer learning during the training of the U-Net and VGG16 hybrid. The proposed model or architecture is named UNet-VGG16 with transfer learning. Moreover, we compare UNet-VGG16 with several scenarios of the U-Net model and select the best model based on the values of loss and accuracy. The correct classification ratio (CCR) is calculated to compare the segmentation results of the best model with the ground truth as the evaluation measure. All data and ground truth used in this study have received medical approval from Dr. Soetomo Surabaya.
2. RESEARCH METHOD
2.1. Fully convolutional network: U-Net with transfer learning
The fully convolutional network (FCN) is one of the deep learning models used in semantic segmentation. The FCN was proposed by [14]; it performs pixel-to-pixel mapping and uses the ground truth to determine the class of each pixel. The FCN is a development of the classical convolutional neural network (CNN). A CNN consists of convolution, pooling, and fully connected layers as its main layers; in the FCN, the fully connected layer is replaced by a convolution layer. The FCN can therefore classify every pixel in the image and make predictions on arbitrary-sized inputs. The input data is an image with three dimensions, namely height (h), width (w), and depth (d). The h and w describe the spatial dimensions, while d is the feature or channel dimension. The first layer of the image has dimension h × w with d color channels (d = 1 when it contains only grayscale intensity, or d = 3 when it has red, green, and blue (RGB) intensities). Suppose we have an input data vector x_ij at location (i, j) of a particular layer. The output vector y_ij can be calculated with the following formula [14]:
$$y_{ij} = f_{ks}\left(\{x_{si+\delta i,\, sj+\delta j}\},\ 0 \le \delta i,\ \delta j \le k\right)$$ (1)
where k is the kernel size, s is the subsampling (stride) factor, and f_ks specifies the type of layer: a matrix multiplication for convolution or average pooling, a spatial max for max pooling, or another type of layer. Applying this kind of layer significantly reduces the number of parameters of the network. Moreover, the network can learn the correlation between neighboring pixels [15]. Figure 1 shows the structure of the FCN. As seen in Figure 1, the FCN can efficiently learn to make dense predictions for per-pixel tasks such as semantic segmentation [14].
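To make (1) concrete, the following minimal NumPy sketch instantiates f_ks as a spatial max (max pooling) with kernel size k and subsampling factor s; the input shape is an illustrative assumption rather than the paper's data.

```python
import numpy as np

def f_ks_maxpool(x, k=2, s=2):
    """f_ks from (1) instantiated as max pooling: a spatial max over
    k x k windows taken with stride s (illustrative sketch only)."""
    h, w, d = x.shape
    out_h, out_w = (h - k) // s + 1, (w - k) // s + 1
    y = np.zeros((out_h, out_w, d))
    for i in range(out_h):
        for j in range(out_w):
            # y_ij depends only on the k x k input patch starting at (s*i, s*j)
            y[i, j] = x[s * i:s * i + k, s * j:s * j + k].max(axis=(0, 1))
    return y

x = np.random.rand(128, 128, 1)    # grayscale MRI-like slice (d = 1)
print(f_ks_maxpool(x).shape)       # (64, 64, 1)
```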
Several architectures have been built on the FCN; this study uses the U-Net architecture, first proposed by [12], which works very well for biomedical image segmentation. Figure 2 visualizes the U-Net architecture. Its U shape is the reason behind its name, and the letter U contains two paths. The left path is called the encoder (contracting path) and the right path is the decoder (expanding path). The encoder is a network whose output is a feature map/vector that holds the information representing the input. The decoder, which has the same structure but in the opposite direction, takes the feature maps from the encoder and produces an output that matches the actual input or intended output. The encoder path reduces the size of the input matrix while increasing the number of feature maps, whereas the decoder path returns the matrix to its original size while reducing the number of feature maps. The segmentation results can therefore be compared with the ground truth at every pixel.
Figure 1. Structure of fully convolutional network (sourced from [14])
Figure 2. The U-Net architecture (example for 32×32 pixels at the lowest resolution, sourced from [12])
U-Net transmits the feature map from each level of the contracting path to the corresponding level of the expanding path, so that the classifier can consider features of varying scales and complexities when making its decision. Therefore, U-Net can learn from relatively small training sets [16]. Because of its complexity, however, the U-Net architecture often takes a long time to execute. To deal with this problem, this study hybridizes the U-Net architecture with transfer learning.
Transfer learning is an approach in which pre-trained models are used as a starting point for computer vision and language processing tasks, since developing neural network models for these problems from scratch requires extensive computing and time resources. The aim of transfer learning is to improve learning on the target task by using knowledge from the source task. Transfer learning is an effective technique for reducing training time [17] and is commonly applied when developing deep learning models for image classification problems.
Moreover, to simplify the U-Net architecture, several CNN architectures were considered for hybridization with U-Net, such as LeNet [18], AlexNet [19], ZFNet [20], and VGG-Net [21]. VGG-Net confirms that a smaller kernel size and a deeper CNN can improve model performance. The VGG-Net architecture, shown in Figure 3, is quite similar to the contracting path of U-Net; therefore, this study chooses VGG-Net to replace the encoder path of U-Net as the hybrid of these two powerful architectures. The new architecture is discussed in the results and analysis.
Figure 3. The VGG16 architecture (adapted from [21])
This study uses loss and accuracy to measure model performance. Loss is the error between the predicted and actual values. For two-class problems in machine learning, the binary cross-entropy loss function is used to calculate the loss or error [22]. The binary cross-entropy is given by (2).
$$H_p(q) = -\frac{1}{N}\sum_{i=1}^{N}\left[z_i \log\left(p(z_i)\right) + (1 - z_i)\log\left(1 - p(z_i)\right)\right]$$ (2)
where N is the number of data points, z_i is the class label taking the value 0 or 1, and p(z_i) is the predicted probability for z_i. Accuracy is the closeness between the predicted and actual values. To determine the accuracy, we use the confusion matrix in Table 1 [22]. The formula for calculating accuracy is shown in (3).
$$\mathrm{accuracy} = \frac{TP + TN}{TP + FP + TN + FN}$$ (3)
Table 1. Confusion matrix
                          Predicted Value
Actual Value      True                      False
True              TP (True Positive)        FN (False Negative)
False             FP (False Positive)       TN (True Negative)
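As a quick illustration of (2) and (3), the following NumPy sketch computes the binary cross-entropy and the accuracy for a toy vector of pixel labels and predicted probabilities; the toy values are assumptions for demonstration only.

```python
import numpy as np

def binary_cross_entropy(z, p, eps=1e-7):
    """H_p(q) from (2): z holds the 0/1 labels, p the predicted probabilities."""
    p = np.clip(p, eps, 1.0 - eps)            # avoid log(0)
    return -np.mean(z * np.log(p) + (1 - z) * np.log(1 - p))

def accuracy(z, p, threshold=0.5):
    """Accuracy from (3), obtained by thresholding the probabilities."""
    pred = (p >= threshold).astype(int)
    tp = np.sum((pred == 1) & (z == 1))
    tn = np.sum((pred == 0) & (z == 0))
    fp = np.sum((pred == 1) & (z == 0))
    fn = np.sum((pred == 0) & (z == 1))
    return (tp + tn) / (tp + fp + tn + fn)

z = np.array([1, 0, 1, 1, 0])                  # toy ground-truth pixel labels
p = np.array([0.9, 0.2, 0.7, 0.4, 0.1])        # toy predicted probabilities
print(binary_cross_entropy(z, p), accuracy(z, p))
```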
2.2. Training and optimization
The optimizer used in this paper is the adaptive moment estimator (Adam), which estimates the parameters x_t. Adam is a combination of two methods, AdaGrad and RMSProp [23]. Adam utilizes two moment variables: v_t as the first moment (the mean) and s_t as the second moment (the uncentered variance) of the gradients, respectively. Given the hyper-parameters 0 ≤ β1 < 1 and 0 ≤ β2 < 1 and using an exponentially weighted moving average (EWMA), the Adam estimator is constructed in Algorithm 1 as follows [24].
Algorithm 1. Adam estimator
a. Initialize the values of β1, β2 ∈ [0, 1), v_0, s_0, the learning rate η, and the stochastic objective function f(x_t) with parameters x_t.
b. Calculate the gradient of f(x_t) with the following formula:
   $g_t = \nabla_x f_t(x_{t-1})$ (4)
c. Update the first and second moments with (5) and (6):
   $v_t = \beta_1 v_{t-1} + (1 - \beta_1) g_t$ (5)
   $s_t = \beta_2 s_{t-1} + (1 - \beta_2)\, g_t \odot g_t$ (6)
d. Calculate the bias corrections with (7) and (8):
   $\hat{v}_t = v_t / (1 - \beta_1^t)$ (7)
   $\hat{s}_t = s_t / (1 - \beta_2^t)$ (8)
e. Re-adjust the learning rate for each element of the model parameters using element-wise operations with the bias-corrected variables from (7) and (8), as in the following formula:
   $g'_t = \eta \hat{v}_t / (\sqrt{\hat{s}_t} + \varepsilon)$ (9)
   where η is the learning rate and ε is an error criterion set to 10^-8.
f. Update the parameters x_t as in (10):
   $x_t = x_{t-1} - g'_t$ (10)
g. Repeat steps b to f until the parameters x_t converge.
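The update in Algorithm 1 can be written compactly as the following NumPy sketch; the toy quadratic objective and the larger learning rate are illustrative assumptions, not the paper's training setup (which applies η = 0.0001, β1 = 0.9, β2 = 0.999 to the network weights).

```python
import numpy as np

def adam_step(x, grad, v, s, t, eta=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One iteration of Algorithm 1 (steps b-f) for a NumPy parameter array x."""
    v = beta1 * v + (1 - beta1) * grad             # (5) first moment (EWMA of gradient)
    s = beta2 * s + (1 - beta2) * grad * grad      # (6) second moment (EWMA of squared gradient)
    v_hat = v / (1 - beta1 ** t)                   # (7) bias-corrected first moment
    s_hat = s / (1 - beta2 ** t)                   # (8) bias-corrected second moment
    g_adj = eta * v_hat / (np.sqrt(s_hat) + eps)   # (9) element-wise adjusted step
    return x - g_adj, v, s                         # (10) parameter update

# Toy usage on f(x) = ||x||^2, whose gradient is 2x (illustrative only).
x = np.array([1.0, -2.0])
v, s = np.zeros_like(x), np.zeros_like(x)
for t in range(1, 5001):
    x, v, s = adam_step(x, 2 * x, v, s, t)
print(x)  # oscillates within roughly eta of the minimum at [0, 0]
```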
Unlike classical stochastic gradient descent, which maintains a single learning rate for all weight updates that does not change during training, Adam as in Algorithm 1 maintains a learning rate for each network weight (parameter) and adapts it separately as learning unfolds. In addition, the algorithm calculates an exponential moving average of the gradient and of the squared gradient, with the hyper-parameters β1 and β2 controlling their decay rates. The initialization is set to a learning rate of η = 0.0001, β1 = 0.9, and β2 = 0.999. The loss function is binary cross-entropy, matching the binary classification setup; binary cross-entropy combines a sigmoid activation with the cross-entropy loss.
In this study, we divide the data into three partitions. We sample without replacement to obtain randomized data for training, validation, and testing. The three sets are divided using a ratio of 80:10:10. The amount of data in each set is shown in Figure 4.
Figure 4. Division of the data
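The 80:10:10 split without replacement can be reproduced with a simple sketch such as the one below; the total of 160 slices is an assumption chosen only to be consistent with the 16 testing images reported in the results, and the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)          # seed is an assumption, for reproducibility
n_slices = 160                           # hypothetical number of MRI slices
indices = rng.permutation(n_slices)      # sampling without replacement

n_train = int(0.8 * n_slices)            # 80:10:10 split
n_val = int(0.1 * n_slices)
train_idx = indices[:n_train]
val_idx = indices[n_train:n_train + n_val]
test_idx = indices[n_train + n_val:]

print(len(train_idx), len(val_idx), len(test_idx))   # 128 16 16
```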
2.3. Evaluation
The correct classification ratio (CCR) is calculated as the evaluation measure. The CCR value is used to find out whether the ROI from the segmentation results is in accordance with the ground truth. The greater the CCR value, the better the segmentation result, and vice versa. The formula of the CCR is shown in (11) [25].
$$CCR = \sum_{j=1}^{2} \frac{|GT_j \cap Seg_j|}{|GT|}$$ (11)

where GT_j is the ground truth for the non-ROI (j = 1) and the ROI (j = 2); Seg_j for j = 1 is the set of pixels segmented by the model as the non-ROI area, while for j = 2 it is the set of pixels segmented as the ROI; and GT = GT_1 ∪ GT_2.
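For a single binary slice, (11) reduces to the fraction of pixels on which the segmentation and the ground truth agree, since GT is the whole image. A minimal NumPy sketch, with illustrative toy masks, is given below.

```python
import numpy as np

def ccr(gt_mask, seg_mask):
    """CCR from (11) for binary masks: 1 = ROI (tumor), 0 = non-ROI."""
    gt_mask = gt_mask.astype(bool)
    seg_mask = seg_mask.astype(bool)
    total = gt_mask.size                                    # |GT| = all pixels
    agree_roi = np.logical_and(gt_mask, seg_mask).sum()     # |GT_2 ∩ Seg_2|
    agree_non = np.logical_and(~gt_mask, ~seg_mask).sum()   # |GT_1 ∩ Seg_1|
    return (agree_roi + agree_non) / total

gt = np.zeros((8, 8), dtype=int); gt[2:5, 2:5] = 1          # toy ground truth
seg = np.zeros((8, 8), dtype=int); seg[2:5, 3:6] = 1        # toy segmentation
print(ccr(gt, seg))                                          # fraction of matching pixels
```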
3. RESULTS AND ANALYSIS
3.1. The proposed architecture
U-Net is a CNN architecture used specifically for image segmentation. The complexity of U-Net (31,031,685 parameters in this research) affects the execution time, and on several computers with limited specifications the U-Net architecture cannot be run at all. To overcome this problem, we propose
a new model that reduces the layers and parameters of U-Net by combining it with another architecture, namely VGG16. VGG16 is chosen because of its similarity to U-Net's contracting path and because it has fewer parameters than U-Net. In addition, pre-trained VGG16 weights are easily accessible, so we employ these weights in the new model.
Models used for segmentation frequently consist of a contracting layer and an expansion layer. In this study, we modified VGG16 to resemble the U-Net architecture by adding an expansive path, consisting of several upsampling and convolution layers, at the end of the VGG16 architecture. This is done until the overall architecture is symmetrical and resembles the letter U. Therefore, the architecture of the UNet-VGG16 model has a contracting layer, which is VGG16, and an expansion layer that is added afterward. With this new architecture, the number of parameters is reduced to 17,040,001, of which about 2,324,353 are trainable.
The MRI brain tumor images are trained using the UNet-VGG16 model with the transfer learning method. This method freezes the contracting layers in UNet-VGG16 so that their weights are not updated during training; instead, we use the weights of the convolution layers of the pre-trained VGG16 model. The goal is to reduce the computation and speed up the training time of the model. Figure 5 shows the architecture of the proposed model, namely UNet-VGG16 with transfer learning, while Figure 6 visualizes the image segmentation process under the newly proposed architecture.
Figure 5. The architecture of UNet-VGG16 with transfer learning
Figure 6. The segmentation process under UNet-VGG16 with transfer learning
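To make the description above concrete, the following is a minimal Keras sketch of the UNet-VGG16 idea: an ImageNet-pretrained VGG16 encoder that is frozen (transfer learning) plus a small expansion path of upsampling and convolution layers with skip connections. The 224×224×3 input shape, the chosen skip-connection layers, and the decoder filter sizes are illustrative assumptions; the sketch does not reproduce the paper's exact 17,040,001-parameter architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

def build_unet_vgg16(input_shape=(224, 224, 3)):
    encoder = VGG16(include_top=False, weights="imagenet", input_shape=input_shape)
    encoder.trainable = False                      # transfer learning: freeze the encoder

    # skip connections taken from the end of each VGG16 block (assumed choice)
    skips = [encoder.get_layer(name).output
             for name in ("block1_conv2", "block2_conv2",
                          "block3_conv3", "block4_conv3")]
    x = encoder.get_layer("block5_conv3").output   # bottleneck features

    for skip, filters in zip(reversed(skips), (512, 256, 128, 64)):
        x = layers.UpSampling2D(2)(x)              # expansion path
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    out = layers.Conv2D(1, 1, activation="sigmoid")(x)   # ROI probability per pixel
    return Model(encoder.input, out)

model = build_unet_vgg16()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
```

Freezing the encoder (encoder.trainable = False) leaves only the decoder weights trainable, which mirrors the roughly 2.3 million trainable parameters reported above.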
3.2. Choosing the best model
In this study, we compare the proposed model with the previous state-of-the-art model. The U-Net architecture is developed with various scenarios so that several alternative models are obtained, which are then compared for accuracy with one another and with the proposed model. These modified U-Net scenarios are used because the original U-Net model cannot run on our computer, which has an Intel Core i7 processor, 32 GB RAM, a 128 GB SSD, and no GPU or VRAM. Table 2 gives a breakdown of the number of convolution layers and convolution blocks in each U-Net scenario.
All models are built in the Python programming language with the TensorFlow, Keras, and NumPy libraries. In the training process, we use 100 epochs, executed on a computer with an Intel Core i7 processor, 32 GB RAM, a 128 GB SSD, and no GPU or VRAM. Each epoch of the proposed model takes 80 minutes of processing time, longer than the four U-Net scenarios because the proposed model has more parameters than the modified U-Nets.
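Continuing the sketch from section 3.1, a hypothetical training call matching the setup described here (100 epochs, Adam with learning rate 0.0001, binary cross-entropy) might look as follows; the placeholder arrays and the batch size are assumptions, not the paper's actual data pipeline.

```python
import numpy as np

# Random placeholders standing in for the MRI slices and their binary
# ground-truth masks; replace with the real training/validation arrays.
x_train = np.random.rand(128, 224, 224, 3).astype("float32")
y_train = np.random.randint(0, 2, (128, 224, 224, 1)).astype("float32")
x_val = np.random.rand(16, 224, 224, 3).astype("float32")
y_val = np.random.randint(0, 2, (16, 224, 224, 1)).astype("float32")

# `model` is the compiled UNet-VGG16 sketch from section 3.1.
history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=100, batch_size=8)   # batch size is an assumption
```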
Table 2. Convolution scenarios for U-Net
Scenario   Conv layers        Conv blocks   Parameters   Time/epoch
Model 1    1 Conv per block   5 blocks      3,929,985    23 min
Model 2    1 Conv per block   4 blocks      980,097      20 min
Model 3    2 Conv per block   5 blocks      7,862,113    34 min
Model 4    2 Conv per block   4 blocks      1,962,337    32 min
Figure 7 shows the learning curves of the four U-Net scenarios compared with the UNet-VGG16 segmentation model with transfer learning. Figure 7 (a) compares the loss values of the U-Net scenarios and the proposed model, while Figure 7 (b) compares their accuracy. Based on this figure, it can be seen that during the training process of 100 epochs, the proposed model provides the minimum loss value and the maximum accuracy compared with the four U-Net scenarios. Moreover, the loss and accuracy of the proposed model converge faster and are more stable, with smoother movements, compared with the loss and accuracy of the U-Net models, which remain volatile during the training process. Table 3 shows the overall performance of each model.
Figure 7. (a) Loss and (b) accuracy of the four U-Net scenarios compared to UNet-VGG16 with transfer learning
From Table 3 and Figure 7, the minimum loss and maximum accuracy are achieved by the proposed model, with a loss value of 0.054 and an accuracy of 0.961. Based on these results, the proposed model performs better than the four U-Net scenarios. Since UNet-VGG16 with transfer learning is the best model, the following analysis uses it as the basis for MRI brain tumor segmentation.
3.3. Segmentation results based on the best model
The segmentation is applied to the 16 testing images using UNet-VGG16 with transfer learning. The segmentation results are visualized in Figure 8. The figure shows that the segmentation results of the sample sequence can recognize the tumor area as the ROI across various tumor sizes and locations, both on the left and the right side of the brain. For the testing data, we calculate the CCR as the evaluation measure, summarized in Table 4. The segmentation results approach the ground truth very well, since all CCR values are above 90%. The grand mean CCR for all testing data reaches 95.69%.
Table 3. Performance comparison of each model
Model                  Loss     Accuracy
U-Net:
  Model 1              0.124    0.942
  Model 2              0.083    0.951
  Model 3              0.085    0.953
  Model 4              0.244    0.938
UNet-VGG16 with TL     0.054    0.961
Table 4. Summary of CCR for the testing data
Test no.   CCR        Test no.   CCR
1          0.957184   9          0.957489
2          0.970047   10         0.965225
3          0.916245   11         0.961166
4          0.935806   12         0.905029
5          0.982773   13         0.933807
6          0.981598   14         0.928635
7          0.979904   15         0.985748
8          0.974136   16         0.975601
Figure 8. The segmentation results of the 16 testing data (each panel shows the input slice, the ground truth, and the segmentation result for data test 1 through data test 16)
4. CONCLUSION
Based on the results and analysis, we conclude that the proposed model, namely UNet-VGG16 with transfer learning, runs well on a computer with an Intel Core i7 processor, 32 GB RAM, a 128 GB SSD, and no GPU or VRAM. The proposed model performs better than the U-Net model (in four scenarios), since it achieves the minimum loss and the maximum accuracy. The segmentation results of the proposed model approach the ROI target of each brain tumor MRI image very well; the segmentation of the testing data achieves a mean CCR of 95.69%. For future research, different architectures or convolutional block scenarios could be explored to obtain more alternative models. In addition, the optimal number of epochs still needs to be determined to achieve the optimal computing time for building the training model.
ACKNOWLEDGEMENTS
The authors are grateful to the Directorate for Research and Community Service (DRPM) Ministry
of Research, Technology and Higher Education Indonesia which supports this research under PT research
grant no. 944/PKS/ITS/2019.
REFERENCES
[1] Globocan, “Indonesia Source: Globocan 2018,” 2019. [Online]. Available: http://gco.iarc.fr/today/data/factsheets/populations/360-Indonesia-fact-sheets.pdf.
[2] V. Jaiswal, V. Sharma, S. Varma, “An Implementation of Novel Genetic-Based Clustering Algorithm for Color Image Segmentation,” TELKOMNIKA Telecommunication Computing Electronics and Control, vol. 17, no. 3, pp. 1461-1467, June 2019.
[3] P. B. Prakoso, Y. Sari, “Vehicle Detection using Background Subtraction and Clustering Algorithms,” TELKOMNIKA Telecommunication Computing Electronics and Control, vol. 17, no. 3, pp. 1393-1398, June 2019.
[4] N. Iriawan, A. A. Pravitasari, K. Fithriasari, Irhamah, S. W. Purnami, W. Ferriastuti., “Comparative Study of Brain
Tumor Segmentation using Different Segmentation Techniques in Handling Noise,” 2018 International Conference
on Computer Engineering, Network and Intelligent Multimedia (CENIM), Surabaya, Indonesia, pp. 289-293, 2018.
[5] A. A. Pravitasari, M. A. I. Safa, N. Iriawan, Irhamah, K. Fithriasari, S. W. Purnami, W. Ferriastuti, “MRI-Based
Brain Tumor Segmentation using Modified Stable Student’s t from Burr Mixture Model with Bayesian Approach,”
Malaysian Journal of Mathematical Sciences, vol 13, no 3, pp. 297-310, 2019.
[6] A. A. Pravitasari, N. I. Nirmalasari, N. Iriawan, Irhamah, K. Fithriasari, S. W. Purnami, W. Ferriastuti, “Bayesian
Spatially constrained Fernandez-Steel Skew Normal Mixture model for MRI-based Brain Tumor Segmentation,”
In AIP Conference Proceedings, vol. 2194, no. 1, December 2019.
[7] M. Pathak, N. Srinivasu, V. Bairagi, “Effective Segmentation of Sclera, Iris, and Pupil in Noisy Eye Images,” TELKOMNIKA Telecommunication Computing Electronics and Control, vol. 17, no. 5, pp. 2346-2354, October 2019.
[8] I. B. K. Sudiatmika, F. Rahman, Trisno, Suyoto, “Image Forgery Detection Using Error Level Analysis and Deep Learning,” TELKOMNIKA Telecommunication Computing Electronics and Control, vol. 17, no. 2, pp. 653-659, April 2019.
[9] E. Gibson, W. Li, C. H. Sudre, L. Fidon, D. Shakir, G. Wang, Z. Eaton-Rosen, R. Gray, T. Doel, Y. Hu, T. Whyntie, P.
Nachev, D. C. Barratt, S. Ourselin, M. J. Cardoso, T. Vercauteren., “Niftynet: a Deep-Learning Platform for Medical
Imaging,” Computer Methods and Programs in Biomedicine, vol. 158, pp. 113-122, May 2018.
[10] P.V. Tran, “A Fully Convolutional Neural Network for Cardiac Segmentation in Short-Axis MRI,” arXiv preprint
arXiv:1604.00494, 2016.
[11] Y. Li, H. Qi, J. Dai, X. Ji, Y. Wei, “Fully Convolutional Instance-Aware Semantic Segmentation,” 2017 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, pp. 4438-4446, 2017.
[12] O. Ronneberger, P. Fischer, T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,”
Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015, vol. 9351, pp. 234-241,
November 2015.
[13] C. Balakrishna, D. Sarshar, S. Soltaninejad, “Automatic detection of lumen and media in the IVUS images using
U-Net with VGG16 Encoder,” arXiv preprint arXiv:1806.07554, June 2018.
[14] J. Long, E. Shelhamer, T. Darrell, “Fully Convolutional Networks for Semantic Segmentation,” 2015 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, pp. 3431-3440, 2015.
[15] Y. Le Cun, Y. Bengio, “Convolutional Networks for Images, Speech, and Time Series,” Hand Brain Theory Neural
Netw, pp 3361-3371, January 1995.
[16] H. C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. Mollura, R. M Summers, “Deep Convolutional
Neural Networks for Computer-aided Detection: CNN Architectures, Dataset Characteristics, and Transfer
Learning,” in IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1285-1298, May 2016.
[17] F. Chollet, “Deep Learning with Python,” Manning: Shelter Island, 2018.
[18] Y. Lecun, L. Bottou, Y. Bengio, P. Haffner, “Gradient-based Learning Applied to Document Recognition,”
Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.
[19] A. Krizhevsky, I. Sutskever, G. E. Hinton, “Imagenet Classification with Deep Convolutional Neural Networks,”
in Advances in neural information processing systems, pp 1097-1105, 2012.
[20] M. D. Zeiler, R. Fergus, “Visualizing and Understanding Convolutional Networks,” European conference on
computer vision, Springer, Cham, vol. 8689, pp 818-833, 2014.
[21] A. Kamilaris, F. A Prenafeta-Boldú, “A Review of The Use of Convolutional Neural Networks in Agriculture,”
The Journal of Agricultural Science, pp 1–11, 2018.
[22] A. Zhang, Z. C. Lipton, M. Li, A. J. Smola, “Dive into Deep Learning,” SAGE Publications Inc, 2019.
[23] D. P. Kingma, J. Ba, “Adam: A Method for Stochastic Optimization,” arXiv preprint arXiv:1412.6980, 2014.
[24] S. Reddi, S. Kale, S. Kumar, “On the Convergence of Adam and Beyond,” arXiv preprint arXiv:1904.09237, 2019.
[25] C. Nikou, N. P. Galatsanos, A. C. Likas, “A Class-adaptive Spatially Variant Mixture Model for Image
Segmentation,” in IEEE Transactions on Image Processing, vol. 16, no. 4, pp. 1121-1130, April 2007.

More Related Content

What's hot

Introduction to object detection
Introduction to object detectionIntroduction to object detection
Introduction to object detectionBrodmann17
 
Brain Tumour Detection.pptx
Brain Tumour Detection.pptxBrain Tumour Detection.pptx
Brain Tumour Detection.pptxRevolverRaja2
 
artificial neural network
artificial neural networkartificial neural network
artificial neural networkPallavi Yadav
 
CNN and its applications by ketaki
CNN and its applications by ketakiCNN and its applications by ketaki
CNN and its applications by ketakiKetaki Patwari
 
Neural Network Based Brain Tumor Detection using MR Images
Neural Network Based Brain Tumor Detection using MR ImagesNeural Network Based Brain Tumor Detection using MR Images
Neural Network Based Brain Tumor Detection using MR ImagesAisha Kalsoom
 
BRAIN TUMOR MRI IMAGE SEGMENTATION AND DETECTION IN IMAGE PROCESSING
BRAIN TUMOR MRI IMAGE SEGMENTATION AND DETECTION IN IMAGE PROCESSINGBRAIN TUMOR MRI IMAGE SEGMENTATION AND DETECTION IN IMAGE PROCESSING
BRAIN TUMOR MRI IMAGE SEGMENTATION AND DETECTION IN IMAGE PROCESSINGDharshika Shreeganesh
 
Image classification using CNN
Image classification using CNNImage classification using CNN
Image classification using CNNNoura Hussein
 
Super Resolution
Super ResolutionSuper Resolution
Super Resolutionalokahuti
 
Convolutional Neural Networks (CNN)
Convolutional Neural Networks (CNN)Convolutional Neural Networks (CNN)
Convolutional Neural Networks (CNN)Gaurav Mittal
 
Brain Tumor Detection using CNN
Brain Tumor Detection using CNNBrain Tumor Detection using CNN
Brain Tumor Detection using CNNMohammadRakib8
 
Feature Extraction
Feature ExtractionFeature Extraction
Feature Extractionskylian
 
Computer Vision for Beginners
Computer Vision for BeginnersComputer Vision for Beginners
Computer Vision for BeginnersSanghamitra Deb
 
Image classification with Deep Neural Networks
Image classification with Deep Neural NetworksImage classification with Deep Neural Networks
Image classification with Deep Neural NetworksYogendra Tamang
 
MULTI-CLASSIFICATION OF BRAIN TUMOR IMAGES USING DEEP NEURAL NETWORK
MULTI-CLASSIFICATION OF BRAIN TUMOR IMAGES USING DEEP NEURAL NETWORKMULTI-CLASSIFICATION OF BRAIN TUMOR IMAGES USING DEEP NEURAL NETWORK
MULTI-CLASSIFICATION OF BRAIN TUMOR IMAGES USING DEEP NEURAL NETWORKBenyamin Moadab
 
Machine Learning - Convolutional Neural Network
Machine Learning - Convolutional Neural NetworkMachine Learning - Convolutional Neural Network
Machine Learning - Convolutional Neural NetworkRichard Kuo
 
convolutional neural network (CNN, or ConvNet)
convolutional neural network (CNN, or ConvNet)convolutional neural network (CNN, or ConvNet)
convolutional neural network (CNN, or ConvNet)RakeshSaran5
 

What's hot (20)

brain tumor ppt.pptx
brain tumor ppt.pptxbrain tumor ppt.pptx
brain tumor ppt.pptx
 
Introduction to object detection
Introduction to object detectionIntroduction to object detection
Introduction to object detection
 
Image processing
Image processingImage processing
Image processing
 
Brain Tumour Detection.pptx
Brain Tumour Detection.pptxBrain Tumour Detection.pptx
Brain Tumour Detection.pptx
 
artificial neural network
artificial neural networkartificial neural network
artificial neural network
 
CNN and its applications by ketaki
CNN and its applications by ketakiCNN and its applications by ketaki
CNN and its applications by ketaki
 
Neural Network Based Brain Tumor Detection using MR Images
Neural Network Based Brain Tumor Detection using MR ImagesNeural Network Based Brain Tumor Detection using MR Images
Neural Network Based Brain Tumor Detection using MR Images
 
BRAIN TUMOR MRI IMAGE SEGMENTATION AND DETECTION IN IMAGE PROCESSING
BRAIN TUMOR MRI IMAGE SEGMENTATION AND DETECTION IN IMAGE PROCESSINGBRAIN TUMOR MRI IMAGE SEGMENTATION AND DETECTION IN IMAGE PROCESSING
BRAIN TUMOR MRI IMAGE SEGMENTATION AND DETECTION IN IMAGE PROCESSING
 
Image classification using CNN
Image classification using CNNImage classification using CNN
Image classification using CNN
 
Super Resolution
Super ResolutionSuper Resolution
Super Resolution
 
Convolutional Neural Networks (CNN)
Convolutional Neural Networks (CNN)Convolutional Neural Networks (CNN)
Convolutional Neural Networks (CNN)
 
Brain Tumor Detection using CNN
Brain Tumor Detection using CNNBrain Tumor Detection using CNN
Brain Tumor Detection using CNN
 
Feature Extraction
Feature ExtractionFeature Extraction
Feature Extraction
 
Computer Vision for Beginners
Computer Vision for BeginnersComputer Vision for Beginners
Computer Vision for Beginners
 
Image classification with Deep Neural Networks
Image classification with Deep Neural NetworksImage classification with Deep Neural Networks
Image classification with Deep Neural Networks
 
MULTI-CLASSIFICATION OF BRAIN TUMOR IMAGES USING DEEP NEURAL NETWORK
MULTI-CLASSIFICATION OF BRAIN TUMOR IMAGES USING DEEP NEURAL NETWORKMULTI-CLASSIFICATION OF BRAIN TUMOR IMAGES USING DEEP NEURAL NETWORK
MULTI-CLASSIFICATION OF BRAIN TUMOR IMAGES USING DEEP NEURAL NETWORK
 
Machine Learning - Convolutional Neural Network
Machine Learning - Convolutional Neural NetworkMachine Learning - Convolutional Neural Network
Machine Learning - Convolutional Neural Network
 
Introduction to Deep Learning
Introduction to Deep Learning Introduction to Deep Learning
Introduction to Deep Learning
 
CNN Tutorial
CNN TutorialCNN Tutorial
CNN Tutorial
 
convolutional neural network (CNN, or ConvNet)
convolutional neural network (CNN, or ConvNet)convolutional neural network (CNN, or ConvNet)
convolutional neural network (CNN, or ConvNet)
 

Similar to UNet-VGG16 with transfer learning for MRI-based brain tumor segmentation

An improvement of Gram-negative bacteria identification using convolutional n...
An improvement of Gram-negative bacteria identification using convolutional n...An improvement of Gram-negative bacteria identification using convolutional n...
An improvement of Gram-negative bacteria identification using convolutional n...TELKOMNIKA JOURNAL
 
Overview of convolutional neural networks architectures for brain tumor segm...
Overview of convolutional neural networks architectures for  brain tumor segm...Overview of convolutional neural networks architectures for  brain tumor segm...
Overview of convolutional neural networks architectures for brain tumor segm...IJECEIAES
 
78201916
7820191678201916
78201916IJRAT
 
Brain Tumor Segmentation in MRI Images
Brain Tumor Segmentation in MRI ImagesBrain Tumor Segmentation in MRI Images
Brain Tumor Segmentation in MRI ImagesIJRAT
 
Lung Cancer Detection using transfer learning.pptx.pdf
Lung Cancer Detection using transfer learning.pptx.pdfLung Cancer Detection using transfer learning.pptx.pdf
Lung Cancer Detection using transfer learning.pptx.pdfjagan477830
 
Deep Learning based Multi-class Brain Tumor Classification
Deep Learning based Multi-class Brain Tumor ClassificationDeep Learning based Multi-class Brain Tumor Classification
Deep Learning based Multi-class Brain Tumor ClassificationIRJET Journal
 
ON THE PERFORMANCE OF INTRUSION DETECTION SYSTEMS WITH HIDDEN MULTILAYER NEUR...
ON THE PERFORMANCE OF INTRUSION DETECTION SYSTEMS WITH HIDDEN MULTILAYER NEUR...ON THE PERFORMANCE OF INTRUSION DETECTION SYSTEMS WITH HIDDEN MULTILAYER NEUR...
ON THE PERFORMANCE OF INTRUSION DETECTION SYSTEMS WITH HIDDEN MULTILAYER NEUR...IJCNCJournal
 
On The Performance of Intrusion Detection Systems with Hidden Multilayer Neur...
On The Performance of Intrusion Detection Systems with Hidden Multilayer Neur...On The Performance of Intrusion Detection Systems with Hidden Multilayer Neur...
On The Performance of Intrusion Detection Systems with Hidden Multilayer Neur...IJCNCJournal
 
Glioblastomas brain tumour segmentation based on convolutional neural network...
Glioblastomas brain tumour segmentation based on convolutional neural network...Glioblastomas brain tumour segmentation based on convolutional neural network...
Glioblastomas brain tumour segmentation based on convolutional neural network...IJECEIAES
 
Dilated Inception U-Net for Nuclei Segmentation in Multi-Organ Histology Images
Dilated Inception U-Net for Nuclei Segmentation in Multi-Organ Histology ImagesDilated Inception U-Net for Nuclei Segmentation in Multi-Organ Histology Images
Dilated Inception U-Net for Nuclei Segmentation in Multi-Organ Histology ImagesIRJET Journal
 
Development of algorithm for identification of maligant growth in cancer usin...
Development of algorithm for identification of maligant growth in cancer usin...Development of algorithm for identification of maligant growth in cancer usin...
Development of algorithm for identification of maligant growth in cancer usin...IJECEIAES
 
Cerebral infarction classification using multiple support vector machine with...
Cerebral infarction classification using multiple support vector machine with...Cerebral infarction classification using multiple support vector machine with...
Cerebral infarction classification using multiple support vector machine with...journalBEEI
 
DIRECTIONAL CLASSIFICATION OF BRAIN TUMOR IMAGES FROM MRI USING CNN-BASED DEE...
DIRECTIONAL CLASSIFICATION OF BRAIN TUMOR IMAGES FROM MRI USING CNN-BASED DEE...DIRECTIONAL CLASSIFICATION OF BRAIN TUMOR IMAGES FROM MRI USING CNN-BASED DEE...
DIRECTIONAL CLASSIFICATION OF BRAIN TUMOR IMAGES FROM MRI USING CNN-BASED DEE...IRJET Journal
 
A Review Paper on Automated Brain Tumor Detection
A Review Paper on Automated Brain Tumor DetectionA Review Paper on Automated Brain Tumor Detection
A Review Paper on Automated Brain Tumor DetectionIRJET Journal
 
Lightweight digital imaging and communications in medicine image encryption f...
Lightweight digital imaging and communications in medicine image encryption f...Lightweight digital imaging and communications in medicine image encryption f...
Lightweight digital imaging and communications in medicine image encryption f...TELKOMNIKA JOURNAL
 
Human activity recognition with self-attention
Human activity recognition with self-attentionHuman activity recognition with self-attention
Human activity recognition with self-attentionIJECEIAES
 
SEGMENTATION OF MAGNETIC RESONANCE BRAIN TUMOR USING INTEGRATED FUZZY K-MEANS...
SEGMENTATION OF MAGNETIC RESONANCE BRAIN TUMOR USING INTEGRATED FUZZY K-MEANS...SEGMENTATION OF MAGNETIC RESONANCE BRAIN TUMOR USING INTEGRATED FUZZY K-MEANS...
SEGMENTATION OF MAGNETIC RESONANCE BRAIN TUMOR USING INTEGRATED FUZZY K-MEANS...ijcsit
 
Chapter 5 applications of neural networks
Chapter 5           applications of neural networksChapter 5           applications of neural networks
Chapter 5 applications of neural networksPunit Saini
 

Similar to UNet-VGG16 with transfer learning for MRI-based brain tumor segmentation (20)

An improvement of Gram-negative bacteria identification using convolutional n...
An improvement of Gram-negative bacteria identification using convolutional n...An improvement of Gram-negative bacteria identification using convolutional n...
An improvement of Gram-negative bacteria identification using convolutional n...
 
Overview of convolutional neural networks architectures for brain tumor segm...
Overview of convolutional neural networks architectures for  brain tumor segm...Overview of convolutional neural networks architectures for  brain tumor segm...
Overview of convolutional neural networks architectures for brain tumor segm...
 
78201916
7820191678201916
78201916
 
Brain Tumor Segmentation in MRI Images
Brain Tumor Segmentation in MRI ImagesBrain Tumor Segmentation in MRI Images
Brain Tumor Segmentation in MRI Images
 
Lung Cancer Detection using transfer learning.pptx.pdf
Lung Cancer Detection using transfer learning.pptx.pdfLung Cancer Detection using transfer learning.pptx.pdf
Lung Cancer Detection using transfer learning.pptx.pdf
 
Deep Learning based Multi-class Brain Tumor Classification
Deep Learning based Multi-class Brain Tumor ClassificationDeep Learning based Multi-class Brain Tumor Classification
Deep Learning based Multi-class Brain Tumor Classification
 
ON THE PERFORMANCE OF INTRUSION DETECTION SYSTEMS WITH HIDDEN MULTILAYER NEUR...
ON THE PERFORMANCE OF INTRUSION DETECTION SYSTEMS WITH HIDDEN MULTILAYER NEUR...ON THE PERFORMANCE OF INTRUSION DETECTION SYSTEMS WITH HIDDEN MULTILAYER NEUR...
ON THE PERFORMANCE OF INTRUSION DETECTION SYSTEMS WITH HIDDEN MULTILAYER NEUR...
 
On The Performance of Intrusion Detection Systems with Hidden Multilayer Neur...
On The Performance of Intrusion Detection Systems with Hidden Multilayer Neur...On The Performance of Intrusion Detection Systems with Hidden Multilayer Neur...
On The Performance of Intrusion Detection Systems with Hidden Multilayer Neur...
 
Glioblastomas brain tumour segmentation based on convolutional neural network...
Glioblastomas brain tumour segmentation based on convolutional neural network...Glioblastomas brain tumour segmentation based on convolutional neural network...
Glioblastomas brain tumour segmentation based on convolutional neural network...
 
Dilated Inception U-Net for Nuclei Segmentation in Multi-Organ Histology Images
Dilated Inception U-Net for Nuclei Segmentation in Multi-Organ Histology ImagesDilated Inception U-Net for Nuclei Segmentation in Multi-Organ Histology Images
Dilated Inception U-Net for Nuclei Segmentation in Multi-Organ Histology Images
 
Development of algorithm for identification of maligant growth in cancer usin...
Development of algorithm for identification of maligant growth in cancer usin...Development of algorithm for identification of maligant growth in cancer usin...
Development of algorithm for identification of maligant growth in cancer usin...
 
Fuzzy c-means
Fuzzy c-meansFuzzy c-means
Fuzzy c-means
 
Cerebral infarction classification using multiple support vector machine with...
Cerebral infarction classification using multiple support vector machine with...Cerebral infarction classification using multiple support vector machine with...
Cerebral infarction classification using multiple support vector machine with...
 
DIRECTIONAL CLASSIFICATION OF BRAIN TUMOR IMAGES FROM MRI USING CNN-BASED DEE...
DIRECTIONAL CLASSIFICATION OF BRAIN TUMOR IMAGES FROM MRI USING CNN-BASED DEE...DIRECTIONAL CLASSIFICATION OF BRAIN TUMOR IMAGES FROM MRI USING CNN-BASED DEE...
DIRECTIONAL CLASSIFICATION OF BRAIN TUMOR IMAGES FROM MRI USING CNN-BASED DEE...
 
A Review Paper on Automated Brain Tumor Detection
A Review Paper on Automated Brain Tumor DetectionA Review Paper on Automated Brain Tumor Detection
A Review Paper on Automated Brain Tumor Detection
 
Lightweight digital imaging and communications in medicine image encryption f...
Lightweight digital imaging and communications in medicine image encryption f...Lightweight digital imaging and communications in medicine image encryption f...
Lightweight digital imaging and communications in medicine image encryption f...
 
Human activity recognition with self-attention
Human activity recognition with self-attentionHuman activity recognition with self-attention
Human activity recognition with self-attention
 
SEGMENTATION OF MAGNETIC RESONANCE BRAIN TUMOR USING INTEGRATED FUZZY K-MEANS...
SEGMENTATION OF MAGNETIC RESONANCE BRAIN TUMOR USING INTEGRATED FUZZY K-MEANS...SEGMENTATION OF MAGNETIC RESONANCE BRAIN TUMOR USING INTEGRATED FUZZY K-MEANS...
SEGMENTATION OF MAGNETIC RESONANCE BRAIN TUMOR USING INTEGRATED FUZZY K-MEANS...
 
Custom administering attention module for segmentation of magnetic resonance...
Custom administering attention module for segmentation of  magnetic resonance...Custom administering attention module for segmentation of  magnetic resonance...
Custom administering attention module for segmentation of magnetic resonance...
 
Chapter 5 applications of neural networks
Chapter 5           applications of neural networksChapter 5           applications of neural networks
Chapter 5 applications of neural networks
 

More from TELKOMNIKA JOURNAL

Amazon products reviews classification based on machine learning, deep learni...
Amazon products reviews classification based on machine learning, deep learni...Amazon products reviews classification based on machine learning, deep learni...
Amazon products reviews classification based on machine learning, deep learni...TELKOMNIKA JOURNAL
 
Design, simulation, and analysis of microstrip patch antenna for wireless app...
Design, simulation, and analysis of microstrip patch antenna for wireless app...Design, simulation, and analysis of microstrip patch antenna for wireless app...
Design, simulation, and analysis of microstrip patch antenna for wireless app...TELKOMNIKA JOURNAL
 
Design and simulation an optimal enhanced PI controller for congestion avoida...
Design and simulation an optimal enhanced PI controller for congestion avoida...Design and simulation an optimal enhanced PI controller for congestion avoida...
Design and simulation an optimal enhanced PI controller for congestion avoida...TELKOMNIKA JOURNAL
 
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...Improving the detection of intrusion in vehicular ad-hoc networks with modifi...
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...TELKOMNIKA JOURNAL
 
Conceptual model of internet banking adoption with perceived risk and trust f...
Conceptual model of internet banking adoption with perceived risk and trust f...Conceptual model of internet banking adoption with perceived risk and trust f...
Conceptual model of internet banking adoption with perceived risk and trust f...TELKOMNIKA JOURNAL
 
Efficient combined fuzzy logic and LMS algorithm for smart antenna
Efficient combined fuzzy logic and LMS algorithm for smart antennaEfficient combined fuzzy logic and LMS algorithm for smart antenna
Efficient combined fuzzy logic and LMS algorithm for smart antennaTELKOMNIKA JOURNAL
 
Design and implementation of a LoRa-based system for warning of forest fire
Design and implementation of a LoRa-based system for warning of forest fireDesign and implementation of a LoRa-based system for warning of forest fire
Design and implementation of a LoRa-based system for warning of forest fireTELKOMNIKA JOURNAL
 
Wavelet-based sensing technique in cognitive radio network
Wavelet-based sensing technique in cognitive radio networkWavelet-based sensing technique in cognitive radio network
Wavelet-based sensing technique in cognitive radio networkTELKOMNIKA JOURNAL
 
A novel compact dual-band bandstop filter with enhanced rejection bands
A novel compact dual-band bandstop filter with enhanced rejection bandsA novel compact dual-band bandstop filter with enhanced rejection bands
A novel compact dual-band bandstop filter with enhanced rejection bandsTELKOMNIKA JOURNAL
 
Deep learning approach to DDoS attack with imbalanced data at the application...
Deep learning approach to DDoS attack with imbalanced data at the application...Deep learning approach to DDoS attack with imbalanced data at the application...
Deep learning approach to DDoS attack with imbalanced data at the application...TELKOMNIKA JOURNAL
 
Brief note on match and miss-match uncertainties
Brief note on match and miss-match uncertaintiesBrief note on match and miss-match uncertainties
Brief note on match and miss-match uncertaintiesTELKOMNIKA JOURNAL
 
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...Implementation of FinFET technology based low power 4×4 Wallace tree multipli...
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...TELKOMNIKA JOURNAL
 
Evaluation of the weighted-overlap add model with massive MIMO in a 5G system
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemEvaluation of the weighted-overlap add model with massive MIMO in a 5G system
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemTELKOMNIKA JOURNAL
 
Reflector antenna design in different frequencies using frequency selective s...
Reflector antenna design in different frequencies using frequency selective s...Reflector antenna design in different frequencies using frequency selective s...
Reflector antenna design in different frequencies using frequency selective s...TELKOMNIKA JOURNAL
 
Reagentless iron detection in water based on unclad fiber optical sensor
Reagentless iron detection in water based on unclad fiber optical sensorReagentless iron detection in water based on unclad fiber optical sensor
Reagentless iron detection in water based on unclad fiber optical sensorTELKOMNIKA JOURNAL
 
Impact of CuS counter electrode calcination temperature on quantum dot sensit...
Impact of CuS counter electrode calcination temperature on quantum dot sensit...Impact of CuS counter electrode calcination temperature on quantum dot sensit...
Impact of CuS counter electrode calcination temperature on quantum dot sensit...TELKOMNIKA JOURNAL
 
A progressive learning for structural tolerance online sequential extreme lea...
A progressive learning for structural tolerance online sequential extreme lea...A progressive learning for structural tolerance online sequential extreme lea...
A progressive learning for structural tolerance online sequential extreme lea...TELKOMNIKA JOURNAL
 
Electroencephalography-based brain-computer interface using neural networks
Electroencephalography-based brain-computer interface using neural networksElectroencephalography-based brain-computer interface using neural networks
Electroencephalography-based brain-computer interface using neural networksTELKOMNIKA JOURNAL
 
Adaptive segmentation algorithm based on level set model in medical imaging
Adaptive segmentation algorithm based on level set model in medical imagingAdaptive segmentation algorithm based on level set model in medical imaging
Adaptive segmentation algorithm based on level set model in medical imagingTELKOMNIKA JOURNAL
 
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...Automatic channel selection using shuffled frog leaping algorithm for EEG bas...
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...TELKOMNIKA JOURNAL
 

More from TELKOMNIKA JOURNAL (20)

Amazon products reviews classification based on machine learning, deep learni...
Amazon products reviews classification based on machine learning, deep learni...Amazon products reviews classification based on machine learning, deep learni...
Amazon products reviews classification based on machine learning, deep learni...
 
Design, simulation, and analysis of microstrip patch antenna for wireless app...
Design, simulation, and analysis of microstrip patch antenna for wireless app...Design, simulation, and analysis of microstrip patch antenna for wireless app...
Design, simulation, and analysis of microstrip patch antenna for wireless app...
 
Design and simulation an optimal enhanced PI controller for congestion avoida...
Design and simulation an optimal enhanced PI controller for congestion avoida...Design and simulation an optimal enhanced PI controller for congestion avoida...
Design and simulation an optimal enhanced PI controller for congestion avoida...
 
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...Improving the detection of intrusion in vehicular ad-hoc networks with modifi...
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...
 
Conceptual model of internet banking adoption with perceived risk and trust f...
Conceptual model of internet banking adoption with perceived risk and trust f...Conceptual model of internet banking adoption with perceived risk and trust f...
Conceptual model of internet banking adoption with perceived risk and trust f...
 
Efficient combined fuzzy logic and LMS algorithm for smart antenna
Efficient combined fuzzy logic and LMS algorithm for smart antennaEfficient combined fuzzy logic and LMS algorithm for smart antenna
Efficient combined fuzzy logic and LMS algorithm for smart antenna
 
Design and implementation of a LoRa-based system for warning of forest fire
Design and implementation of a LoRa-based system for warning of forest fireDesign and implementation of a LoRa-based system for warning of forest fire
Design and implementation of a LoRa-based system for warning of forest fire
 
Wavelet-based sensing technique in cognitive radio network
Wavelet-based sensing technique in cognitive radio networkWavelet-based sensing technique in cognitive radio network
Wavelet-based sensing technique in cognitive radio network
 
A novel compact dual-band bandstop filter with enhanced rejection bands
A novel compact dual-band bandstop filter with enhanced rejection bandsA novel compact dual-band bandstop filter with enhanced rejection bands
A novel compact dual-band bandstop filter with enhanced rejection bands
 
Deep learning approach to DDoS attack with imbalanced data at the application...
Deep learning approach to DDoS attack with imbalanced data at the application...Deep learning approach to DDoS attack with imbalanced data at the application...
Deep learning approach to DDoS attack with imbalanced data at the application...
 
Brief note on match and miss-match uncertainties
Brief note on match and miss-match uncertaintiesBrief note on match and miss-match uncertainties
Brief note on match and miss-match uncertainties
 
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...Implementation of FinFET technology based low power 4×4 Wallace tree multipli...
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...
 
Evaluation of the weighted-overlap add model with massive MIMO in a 5G system
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemEvaluation of the weighted-overlap add model with massive MIMO in a 5G system
Evaluation of the weighted-overlap add model with massive MIMO in a 5G system
 
Reflector antenna design in different frequencies using frequency selective s...
Reflector antenna design in different frequencies using frequency selective s...Reflector antenna design in different frequencies using frequency selective s...
Reflector antenna design in different frequencies using frequency selective s...
 
Reagentless iron detection in water based on unclad fiber optical sensor
Reagentless iron detection in water based on unclad fiber optical sensorReagentless iron detection in water based on unclad fiber optical sensor
Reagentless iron detection in water based on unclad fiber optical sensor
 
Impact of CuS counter electrode calcination temperature on quantum dot sensit...
Impact of CuS counter electrode calcination temperature on quantum dot sensit...Impact of CuS counter electrode calcination temperature on quantum dot sensit...
Impact of CuS counter electrode calcination temperature on quantum dot sensit...
 
A progressive learning for structural tolerance online sequential extreme lea...
A progressive learning for structural tolerance online sequential extreme lea...A progressive learning for structural tolerance online sequential extreme lea...
A progressive learning for structural tolerance online sequential extreme lea...
 
Electroencephalography-based brain-computer interface using neural networks
Electroencephalography-based brain-computer interface using neural networksElectroencephalography-based brain-computer interface using neural networks
Electroencephalography-based brain-computer interface using neural networks
 
Adaptive segmentation algorithm based on level set model in medical imaging
Adaptive segmentation algorithm based on level set model in medical imagingAdaptive segmentation algorithm based on level set model in medical imaging
Adaptive segmentation algorithm based on level set model in medical imaging
 
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...Automatic channel selection using shuffled frog leaping algorithm for EEG bas...
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...
 

the tumor area more precisely. Previous studies that use clustering as the basis of segmentation include [2], which applies a genetic algorithm, and [3], which employs fuzzy clustering, the Otsu method, and K-means clustering to segment vehicle images. Model-based clustering is performed by [4-6] in the form of finite mixture models for MRI brain tumor segmentation. Different from these studies, this study uses a classification approach with a neural network, chosen because it can mimic the way human neurons recognize images. Previous studies under the neural network approach are reported in [7-9], which used convolutional neural networks (CNN) and deep learning for image segmentation. This study uses the fully convolutional network (FCN) because of its strong performance in semantic segmentation [10, 11]. The FCN follows the U-Net architecture recommended by [12]; the U-Net model achieves very good performance on a wide range of biomedical segmentation applications.

This paper uses a real dataset from General Hospital (RSUD) Dr. Soetomo. The limited number of brain tumor patients who come to Dr. Soetomo also limits the number of images that can be analyzed. The U-Net is therefore hybridized with the VGG16 architecture as its encoder (contracting) path [13] to simplify the architecture and to cope with classification on a small dataset. In addition, the complexity of U-Net makes its execution time long and strongly dependent on computer specifications. In view of these shortcomings, and as the contribution of this paper, we apply transfer learning when training the hybrid of U-Net and VGG16. The proposed model is named UNet-VGG16 with Transfer Learning. We compare UNet-VGG16 with several scenarios of the U-Net model and select the best model based on loss and accuracy. The correct classification ratio (CCR) is then calculated to compare the segmentation results of the best model with the ground truth as the evaluation measure. All data and ground truths used in this study have received medical approval from Dr. Soetomo Surabaya.
2. RESEARCH METHOD
2.1. Fully convolutional network: U-Net with transfer learning
The fully convolutional network (FCN) is one of the deep learning approaches used in semantic segmentation. The FCN, proposed by [14], examines pixel-to-pixel mapping and uses the ground truth to determine the class of each pixel. The FCN is a development of the classical convolutional neural network (CNN). A CNN consists of convolution, pooling, and fully-connected layers as its main layers; in the FCN, the fully-connected layer is replaced by a convolution layer. The FCN can therefore classify every pixel in the image and make predictions on inputs of arbitrary size. The input data is an image described by three dimensions, namely height (h), width (w), and depth (d). The h and w give the spatial dimension, while d is the feature or channel dimension. The first image layer has an h x w spatial dimension and d color channels (d = 1 when it contains only grayscale intensity, or d = 3 when it has red, green, and blue (RGB) intensities). Suppose we have an input data vector $\mathbf{x}_{ij}$ at location (i, j) of a particular layer. The output vector $\mathbf{y}_{ij}$ can then be calculated with the following formula [14]:

$\mathbf{y}_{ij} = f_{ks}\left( \{ \mathbf{x}_{si+\delta i,\, sj+\delta j} \}_{0 \le \delta i,\, \delta j \le k} \right)$ (1)

where k is the kernel size, s is the subsampling factor, and $f_{ks}$ specifies the type of layer used: a matrix multiplication for convolution or average pooling, a spatial max for max pooling, or other types of layers. Applying such layers significantly reduces the number of parameters of the network, and the network can learn the correlation between neighboring pixels [15]. Figure 1 shows the structure of the FCN; it illustrates that the FCN can efficiently learn to make dense predictions for per-pixel tasks such as semantic segmentation [14]. The operation in (1) is illustrated numerically in the sketch below.
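To make (1) concrete, the short NumPy sketch below applies it to a toy 4x4 grayscale layer (d = 1) with a 2x2 max-pooling window, i.e. k = 2, s = 2, and a spatial max as $f_{ks}$. The array values and the helper function name are illustrative only and are not part of the original implementation.

import numpy as np

def layer_output(x, k, s, f):
    # Apply (1): slide a k-by-k window with stride s over layer x and reduce it with f.
    h = (x.shape[0] - k) // s + 1
    w = (x.shape[1] - k) // s + 1
    y = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            y[i, j] = f(x[s * i:s * i + k, s * j:s * j + k])
    return y

x = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 grayscale layer
print(layer_output(x, k=2, s=2, f=np.max))     # spatial max -> 2x2 max-pooled output

Replacing np.max with np.mean gives average pooling, and a weighted sum over the window corresponds to a convolution, matching the role of $f_{ks}$ in (1).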
Figure 1. Structure of the fully convolutional network (sourced from [14])

Several architectures have been built on top of the FCN; this study uses the U-Net architecture, first recommended by [12], which works well for biomedical image segmentation. Figure 2 visualizes the U-Net architecture; its U-shape is the reason behind the name.

Figure 2. The U-Net architecture (example for 32x32 pixels in the lowest resolution, sourced from [12])

The letter U contains two paths. The left path is the encoder (contracting path) and the right path is the decoder (expanding path). The encoder is a network whose output is a feature map/vector that holds the information representing the input. The decoder, which has the same structure but in the opposite direction, takes the feature maps from the encoder and reconstructs an output that matches the intended segmentation. The encoder path reduces the size of the input matrix while increasing the number of feature maps, whereas the decoder path returns the matrix to its original size while decreasing the number of feature maps. The segmentation result can therefore be compared with the ground truth at every pixel, as illustrated by the sketch below.
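As a minimal illustration of this contracting-expanding idea (not the exact architecture of [12]), the Keras sketch below builds a single-level U-shaped network: one encoder block that halves the spatial size, a bottleneck with more feature maps, and one decoder block that upsamples back and concatenates the skip connection. The input size and filter counts are placeholders.

from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(256, 256, 1))             # placeholder input size, 1 channel

# encoder (contracting path): convolutions, then downsampling
c1 = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
c1 = layers.Conv2D(64, 3, padding="same", activation="relu")(c1)
p1 = layers.MaxPooling2D(2)(c1)                        # 256 -> 128

# bottleneck with more feature maps
b = layers.Conv2D(128, 3, padding="same", activation="relu")(p1)

# decoder (expanding path): upsample, concatenate the skip connection, convolve
u1 = layers.UpSampling2D(2)(b)                         # 128 -> 256
u1 = layers.concatenate([u1, c1])                      # skip connection from the encoder
c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(u1)

outputs = layers.Conv2D(1, 1, activation="sigmoid")(c2)   # per-pixel ROI probability
mini_unet = Model(inputs, outputs)

The output has the same spatial size as the input, so every pixel receives a class probability that can be checked against the ground truth mask.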
The U-Net transmits the feature map of each contracting-path level to the corresponding level of the expanding path, so that the classifier can consider features of varying scales and complexities when making its decision. As a result, U-Net can learn from relatively small training sets [16]. Because of this complexity, however, the U-Net architecture often takes a long time to execute. To deal with this problem, this study hybridizes the U-Net architecture with transfer learning. Transfer learning is an approach in which pre-trained models are used as the starting point for computer vision and natural language processing tasks, since developing neural network models for these problems from scratch requires extensive computing and time resources. The aim of transfer learning is to improve learning in the target task by using knowledge from the source task, and it is an effective technique for reducing training time [17]; it is commonly applied when developing deep learning models for image classification problems. Moreover, to simplify the U-Net architecture, several CNN architectures have been considered for hybridization with U-Net, such as LeNet [18], AlexNet [19], ZFNet [20], and VGG-Net [21]. VGG-Net confirms that a smaller kernel size and a deep CNN can improve model performance. The architecture of VGG-Net, shown in Figure 3, is quite similar to U-Net; therefore, this study chooses VGG-Net to replace the encoder path of U-Net as the hybrid of these two architectures. The new architecture is discussed in the results and analysis.

Figure 3. The VGG16 architecture (adapted from [21])

This study uses loss and accuracy to measure model performance. Loss is the error between the predicted and actual values. For the two-class case, the binary cross-entropy loss function is used to calculate the loss [22], as given in (2):

$H_p(q) = -\frac{1}{N}\sum_{i=1}^{N} \left[ z_i \log p(z_i) + (1 - z_i) \log\bigl(1 - p(z_i)\bigr) \right]$ (2)

where N is the number of data, $z_i$ is the class label taking the value 0 or 1, and $p(z_i)$ is the predicted probability of $z_i$. Accuracy is the closeness between the predicted and actual values; it is determined from the confusion matrix in Table 1 [22] and calculated by (3), illustrated numerically after Table 1:

$\text{accuracy} = \frac{TP + TN}{TP + FP + TN + FN}$ (3)

Table 1. Confusion matrix
                 Predicted: True        Predicted: False
Actual: True     TP (True Positive)     FN (False Negative)
Actual: False    FP (False Positive)    TN (True Negative)
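The NumPy sketch below evaluates (2) and (3) on a few illustrative pixel labels; the label and probability values are placeholders, and the clipping constant is only there to avoid log(0).

import numpy as np

z = np.array([1, 0, 1, 1, 0])            # actual pixel classes (ROI = 1, non-ROI = 0)
p = np.array([0.9, 0.2, 0.6, 0.3, 0.1])  # predicted probabilities p(z_i)

# binary cross-entropy, (2)
p_clip = np.clip(p, 1e-7, 1 - 1e-7)
loss = -np.mean(z * np.log(p_clip) + (1 - z) * np.log(1 - p_clip))

# confusion-matrix counts and accuracy, (3)
pred = (p >= 0.5).astype(int)
TP = np.sum((pred == 1) & (z == 1))
TN = np.sum((pred == 0) & (z == 0))
FP = np.sum((pred == 1) & (z == 0))
FN = np.sum((pred == 0) & (z == 1))
accuracy = (TP + TN) / (TP + FP + TN + FN)
print(loss, accuracy)                    # here accuracy = 4/5 = 0.8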
2.2. Training and optimization
The optimizer used in this paper is the adaptive moment estimator (Adam) to estimate the parameters $x_t$. Adam is a combination of two methods, AdaGrad and RMSProp [23]. Adam utilizes two moment variables: $v_t$ as the first moment (the mean) and $s_t$ as the second moment (the uncentered variance) of the gradients. Given the hyper-parameters $0 \le \beta_1 < 1$ and $0 \le \beta_2 < 1$ and using exponentially weighted moving averages (EWMA), the Adam estimator can be constructed as in Algorithm 1 [24].

Algorithm 1. Adam estimator
a. Initialize the values of $\beta_1, \beta_2 \in [0, 1)$, $v_0$, $s_0$, the learning rate $\eta$, and the stochastic objective function $f(x_t)$ with parameters $x_t$.
b. Calculate the gradient of $f(x_t)$:
   $g_t = \nabla_x f_t(x_{t-1})$ (4)
c. Update the first and second moments with (5) and (6):
   $v_t = \beta_1 v_{t-1} + (1 - \beta_1) g_t$ (5)
   $s_t = \beta_2 s_{t-1} + (1 - \beta_2)\, g_t \odot g_t$ (6)
d. Calculate the bias corrections with (7) and (8):
   $\hat{v}_t = \dfrac{v_t}{1 - \beta_1^t}$ (7)
   $\hat{s}_t = \dfrac{s_t}{1 - \beta_2^t}$ (8)
e. Re-scale the update for each element of the model parameters using element-wise operations with the bias-corrected variables $\hat{v}_t$ and $\hat{s}_t$:
   $g'_t = \dfrac{\eta\, \hat{v}_t}{\sqrt{\hat{s}_t} + \varepsilon}$ (9)
   where $\eta$ is the learning rate and $\varepsilon$ is an error criterion set to $10^{-8}$.
f. Update the parameter $x_t$ as in (10):
   $x_t = x_{t-1} - g'_t$ (10)
g. Repeat steps b to f until the parameter $x_t$ converges.

Unlike classical stochastic gradient descent, which maintains a single learning rate for all weight updates that does not change during training, Adam maintains a learning rate for each network weight (parameter) and adapts it separately as learning unfolds. The algorithm computes exponential moving averages of the gradient and the squared gradient, with the hyper-parameters $\beta_1$ and $\beta_2$ controlling the decay rates. The initialization is set to a learning rate of $\eta = 0.0001$, $\beta_1 = 0.9$, and $\beta_2 = 0.999$. The loss function is binary cross-entropy, as the set-up is a binary classification; the binary cross-entropy combines a sigmoid activation with the cross-entropy loss. A minimal sketch of Algorithm 1 is given below.
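The following NumPy sketch implements the updates (4)-(10) for a generic gradient function. The hyper-parameter defaults follow the initialization used in this study (eta = 0.0001, beta1 = 0.9, beta2 = 0.999, eps = 1e-8), while the quadratic objective in the demo call is only a placeholder to exercise the update rule.

import numpy as np

def adam(grad, x0, eta=1e-4, beta1=0.9, beta2=0.999, eps=1e-8, steps=5000):
    # Adam updates following Algorithm 1, (4)-(10).
    x = x0.astype(float)
    v = np.zeros_like(x)      # first moment (mean of gradients)
    s = np.zeros_like(x)      # second moment (uncentered variance)
    for t in range(1, steps + 1):
        g = grad(x)                                   # (4)
        v = beta1 * v + (1 - beta1) * g               # (5)
        s = beta2 * s + (1 - beta2) * g * g           # (6)
        v_hat = v / (1 - beta1 ** t)                  # (7)
        s_hat = s / (1 - beta2 ** t)                  # (8)
        x = x - eta * v_hat / (np.sqrt(s_hat) + eps)  # (9)-(10)
    return x

# toy objective f(x) = ||x||^2 with gradient 2x; the iterates move toward zero
print(adam(lambda x: 2 * x, np.array([1.0, -2.0])))

In the actual training, these updates are applied to the network weights by the Keras Adam optimizer rather than by hand; the sketch only mirrors the algebra of Algorithm 1.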
In this study, the data are divided into three partitions. Sampling without replacement is used to randomize the data into training, validation, and testing sets with a ratio of 80:10:10. The amount of data in each set is shown in Figure 4.

Figure 4. Division of the data

2.3. Evaluation
The correct classification ratio (CCR) is calculated as the measure of evaluation. The CCR value indicates whether the ROI from the segmentation result is in accordance with the ground truth; the greater the CCR value, the better the segmentation result, and vice versa. The CCR formula [25] is given in (11):

$CCR = \sum_{j=1}^{2} \frac{|GT_j \cap Seg_j|}{|GT|}$ (11)

where $GT_j$ is the ground truth for non-ROI (j = 1) and ROI (j = 2), $Seg_j$ for j = 1 is the set of pixels segmented by the model as non-ROI and for j = 2 the set of pixels segmented as ROI, and $GT = \bigcup_{j=1}^{2} GT_j$.

3. RESULTS AND ANALYSIS
3.1. The proposed architecture
U-Net is a CNN architecture used specifically for image segmentation. The complexity of U-Net (31,031,685 parameters in this research) affects the execution time, and on several computers with limited specifications the U-Net architecture cannot be run at all. To overcome this problem, we propose a new model that reduces the layers and parameters of U-Net by combining it with another architecture, namely VGG16. VGG16 is chosen because of its similarity to U-Net's contracting path and because it has fewer parameters than U-Net. In addition, VGG16 has readily available pre-trained weights, which we employ in the new model.

Segmentation models frequently consist of a contracting layer and an expansion layer. In this study, we modify VGG16 to resemble the U-Net architecture by adding an expansive path, consisting of several upsampling and convolution layers, at the end of the VGG16 architecture, until the overall model is symmetrical and resembles the letter U. The UNet-VGG16 architecture therefore has a contracting path, which is VGG16, and an appended expansion path. With this new architecture, the number of parameters is reduced to 17,040,001, of which about 2,324,353 are trainable. The MRI brain tumor images are trained using the UNet-VGG16 model with the transfer learning method, which freezes the contracting path of UNet-VGG16 so that its weights are not updated during training; instead, the pre-trained weights of the VGG16 convolution layers are used. The goal is to reduce computation and speed up the training time of the model; a simplified illustration of this construction is sketched after the figure captions below. Figure 5 shows the architecture of the proposed model, UNet-VGG16 with Transfer Learning, while Figure 6 visualizes the image segmentation process under the proposed architecture.

Figure 5. The architecture of UNet-VGG16 with transfer learning
Figure 6. The segmentation process under UNet-VGG16 with transfer learning
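The exact layer configuration of the proposed decoder is given in Figure 5; the Keras sketch below only illustrates the general recipe described above: load VGG16 with pre-trained weights as the contracting path, freeze it, and append upsampling and convolution blocks as the expanding path. The 224x224 three-channel input, the decoder filter counts, and the absence of skip connections are illustrative assumptions, not the authors' exact settings, so the parameter counts will differ from those reported in the text.

import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

# contracting path: VGG16 convolutional base with pre-trained ImageNet weights
encoder = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
encoder.trainable = False            # transfer learning: freeze the encoder weights

# expanding path: upsampling + convolution blocks appended after the VGG16 output
x = encoder.output                   # 7x7x512 feature maps for a 224x224 input
for filters in (512, 256, 128, 64, 32):
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)   # per-pixel ROI probability
unet_vgg16 = Model(encoder.input, outputs, name="unet_vgg16_tl")

# training set-up reported in the paper: Adam (lr = 0.0001) with binary cross-entropy
unet_vgg16.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                   loss="binary_crossentropy", metrics=["accuracy"])
# unet_vgg16.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=100)
# (x_train, y_train, x_val, y_val are placeholders for the 80:10:10 split of Figure 4)

Because the encoder is frozen, only the appended decoder layers are updated during training, which is what keeps the trainable-parameter count small and shortens the training time.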
3.2. Choosing the best model
In this study, we compare the proposed model with the previous state-of-the-art model. The U-Net architecture is developed with various scenarios so that several alternative models are obtained, which are then compared against one another and against the proposed model. These modified U-Net scenarios are used because the original U-Net model cannot run on our computer, which has an Intel Core i7 processor, 32 GB RAM, a 128 GB SSD, and no GPU or VRAM. Table 2 gives the breakdown of the number of convolution layers and convolution blocks in each U-Net scenario. All models are built in the Python programming language with the TensorFlow, Keras, and NumPy libraries. The training process uses 100 epochs and is executed on the computer described above. Each epoch of the proposed model takes 80 minutes of processing, longer than the four U-Net scenarios because the proposed model has more parameters than the modified U-Nets.

Table 2. Convolution scenarios for U-Net
Scenario   Conv layers per block   Conv blocks   Parameters   Time/epoch
Model 1    1                       5             3,929,985    23 min
Model 2    1                       4             980,097      20 min
Model 3    2                       5             7,862,113    34 min
Model 4    2                       4             1,962,337    32 min

Figure 7 shows the learning curves of the four U-Net scenarios compared with the UNet-VGG16 segmentation model with transfer learning: Figure 7 (a) compares the loss values of the U-Net models and the proposed model, while Figure 7 (b) compares their accuracy. During the training process of 100 epochs, the proposed model provides the minimum loss and maximum accuracy compared to the four U-Net scenarios. Moreover, the loss and accuracy of the proposed model converge faster and are more stable, with smoother curves, whereas those of the U-Net models remain volatile during training. Table 3 shows the overall performance of each model.

Figure 7. (a) Loss and (b) accuracy of the four U-Net scenarios compared to UNet-VGG16 with transfer learning

Table 3. The performance comparison of each model
Model                   Loss    Accuracy
U-Net: Model 1          0.124   0.942
U-Net: Model 2          0.083   0.951
U-Net: Model 3          0.085   0.953
U-Net: Model 4          0.244   0.938
UNet-VGG16 with TL      0.054   0.961

From Table 3 and Figure 7, the minimum loss and maximum accuracy are reached by the proposed model, with a loss of 0.054 and an accuracy of 0.961. Based on these results, the proposed model performs better than the four U-Net scenarios. Since the best model is UNet-VGG16 with Transfer Learning, it is used as the basis of the MRI brain tumor segmentation in the remaining analysis.

3.3. Segmentation results based on the best model
The segmentation is applied to the 16 testing images under UNet-VGG16 with Transfer Learning. The segmentation results are visualized in Figure 8, which shows that the model recognizes the tumor area as the ROI across various tumor sizes and locations, on both the left and right sides of the brain. For the testing data, the CCR of (11) is calculated as the evaluation measure (see the sketch below), and the summary is shown in Table 4. The segmentation results approach the ground truth very well, since all CCR values are above 90%; the grand mean of the CCR over all testing data reaches 95.69%.

Table 4. The summary of CCR for the testing data
Test no.   CCR        Test no.   CCR
1          0.957184   9          0.957489
2          0.970047   10         0.965225
3          0.916245   11         0.961166
4          0.935806   12         0.905029
5          0.982773   13         0.933807
6          0.981598   14         0.928635
7          0.979904   15         0.985748
8          0.974136   16         0.975601
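The per-slice CCR values in Table 4 follow directly from (11). A NumPy sketch of this evaluation for one binary segmentation mask against its ground truth is given below; the 2x2 masks are toy placeholders, and the 0/1 coding stands in for the non-ROI/ROI classes indexed by j in (11).

import numpy as np

def ccr(gt, seg):
    # Correct classification ratio (11): fraction of pixels whose class matches the ground truth,
    # summed over the two classes (0 = non-ROI, 1 = ROI).
    total = gt.size                                    # |GT|
    score = 0.0
    for j in (0, 1):
        score += np.sum((gt == j) & (seg == j)) / total
    return score

gt  = np.array([[1, 1], [0, 0]])     # toy ground truth mask (1 = ROI)
seg = np.array([[1, 0], [0, 0]])     # toy segmentation result
print(ccr(gt, seg))                  # 0.75: three of the four pixels are classified correctly

Averaging this value over the 16 testing slices would give the grand mean reported above.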
Figure 8. The segmentation results of the 16 testing data (input slice, ground truth, and segmentation result for each test image)

4. CONCLUSION
Based on the results and analysis, we can conclude that the proposed model, UNet-VGG16 with Transfer Learning, runs well on a computer with an Intel Core i7 processor, 32 GB RAM, a 128 GB SSD, and no GPU or VRAM. The proposed model performs better than the U-Net model (in four scenarios), since it has the minimum loss and maximum accuracy. The segmentation results under the proposed model approach the ROI target of each brain tumor MRI image very well; the segmentation of the testing data achieves a mean CCR of 95.69%. For future research, different architectures or convolutional block scenarios could be explored to obtain more alternative models, and the optimum number of epochs still needs to be determined to achieve optimal computing time for training the model.
ACKNOWLEDGEMENTS
The authors are grateful to the Directorate for Research and Community Service (DRPM), Ministry of Research, Technology and Higher Education Indonesia, which supported this research under PT research grant no. 944/PKS/ITS/2019.

REFERENCES
[1] Globocan, "Indonesia Source: Globocan 2018," 2019. [Online]. Available: http://gco.iarc.fr/today/data/factsheets/populations/360-Indonesia-fact-sheets.pdf
[2] V. Jaiswal, V. Sharma, S. Varma, "An Implementation of Novel Genetic-Based Clustering Algorithm for Color Image Segmentation," TELKOMNIKA Telecommunication Computing Electronics and Control, vol. 17, no. 3, pp. 1461-1467, June 2019.
[3] P. B. Prakoso, Y. Sari, "Vehicle Detection using Background Subtraction and Clustering Algorithms," TELKOMNIKA Telecommunication Computing Electronics and Control, vol. 17, no. 3, pp. 1393-1398, June 2019.
[4] N. Iriawan, A. A. Pravitasari, K. Fithriasari, Irhamah, S. W. Purnami, W. Ferriastuti, "Comparative Study of Brain Tumor Segmentation using Different Segmentation Techniques in Handling Noise," 2018 International Conference on Computer Engineering, Network and Intelligent Multimedia (CENIM), Surabaya, Indonesia, pp. 289-293, 2018.
[5] A. A. Pravitasari, M. A. I. Safa, N. Iriawan, Irhamah, K. Fithriasari, S. W. Purnami, W. Ferriastuti, "MRI-Based Brain Tumor Segmentation using Modified Stable Student's t from Burr Mixture Model with Bayesian Approach," Malaysian Journal of Mathematical Sciences, vol. 13, no. 3, pp. 297-310, 2019.
[6] A. A. Pravitasari, N. I. Nirmalasari, N. Iriawan, Irhamah, K. Fithriasari, S. W. Purnami, W. Ferriastuti, "Bayesian Spatially Constrained Fernandez-Steel Skew Normal Mixture Model for MRI-based Brain Tumor Segmentation," AIP Conference Proceedings, vol. 2194, no. 1, December 2019.
[7] M. Pathak, N. Srinivasu, V. Bairagi, "Effective Segmentation of Sclera, Iris, and Pupil in Noisy Eye Images," TELKOMNIKA Telecommunication Computing Electronics and Control, vol. 17, no. 5, pp. 2346-2354, October 2019.
[8] I. B. K. Sudiatmika, F. Rahman, Trisno, Suyoto, "Image Forgery Detection Using Error Level Analysis and Deep Learning," TELKOMNIKA Telecommunication Computing Electronics and Control, vol. 17, no. 2, pp. 653-659, April 2019.
[9] E. Gibson, W. Li, C. H. Sudre, L. Fidon, D. Shakir, G. Wang, Z. Eaton-Rosen, R. Gray, T. Doel, Y. Hu, T. Whyntie, P. Nachev, D. C. Barratt, S. Ourselin, M. J. Cardoso, T. Vercauteren, "NiftyNet: a Deep-Learning Platform for Medical Imaging," Computer Methods and Programs in Biomedicine, vol. 158, pp. 113-122, May 2018.
[10] P. V. Tran, "A Fully Convolutional Neural Network for Cardiac Segmentation in Short-Axis MRI," arXiv preprint arXiv:1604.00494, 2016.
[11] Y. Li, H. Qi, J. Dai, X. Ji, Y. Wei, "Fully Convolutional Instance-Aware Semantic Segmentation," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, pp. 4438-4446, 2017.
[12] O. Ronneberger, P. Fischer, T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), vol. 9351, pp. 234-241, November 2015.
[13] C. Balakrishna, D. Sarshar, S. Soltaninejad, "Automatic Detection of Lumen and Media in the IVUS Images using U-Net with VGG16 Encoder," arXiv preprint arXiv:1806.07554, June 2018.
[14] J. Long, E. Shelhamer, T. Darrell, "Fully Convolutional Networks for Semantic Segmentation," 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, pp. 3431-3440, 2015.
[15] Y. LeCun, Y. Bengio, "Convolutional Networks for Images, Speech, and Time Series," The Handbook of Brain Theory and Neural Networks, pp. 3361-3371, January 1995.
[16] H. C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. Mollura, R. M. Summers, "Deep Convolutional Neural Networks for Computer-aided Detection: CNN Architectures, Dataset Characteristics, and Transfer Learning," IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1285-1298, May 2016.
[17] F. Chollet, "Deep Learning with Python," Manning, Shelter Island, 2018.
[18] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, "Gradient-based Learning Applied to Document Recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, November 1998.
[19] A. Krizhevsky, I. Sutskever, G. E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks," Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
[20] M. D. Zeiler, R. Fergus, "Visualizing and Understanding Convolutional Networks," European Conference on Computer Vision (ECCV), Springer, Cham, vol. 8689, pp. 818-833, 2014.
[21] A. Kamilaris, F. A. Prenafeta-Boldú, "A Review of the Use of Convolutional Neural Networks in Agriculture," The Journal of Agricultural Science, pp. 1-11, 2018.
[22] A. Zhang, Z. C. Lipton, M. Li, A. J. Smola, "Dive into Deep Learning," SAGE Publications Inc, 2019.
[23] D. P. Kingma, J. Ba, "Adam: A Method for Stochastic Optimization," arXiv preprint arXiv:1412.6980, 2014.
[24] S. Reddi, S. Kale, S. Kumar, "On the Convergence of Adam and Beyond," arXiv preprint arXiv:1904.09237, 2019.
[25] C. Nikou, N. P. Galatsanos, A. C. Likas, "A Class-adaptive Spatially Variant Mixture Model for Image Segmentation," IEEE Transactions on Image Processing, vol. 16, no. 4, pp. 1121-1130, April 2007.