International Journal of Electrical and Computer Engineering (IJECE)
Vol. 12, No. 6, December 2022, pp. 6736~6743
ISSN: 2088-8708, DOI: 10.11591/ijece.v12i6.pp6736-6743
Journal homepage: http://ijece.iaescore.com
Deep learning for cancer tumor classification using transfer
learning and feature concatenation
Abdallah Mohamed Hassan1, Mohamed Bakry El-Mashade1, Ashraf Aboshosha2
1 Electrical Engineering Department, Faculty of Engineering, Al-Azhar University, Cairo, Egypt
2 NCRRT, Egyptian Atomic Energy Authority, Cairo, Egypt
Article history:
Received Sep 6, 2021
Revised Jun 19, 2022
Accepted Jul 15, 2022

Keywords:
Breast cancer
Cancer tumor
Classification
Deep learning
Feature concatenation
Transfer learning

ABSTRACT
Deep convolutional neural networks (CNNs) represent one of the state-of-the-art methods for image classification in a variety of fields. Because the number of training dataset images in biomedical image classification is limited, transfer learning with CNNs is frequently applied. Breast cancer is one of the most common types of cancer that causes death in women, and early detection and treatment of breast cancer are vital for improving survival rates. In this paper, we propose a deep neural network framework based on the transfer learning concept for detecting and classifying breast cancer histopathology images. In the proposed framework, we extract features from images using three pre-trained CNN architectures, VGG-16, ResNet50, and Inception-v3, concatenate their extracted features, and then feed them into a fully connected (FC) layer to classify benign and malignant tumor cells in breast cancer histopathology images. In comparison to architectures that use a single CNN and to many conventional classification methods, the proposed framework outperformed all other deep learning architectures and achieved an average accuracy of 98.76%.
This is an open access article under the CC BY-SA license.
Corresponding Author:
Abdallah Mohamed Hassan
Electrical Engineering Department, Faculty of Engineering, Al-Azhar University
Cairo, Egypt
Email: abdallah.mohamed@azhar.edu.eg
1. INTRODUCTION
Analysis of the microscopic images that represent various human tissues has developed into one of the most vital fields of biomedical research, as it aids in the understanding of a variety of biological processes. Different applications of microscopic image classification have been developed, ranging from identifying simple patient conditions to studying complex cell processes, and tissue image classification is therefore extremely important. After lung cancer, breast cancer is the most frequently studied cancer type; it is the most prevalent type of cancer in women and has one of the highest death rates in the world [1]. Radiologists use microscopic images of the breast to detect cancer indications in women at an early stage, and the survival rate increases when the disease is detected early. Pathologists analyze samples of microscopic images of breast tissue under a microscope to detect and classify the types of cancer tumors, which are categorized into benign and malignant tumors. A benign tumor is harmless, and in most cases it does not become a source of breast cancer, while a malignant tumor is characterized by abnormal division and irregular growth.
Because manual classification of microscopic images is time-consuming and expensive, there is a growing demand for automated systems as the rate of breast cancer rises and diagnoses vary. As a result, a computer-aided-diagnosis (CAD) system is required to decrease a specialist’s workload by increasing diagnostic efficiency and reducing classification subjectivity. Various applications have been developed
for microscopic image classification. Traditional automated classification techniques include local binary patterns (LBP) [2], which use hand-crafted feature extractors; support vector machines (SVM) [3] as linear classifiers; clustering-based algorithms for segmentation and classification of nuclei [4], [5]; and hybrid SVM-ANN models [6]. Although these methods produced some acceptable classification results, their accuracy can still be improved.
Deep convolutional neural networks were introduced to overcome the accuracy limits of traditional machine learning techniques and have developed into one of the most advanced methods for classification [7]. Deep CNN systems have been applied to nuclei detection and classification [8], tumor detection [9], skin disease classification [10], and detection and classification of lymph node metastases [11]. CNN systems perform well on large datasets but fail to achieve high gains on small datasets.
The principle of transfer learning is used to exploit deep neural networks on small datasets and enhance the performance of CNN structures by reusing their knowledge, reducing computing costs and achieving high accuracy. In transfer learning, a CNN architecture is first learned on a large generic dataset of natural images and then employed as a feature extractor through the pre-trained CNN structure. The generic features extracted from the CNN can be applied to various datasets [12], [13]. To improve transfer learning performance, combining multiple CNN structures has been introduced and could eventually replace the use of a single CNN model. The VGG16, ResNet50, and Inception-v3 networks, which are pre-trained on ImageNet, have proven to be accurate and fast models for image classification [14]–[16].
In the suggested framework, we use transfer learning and a combination of features extracted from multiple CNN architectures to overcome the shortcomings of existing systems in cancer tumor detection and classification. The contributions of this research can be summarized as follows: i) providing a framework for detecting and classifying breast cancer tumors that uses CNN architectures, ii) applying the transfer learning concept and providing a comparative analysis of accuracy for three different deep CNN architectures, and iii) combining features extracted from various networks to improve classification accuracy.
2. PROPOSED METHOD
In this paper, we suggest a framework that uses three different deep CNN architectures, VGG16, ResNet50, and Inception-v3, pre-trained on the ImageNet dataset [17], for breast cancer tumor detection and classification in histopathology images. The suggested model combines low-level features that are separately extracted by the different CNN architectures and then feeds them into a fully connected (FC) layer to classify benign and malignant tumors.
2.1. Pre-trained CNN architectures for feature extraction
In this section, the three CNN models used for feature extraction in the proposed method, VGG-16 [18], ResNet50 [19], and Inception-v3 [20], are described. The features extracted by these models are concatenated and fed into the FC layer used to classify breast cancer tumors. These CNNs were pre-trained on the ImageNet dataset, which covers a wide range of generic image descriptors [21], and feature extraction is then performed using the transfer learning concept. The structure of each CNN architecture is described briefly below.
2.1.1. VGG-16 architecture
VGG-16 is made up of 16 weight layers, comprising 13 convolution layers and three FC layers interleaved with pooling layers [18]. The number of channels in the convolution layers is 64 in the first block and rises by a factor of two after each pooling layer until it reaches 512. The convolution layers use 3x3 filters and the pooling layers use a 2x2 window. VGG-16 is a convolutional network similar to AlexNet but contains more convolution layers; because of its simple, uniform architecture, it outperforms AlexNet. VGG-16’s basic architecture is shown in Figure 1.
2.1.2. ResNet50 architecture
Residual networks (ResNet) [19] are a family of deep neural networks with similar architectures but varying depths that perform well at classification tasks on ImageNet [22]. To deal with the degradation problem of deep neural networks, ResNet introduces the residual learning unit [19]. The merit of this structure is that it enhances classification accuracy without raising model complexity. ResNet50’s basic architecture is depicted in Figure 2.
2.1.3. Inception-v3 architecture
Inception-v3 [20] is an enhanced version of the GoogLeNet architecture [23], which has been used with transfer learning in biomedical applications to achieve high classification performance [24], [25]. The Inception design combines many convolutional filters of varying sizes within a single module. As a result of this design,
the computational complexity and the number of trained parameters are reduced. Inception-v3’s basic architecture is depicted in Figure 3.
Figure 1. The VGG-16 CNN architecture [18]
Figure 2. The ResNet50 CNN architecture [19]
Figure 3. Inception-v3 CNN architecture [20]
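As an illustration of how such pre-trained backbones can be loaded as feature extractors, the following sketch uses tf.keras (it is not the authors' code); the 224x224 input size and the use of global average pooling over the last convolutional maps are assumptions, chosen so that the three networks yield 512-, 2,048-, and 2,048-dimensional feature vectors as described in section 2.4.

```python
# A minimal sketch (assumptions noted above) of loading the three pre-trained
# backbones as frozen feature extractors with tf.keras.
from tensorflow.keras.applications import VGG16, ResNet50, InceptionV3

INPUT_SHAPE = (224, 224, 3)  # assumed preprocessing target size

# include_top=False drops the ImageNet classifier; pooling='avg' applies global
# average pooling, giving 512-d (VGG-16) and 2048-d (ResNet50, Inception-v3) vectors.
vgg16_base = VGG16(weights='imagenet', include_top=False,
                   pooling='avg', input_shape=INPUT_SHAPE)
resnet50_base = ResNet50(weights='imagenet', include_top=False,
                         pooling='avg', input_shape=INPUT_SHAPE)
inception_base = InceptionV3(weights='imagenet', include_top=False,
                             pooling='avg', input_shape=INPUT_SHAPE)

# Freeze the ImageNet weights so the backbones act purely as feature extractors.
for base in (vgg16_base, resnet50_base, inception_base):
    base.trainable = False
```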
2.2. Data augmentation
CNN performance weakens on small datasets due to overfitting [26], which gives poor results on test data despite good performance on the training data; large datasets are therefore required to attain higher accuracy. In this paper, a data augmentation technique is used to reduce overfitting and expand the dataset [26]. The amount of training data is increased by applying geometric transformations to the image dataset using image processing methods: during the training stage, the samples are flipped vertically and horizontally, scaled, translated, and rotated. Because microscopic images are rotationally invariant, cancer tumor microscopic images can be analyzed from various orientations without affecting the diagnosis [27].
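A minimal sketch of such an augmentation pipeline with tf.keras is given below; the specific transformation ranges are assumptions, since the paper does not report its augmentation parameters.

```python
# Geometric augmentations described above (flips, rotation, scaling, translation);
# the ranges below are illustrative assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_augmenter = ImageDataGenerator(
    rotation_range=90,        # microscopic images are rotation invariant
    horizontal_flip=True,     # flip samples horizontally
    vertical_flip=True,       # flip samples vertically
    zoom_range=0.1,           # mild scaling
    width_shift_range=0.1,    # horizontal translation
    height_shift_range=0.1,   # vertical translation
    rescale=1.0 / 255.0,      # map pixel values to [0, 1]
)

# 'x_train' and 'y_train' are placeholders for the training images and labels:
# train_generator = train_augmenter.flow(x_train, y_train, batch_size=32)
```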
[Figures 1–3, diagram content: Figure 1 shows the VGG-16 layer stack, five blocks of 3x3 convolutions with 64, 128, 256, 512, and 512 channels, each block followed by 2x2 pooling, ending in FC-4096, FC-4096, and FC-1000 layers. Figure 2 shows the ResNet50 stages: a 7x7 stride-2 convolution and 3x3 stride-2 pooling, followed by 3, 4, 6, and 3 bottleneck blocks of 1x1, 3x3, and 1x1 convolutions with output widths up to 2,048. Figure 3 shows the Inception-v3 stem convolutions and pooling, followed by 3, 5, and 2 Inception modules, 8x8 pooling, and the linear and logits layers.]
2.3. Transfer learning
Training a model from scratch to a high accuracy requires a large amount of data, but obtaining a large dataset for the problem of interest can be difficult in some cases. As a result, the concept of transfer learning was introduced: a CNN model is first trained for a source task using a large image dataset related to that task and then transferred to the target task, which is trained on a small dataset [28].
The transfer learning process involves two steps: assessing the similarity between the source training dataset and the target dataset, and selecting the pre-trained model. If the target dataset is small and related to the original training dataset, there is a high probability of overfitting; if the target dataset is large and different from the source training dataset, the overfitting probability is low [16], and in this case all that is required for the pre-trained model is fine-tuning.
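The single-backbone transfer-learning setup used as a comparison baseline (one pre-trained CNN followed by FC layers, as in Figure 4) can be sketched as follows; the head size, dropout rate, optimizer, and loss are assumptions, and `vgg16_base` refers to a frozen backbone as in the earlier sketch.

```python
# A minimal sketch of single-CNN transfer learning: freeze the pre-trained
# backbone and train only a small classification head on the target dataset.
from tensorflow.keras import layers, models

def build_transfer_model(pretrained_base, num_classes=2):
    pretrained_base.trainable = False          # keep the source-task knowledge
    model = models.Sequential([
        pretrained_base,                       # e.g. vgg16_base from the sketch above
        layers.Dense(256, activation='relu'),  # small task-specific FC layer
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```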
2.4. The proposed network structure
First, the three CNN architectures VGG-16, ResNet50, and Inception-v3 are trained on a dataset of general images from 1,000 categories, the ImageNet dataset [17]. A transfer learning method can then be used, allowing the CNN architectures to learn generic characteristics from other image datasets without the need to train the models from scratch. The CNN model’s transfer learning architecture is shown in Figure 4: the pre-trained network acts as a feature extractor for general image features, and FC layers are added for classification. The features extracted from each CNN architecture can be summarized as follows: i) VGG-16: 512 features are extracted from the last layer, as shown in Figure 1; ii) ResNet50: 2,048 features are extracted from the last layer, as shown in Figure 2; and iii) Inception-v3: 2,048 features are extracted from the last logits layer, as shown in Figure 3.
The extracted features from the pre-trained models are concatenated to form a 4,608-dimensional feature vector (512 + 2,048 + 2,048). The concatenated features are then fed, using average pooling, into the FC layer for classification of benign and malignant tumors. Figure 5 shows the structure of the proposed feature concatenation scheme.
Figure 4. The CNN model’s transfer learning architecture
Figure 5. The proposed feature concatenation structure
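A minimal sketch of the concatenation scheme of Figure 5 is given below, reusing the frozen backbones and `INPUT_SHAPE` from the earlier sketch. Only the concatenation of the 512-, 2,048-, and 2,048-dimensional feature vectors follows the paper; the FC head size, dropout, optimizer, and loss are assumptions, and each backbone's own `preprocess_input` step is omitted for brevity.

```python
# Three frozen backbones share one input; their pooled feature vectors are
# concatenated and an FC head classifies benign vs. malignant.
from tensorflow.keras import Input, Model, layers

image_in = Input(shape=INPUT_SHAPE, name='histopathology_image')

f_vgg = vgg16_base(image_in)        # 512-d vector
f_res = resnet50_base(image_in)     # 2048-d vector
f_inc = inception_base(image_in)    # 2048-d vector

merged = layers.Concatenate(name='feature_concatenation')([f_vgg, f_res, f_inc])
x = layers.Dense(512, activation='relu')(merged)   # assumed FC head size
x = layers.Dropout(0.5)(x)
output = layers.Dense(2, activation='softmax', name='benign_vs_malignant')(x)

combined_model = Model(inputs=image_in, outputs=output)
combined_model.compile(optimizer='adam',
                       loss='sparse_categorical_crossentropy',
                       metrics=['accuracy'])
```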
3. EXPERIMENTAL RESULTS AND DISCUSSION
3.1. Dataset description
The proposed framework is evaluated using the breast cancer histopathology image (BreakHis) dataset [29]. The dataset consists of 7,909 breast cancer histopathology images collected from patients at various magnification factors (40X, 100X, 200X, and 400X) and contains 5,429 malignant and 2,480 benign samples. The details of the dataset are given in Table 1.
Table 1. The breast cancer histopathological images dataset
Magnification Malignant Benign Total
40X 1,370 625 1,995
100X 1,437 644 2,081
200X 1,390 623 2,013
400X 1,232 588 1,820
Total 5,429 2,480 7,909
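For illustration, a sketch of loading the images into memory is given below; the folder layout (`breakhis/benign`, `breakhis/malignant`) and the 224x224 target size are hypothetical, since the public BreakHis release organizes images by tumor subtype and magnification factor.

```python
# Hypothetical loading of a BreakHis-style folder layout into arrays.
import pathlib
import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array

DATA_DIR = pathlib.Path('breakhis')        # hypothetical root directory
CLASS_NAMES = ['benign', 'malignant']      # label 0 = benign, 1 = malignant

images, labels = [], []
for label, class_name in enumerate(CLASS_NAMES):
    for image_path in sorted((DATA_DIR / class_name).glob('**/*.png')):
        img = load_img(image_path, target_size=(224, 224))  # assumed input size
        images.append(img_to_array(img))
        labels.append(label)

x_data = np.stack(images)   # raw pixel values in [0, 255]
y_data = np.array(labels)
```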
3.2. Results and discussion
We split the dataset into two parts: 80% of the dataset is used as a training set (6,328 images) for training the models, and the remaining 20% is used as a testing set (1,581 images) for testing them. The classification accuracy of the proposed architecture is compared to that of three single transfer learning network architectures, VGG-16, ResNet50, and Inception-v3, each used individually. Table 2 reports the classification accuracy obtained by the proposed model and the other architectures at each magnification factor, together with the average magnification accuracy calculated for each model.
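A sketch of this 80%-20% split and the subsequent training and evaluation is given below, reusing `x_data`, `y_data`, `train_augmenter`, and `combined_model` from the earlier sketches; the stratified split, batch size, and epoch count are assumptions.

```python
# Split, train with augmentation, and evaluate accuracy on the held-out set.
from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(
    x_data, y_data, test_size=0.20, stratify=y_data, random_state=42)

combined_model.fit(train_augmenter.flow(x_train, y_train, batch_size=32),
                   validation_data=(x_test / 255.0, y_test),  # same rescaling as the augmenter
                   epochs=30)

loss, accuracy = combined_model.evaluate(x_test / 255.0, y_test)
print(f'Test accuracy: {accuracy:.4f}')
```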
As shown in Table 2, the proposed framework achieved the highest accuracy: 99.732%, 98.947%, 99.328%, and 98.626% at magnification factors 40X, 100X, 200X, and 400X, respectively. The VGG-16, ResNet50, and Inception-v3 architectures give average magnification accuracies of 95.25%, 90.19%, and 97.02%, respectively, while the suggested model achieves an average accuracy of 99.16%. These results indicate that the suggested architecture outperforms the three single architectures and achieves high accuracy in cancer tumor classification.
Table 2. The accuracy of the proposed model and other CNN models based on different magnification factor
CNN model            Magnification accuracy (%)                    Average magnification accuracy (%)
                     40X       100X      200X      400X
VGG-16               96.242    96.447    96.102    92.214          95.25
ResNet50             89.262    92.368    88.441    90.687          90.19
Inception-v3         97.572    96.586    97.172    96.748          97.02
Proposed Framework   99.732    98.947    99.328    98.626          99.16
To obtain more robust results for the different models, the entire dataset is also divided into training and testing parts using multiple splitting ratios: 90%-10%, 80%-20%, and 70%-30%. The 90%-10% splitting ratio indicates that 90% of the data are used to train the model, while the remaining 10% are used to test it. Table 3 and Figure 6 compare the proposed framework architecture to the other CNN models based on these splitting ratios. In Table 3, the class type represents the type of tumor, where B denotes benign and M denotes malignant; the table reports the precision of each class type, the accuracy for each splitting ratio, and the average accuracy over the splitting ratios for each CNN model. Figure 6 shows the accuracy of the CNN architectures at the different splitting ratios together with the average accuracy of each architecture. As shown in Table 3 and Figure 6, the proposed framework architecture achieves the highest accuracy compared to the single architectures in the classification of the cancer tumor. The VGG-16, ResNet50, and Inception-v3 architectures achieve average accuracies of 95.68%, 88.43%, and 96.49%, respectively, while the suggested model achieves an average accuracy of 98.76%.
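The per-class precision and per-split accuracy reported in Table 3 can be computed as sketched below with scikit-learn; `splits` is a hypothetical mapping from each splitting ratio to its held-out test set, and in the actual experiments each model would be retrained for every split.

```python
# Per-class precision (benign/malignant), per-split accuracy, and the average
# accuracy over splits; assumes x_split is already rescaled like the training data.
from sklearn.metrics import accuracy_score, precision_score

accuracies = []
for ratio, (x_split, y_split) in splits.items():   # e.g. {'90-10': (x, y), ...}
    y_pred = combined_model.predict(x_split).argmax(axis=1)
    prec_benign, prec_malignant = precision_score(y_split, y_pred, average=None)
    acc = accuracy_score(y_split, y_pred)
    accuracies.append(acc)
    print(f'{ratio}: precision B={prec_benign:.4f}, M={prec_malignant:.4f}, '
          f'accuracy={acc:.4f}')

print(f'Average accuracy over splits: {sum(accuracies) / len(accuracies):.4f}')
```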
Table 3. Comparative analysis of accuracy based on different splitting ratios for the proposed model and other CNN models (B = benign, M = malignant)

CNN model            Splitting ratio   Precision B   Precision M   Accuracy of splitting ratio (%)   Average accuracy (%)
VGG-16               90%-10%           95.05         96.73         96.20
                     80%-20%           93.72         96.22         95.44                             95.68
                     70%-30%           95.82         95.20         95.40
ResNet50             90%-10%           84.55         93.35         90.60
                     80%-20%           72.87         95.20         88.22                             88.43
                     70%-30%           79.92         89.49         86.49
Inception-v3         90%-10%           98.37         96.31         96.95
                     80%-20%           94.94         96.96         96.32                             96.49
                     70%-30%           97.77         95.48         96.20
Proposed Framework   90%-10%           97.96         99.81         99.23
                     80%-20%           97.58         99.17         98.67                             98.76
                     70%-30%           97.98         98.58         98.39
Figure 6. Comparative analysis of accuracy for CNN models
3.3. Comparison between the proposed framework and other methods
In this section, the results achieved by our proposed framework are compared with those achieved using various conventional classification methods, as indicated in Table 4. The structures in [13]–[16] achieve accuracies of 92.63%, 97%, 97.08%, and 97.52%, respectively, while our proposed scenario achieves the highest accuracy of all, at 98.76%. These results demonstrate the suggested framework's superiority over other similar methodologies.
Table 4. Comparison between the proposed framework and other methods
Method Accuracy (%)
Nguyen [13] 92.63
Kensert [14] 97.00
Vesal [15] 97.08
Khan [16] 97.52
Proposed Framework 98.76
4. CONCLUSION
In this study, we suggest a deep learning framework based on the transfer learning principle for the detection and classification of breast cancer histopathological images. In this framework, features are extracted from breast cancer images using three different deep CNN models (VGG-16, ResNet50, and Inception-v3) and then concatenated to improve classification accuracy. Data augmentation is used to increase the dataset size, minimize over-fitting issues, and improve the efficiency of the CNN architectures. The work presented here shows how transfer learning and feature concatenation of multiple CNN architectures can improve classification accuracy compared to single CNN networks and achieves excellent classification results. The proposed framework's performance is also compared to that of different existing classification methods, and the proposed model achieves 98.76% as an average accuracy.
REFERENCES
[1] R. L. Siegel, K. D. Miller, and A. Jemal, “Cancer statistics, 2019,” CA: A Cancer Journal for Clinicians, vol. 69, no. 1, pp. 7–34,
Jan. 2019, doi: 10.3322/caac.21551.
[2] D. Liu, S. Wang, D. Huang, G. Deng, F. Zeng, and H. Chen, “Medical image classification using spatial adjacent histogram based
on adaptive local binary patterns,” Computers in Biology and Medicine, vol. 72, pp. 185–200, May 2016, doi:
10.1016/j.compbiomed.2016.03.010.
[3] D. Lin, Z. Lin, S. Sothiharan, L. Lei, and J. Zhang, “An SVM based scoring evaluation system for fluorescence microscopic
image classification,” in 2015 IEEE International Conference on Digital Signal Processing (DSP), Jul. 2015, pp. 543–547, doi:
10.1109/ICDSP.2015.7251932.
[4] M. Kowal, A. F. P. Obuchowicz, J. Korbicz, and R. Monczak, “Computer-aided diagnosis of breast cancer based on fine needle
biopsy microscopic images,” Computers in Biology and Medicine, vol. 43, no. 10, pp. 1563–1572, Oct. 2013, doi:
10.1016/j.compbiomed.2013.08.003.
[5] Y. M. George, H. H. Zayed, M. I. Roushdy, and B. M. Elbagoury, “Remote computer-aided breast cancer detection and diagnosis
system based on cytological images,” IEEE Systems Journal, vol. 8, no. 3, pp. 949–964, Sep. 2014, doi:
10.1109/JSYST.2013.2279415.
[6] T. S. Lim, K. G. Tay, A. Huong, and X. Y. Lim, “Breast cancer diagnosis system using hybrid support vector machine-artificial
neural network,” International Journal of Electrical and Computer Engineering (IJECE), vol. 11, no. 4, pp. 3059–3069, Aug.
2021, doi: 10.11591/ijece.v11i4.pp3059-3069.
[7] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, May 2015, doi:
10.1038/nature14539.
[8] K. Sirinukunwattana, S. E. A. Raza, Y.-W. Tsang, D. R. J. Snead, I. A. Cree, and N. M. Rajpoot, “Locality sensitive deep learning
for detection and classification of nuclei in routine colon cancer histology images,” IEEE Transactions on Medical Imaging,
vol. 35, no. 5, pp. 1196–1206, May 2016, doi: 10.1109/TMI.2016.2525803.
[9] A. A. Cruz-Roa, J. E. Arevalo Ovalle, A. Madabhushi, and F. A. González Osorio, “A deep learning architecture for image
representation, visual interpretability and automated basal-cell carcinoma cancer detection,” in Advanced Information Systems
Engineering, Springer Berlin Heidelberg, 2013, pp. 403–410, doi: 10.1007/978-3-642-40763-5_50.
[10] A. Esteva, B. Kuprel, and S. Thrun, “Deep networks for early stage skin disease and skin cancer classification,” Project Report,
Stanford University, 2015.
[11] B. E. Bejnordi et al., “Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with
breast cancer,” JAMA, vol. 318, no. 22, pp. 2199–2210, Dec. 2017, doi: 10.1001/jama.2017.14585.
[12] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, “CNN features off-the-shelf: an astounding baseline for recognition,” in
2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Jun. 2014, pp. 512–519, doi:
10.1109/CVPRW.2014.131.
[13] L. D. Nguyen, D. Lin, Z. Lin, and J. Cao, “Deep CNNs for microscopic image classification by exploiting transfer learning and
feature concatenation,” in 2018 IEEE International Symposium on Circuits and Systems (ISCAS), 2018, pp. 1–5, doi:
10.1109/ISCAS.2018.8351550.
[14] A. Kensert, P. J. Harrison, and O. Spjuth, “Transfer learning with deep convolutional neural networks for classifying cellular
morphological changes,” SLAS Discovery, vol. 24, no. 4, pp. 466–475, Apr. 2019, doi: 10.1177/2472555218818756.
[15] S. Vesal, N. Ravikumar, A. Davari, S. Ellmann, and A. Maier, “Classification of breast cancer histology images using transfer
learning,” in Lecture Notes in Computer Science, Springer International Publishing, 2018, pp. 812–819, doi: 10.1007/978-3-319-
93000-8_92.
[16] S. Khan, N. Islam, Z. Jan, I. Ud Din, and J. J. P. C. Rodrigues, “A novel deep learning based framework for the detection and
classification of breast cancer using transfer learning,” Pattern Recognition Letters, vol. 125, pp. 1–6, Jul. 2019, doi:
10.1016/j.patrec.2019.03.022.
[17] J. Deng, W. Dong, R. Socher, L.-J. Li, Kai Li, and Li Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in 2009
IEEE Conference on Computer Vision and Pattern Recognition, Jun. 2009, pp. 248–255, doi: 10.1109/CVPR.2009.5206848.
[18] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint
arXiv:1409.1556, Sep. 2014.
[19] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer
Vision and Pattern Recognition (CVPR), Jun. 2016, pp. 770–778, doi: 10.1109/CVPR.2016.90.
[20] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2016, pp. 2818–2826, doi:
10.1109/CVPR.2016.308.
[21] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, “How transferable are features in deep neural networks?,” Advances in neural
information processing systems, Nov. 2014
[22] Y. Yu et al., “Modality classification for medical images using multiple deep convolutional neural networks,” Journal of
Computational Information Systems, vol. 11, no. 15, pp. 5403–5413, 2015
[23] C. Szegedy et al., “Going deeper with convolutions,” in Proceedings of the IEEE Computer Society Conference on Computer
Vision and Pattern Recognition, Jun. 2015, pp. 1–9, doi: 10.1109/CVPR.2015.7298594.
[24] H.-C. Shin et al., “Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics
and transfer learning,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1285–1298, May 2016, doi:
10.1109/TMI.2016.2528162.
[25] A. Kumar, J. Kim, D. Lyndon, M. Fulham, and D. Feng, “An ensemble of fine-tuned convolutional neural networks for medical
image classification,” IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 1, pp. 31–40, Jan. 2017, doi:
10.1109/JBHI.2016.2635663.
[26] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,”
Communications of the ACM, vol. 60, no. 6, pp. 84–90, May 2017, doi: 10.1145/3065386.
[27] D. C. Cireşan, A. Giusti, L. M. Gambardella, and J. Schmidhuber, “Mitosis detection in breast cancer histology images with deep
neural networks,” in Medical Image Computing and Computer-Assisted Intervention-MICCAI 2013, Springer Berlin Heidelberg,
2013, pp. 411–418, doi: 10.1007/978-3-642-40763-5_51.
[28] L. Yang, S. Hanneke, and J. Carbonell, “A theory of transfer learning with applications to active learning,” Machine Learning,
vol. 90, no. 2, pp. 161–189, Feb. 2013, doi: 10.1007/s10994-012-5310-y.
[29] F. A. Spanhol, L. S. Oliveira, C. Petitjean, and L. Heutte, “A dataset for breast cancer histopathological image classification,”
IEEE Transactions on Biomedical Engineering, vol. 63, no. 7, pp. 1455–1462, Jul. 2016, doi: 10.1109/TBME.2015.2496264.
BIOGRAPHIES OF AUTHORS
Abdallah Mohamed Hassan received the B.Sc. and M.Sc. degrees in Electronics
and Communications Engineering from the Faculty of Engineering, Al-Azhar University,
Cairo, Egypt, in 2012 and 2018, respectively. He is an Assistant Lecturer in the Faculty of Engineering, Al-Azhar University, Cairo, Egypt, and is currently a Ph.D. student at the same faculty. His research activities are within artificial intelligence and deep learning. He can be contacted at email: abdallah.mohamed@azhar.edu.eg.
Mohamed Bakry El-Mashade received the B.Sc. degree in electrical
engineering from Al-Azhar University, Cairo, in 1978, the M.Sc. degree in the theory of
communications from Cairo University, in 1982, Le D.E.A. d’Electronique (Spécialité:
Traitment du Signal), and Le Diploma de Doctorat (Spécialité: Composants, Signaux et
Systems) in optical communications, from USTL, L’Academie de Montpellier, Montpellier,
France, in 1985 and 1987 respectively. He serves on the Editorial Board of several
International Journals. He has also served as a reviewer for many international journals. He
was the author of more than 60 peer-reviewed journal articles and the coauthor of more than
60 journal technical papers as well as three international book chapters. He serves on the
Editorial Board of International Journal of Communications, Networks and System Sciences
IJCNS. He has organized a special issue on Recent Trends of Wireless Communication
Networks for International Journal of Communications, Network and Systems sciences
IJCNS. He received the best research paper award from International Journal of
Semiconductor Science and Technology in 2014 for his work on “Noise Modeling Circuit of
Quantum Structure Type of Infrared Photodetectors”. He won the Egyptian Encouraging
Award, in Engineering Science, two times (1998 and 2004). He was included in the American
Society ‘Marquis Who’s Who’ as a ‘Distinguishable Scientist’ in 2004, and in the International Biographical Centre of Cambridge, England as an ‘Outstanding Scientist’ in 2005. He has been named an official listee in the 2020 edition of ‘Marquis Who’s Who in the World®’. His research interests include statistical signal processing, digital and optical signal
processing, free space optical communications, fiber Bragg grating, quantum structure family
of optical devices, SDR, cognitive radio, and software defined radar and SAR. He can be
contacted at email: elmashade@yahoo.com.
Ashraf Aboshosha received the B.Sc. in industrial electronics from Menoufia
University, Egypt in 1990. Since 1992 he has been a researcher at the NCRRT/EAEA, where he served as a junior member of the instrumentation and control committee. In 1997 he received his M.Sc. in automatic control and measurement engineering. From 1997 to 1998 he was a guest researcher at the research centre Jülich (FZJ), Germany. From 2000 to 2004 he was a doctoral student (DAAD scholarship) at the Wilhelm Schickard Institute for Informatics (WSI), Eberhard-Karls-University, Tübingen, Germany, where he received his doctoral degree (Dr. rer. nat.) in 2004. He is the E-i-C of ICGST LLC, Delaware, USA. He is the author of more than 7 books
(4 in English and 3 in Arabic) and about 50 academic articles. He supervised more than 20
academic M.Sc. & Ph.D. theses. He can be contacted at email: editor@icgdt.com.
哪里办理(csu毕业证书)查尔斯特大学毕业证硕士学历原版一模一样哪里办理(csu毕业证书)查尔斯特大学毕业证硕士学历原版一模一样
哪里办理(csu毕业证书)查尔斯特大学毕业证硕士学历原版一模一样
 
Computational Engineering IITH Presentation
Computational Engineering IITH PresentationComputational Engineering IITH Presentation
Computational Engineering IITH Presentation
 
Unit-III-ELECTROCHEMICAL STORAGE DEVICES.ppt
Unit-III-ELECTROCHEMICAL STORAGE DEVICES.pptUnit-III-ELECTROCHEMICAL STORAGE DEVICES.ppt
Unit-III-ELECTROCHEMICAL STORAGE DEVICES.ppt
 
Transformers design and coooling methods
Transformers design and coooling methodsTransformers design and coooling methods
Transformers design and coooling methods
 
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024
 
ISPM 15 Heat Treated Wood Stamps and why your shipping must have one
ISPM 15 Heat Treated Wood Stamps and why your shipping must have oneISPM 15 Heat Treated Wood Stamps and why your shipping must have one
ISPM 15 Heat Treated Wood Stamps and why your shipping must have one
 
22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt
 
IEEE Aerospace and Electronic Systems Society as a Graduate Student Member
IEEE Aerospace and Electronic Systems Society as a Graduate Student MemberIEEE Aerospace and Electronic Systems Society as a Graduate Student Member
IEEE Aerospace and Electronic Systems Society as a Graduate Student Member
 
官方认证美国密歇根州立大学毕业证学位证书原版一模一样
官方认证美国密歇根州立大学毕业证学位证书原版一模一样官方认证美国密歇根州立大学毕业证学位证书原版一模一样
官方认证美国密歇根州立大学毕业证学位证书原版一模一样
 
Introduction to AI Safety (public presentation).pptx
Introduction to AI Safety (public presentation).pptxIntroduction to AI Safety (public presentation).pptx
Introduction to AI Safety (public presentation).pptx
 
Casting-Defect-inSlab continuous casting.pdf
Casting-Defect-inSlab continuous casting.pdfCasting-Defect-inSlab continuous casting.pdf
Casting-Defect-inSlab continuous casting.pdf
 
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTCHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT
 
International Conference on NLP, Artificial Intelligence, Machine Learning an...
International Conference on NLP, Artificial Intelligence, Machine Learning an...International Conference on NLP, Artificial Intelligence, Machine Learning an...
International Conference on NLP, Artificial Intelligence, Machine Learning an...
 
Material for memory and display system h
Material for memory and display system hMaterial for memory and display system h
Material for memory and display system h
 
john krisinger-the science and history of the alcoholic beverage.pptx
john krisinger-the science and history of the alcoholic beverage.pptxjohn krisinger-the science and history of the alcoholic beverage.pptx
john krisinger-the science and history of the alcoholic beverage.pptx
 
2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf
2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf
2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf
 

Deep learning for cancer tumor classification using transfer learning and feature concatenation

  • 1. International Journal of Electrical and Computer Engineering (IJECE) Vol. 12, No. 6, December 2022, pp. 6736~6743 ISSN: 2088-8708, DOI: 10.11591/ijece.v12i6.pp6736-6743  6736 Journal homepage: http://ijece.iaescore.com Deep learning for cancer tumor classification using transfer learning and feature concatenation Abdallah Mohamed Hassan1 , Mohamed Bakry El-Mashade1 , Ashraf Aboshosha2 1 Electrical Engineering Department, Faculty of Engineering, Al-Azhar University, Cairo, Egypt 2 NCRRT, Egyptian Atomic Energy Authority, Cairo, Egypt Article Info ABSTRACT Article history: Received Sep 6, 2021 Revised Jun 19, 2022 Accepted Jul 15, 2022 Deep convolutional neural networks (CNNs) represent one of the state-of-the-art methods for image classification in a variety of fields. Because the number of training dataset images in biomedical image classification is limited, transfer learning with CNNs is frequently applied. Breast cancer is one of most common types of cancer that causes death in women. Early detection and treatment of breast cancer are vital for improving survival rates. In this paper, we propose a deep neural network framework based on the transfer learning concept for detecting and classifying breast cancer histopathology images. In the proposed framework, we extract features from images using three pre-trained CNN architectures: VGG-16, ResNet50, and Inception-v3, and concatenate their extracted features, and then feed them into a fully connected (FC) layer to classify benign and malignant tumor cells in the histopathology images of the breast cancer. In comparison to the other CNN architectures that use a single CNN and many conventional classification methods, the proposed framework outperformed all other deep learning architectures and achieved an average accuracy of 98.76%. Keywords: Breast cancer Cancer tumor Classification Deep learning Feature concatenation Transfer learning This is an open access article under the CC BY-SA license. Corresponding Author: Abdallah Mohamed Hassan Electrical Engineering Department, Faculty of Engineering, Al-Azhar University Cairo, Egypt Email: abdallah.mohamed@azhar.edu.eg 1. INTRODUCTION Analysis of the microscopic images that represent various human tissues has developed as one of the most vital fields of biomedical research, as it aids in the understanding of a variety of biological processes. Different applications of microscopic images classification have been developed, which include identifying simple patient conditions and studying complex cell processes. Tissue image classification is extremely important. After lung cancer, breast cancer is considering the most frequent cancer type studied and the most prevalent type of cancer in women has the highest death rate in the world [1]. The radiologist uses microscopic images of the breast to detect cancer indications in women at an early stage, and the rate of survival will be increased if detected early. Pathologists use a microscope to analyze a sample of microscopic images of breast tissue to detect and classify the types of cancer tumors, which are categorized into benign and malignant tumor. The benign tumor is harmless, and the majority of this type is unable to become a breast cancer source, while the malignant tumor is characterized by abnormal divisions and irregular growth. Because manual classification of microscopic images is time-consuming and expensive, there is a growing demand for automated systems as the rate of breast cancer rises and diagnosis differs. 
As a result, a computer-aided diagnosis (CAD) system is required to reduce the specialist’s workload by increasing diagnostic efficiency and reducing the subjectivity of classification. Various applications have been developed for microscopic image classification.
Traditional automated classification techniques include local binary patterns (LBP) [2], which rely on hand-crafted feature extractors, support vector machines (SVM) [3] as linear classifiers, clustering-based algorithms for the segmentation and classification of nuclei [4], [5], and hybrid SVM-ANN models [6]. Although these methods produced acceptable classification results, their accuracy can still be improved. Deep convolutional neural networks were introduced to overcome the accuracy limits of traditional machine learning techniques and have become one of the most advanced approaches to classification [7]. Deep CNN systems have been applied to nuclei detection and classification [8], tumor detection [9], skin disease classification [10], and the detection and classification of lymph node metastases [11]. CNN systems perform well on large datasets but fail to achieve high gains on small ones. Transfer learning is therefore used to exploit deep neural networks on small datasets, enhancing the performance of a CNN structure by reusing its knowledge, reducing computing costs, and achieving high accuracy. In transfer learning, a CNN architecture is first trained on a large generic dataset of natural images and is then employed as a feature extractor; the generic features extracted from the CNN can be applied to various datasets [12], [13]. To improve transfer learning performance further, combinations of multiple CNN structures have been introduced and could eventually replace the use of a single CNN model. The VGG-16, ResNet50, and Inception-v3 networks, pre-trained on ImageNet, provide accurate and fast models for image classification [14]–[16].

In the suggested framework, we use transfer learning and a combination of features extracted from multiple CNN architectures to overcome the shortcomings of existing cancer tumor detection and classification systems. The contributions of this research can be summarized as follows: i) a framework for detecting and classifying breast cancer tumors using CNN architectures; ii) application of the transfer learning concept with a comparative analysis of the accuracy of three different deep CNN architectures; and iii) the use of a combination of features extracted from different networks to improve classification accuracy.

2. PROPOSED METHOD
In this paper, we suggest a framework that uses three different deep CNN architectures, VGG-16, ResNet50, and Inception-v3, pre-trained on the ImageNet dataset [17], for breast cancer tumor detection and classification in histopathology images. The suggested model combines low-level features that are separately extracted by the different CNN architectures and then feeds them into a fully connected (FC) layer to classify benign and malignant tumors.

2.1. Pre-trained CNN architectures for feature extraction
Three different CNN models are used for feature extraction in the proposed method: VGG-16 [18], ResNet50 [19], and Inception-v3 [20]. Their outputs are concatenated and fed into the FC layer that classifies the breast cancer tumor. These CNNs were pre-trained on the ImageNet dataset, which provides generic image descriptors [21], and feature extraction is then performed using the transfer learning concept.
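A minimal sketch of this feature-concatenation pipeline is given below, before the individual backbones are described. It assumes a TensorFlow/Keras environment; the paper does not name a framework, and the input size, FC width, optimizer settings, and omitted per-backbone preprocessing are illustrative assumptions rather than the authors’ exact implementation.

```python
# Minimal sketch (assumed TensorFlow/Keras): three frozen ImageNet-pretrained
# backbones whose pooled features are concatenated and fed to an FC classifier.
# Input size, FC width, and optimizer settings are illustrative assumptions;
# per-backbone preprocessing (keras.applications.*.preprocess_input) is omitted.
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, ResNet50, InceptionV3

IMG_SHAPE = (224, 224, 3)  # assumed common input size for all three backbones
inputs = layers.Input(shape=IMG_SHAPE)

# include_top=False drops the ImageNet classifier; pooling='avg' yields one
# feature vector per image: 512 (VGG-16), 2,048 (ResNet50), 2,048 (Inception-v3).
backbones = [
    VGG16(weights="imagenet", include_top=False, pooling="avg", input_shape=IMG_SHAPE),
    ResNet50(weights="imagenet", include_top=False, pooling="avg", input_shape=IMG_SHAPE),
    InceptionV3(weights="imagenet", include_top=False, pooling="avg", input_shape=IMG_SHAPE),
]
for backbone in backbones:
    backbone.trainable = False  # transfer learning: keep pre-trained weights fixed

merged = layers.Concatenate()([backbone(inputs) for backbone in backbones])
x = layers.Dense(256, activation="relu")(merged)     # illustrative FC width
outputs = layers.Dense(1, activation="sigmoid")(x)   # benign vs. malignant

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

With pooling set to "avg", the three backbones produce 512-, 2,048-, and 2,048-dimensional vectors, so the concatenated feature has 512 + 2,048 + 2,048 = 4,608 dimensions before the dense layers.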
The structure of each CNN architecture is described briefly below.

2.1.1. VGG-16 architecture
VGG-16 is made up of 16 weight layers: 13 convolution layers and three FC layers, with pooling layers in between [18]. The number of channels is 64 in the first convolution layer and doubles after each pooling stage until it reaches 512. The convolution layers use 3x3 filters and the pooling layers use 2x2 windows. VGG-16 is similar to AlexNet but contains more convolution layers and, thanks to its simple architecture, outperforms it. VGG-16’s basic architecture is shown in Figure 1.

2.1.2. ResNet50 architecture
Residual networks (ResNet) [19] are a family of deep neural networks with similar architectures but varying depths that perform well at classification tasks on ImageNet [22]. To deal with the degradation problem of very deep networks, ResNet introduced the residual learning unit [19]. The merit of this structure is that it improves classification accuracy without raising model complexity. ResNet50’s basic architecture is depicted in Figure 2.

2.1.3. Inception-v3 architecture
Inception-v3 [20] is an enhanced version of the GoogLeNet architecture [23] and has been used with transfer learning in biomedical applications to achieve high classification performance [24], [25]. The Inception design combines convolutional filters of several sizes within a single module; as a result, the computational complexity and the number of trainable parameters are reduced. Inception-v3’s basic architecture is depicted in Figure 3.
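To make the residual learning unit of section 2.1.2 concrete, the sketch below shows a bottleneck block in the ResNet50 style (1x1, 3x3, 1x1 convolutions plus a skip connection). It is an illustrative Keras sketch rather than the exact block definition used by the library model; the filter counts follow the Conv2_x stage shown in Figure 2.

```python
# Illustrative bottleneck residual block (ResNet50-style): the skip connection
# adds the block input to the output of a 1x1 -> 3x3 -> 1x1 convolution stack.
from tensorflow.keras import layers

def bottleneck_block(x, filters=64, out_filters=256):
    shortcut = x
    # Project the shortcut if the channel count does not match the block output.
    if x.shape[-1] != out_filters:
        shortcut = layers.Conv2D(out_filters, 1, padding="same")(x)
        shortcut = layers.BatchNormalization()(shortcut)

    y = layers.Conv2D(filters, 1, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(out_filters, 1, padding="same")(y)
    y = layers.BatchNormalization()(y)

    y = layers.Add()([y, shortcut])  # residual connection
    return layers.ReLU()(y)

# Example: apply one block to a dummy feature map.
inp = layers.Input(shape=(56, 56, 64))
out = bottleneck_block(inp, filters=64, out_filters=256)
print(out.shape)  # (None, 56, 56, 256)
```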
Figure 1. The VGG-16 CNN architecture [18]
Figure 2. The ResNet50 CNN architecture [19]
Figure 3. The Inception-v3 CNN architecture [20]

2.2. Data augmentation
A CNN’s performance weakens on small datasets because of overfitting [26]: the model performs well on the training data but poorly on the test data, so large datasets are needed to attain higher accuracy. In this paper, a data augmentation technique is used to reduce overfitting and expand the dataset [26]. Data augmentation increases the amount of training data by applying geometric transformations to the images with standard image processing methods; during the training stage, the data are augmented with vertical and horizontal flips, scaling, translation, and rotation. Because microscopic images are rotationally invariant, cancer tumor microscopic images can be analyzed from various orientations without affecting the diagnosis [27].
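A minimal sketch of such an augmentation pipeline is shown below, assuming Keras’ ImageDataGenerator; the transformation ranges and folder layout are illustrative assumptions, since the paper only names the transformation types.

```python
# Sketch of geometric data augmentation (assumed Keras ImageDataGenerator);
# the parameter ranges are illustrative, not taken from the paper.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # scale pixel values to [0, 1]
    rotation_range=90,        # rotation: histopathology images are rotation-invariant
    horizontal_flip=True,     # horizontal flipping
    vertical_flip=True,       # vertical flipping
    zoom_range=0.1,           # scaling
    width_shift_range=0.1,    # horizontal translation
    height_shift_range=0.1,   # vertical translation
)

# Hypothetical folder layout: train/benign and train/malignant subdirectories.
train_generator = train_datagen.flow_from_directory(
    "breakhis/train",
    target_size=(224, 224),
    batch_size=32,
    class_mode="binary",
)
```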
2.3. Transfer learning
Training a model from scratch with high accuracy requires a large amount of data, but obtaining a large dataset for the problem at hand can be difficult. Transfer learning addresses this: a CNN model is first trained on a large image dataset for a source task and then transferred to the target task, where it is trained on a small dataset [28]. Two considerations govern the transfer learning process: the similarity between the source training dataset and the target dataset, and the selection of the pre-trained model. If the target dataset is small and similar to the original training dataset, the probability of overfitting is high. If the target dataset is large and different from the source training dataset, the probability of overfitting is low [16], and in this case the pre-trained model only needs fine-tuning.

2.4. The proposed network structure
First, the three CNN architectures VGG-16, ResNet50, and Inception-v3 are trained on the ImageNet dataset of general images from 1,000 categories [17]; transfer learning then allows these architectures to provide generic characteristics for other image datasets without training the models from scratch. The CNN model’s transfer learning architecture is shown in Figure 4: the pre-trained network acts as an extractor of general image features, and FC layers are added for classification. The features extracted from each CNN architecture can be summarized as follows: i) VGG-16: 512 features are extracted from the last layer, as shown in Figure 1; ii) ResNet50: 2,048 features are extracted from the last layer, as shown in Figure 2; and iii) Inception-v3: 2,048 features are extracted from the last logits layer, as shown in Figure 3. The features extracted from the pre-trained models are concatenated to form a 4,608-dimensional (512 + 2,048 + 2,048) feature vector. The concatenated features are then fed, via average pooling, into the FC layer for classification of the benign and malignant tumors. Figure 5 shows the structure of the proposed feature concatenation scheme.

Figure 4. The CNN model’s transfer learning architecture
Figure 5. The proposed feature concatenation structure

3. EXPERIMENTAL RESULTS AND DISCUSSION
3.1. Dataset description
The proposed framework is evaluated on the breast cancer histopathology images (BreakHis) dataset [29]. The dataset consists of 7,909 breast cancer histopathology images collected from patients at four magnification factors (40X, 100X, 200X, and 400X) and contains 5,429 malignant and 2,480 benign samples. The details of the dataset are given in Table 1.

Table 1. The breast cancer histopathological images dataset
Magnification   Malignant   Benign   Total
40X             1,370       625      1,995
100X            1,437       644      2,081
200X            1,390       623      2,013
400X            1,232       588      1,820
Total           5,429       2,480    7,909
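As an illustration of the data inventory in Table 1, the sketch below tallies images per magnification factor and class. The folder layout and file extension are hypothetical assumptions, not the dataset’s official structure.

```python
# Sketch (assumptions): tally BreakHis-style images per magnification and class,
# as summarized in Table 1. The layout breakhis/<class>/<magnification>/ is a
# hypothetical arrangement chosen only for this example.
import glob
import os
from collections import Counter

counts = Counter()
for path in glob.glob(os.path.join("breakhis", "*", "*", "*.png")):
    parts = path.split(os.sep)
    label, magnification = parts[1], parts[2]   # e.g. "malignant", "200X"
    counts[(magnification, label)] += 1

for magnification in ("40X", "100X", "200X", "400X"):
    benign = counts[(magnification, "benign")]
    malignant = counts[(magnification, "malignant")]
    print(f"{magnification}: {malignant} malignant, {benign} benign, {malignant + benign} total")
```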
3.2. Results and discussion
We split the dataset into two parts: 80% as a training set (6,328 images) for training the models and 20% as a testing set (1,581 images) for testing them. The classification accuracy of the proposed architecture is compared with that of the three single transfer learning architectures, VGG-16, ResNet50, and Inception-v3, used individually. Table 2 lists the classification accuracy obtained by the proposed model and the other architectures at each magnification factor, together with the average magnification accuracy of each model. As shown in Table 2, the proposed framework achieved the highest accuracy of 99.732%, 98.947%, 99.328%, and 98.626% at magnification factors 40X, 100X, 200X, and 400X, respectively. The VGG-16, ResNet50, and Inception-v3 architectures give an average magnification accuracy of 95.25%, 90.19%, and 97.02%, respectively, while the suggested model achieves an average accuracy of 99.16%. These results indicate that the suggested architecture outperforms the three single architectures and achieves high accuracy in cancer tumor classification.

Table 2. The accuracy (%) of the proposed model and other CNN models at different magnification factors
CNN model            40X      100X     200X     400X     Average
VGG-16               96.242   96.447   96.102   92.214   95.25
ResNet50             89.262   92.368   88.441   90.687   90.19
Inception-v3         97.572   96.586   97.172   96.748   97.02
Proposed framework   99.732   98.947   99.328   98.626   99.16

To obtain a fuller picture of the different models, the entire dataset is also divided into training and testing parts using several splitting ratios: 90-10%, 80-20%, and 70-30%. A 90-10% splitting ratio means that 90% of the data are used to train the model and the remaining 10% are used to test it. Table 3 and Figure 6 compare the proposed framework with the other CNN models for the different splitting ratios. In Table 3, the class type denotes the tumor type, where B is benign and M is malignant; the table reports the precision of each class type, the accuracy for each splitting ratio, and the average accuracy over the splitting ratios for each CNN model. Figure 6 shows the accuracy of the CNN architectures at the different splitting ratios along with their average accuracy. As shown in Table 3 and Figure 6, the proposed framework achieves the highest accuracy in cancer tumor classification compared with the single architectures: VGG-16, ResNet50, and Inception-v3 achieve average accuracies of 95.68%, 88.43%, and 96.49%, respectively, while the suggested model achieves an average accuracy of 98.76%.
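Before turning to Table 3 below, a sketch of how such a multi-split evaluation might be organized is given here. It assumes scikit-learn for splitting and scoring; build_model() is a hypothetical stand-in for the concatenation model sketched earlier, and the training settings are illustrative.

```python
# Sketch (assumptions): evaluate the framework at several train/test splitting
# ratios and average the resulting accuracies, as reported in Table 3.
# build_model() is a hypothetical stand-in for the concatenation model above;
# labels are assumed to be encoded as 0 (benign) / 1 (malignant).
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def evaluate_splits(images, labels, test_sizes=(0.10, 0.20, 0.30)):
    results = {}
    for test_size in test_sizes:
        x_tr, x_te, y_tr, y_te = train_test_split(
            images, labels, test_size=test_size, stratify=labels, random_state=42
        )
        model = build_model()                         # hypothetical model builder
        model.fit(x_tr, y_tr, epochs=10, verbose=0)   # illustrative training settings
        y_pred = (model.predict(x_te) > 0.5).astype(int).ravel()
        split_name = f"{round((1 - test_size) * 100)}%-{round(test_size * 100)}%"
        results[split_name] = accuracy_score(y_te, y_pred)
    results["average"] = sum(results.values()) / len(results)
    return results
```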
Table 3. Comparative analysis of accuracy based on different splitting ratios for the proposed model and the other CNN models
CNN model            Splitting ratio   Precision B   Precision M   Accuracy of splitting ratio (%)   Average accuracy (%)
VGG-16               90%-10%           95.05         96.73         96.20
                     80%-20%           93.72         96.22         95.44                             95.68
                     70%-30%           95.82         95.20         95.40
ResNet50             90%-10%           84.55         93.35         90.60
                     80%-20%           72.87         95.20         88.22                             88.43
                     70%-30%           79.92         89.49         86.49
Inception-v3         90%-10%           98.37         96.31         96.95
                     80%-20%           94.94         96.96         96.32                             96.49
                     70%-30%           97.77         95.48         96.20
Proposed framework   90%-10%           97.96         99.81         99.23
                     80%-20%           97.58         99.17         98.67                             98.76
                     70%-30%           97.98         98.58         98.39
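The per-class precision and per-split accuracy values of Table 3 can be obtained with standard metrics; a small sketch assuming scikit-learn is shown below, using made-up toy labels rather than the paper’s predictions.

```python
# Sketch (assumptions): per-class precision (benign/malignant) and overall
# accuracy for one train/test split, using scikit-learn metrics on toy data.
from sklearn.metrics import accuracy_score, precision_score

# Toy predictions: "B" = benign, "M" = malignant.
y_true = ["M", "B", "M", "M", "B", "B", "M", "B"]
y_pred = ["M", "B", "M", "B", "B", "M", "M", "B"]

precision_benign = precision_score(y_true, y_pred, pos_label="B")
precision_malignant = precision_score(y_true, y_pred, pos_label="M")
accuracy = accuracy_score(y_true, y_pred)

print(f"Precision (B): {precision_benign:.4f}")
print(f"Precision (M): {precision_malignant:.4f}")
print(f"Accuracy: {accuracy:.4f}")
```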
Figure 6. Comparative analysis of accuracy for the CNN models

3.3. Comparison between the proposed framework and other methods
In this section, the results achieved by the proposed framework are compared with those of several existing classification methods, as indicated in Table 4. The structures in [13]–[16] reach accuracies of 92.63%, 97.00%, 97.08%, and 97.52%, respectively, while the proposed framework achieves a higher accuracy than all four, at 98.76%. These results demonstrate the suggested framework’s superiority over similar methodologies.

Table 4. Comparison between the proposed framework and other methods
Method               Accuracy (%)
Nguyen [13]          92.63
Kensert [14]         97.00
Vesal [15]           97.08
Khan [16]            97.52
Proposed framework   98.76

4. CONCLUSION
In this study, we suggest a deep learning framework based on the transfer learning principle for the detection and classification of breast cancer histopathology images. In this framework, features are extracted from breast cancer images by three different deep CNN models (VGG-16, ResNet50, and Inception-v3) and then concatenated to improve classification accuracy. Data augmentation is used to increase the dataset size, minimize overfitting, and improve the efficiency of the CNN architecture. The work presented here shows that transfer learning and feature concatenation across multiple CNN architectures improve classification accuracy compared with single CNN networks. The proposed framework’s performance is also compared with that of several existing classification methods, and the proposed model achieves an average accuracy of 98.76%.

REFERENCES
[1] R. L. Siegel, K. D. Miller, and A. Jemal, “Cancer statistics, 2019,” CA: A Cancer Journal for Clinicians, vol. 69, no. 1, pp. 7–34, Jan. 2019, doi: 10.3322/caac.21551.
[2] D. Liu, S. Wang, D. Huang, G. Deng, F. Zeng, and H. Chen, “Medical image classification using spatial adjacent histogram based on adaptive local binary patterns,” Computers in Biology and Medicine, vol. 72, pp. 185–200, May 2016, doi: 10.1016/j.compbiomed.2016.03.010.
[3] D. Lin, Z. Lin, S. Sothiharan, L. Lei, and J. Zhang, “An SVM based scoring evaluation system for fluorescence microscopic image classification,” in 2015 IEEE International Conference on Digital Signal Processing (DSP), Jul. 2015, pp. 543–547, doi: 10.1109/ICDSP.2015.7251932.
[4] M. Kowal, A. F. P. Obuchowicz, J. Korbicz, and R. Monczak, “Computer-aided diagnosis of breast cancer based on fine needle biopsy microscopic images,” Computers in Biology and Medicine, vol. 43, no. 10, pp. 1563–1572, Oct. 2013, doi: 10.1016/j.compbiomed.2013.08.003.
[5] Y. M. George, H. H. Zayed, M. I. Roushdy, and B. M. Elbagoury, “Remote computer-aided breast cancer detection and diagnosis system based on cytological images,” IEEE Systems Journal, vol. 8, no. 3, pp. 949–964, Sep. 2014, doi: 10.1109/JSYST.2013.2279415.
[6] T. S. Lim, K. G. Tay, A. Huong, and X. Y. Lim, “Breast cancer diagnosis system using hybrid support vector machine-artificial neural network,” International Journal of Electrical and Computer Engineering (IJECE), vol. 11, no. 4, pp. 3059–3069, Aug. 2021, doi: 10.11591/ijece.v11i4.pp3059-3069.
[7] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, May 2015, doi: 10.1038/nature14539.
[8] K. Sirinukunwattana, S. E. A. Raza, Y.-W. Tsang, D. R. J. Snead, I. A. Cree, and N. M. Rajpoot, “Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1196–1206, May 2016, doi: 10.1109/TMI.2016.2525803.
[9] A. A. Cruz-Roa, J. E. Arevalo Ovalle, A. Madabhushi, and F. A. González Osorio, “A deep learning architecture for image representation, visual interpretability and automated basal-cell carcinoma cancer detection,” in Advanced Information Systems Engineering, Springer Berlin Heidelberg, 2013, pp. 403–410, doi: 10.1007/978-3-642-40763-5_50.
[10] A. Esteva, B. Kuprel, and S. Thrun, “Deep networks for early stage skin disease and skin cancer classification,” Project Report, Stanford University, 2015.
[11] B. E. Bejnordi et al., “Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer,” JAMA, vol. 318, no. 22, pp. 2199–2210, Dec. 2017, doi: 10.1001/jama.2017.14585.
[12] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, “CNN features off-the-shelf: an astounding baseline for recognition,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Jun. 2014, pp. 512–519, doi: 10.1109/CVPRW.2014.131.
[13] L. D. Nguyen, D. Lin, Z. Lin, and J. Cao, “Deep CNNs for microscopic image classification by exploiting transfer learning and feature concatenation,” in 2018 IEEE International Symposium on Circuits and Systems (ISCAS), 2018, pp. 1–5, doi: 10.1109/ISCAS.2018.8351550.
[14] A. Kensert, P. J. Harrison, and O. Spjuth, “Transfer learning with deep convolutional neural networks for classifying cellular morphological changes,” SLAS Discovery, vol. 24, no. 4, pp. 466–475, Apr. 2019, doi: 10.1177/2472555218818756.
[15] S. Vesal, N. Ravikumar, A. Davari, S. Ellmann, and A. Maier, “Classification of breast cancer histology images using transfer learning,” in Lecture Notes in Computer Science, Springer International Publishing, 2018, pp. 812–819, doi: 10.1007/978-3-319-93000-8_92.
[16] S. Khan, N. Islam, Z. Jan, I. Ud Din, and J. J. P. C. Rodrigues, “A novel deep learning based framework for the detection and classification of breast cancer using transfer learning,” Pattern Recognition Letters, vol. 125, pp. 1–6, Jul. 2019, doi: 10.1016/j.patrec.2019.03.022.
[17] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 2009, pp. 248–255, doi: 10.1109/CVPR.2009.5206848.
[18] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, Sep. 2014.
[19] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2016, pp. 770–778, doi: 10.1109/CVPR.2016.90.
[20] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2016, pp. 2818–2826, doi: 10.1109/CVPR.2016.308.
[21] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, “How transferable are features in deep neural networks?,” in Advances in Neural Information Processing Systems, Nov. 2014.
[22] Y. Yu et al., “Modality classification for medical images using multiple deep convolutional neural networks,” Journal of Computational Information Systems, vol. 11, no. 15, pp. 5403–5413, 2015.
[23] C. Szegedy et al., “Going deeper with convolutions,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Jun. 2015, pp. 1–9, doi: 10.1109/CVPR.2015.7298594.
[24] H.-C. Shin et al., “Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1285–1298, May 2016, doi: 10.1109/TMI.2016.2528162.
[25] A. Kumar, J. Kim, D. Lyndon, M. Fulham, and D. Feng, “An ensemble of fine-tuned convolutional neural networks for medical image classification,” IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 1, pp. 31–40, Jan. 2017, doi: 10.1109/JBHI.2016.2635663.
[26] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Communications of the ACM, vol. 60, no. 6, pp. 84–90, May 2017, doi: 10.1145/3065386.
[27] D. C. Cireşan, A. Giusti, L. M. Gambardella, and J. Schmidhuber, “Mitosis detection in breast cancer histology images with deep neural networks,” in Medical Image Computing and Computer-Assisted Intervention-MICCAI 2013, Springer Berlin Heidelberg, 2013, pp. 411–418, doi: 10.1007/978-3-642-40763-5_51.
[28] L. Yang, S. Hanneke, and J. Carbonell, “A theory of transfer learning with applications to active learning,” Machine Learning, vol. 90, no. 2, pp. 161–189, Feb. 2013, doi: 10.1007/s10994-012-5310-y.
[29] F. A. Spanhol, L. S. Oliveira, C. Petitjean, and L. Heutte, “A dataset for breast cancer histopathological image classification,” IEEE Transactions on Biomedical Engineering, vol. 63, no. 7, pp. 1455–1462, Jul. 2016, doi: 10.1109/TBME.2015.2496264.

BIOGRAPHIES OF AUTHORS
Abdallah Mohamed Hassan received the B.Sc. and M.Sc. degrees in electronics and communications engineering from the Faculty of Engineering, Al-Azhar University, Cairo, Egypt, in 2012 and 2018, respectively. He is an assistant lecturer in the Faculty of Engineering, Al-Azhar University, Cairo, Egypt, and is currently a Ph.D. student there. His research activities are within artificial intelligence and deep learning. He can be contacted at email: abdallah.mohamed@azhar.edu.eg.
Mohamed Bakry El-Mashade received the B.Sc. degree in electrical engineering from Al-Azhar University, Cairo, in 1978, the M.Sc. degree in the theory of communications from Cairo University, in 1982, and Le D.E.A. d’Electronique (Spécialité: Traitement du Signal) and Le Diplome de Doctorat (Spécialité: Composants, Signaux et Systèmes) in optical communications from USTL, L’Académie de Montpellier, Montpellier, France, in 1985 and 1987, respectively. He serves on the editorial boards of several international journals and has served as a reviewer for many of them, including the International Journal of Communications, Networks and System Sciences (IJCNS), for which he organized a special issue on Recent Trends of Wireless Communication Networks. He is the author of more than 60 peer-reviewed journal articles and the coauthor of more than 60 journal technical papers as well as three international book chapters. He received the best research paper award from the International Journal of Semiconductor Science and Technology in 2014 for his work on “Noise Modeling Circuit of Quantum Structure Type of Infrared Photodetectors”, and he won the Egyptian Encouraging Award in Engineering Science twice (1998 and 2004). He was included in the American society ‘Marquis Who’s Who’ as a ‘Distinguishable Scientist’ in 2004 and in the International Biographical Centre of Cambridge, England, as an ‘Outstanding Scientist’ in 2005, and he was listed in the 2020 edition of ‘Marquis Who’s Who in the World®’. His research interests include statistical signal processing, digital and optical signal processing, free space optical communications, fiber Bragg gratings, the quantum structure family of optical devices, SDR, cognitive radio, and software defined radar and SAR. He can be contacted at email: elmashade@yahoo.com.

Ashraf Aboshosha received the B.Sc. in industrial electronics from Menoufia University, Egypt, in 1990. Since 1992 he has been a researcher at the NCRRT/EAEA, where he served as a junior member of the instrumentation and control committee. In 1997 he received his M.Sc. in automatic control and measurement engineering. From 1997 to 1998 he was a guest researcher at Research Centre Jülich (FZJ), Germany, and from 2000 to 2004 he was a doctoral student (DAAD scholarship) at the Wilhelm Schickard Institute for Informatics (WSI), Eberhard Karls University, Tübingen, Germany, where he received his doctoral degree (Dr. rer. nat.) in 2004. He is the Editor-in-Chief of ICGST LLC, Delaware, USA. He is the author of more than 7 books (4 in English and 3 in Arabic) and about 50 academic articles, and he has supervised more than 20 M.Sc. and Ph.D. theses. He can be contacted at email: editor@icgdt.com.