Editor-in-Chief
Dr. Sergey Victorovich Ulyanov
State University “Dubna”, Russian Federation
Editorial Board Members
Ebtehal Turki Alotaibi, Saudi Arabia
José Miguel Rubio, Chile
Luis Pérez Domínguez, Mexico
Brahim Brahmi, Canada
Behzad Moradi, Iran
Hesham Mohamed Shehata, Egypt
Mahmoud Shafik, United Kingdom
Siti Azfanizam Ahmad, Malaysia
Hafiz Alabi Alaka, United Kingdom
Abdelhakim Deboucha, Algeria
Karthick Srinivasan, Canada
Ozoemena Anthony Ani, Nigeria
Rong-Tsu Wang, Taiwan
Yu Zhao, China
Aslam Muhammad, Pakistan
Yong Zhong, China
Xin Zhang, China
Anish Pandey, India
Hojat Moayedirad, Iran
Mohammed Abdo Hashem Ali, Malaysia
Paolo Rocchi, Italy
Falah Hassan Ali Al-akashi, Iraq
Chien-Ho Ko, Taiwan
Baki Koyuncu, Turkey
Wai Kit Wong, Malaysia
Viktor Manahov, United Kingdom
Riadh Ayachi, Tunisia
Terje Solsvik Kristensen, Norway
Hussein Chible Chible, Lebanon
Tianxing Cai, United States
Mahmoud Elsisi, Egypt
Jacky Y. K. NG, Hong Kong
Li Liu, China
Fushun Liu, China
Reza Javanmard Alitappeh, Iran
Luiz Carlos Sandoval Góes, Brazil
Abderraouf Maoudj, Algeria
Ratchatin Chancharoen, Thailand
Shih-Wen Hsiao, Taiwan
Nguyen-Truc-Dao Nguyen, United States
Lihong Zheng, Australia
Hassan Alhelou, Syrian Arab Republic
Fazlollah Abbasi, Iran
Chi-Yi Tsai, Taiwan
Shuo Feng, Canada
Mohsen Kaboli, Germany
Dragan Milan Randjelovic, Serbia
Milan Kubina, Slovakia
Yang Sun, China
Yongmin Zhang, Canada
Mouna Afif, Tunisia
Yousef Awwad Daraghmi, Palestine
Ahmad Fakharian, Iran
Kamel Guesmi, Algeria
Yuwen Shou, Taiwan
Sung-Ja Choi, Korea
Yahia ElFahem Said, Saudi Arabia
Michał Pająk, Poland
Qinwei Fan, China
Andrey Ivanovich Kostogryzov, Russian Federation
Ridha Ben Salah, Tunisia
Andrey G. Reshetnikov, Russian Federation
Mustafa Faisal Abdelwahed, Egypt
Ali Khosravi, Finland
Chen-Wu Wu, China
Mariam Shah Musavi, France
Shing Tenqchen, Taiwan
Konstantinos Ilias Kotis, Greece
Dr. Sergey Victorovich Ulyanov
Editor-in-Chief
Artificial Intelligence
Advances
Volume 1 Issue 1 · April 2019 · ISSN 2661-3220 (Online)
Architecture of a Commercialized Search Engine Using Mobile Agents
Falah Al-akashi
To Perform Road Signs Recognition for Autonomous Vehicles Using Cascaded Deep Learning Pipeline
Riadh Ayachi, Yahia ElFahem Said, Mohamed Atri
GFLIB: an Open Source Library for Genetic Folding Solving Optimization Problems
Mohammad A. Mezher
Quantum Fast Algorithm Computational Intelligence PT I: SW / HW Smart Toolkit
Ulyanov S.V
A Novel Dataset For Intelligent Indoor Object Detection Systems
Mouna Afif, Riadh Ayachi, Yahia Said, Edwige Pissaloux, Mohamed Atri
Artificial Intelligence Advances
Volume 1 | Issue 1 | April 2019 | Page 1-58
Contents
Copyright
Artificial Intelligence Advances is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Readers shall have the right to copy and distribute articles in this journal in any form in any medium, and may also modify, convert, or create on the basis of articles. In sharing and using articles in this journal, the user must indicate the author and source, and mark the changes made in articles. Copyright © BILINGUAL PUBLISHING CO. All Rights Reserved.
DOI: https://doi.org/10.30564/aia.v1i1.569
Artificial Intelligence Advances
https://ojs.bilpublishing.com/index.php/aia
ARTICLE
To Perform Road Signs Recognition for Autonomous Vehicles Using Cascaded Deep Learning Pipeline
Riadh Ayachi 1, Yahia ElFahem Said 1,2*, Mohamed Atri 1
1. Laboratory of Electronics and Microelectronics (EμE), Faculty of Sciences of Monastir, University of Monastir, Tunisia
2. Electrical Engineering Department, College of Engineering, Northern Border University, Arar, Saudi Arabia
ARTICLE INFO
Article history
Received: 26 February 2019
Accepted: 6 April 2019
Published Online: 30 April 2019
ABSTRACT
An autonomous vehicle is a vehicle that can guide itself without human intervention. It is capable of sensing its environment and moving with little or no human input. This kind of vehicle has become a concrete reality and may pave the way for future systems where computers take over the art of driving. Advanced artificial intelligence control systems interpret sensory information to identify appropriate navigation paths, as well as obstacles and relevant road signs. In this paper, we introduce an intelligent road sign classifier to help autonomous vehicles recognize and understand road signs. The classifier is based on an artificial intelligence technique: in particular, a deep learning model, the Convolutional Neural Network (CNN), is used. The CNN is a widely used deep learning model for pattern recognition problems such as image classification and object detection, and it has been successfully applied to computer vision because its way of processing images resembles human visual decision making. The proposed pipeline was trained and tested using two different datasets. The proposed CNNs achieved high performance in road sign classification, with a validation accuracy of 99.8% and a testing accuracy of 99.6%. The proposed method can easily be implemented for real-time applications.
Keywords:
Traffic signs classification
Autonomous vehicles
Artificial intelligence
Deep learning
Convolutional Neural Networks (CNN)
Image understanding
*Corresponding Author:
Yahia ElFahem Said,
Electrical Engineering Department, College of Engineering, Northern Border University, Arar, Saudi Arabia;
Email: said.yahia1@gmail.com
1. Introduction
In recent years, the number of road accidents has increased dramatically: according to the National Safety Council [13], more than 40,000 people died in car accidents in the United States in 2017. The main causes of accidents are non-respect of road rules and speed limits. Automated driving technologies have been developed and have reached significant results. Autonomous vehicles are proposed as a solution to make roads safer by taking over control. An autonomous vehicle based on artificial intelligence will not make errors in judging a situation the way a human does. The traffic sign classifier is a key feature for developing autonomous vehicles: it provides a global overview of the road rules used to control the vehicle and determine how it reacts in a given situation.
Generally, an autonomous vehicle is composed of a large number of sensors and cameras. The visual information provided by the cameras can be used to recognize road signs. To process this visual information, a well-known deep learning model, the Convolutional Neural Network (CNN) [1], is proposed. CNNs are widely used in image
processing tasks such as object recognition, image classification [2], and object localization [3]. CNNs are successfully used to solve computer vision tasks [4] because of their power in visual context processing, which mimics the biological visual system, where every neuron in the network is applied to a restricted region of the receptive field [5]. The neurons of the network then overlap to cover the entire receptive field, so features from the whole receptive field are shared everywhere in the network with little effort. The major advantage of Convolutional Neural Networks is their ability to learn directly from the image [6], unlike other classification algorithms that need hand-crafted features to learn from.
For a human, recognizing and classifying a traffic sign is an easy task and the classification will be essentially correct, but for an artificial system it is a hard task that needs a lot of computational effort. In many countries, the shape and color of the same road sign differ; Figure 1 illustrates the stop sign in different countries. In addition, a road sign can look different because of environmental factors such as rain, sun, and dust. These challenges need to be handled successfully to build a robust road sign classifier with minimal error.
Figure 1. Stop Sign in Different Countries
In this paper, we propose a pipeline based on a data preprocessing algorithm and a deep learning model to recognize and classify traffic signs. The data preprocessing pipeline is composed of five stages. First, data loading and augmentation are performed. Then, all the images are resized and shuffled. All the images are next transformed to a single gray-scale channel. After that, we apply local histogram equalization [8,9,10]. Finally, we normalize the images to feed them to the proposed convolutional neural network.
As CNN models, we propose two different networks. The first one is a 14-layer subset of the VGGNet model [12], which was invented by the VGG (Visual Geometry Group) at the University of Oxford and was the first runner-up of the classification task in the ILSVRC2014 challenge [32] as well as the winner of the localization task. The second one is the deep residual network ResNet [11], arguably the most groundbreaking work in the computer vision and deep learning community of the last few years. ResNet makes it possible to train networks of hundreds or even thousands of layers that still achieve compelling performance.
Testing the proposed networks, we achieve high performance in both validation and testing. The best performance was achieved by the 34-layer ResNet architecture, with a validation accuracy of 99.8% and a testing accuracy of 99.6%. With an inference speed of more than 40 frames per second, the pipeline can also be implemented for real-time applications.
The remainder of the paper is organized as follows.
Related works on traffic signs classification are presented
in Section 2. Section 3 describes the proposed pipeline
to recognize and classify road signs. In Section 4, exper-
iments and results are detailed. Finally, Section 5 con-
cludes the paper.
2. Related Works
The need for a robust traffic sign classifier has become an important benchmark problem, and many research works have been presented in the literature [14,15,36]. Ohgushi et al. [16] introduced a traffic sign classifier based on color information, with Bags of Features (BoF) as the feature extractor and a support vector machine (SVM) as the classifier. The proposed method struggles to recognize traffic signs in real conditions, especially when the sign is intensely illuminated or partially occluded.
Some research investigated the detection of traffic signs without performing the classification process [17,18]. Wu et al. [17] proposed a method to detect round traffic signs on Chinese roads; it detects only round signs and cannot handle other sign shapes. Other researchers focus on both detecting and recognizing traffic signs [19].
A three-step method to detect and recognize traffic signs was proposed by Wali et al. [20]. The first step was data preprocessing, the second was detecting the existence of a sign, and the third was classifying it. For the detection process they apply color segmentation with shape matching, and for the classification process they use an SVM classifier. The proposed method achieves an accuracy of 95.71%. Lai et al. [21] introduced a traffic sign recognition method using a smartphone. They used color detection to perform color space segmentation and a shape recognition method using template matching by calculating similarity. Optical character recognition (OCR) was also applied inside the shape border to decide the sign class. The proposed method was limited to red traffic signs only. Gecer et al. [38] proposed color-blob-based COSFIRE filters to recognize traffic signs. The method is based on a Combination of Shifted Filter Responses, which computes the responses of different filters in different regions of each channel of the color space (i.e., RGB). It achieves an accuracy of 98.94% on the GTSRB dataset.
Virupakshappa et al. [22] used a machine learning method combining the bag-of-visual-words technique with Speeded Up Robust Features (SURF) for feature extraction, then fed the features to an SVM classifier to recognize the traffic signs. The proposed method achieves an accuracy of 95.2%. A system based on a BoW descriptor enhanced with a spatial histogram was used by Shams et al. [23] to improve the classification process based on an SVM classifier.
Lin et al. [24] introduced a two-stage fuzzy inference model to detect traffic signs in video frames and then classify them. The method provides high performance only on prohibitory and warning signs. In [25], Yin et al. presented a technique for real-time processing based on the Hough transform to localize the sign in the image, then used the rotation-invariant binary pattern (RIBP) descriptor to extract features, with artificial neural networks as the classification method.
A cascaded Convolutional Neural Network model was introduced by Rachmadi et al. [26] to classify Japanese road signs. The method achieves an accuracy of 97.94% and can be implemented for real-time processing, taking less than 20 ms per image. The method of Sermanet et al. [39] was based on a multi-scale convolutional neural network. It introduces a new connection scheme that skips layers, using pooling layers with different down-sampling ratios for the skip connections than for the non-skip connections. The method reaches an accuracy of 99.1%. Ciresan et al. [37] used a committee of CNNs trained in parallel on differently preprocessed data. It uses an arbitrary number of CNNs, each composed of seven layers: an input layer, two convolution layers, two max pooling layers, and two fully connected layers. The prediction is obtained by averaging the outputs of all the CNNs. This technique further boosts the classification accuracy to 99.4%. Overall, the use of convolutional neural networks has improved classification accuracy compared with classical machine learning techniques.
In recent years, several vehicle manufacturers have developed new techniques to perform traffic sign classification. For example, BMW announced the integration of a traffic sign classifier in the BMW 5 series, and other manufacturers are trying to implement these technologies [27]. Volkswagen implemented a traffic sign classifier in the Audi A8 [28]. All existing research on traffic sign classification proves the importance of this technology for autonomous cars.
3. Proposed Method
As mentioned above, many traffic sign classification techniques have been proposed. Our method focuses on the data preprocessing technique, to enhance image quality and to reduce the number of features learned by the Convolutional Neural Network, so that real-time implementation is ensured. As shown in Figure 2, the preprocessing technique contains five phases: data loading and augmentation; image resizing and shuffling [29]; gray scaling; local histogram equalization [30]; and data normalization.
In the first phase, we load the data and generate new examples using a data augmentation technique. The data augmentation process is applied to maximize the amount of training data. Data augmentation is also used at test time, by generating more points of view of the tested image to ensure a better prediction.
In the second phase, we resize all the images to height*width*3, where 3 denotes the number of channels of the color space. The images are then shuffled to avoid minibatches of highly correlated examples, so that the training algorithm draws a different minibatch each time it iterates. In the third phase, we perform gray scaling to reduce the number of channels, so the images are scaled to height*width*1. As a result of gray scaling, the number of filters learned by the convolutional neural network is reduced, and the training and inference times can be reduced as well. In the fourth phase, we apply local histogram equalization [31] to enhance image contrast by separating the most frequent intensity values. Usually this increases the global contrast of the images and allows areas of lower local contrast to gain a higher contrast. The fifth phase is data normalization, a simple process applied to bring all examples to the same data scale, ensuring an equal representation of all features. The preprocessing pipeline is an important stage that enhances the data injected into the network in both the training and testing processes.
Figure 2. Data Preprocessing
The second part of our method is the Convolutional Neural Network (CNN). Generally, a convolutional neural network is a feedforward neural network used to solve computer vision tasks. A CNN usually contains six types of layers: an input layer, convolution layers, nonlinear layers, pooling layers, fully connected layers, and an output layer. Figure 3 illustrates a CNN architecture.
The complete proposed pipeline is composed of a data preprocessing stage and a convolutional neural network for traffic sign classification. The proposed pipeline can be summarized by the pseudocode presented in Algorithm 1.
Algorithm 1: Proposed pipeline for traffic sign classification
Train input: images, labels
Test input: images
Output: image classes
Mode: choose the mode (training or testing)
Batch size: choose a batch size (number of images per batch)
Image size: choose the image size
Number of batches: choose a number of batches
If mode = training:
    For batch in range(number of batches):
        Load the data (images and labels)
        Apply data augmentation
        Resize the images
        Shuffle the images
        Apply local histogram equalization
        Normalize the images
        Feed the images into the convolutional neural network
        Initialize the CNN parameters (load weights from a pretrained model)
        Compute the mapping function
        Generate the output
        Repeat
            Compute the loss function (difference between output class and input label)
            Optimize the CNN parameters (apply the backpropagation algorithm)
        Until the output class matches the input label
        Choose the next batch
Else (mode = testing):
    Load the data (images)
    Apply data augmentation
    Resize the images
    Apply local histogram equalization
    Normalize the images
    Feed the images into the convolutional neural network
    Load parameters from the trained model
    Compute the mapping function
    Generate the output
Figure 3. Convolutional Neural Network Architecture
The first CNN we use is VGGNet [12]. VGGNet has two main architectures: VGG16, a 16-layer CNN, and VGG19, a 19-layer CNN. The VGGNet architectures are presented in Figure 4. VGGNet achieved a top-5 error of 7.32% in the ILSVRC2014 classification challenge [32]. In our work we use only 14 layers of VGGNet, keeping the first 10 layers and the last 4 layers; in the third block we use just 2 convolutional layers and a pooling layer. A hedged sketch of such a truncated network follows Figure 4.
Figure 4. VGGNet Architecture
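The following Keras-style sketch shows one possible reading of this truncated network; the filter counts follow VGG16, but the exact layer split is our assumption, since the paper does not publish code.

import tensorflow as tf
from tensorflow.keras import layers, models

def vgg_subset(input_shape=(96, 96, 1), n_classes=43):
    # A VGG16-style network with the third block reduced to two
    # convolutional layers and a pooling layer, as described in the text.
    return models.Sequential([
        layers.Conv2D(64, 3, padding='same', activation='relu',
                      input_shape=input_shape),
        layers.Conv2D(64, 3, padding='same', activation='relu'),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, padding='same', activation='relu'),
        layers.Conv2D(128, 3, padding='same', activation='relu'),
        layers.MaxPooling2D(),
        layers.Conv2D(256, 3, padding='same', activation='relu'),
        layers.Conv2D(256, 3, padding='same', activation='relu'),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(4096, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(4096, activation='relu'),
        layers.Dense(n_classes, activation='softmax'),
    ])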
The second CNN we explore is ResNet [11], which introduced a revolutionary architecture that accelerates the convergence of very deep neural networks (more than 20 layers) by using residual blocks instead of the classic plain blocks used in VGGNet. An illustration of the residual block is shown in Figure 5. ResNet won the ILSVRC2015 classification contest [32], achieving a top-5 validation error of 3.57% [11]. To perform traffic sign classification, we choose the ResNet 34 architecture. Figure 5 presents the structure of ResNet 34, a 34-layer CNN with residual blocks. A residual block is an accumulation of the input and the output of the block.
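A minimal sketch of such a residual block, assuming the two 3*3 convolutions with zero padding described later in Section 4 and an identity shortcut (so the block input and output have the same number of filters):

import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    # Two 3x3 convolutions with zero ("same") padding...
    shortcut = x
    y = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    y = layers.Conv2D(filters, 3, padding='same')(y)
    # ...whose output is accumulated with the block input.
    y = layers.Add()([shortcut, y])
    return layers.Activation('relu')(y)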
VGGNet and ResNet are trained to classify natural images from ImageNet [32] with 1000 classes. To adapt them to the traffic sign classifier, the transfer learning technique is applied: the output layer of each architecture is replaced by a new layer containing the traffic sign classes. Transfer learning is a well-known technique in deep learning that reuses an existing architecture to solve a new task by freezing some layers and fine-tuning the other layers or retraining them from scratch. Transfer learning is used to speed up the training process and to improve the performance
of the deep learning architecture. Transfer learning allows the pre-trained weights to be used as a starting point when optimizing the existing architecture for the new task.
Figure 5. ResNet34 Structure
Another advantage of transfer learning is the possibility of training the deep learning model with a small amount of data while still achieving high performance.
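A hedged sketch of this step: Keras ships ResNet50 rather than the paper's ResNet 34, so it stands in here, the three-channel 96*96 input is forced by the ImageNet weights, and the number of frozen layers is our choice.

import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.ResNet50(weights='imagenet',   # pre-trained on ImageNet
                                      include_top=False,     # drop the 1000-class head
                                      pooling='avg',
                                      input_shape=(96, 96, 3))
for layer in base.layers[:-10]:
    layer.trainable = False            # freeze all but the last few layers

model = models.Sequential([
    base,
    layers.Dense(43, activation='softmax'),   # new head: 43 GTSRB sign classes
])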
4. Experiments and Results
In this work, two datasets were used to train and evaluate the networks. The first is the German traffic sign dataset GTSRB [34], a large multi-class benchmark dataset for traffic sign classification. It provides a training directory and a testing directory, each containing 43 traffic sign classes, for a total of more than 50,000 images of traffic signs in real conditions. Figure 6 presents the classes of the German traffic sign dataset. The second dataset is the Belgium traffic sign dataset BTSC [35], which provides training and testing data separately. Together they contain 62 traffic sign classes and more than 4,000 images of real traffic signs on Belgian roads.
Figure 6. The German Traffic Sign Dataset Classes
In all our experiments, the networks were developed using the TensorFlow deep neural network framework. Training was performed on a desktop with an Intel i7 processor and an Nvidia GTX960 GPU.
To achieve good performance, we explored a variety of configurations, varying the image size, the batch size, and the dropout probability, and choosing the learning algorithm (optimizer). We started by resizing the images to 32*32, using a large batch size (1024), a dropout probability of 0.25, and stochastic gradient descent as the learning algorithm, and trained the network.
The final image size was determined after testing several values (32*32, 64*64, 96*96, and 128*128). After several tests, the best configuration was to resize the images to 96*96, with a minibatch of 256, a dropout probability of 0.5, and the Adam optimizer. Adam is an extension of the stochastic gradient descent optimizer that usually converges better and faster; rather than relying on a single fixed learning rate, it maintains and adapts a per-parameter learning rate throughout training.
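Continuing the sketches above, the reported best configuration could be expressed as follows; the loss function and epoch count are our assumptions, as they are not stated in the paper.

model.compile(optimizer=tf.keras.optimizers.Adam(),      # adaptive per-parameter rates
              loss='sparse_categorical_crossentropy',    # assumed: integer class labels
              metrics=['accuracy'])
model.fit(train_images, train_labels,                    # preprocessed 96*96*1 inputs
          batch_size=256,
          epochs=30,                                     # assumed, not stated
          validation_data=(val_images, val_labels))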
Figure 7. The Belgium Dataset Classes
In the data preprocessing pipeline, the data is prepared for training and testing the model. First, the data is loaded and the data augmentation technique applied; Figure 8 shows an example of the data generated by the proposed augmentation technique. Second, the data is resized and shuffled to generate mixed minibatches. Then the images are transformed to the gray-scale color space; Figure 9 illustrates an example of the gray-scaled images.
Figure 8. Data Augmentation
Figure 9. Gray Scaling
Local histogram equalization is then applied to balance image contrast; Figure 10 presents images after applying local histogram equalization. Finally, the data is normalized and fed to the convolutional neural network. An example of the normalized data is presented in Figure 11.
Figure 10. Local Histogram Equalization
Figure 11. Normalized Gray Images and the Original Color Images
In the training process, the data is injected into the CNN architectures and the parameters are optimized. In ResNet 34, the first convolution layer performs feature extraction and down-sampling at the same time, using 7*7 kernels to capture features with a larger receptive field, and a stride of 2. Figure 12 presents the output feature maps of the first ResNet 34 convolution layer. The residual blocks are used for feature extraction, using 2 convolutional layers with 3*3 kernels and zero padding. The input and the output of each residual block are accumulated, which keeps the number of parameters from exploding. Figure 13 presents the output feature maps of the first ResNet 34 residual block.
Figure 12. Feature Maps of the First ResNet34 Convolution Layer
Figure 13. Feature Maps of the First ResNet34 Residual Block
One way to visualize CNN performance is the confusion matrix, which shows the ways in which the classification model is confused when it makes predictions. Figure 14 shows the confusion matrix of the ResNet on the GTSRB dataset.
Figure 14. Confusion Matrix of ResNet34
Table 1. Performance of the Proposed Architectures in Terms of Accuracy on Both Datasets

Architecture       GTSRB (%)   BTSC (%)
VGG (12 layers)    99.3        98.3
ResNet 34          99.6        98.8
Table 1 summarizes the accuracy obtained on the testing data by the models trained on the GTSRB and BTSC datasets. As shown in Table 1, the best performance is obtained on the GTSRB dataset using the ResNet 34 architecture, which proves the ability of the residual block to enhance network performance without any explosion in complexity, even in a very deep convolutional neural network. The results obtained on the BTSC dataset are lower because of the lack of data: the dataset contains only 4,965 images, split between training and testing data. The results reported on the GTSRB dataset show that the proposed traffic sign classifier outperforms human accuracy, which is 98.32%. Most of the misclassified examples are caused by images that are totally or partially damaged after the data preprocessing; Figure 15 illustrates examples of such damaged images.
Figure 15. Damaged Images after Preprocessing
Table 2. Inference Speed of Each Architecture
Architecture frames/second
VGG (12 layers) 57
ResNet 34 43
Table 2 summarizes the number of images processed per second by each architecture. For a real-time implementation, a balance between accuracy and speed is needed. Our best CNN achieves an accuracy of 99.621%, which compares favorably with human accuracy and outperforms state-of-the-art models in the traffic sign classification task.
Table 3 presents a comparison between our architectures and other architectures and methods tested on the GTSRB dataset.
Table 3. Accuracy Comparison

Architecture                             Accuracy (%)
Committee of CNNs [37]                   99.4
Color-blob-based COSFIRE filters [38]    98.9
Sermanet [39]                            99.1
Proposed VGG (12 layers)                 99.3
Proposed ResNet 34                       99.6
As reported in Table 3, our proposed ResNet 34 architecture outperforms state-of-the-art methods in traffic sign classification. Our architectures can also easily be implemented for real-time applications: a real-time application needs at least 25 frames per second and, as reported in Table 2, the slowest architecture processes 43 frames per second. Moreover, all the proposed architectures outperform human accuracy on the traffic sign classification benchmark.
To make the classifier useful for real-world applications and human-interpretable, we implemented the ResNet 34 architecture in a traffic sign classification application where the images are labeled with human-understandable labels. In both training and testing, labels are encoded as integers; for example, the labels are encoded in the range 0 to 42 for the GTSRB dataset. The testing images were collected from the web and do not belong to the datasets. The top 5 probabilities of the softmax layer were visualized: Figure 16 presents an example of the top 5 softmax probabilities and their corresponding input images. The classifier performs well on these new images, which demonstrates its generalization power.
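A short sketch of how such a top-5 readout can be produced; model, image, and sign_names are assumed to come from the training code above and a hypothetical label table, not from the paper.

import numpy as np

probs = model.predict(image[np.newaxis, ...])[0]   # softmax scores for one test image
top5 = np.argsort(probs)[::-1][:5]                 # indices of the five largest scores
for idx in top5:
    print(f'{sign_names[idx]}: {probs[idx]:.3f}')  # readable label and probability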
5. Conclusion
Traffic sign classification was and remains an important application for autonomous cars. Cars need real-time, embedded solutions, which is why a balance between speed and accuracy must be provided. In this paper, we propose an artificial intelligence technique based on a deep learning model, the Convolutional Neural Network, for the traffic sign classification benchmark. The reported results prove that the proposed solutions can be effectively implemented for real-time applications and provide an accuracy that outperforms human performance. The proposed architectures can be further optimized for embedded implementation.
Figure 16. ResNet34 Softmax Probabilities
Conflicts of Interest:
The authors declare no conflict of interest.
References
[1] O'Shea, K., & Nash, R. An introduction to convolutional neural networks. arXiv preprint arXiv:1511.08458, 2015.
[2] Ciresan, D. C., Meier, U., Masci, J., Maria Gambardella, L., & Schmidhuber, J. Flexible, high performance convolutional neural networks for image classification. In IJCAI Proceedings-International Joint Conference on Artificial Intelligence, 2011, 22(1): 1237.
[3] Tompson, J., Goroshin, R., Jain, A., et al. Efficient object localization using convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015: 648-656.
[4] Simonyan, K., & Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[5] Simard, P. Y., Steinkraus, D., & Platt, J. C. Best practices for convolutional neural networks applied to visual document analysis. In Proceedings of the Seventh International Conference on Document Analysis and Recognition (ICDAR), IEEE, 2003: 958.
[6] LeCun, Y., Jackel, L. D., Bottou, L., Cortes, C., Denker, J. S., Drucker, H., ... & Vapnik, V. Learning algorithms for classification: A comparison on handwritten digit recognition. Neural Networks: The Statistical Mechanics Perspective, 1995, 261: 276.
[7] Zhu, H., Chan, F. H., & Lam, F. K. Image contrast enhancement by constrained local histogram equalization. Computer Vision and Image Understanding, 1999, 73(2): 281-290.
[8] Kim, J. Y., Kim, L. S., & Hwang, S. H. An advanced contrast enhancement using partially overlapped sub-block histogram equalization. IEEE Transactions on Circuits and Systems for Video Technology, 2001,
11(4): 475-484.
[9] Stark, J. A. Adaptive image contrast enhancement using generalizations of histogram equalization. IEEE Transactions on Image Processing, 2000, 9(5): 889-896.
[10] He, K., Zhang, X., Ren, S., & Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 770-778.
[11] Sercu, T., Puhrsch, C., Kingsbury, B., & LeCun, Y. Very deep multilingual convolutional neural networks for LVCSR. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2016: 4955-4959.
[12] U.S. vehicle deaths topped 40,000 in 2017, National Safety Council estimates. https://www.usatoday.com/story/money/cars/2018/02/15/national-safety-council-traffic-deaths/340012002
[13] Vitabile, S., Pollaccia, G., Pilato, G., & Sorbello, F. Road signs recognition using a dynamic pixel aggregation technique in the HSV color space. In Proceedings of the 11th International Conference on Image Analysis and Processing, Palermo, Italy, 2001: 572-577.
[14] Zeng, Y., Lan, J., Ran, B., Wang, Q., & Gao, J. Restoration of motion-blurred image based on border deformation detection: A traffic sign restoration model. PLoS ONE, 2015, 10: e0120885.
[15] Ohgushi, K., & Hamada, N. Traffic sign recognition by bags of features. In Proceedings of TENCON 2009 - IEEE Region 10 Conference, Singapore, 2009: 1-6.
[16] Wu, J., Si, M., Tan, F., & Gu, C. Real-time automatic road sign detection. In Proceedings of the Fifth International Conference on Image and Graphics (ICIG '09), Xi'an, China, 2009: 540-544.
[17] Belaroussi, R., Foucher, P., Tarel, J. P., Soheilian, B., Charbonnier, P., & Paparoditis, N. Road sign detection in images: A case study. In Proceedings of the 20th International Conference on Pattern Recognition (ICPR), Istanbul, Turkey, 2010: 484-488.
[18] Shoba, E., & Suruliandi, A. Performance analysis on road sign detection, extraction and recognition techniques. In Proceedings of the 2013 International Conference on Circuits, Power and Computing Technologies (ICCPCT), Nagercoil, India, 2013: 1167-1173.
[19] Wali, S. B., Hannan, M. A., Hussain, A., & Samad, S. A. An automatic traffic sign detection and recognition system based on colour segmentation, shape matching, and SVM. Mathematical Problems in Engineering, 2015.
[20] Lai, C. H., & Yu, C. C. An efficient real-time traffic sign recognition system for intelligent vehicles with smart phones. In Proceedings of the 2010 International Conference on Technologies and Applications of Artificial Intelligence, Hsinchu, Taiwan, 2010: 195-202.
[21] Virupakshappa, K., Han, Y., & Oruklu, E. Traffic sign recognition based on prevailing bag of visual words representation on feature descriptors. In Proceedings of the 2015 IEEE International Conference on Electro/Information Technology (EIT), Dekalb, IL, USA, 2015: 489-493.
[22] Shams, M. M., Kaveh, H., & Safabakhsh, R. Traffic sign recognition using an extended bag-of-features model with spatial histogram. In Proceedings of the 2015 Signal Processing and Intelligent Systems Conference (SPIS), Tehran, Iran, 2015: 189-193.
[23] Lin, C.-C., & Wang, M.-S. Road sign recognition with fuzzy adaptive pre-processing models. Sensors, 2012: 6415.
[24] Yin, S., Ouyang, P., Liu, L., Guo, Y., & Wei, S. Fast traffic sign recognition with a rotation invariant binary pattern-based feature. Sensors, 2015: 2161-2180.
[25] Rachmadi, R. F., Komokata, Y., Ichimura, K., & Koutaki, G. Road sign classification system using cascade convolutional neural network, 2017.
[26] Continental. Traffic Sign Recognition. Available online, 2017: http://www.contionline.com/generator/www/de/en/continental/automotive/general/chassis/safety/hidden/verkehrszeichenerkennung_en.html
[27] Choi, Y., Han, S. I., Kong, S.-H., & Ko, H. Driver status monitoring systems for smart vehicles using physiological sensors: A safety enhancement system from automobile manufacturers. IEEE Signal Process., 2016: 22-34.
[28] Dean, J., & Ghemawat, S. MapReduce: simplified data processing on large clusters. Communications of the ACM, 2008, 51(1): 107-113.
[29] Kim, J. Y., Kim, L. S., & Hwang, S. H. An advanced contrast enhancement using partially overlapped sub-block histogram equalization. IEEE Transactions on Circuits and Systems for Video Technology, 2001, 11(4): 475-484.
[30] Abdullah-Al-Wadud, M., Kabir, M. H., Dewan, M. A. A., & Chae, O. A dynamic histogram equalization for image contrast enhancement. IEEE Transactions on Consumer Electronics, 2007, 53(2).
[31] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., & Fei-Fei, L. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
[32] Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[33] Stallkamp, J., Schlipsing, M., Salmen, J., & Igel, C. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural Networks, 2012. ISSN 0893-6080. http://www.sciencedirect.com/science/article/pii/S0893608012000457
[34] Timofte, R., Zimmermann, K., & Van Gool, L. Multi-view traffic sign detection, recognition, and 3D localisation. IEEE Workshop on Applications of Computer Vision (WACV), 2009.
[35] Larsson, F., & Felsberg, M. Using Fourier descriptors and spatial models for traffic sign recognition. In Proceedings of the 17th Scandinavian Conference on Image Analysis (SCIA), LNCS 6688, 2011: 238-249.
[36] Cireşan, D., et al. Multi-column deep neural network for traffic sign classification. Neural Networks, 2012, 32: 333-338.
[37] Gecer, B., Azzopardi, G., & Petkov, N. Color-blob-based COSFIRE filters for object recognition. Image and Vision Computing, 2017, 57: 165-174.
[38] Sermanet, P., & LeCun, Y. Traffic sign recognition with multi-scale convolutional networks. In Neural Networks (IJCNN), The 2011 International Joint Conference on, IEEE, 2011: 2809-2813.
DOI: https://doi.org/10.30564/aia.v1i1.608
Artificial Intelligence Advances
https://ojs.bilpublishing.com/index.php/aia
ARTICLE
GFLIB: An Open Source Library for Genetic Folding Solving Optimization Problems
Mohammad A. Mezher*
Dept. of Computer Science, Fahd Bin Sultan University, Tabuk, KSA
ARTICLE INFO
Article history
Received: 8 March 2019
Accepted: 16 April 2019
Published Online: 30 April 2019
ABSTRACT
This paper presents GFLIB, a Genetic Folding (GF) MATLAB toolbox for supervised learning problems. In essence, the goal of GFLIB is to provide a concise model of supervised learning and a free open source MATLAB toolbox for performing classification and regression. GFLIB is specifically designed to evolve mathematical models from the traditionally used features. The toolbox suits all kinds of users, from those who use GFLIB as a "black box" to advanced researchers who want to generate and test new functionalities and parameters of the GF algorithm. The toolbox and its documentation are freely available for download at: https://github.com/mohabedalgani/gflib.git
Keywords:
GF toolbox
GF Algorithm
Evolutionary algorithms
Classification
Regression
Optimization
LIBSVM
*Corresponding Author:
Mohammad A. Mezher,
Dept. of Computer Science, Fahd Bin Sultan University, Tabuk, KSA;
Email: mmezher@fbsu.edu.sa
1. Introduction
All evolutionary algorithms [1] are biologically inspired, building on the "survival of the fittest" concept found in Darwinian evolution. The GF algorithm is a member of the evolutionary algorithm (EA) family that solves complicated problems by randomly producing populations of computer programs. Every computer program (chromosome) undergoes a number of natural adjustments, called crossover and mutation, to create a brand-new population. These operators are iterated to generate the fittest chromosomes, which are evaluated using one of the performance measurements. Unlike other members of the EA family, GF uses a simple floating-point scheme for genes when formulating GF chromosomes.
Certainly, there are quite a number of open source evolutionary algorithm toolboxes for MATLAB [2,3], but none specific to the genetic folding algorithm. GFLIB aims to provide such a free open source toolbox that can be used and developed by others. Accordingly, the GFLIB toolbox was designed from scratch and structured to
ensure code reusability and clarity. The end result can deal with a wide variety of machine learning problems, offering a fast and easy way to try different datasets over a distinct range of parameters. GFLIB was tested on different MATLAB versions and computer systems, namely R2017b for Windows and R2017a for Mac.
This standalone toolbox offers options that help users and researchers decide on the training and testing data sets, the number of folds for k-fold cross-validation, the mathematical operators, the crossover and mutation operators and rates, the kernel types, and various other GF parameters. Furthermore, the toolbox can present results in different formats and figures, such as the ROC curve, structural complexity, fitness values, and mean square errors.
In other words, the aim of building a standalone supervised learning toolbox is to spread the GF algorithm across all data sets that fall within classification and regression problems.
2. GFLIB Structure
2.1 Previous Version of the GF Toolbox Structure
The old version of GFLIB relied completely [4] on the GP Toolbox [2]. It contained GF algorithms for supervised classification and regression problems, but it followed the structure designed for GP. At that time, the GF toolbox lacked unique encoding and decoding mechanisms that functioned fully as intended when integrated with the GP Toolbox. The development of the GF toolbox was therefore oriented toward optimizing and integrating the existing GP toolbox. The implementation was done using MATLAB and the GP package. The idea was to encode and decode using the GF tree, whereby GFLIB was built using the GF mechanism shown in [5].
2.2 Current Version of the GFLIB Toolbox Structure
Although the main GF structure was demonstrated in detail in [4,5], this paper focuses on the structure of the GFLIB toolbox only. GFLIB is a MATLAB research project that is essentially intended to offer users a complete toolbox without the need to know how the GF algorithm works on a specific dataset. The newly developed GFLIB toolbox additionally grants researchers full control in comparison with other well-known evolutionary algorithms. The variety of options that GFLIB presents makes it a very useful tool for researchers, students, and experts who are interested in testing their own datasets.
2.2.1 Data Structures
GFLIB provides an easy way to add a dataset in text format. Suitable text files may be found in both the UCI dataset repository [6] and the LIBSVM dataset collection [7]. GFLIB mainly supports the .txt data type in the same style as the UCI datasets.
The main data structures in the GFLIB Toolbox are the genotypes and phenotypes that represent GF chromosomes. The chromosomes are the central structure of the algorithm. A GF gene consists of three parts: the index number of the gene in the chromosome, which represents the father, and two pointers inside the gene, which represent the children.
The GF chromosome structure encodes an entire individual as a sequence of single floating-point numbers of the form lc.rc, where lc is the left-child number and rc is the right-child number. Phenotypes are stored in a structure holding a determined number of individuals. The ith individual pop(i) consists of chromstr and chromnum: chromstr holds the operator names, chromnum holds the GF encoding numbers, and together they represent the lc and rc values. The root operator and its GF number must be scalar. In these GF structures, each GF number points to a particular gene, for the right-child chromosome or the left-child chromosome respectively.
In general, the purpose of encoding and decoding a GF chromosome is to give it an arithmetic interpretation. GF encodes an arithmetic operation by dividing it into left and right sides; each side is divided into further valid genes to formulate a GF chromosome. The encoding process depends on the number of operands the arithmetic operations take. A two-operand operator (e.g., the minus operator) is placed at the first gene, referring to other operators repeatedly until terminals are reached. The operator types called by a father gene are: two children (two operands), one child (one operand), and no child (terminal).
To decode a chromosome, take the first gene, which has two divisions (children) with respective operands: the lc child and the rc child. Repeatedly, for each father, the children are visited until a kernel function is fully represented. The decoding/encoding process [4,5,8,9] applies the folding father operator (e.g., plus) over the lc child (minus) and the rc child (multiply). This folding mechanism gives rise to the algorithm known as Genetic Folding.
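The toolbox itself is MATLAB; the following Python sketch only illustrates the decoding rule just described, applied to the best Iris chromosome reported in Section 4.1, under our reading that a left-child index of 0 marks a terminal.

ops = ['Plus_s', 'Sine', 'Plus_v', 'X', 'Y', 'Sine', 'Sine', 'Y', 'Y', 'X', 'X']
genes = ['2.3', '4.5', '6.7', '0.4', '0.5', '8.9', '10.11', '0.8', '0.9', '0.10', '0.11']

def decode(i):
    # Unfold gene i (1-based) into a readable expression string.
    lc, rc = genes[i - 1].split('.')   # keep genes as strings: '0.10' must not collapse to 0.1
    if lc == '0':                      # a left-child index of 0 marks a terminal (x or y)
        return ops[i - 1]
    return f'{ops[i - 1]}({decode(int(lc))}, {decode(int(rc))})'

print(decode(1))
# Plus_s(Sine(X, Y), Plus_v(Sine(Y, Y), Sine(X, X)))

Note that the genes are kept as strings rather than floats, since parsing '0.10' as a float would lose the distinction between children 1 and 10.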
The three datasets used here for comparative analysis are the Iris dataset (a multi-class classification problem), the Heart dataset (a binary classification problem), and the Housing dataset (a regression problem). The Iris dataset is a
classic dataset made by the biologist Ronald Fisher, who used it in 1936 as an example of linear discriminant analysis. There are 50 samples from each of 3 species of Iris (Iris setosa, Iris virginica, and Iris versicolor), and four features were measured for every sample: the length and the width of the sepals and petals, in centimetres [10].
The second dataset is the Heart dataset (the part obtained from the Cleveland Clinic Foundation), using a subset of 14 attributes. The purpose is to detect heart disease in a patient; the integer target value goes from 0 (no presence) to 4 [6].
The last dataset, for testing regression problems, is the Housing dataset. It contains the median value of house prices along with 13 other parameters that could potentially be related to housing prices. The aim is to fit a linear regression model estimating the median price of owner-occupied homes in Boston.
2.2.2 GFLIB Toolbox Structure
The toolbox provides algorithms such as SVC, SVR, and the Genetic Folding algorithm. It provides easy-to-use MATLAB files that take as input the basic parameters of each algorithm, based on the selected file. For example, for regression problems there is a file (regress.m) in which to enter the kernel type, the number of k-folds, the population size, and the maximum number of generations. The list of parameters users can input, uniform across classification and regression problems, is shown in Table 1:
Table 1. List of Parameters in GFLIB

Name        Definition                Values
Mutprob     mutation probability      A float value
Crossprob   crossover probability     A float value
Maxgen      max generation            An integer value
Popsize     population size           An integer value
Type        problem type              multi, binary, regress
Data        dataset                   *.txt
Kernel      kernel type               rbf, linear, polynomial, gf
Crossval    cross-validation          An integer value
Oplist      operators and operands    'Plus_s', 'Minus_s', 'Plus_v', 'Minus_v', 'Sine', 'Cosine', 'Tanh', 'Log', 'x', 'y'
Oplimit     length of chromosome      An integer value
The main directory of GFLIB contains a set of .m files serving the main purposes of GFLIB, described in detail in this section, and the following subdirectories:
• @data, which contains the folder @binary for binary-classification datasets, @multi for multi-classification datasets, and @regress for regression datasets; use these subdirectories to manipulate or add datasets.
• @libsvm, whose functions, discussed in [7], are integrated into the toolbox to play the SVM role.
• The binary, multi, and regress files, which form the basic entry point for each problem type respectively.
The figures listed in Table 2 are designed and integrated into the GFLIB toolbox for the sake of comparison with other algorithms and toolboxes.
Table 2. List of GFLIB Figures Shown in the Toolbox
Name Type
Population Diversity Fitness distribution vs. Generation
Accuracy Accuracy value vs. Generation
Structure Complexity Tree Depth/size vs. Generation
Tree Structure GP tree structure
GF Chromosome GF Chromosome structure
In the developed GFLIB toolbox, the focus was on applying supervised learning to the real-world problems shown in Table 3, using LIBSVM as described in Figure 1.
The particular dataset to be used is determined by the user's reference to it in the path, and the GF algorithm runs accordingly. Likewise, once the user decides on the GF parameters, the right GF algorithm (classifier or regression) runs consequently.
The toolbox was built using GF structs (chromosomes) to implement the core GF encoding and decoding mechanisms. The major functions of the GFLIB Toolbox are outlined here:
(1) Population representation and initialisation: genpop, initpop
The GFLIB Toolbox supports floating-point chromosome representation. A floating-point GF chromosome is initialized by the toolbox function initpop, and genpop is provided to build a vector describing the populations and figure statistics.
(2) Fitness assignment: calcfitnes, kernel, kernelvalue
The fitness function transforms the raw objective function values of the equations found by the GF algorithm into non-negative values; kernelvalue is evaluated repeatedly for all individuals in the population via kernel. The toolbox supports both the libsvm package [7] and the fitrsvm function [11] in MATLAB. Using both, GFLIB could successfully generate models capable of fitting the abalone data set. The result of libsvm (via the svmtrain function) was used along with svmpredict to successfully predict with the different input parameters. The GF algorithm includes eight
arithmetic operators in the toolbox; as shown in Table 1, they are either one-operand operators (sine, cosine, tanh, and log) or two-operand operators (plus, minus).
(3) Genetic Folding operators: crossover, mutation
GFLIB supports two types of operators by dividing the population into two equal halves, each undergoing one type of operator. The GFLIB operators are one-point crossover, two-point crossover, and swap mutation.
(4) Selection operators: selection
This function selects a given number of individuals from the current population according to their fitness and returns a row of structs with their indices. Currently, the roulette wheel selection method is implemented in GFLIB. Selection methods in particular are required to balance solution quality against genetic diversity.
(5) Performance figures: genpop
The figures included to demonstrate the performance of the GF algorithm are: the ROC curve (binary problems only), the expression tree, fitness values, population diversity, accuracy versus complexity, and structure complexity. GFLIB also includes well-known kernel functions so that comparisons can be drawn easily. The file also prints the best GF chromosome in two formats: gene numbers and operator string.
Figure 1. GFLIB Life Cycle
3. GF Algorithm Using Generative Models
Genetic Folding (GF) [4,5,8,9] is a novel algorithm driven by a folding mechanism inspired by the RNA sequence. GF can represent an NP problem with a simple array of floating-point numbers instead of a complex tree structure. First, GF randomly generates an initial population composed of basic mathematical operations. Then, valid chromosomes (expressions) are evaluated: GF assigns a fitness value to every chromosome according to the fitness function being developed. Chromosomes are then selected by roulette wheel, after which the fittest chromosomes are subjected to the genetic operators in order to generate a new population independently. In every population, the chromosomes are also passed through a filter that tests their validity. The genetic operators are used to generate a new population for the next generation, and the entire procedure is repeated until the optimal chromosome (kernel) is achieved.
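A hedged Python illustration of this loop follows (the toolbox itself is MATLAB); fitness, crossover, and mutate are placeholders standing in for the toolbox functions, and the filtering of invalid chromosomes is only indicated by a comment.

import random

def evolve(init_population, fitness, crossover, mutate,
           max_gen=20, cx_rate=0.5, mut_rate=0.1):
    pop = init_population()
    for gen in range(max_gen):
        scores = [fitness(c) for c in pop]
        total = sum(scores)

        def select():
            # Roulette-wheel selection: pick with probability proportional to fitness.
            r, acc = random.uniform(0, total), 0.0
            for c, s in zip(pop, scores):
                acc += s
                if acc >= r:
                    return c
            return pop[-1]

        nxt = []
        while len(nxt) < len(pop):
            a, b = select(), select()
            child = crossover(a, b) if random.random() < cx_rate else a
            if random.random() < mut_rate:
                child = mutate(child)
            nxt.append(child)   # invalid chromosomes would be filtered out here
        pop = nxt
    return max(pop, key=fitness)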
4. Experiments on GFLIB
This paper first shows how the GFLIB methods work on binary and multi-classification problems, then carries out a regression problem using the GFLIB methods. Three datasets are chosen as testing data for the two types of experiments; part of their properties is included in Table 3 and Table 5 for classification and regression respectively. The same parameters, from k-folding to the operator list, are used across the experiments, and other well-known kernels are included for the sake of comparison with GFLIB. The datasets used in the binary, multi-classification, and regression problems are taken from the UCI repository [6].
4.1 GFLIB for Classification Problems
The classification datasets included in GFLIB are shown in Table 3 with their respective details.
Table 3. Classification Datasets Used in the GFLIB
Name Type Size
Credit approval Binary 690*15
Statlog German Credit Binary 1000*20
Heart Scale Binary 270*13
Ionosphere Binary 351*34
Sonar Scale Binary 208*60
Spam Binary 4601*57
Iris Scale Multi 150*4
Zoo Multi 101*18
The list of parameters’ value used in the experiments
DOI: https://doi.org/10.30564/aia.v1i1.608
15
Artificial Intelligence Advances | Volume 01 | Issue 01 | April 2019
Distributed under creative commons license 4.0
for both binary and multiclassification problems is shown
in table 4.
Table 4. List of Classification Parameter Values

Name        Value
mutprob     0.1
crossprob   0.5
maxgen      20
popsize     50
type        binary, multi
data        see Table 3
kernel      GF, rbf, linear, polynomial
crossval    10-fold
oplist      'Plus_s', 'Minus_s', 'Multi_s', 'Plus_v', 'Minus_v', 'x', 'y'
oplimit     20 (length of chromosome)
Figure 2. Classification Fitness Values (fitness/accuracy % vs. generation; maximum 97.297297, average 82.378378, median 97.297297, best so far 100.000000)
The best chromosome string found using GFLIB for the Iris dataset is:
Plus_s Sine Plus_v X Y Sine Sine Y Y X X
and the corresponding GF number encoding of this string is:
2.3 4.5 6.7 0.4 0.5 8.9 10.11 0.8 0.9 0.10 0.11
The maximum fitness (accuracy) found using GFLIB over all generations for the Iris dataset was 100.00%.
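For reference, applying the decoding sketch from Section 2.2.1 to this chromosome unfolds it into Plus_s(Sine(X, Y), Plus_v(Sine(Y, Y), Sine(X, X))), which matches the GF tree structure shown in Figure 4(b).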
4.2 GFLIB for Regression Problems
For all figure types except the ROC curve, the experiment ran the algorithm for 20 generations with 10-fold cross-validation. The best performance is the smallest mean square error of the objective function obtained over all function evaluations. A population of 50 was used at each combination of a half-size mutation pool, a half-size crossover pool, a mutation rate of 0.1, and a crossover rate of 0.5. Thus, for each generation, 20 combinations of operators are tried to form a valid GF chromosome. The GF operator rates are shown in Table 6.
Table 5 lists the regression datasets included in GFLIB, with a brief description of their dimensionality.
Table 5. Regression Datasets Used in the GFLIB
Name Type Size
Abalone Regression 4177*8
Housing Regression 506*13
MPG Regression 392*6
Table 6 shows the parameters and values used to run a regression test on the Housing dataset:
Table 6. List of Regression Parameter Values

Name        Value
mutprob     0.1
crossprob   0.5
maxgen      20
popsize     50
type        regression
data        see Table 5
kernel      GF, rbf, linear, polynomial
crossval    10-fold
oplist      'Plus_s', 'Minus_s', 'Multi_s', 'Plus_v', 'Minus_v', 'x', 'y'
oplimit     20 (length of chromosome)
The best performance found had an MSE value of 0.000121 over all generations, as shown in Figure 3. Figures 4 and 5 demonstrate the variety of results produced by the GFLIB toolbox. The population diversity figure plots, as dots, the highest and lowest fitness values found in a population. The structure complexity figure plots the folding depth of the best GF chromosome found in each generation; the size of each folding is counted from the number of calls triggered by the first number of the GF chromosome.
A GF chromosome structure has been well-defined to
represent a structural folding of a GF chromosome. Then,
the GF chromosome is extracted and arranged as a tree
structure of real numbers. The GF encoding part of the
toolbox is used to evolve the tree-structure of a program
whereas the GF decoding part of the toolbox is applied to
determine the string of the structural chromosome. Exper-
imental results have shown the promise of the developed
approach.
Figure 3. Regression Fitness Value (MSE, ×10⁻⁵, over 10 generations; maximum: 0.000086, average: 0.000082, median: 0.000086, best-so-far: 0.000121)
Figure 4. GFLIB Toolbox Run on the Iris Multi-classification Dataset: (a) population diversity (fitness distribution over 20 generations); (b) GF tree structure, Plus_s(Sine(x, y), Plus_v(Sine(y, y), Sine(x, x))); (c) GF structure complexity (tree depth/size over 20 generations; maximum depth: 4, best-so-far depth: 4, best-so-far size: 11)
Figure 5. GFLIB Toolbox Run on the Housing Regression Dataset: (a) population diversity; (b) GF tree structure, Minus_s(Minus_s(y, Minus_v(x, x)), Log(Minus_s(x, Cosine), Sine)); (c) structure complexity (tree depth/size over 10 generations; maximum depth: 4, best-so-far depth: 4, best-so-far size: 11)
The best GF string found using GFLIB for the Housing dataset is:
Minus_s Minus_s Log Y Minus_v Minus_s Sine X X X Cosine
and the best GF number string formed for the above-mentioned string was:
2.3 4.5 6.7 0.4 8.9 10.11 0.7 0.8 0.9 0.10 0.11
5. Conclusion
The GFLIB toolbox, built using MATLAB, is presented for users and researchers interested in solving real NP problems. The key features of this toolbox are the structure of the GF chromosome and the encoding and decoding processes included in the toolbox. In GFLIB, eleven well-known UCI datasets are studied and implemented with their relative performance analyses: ROC curve, fitness values, structural analysis, tree structure, and population diversity. These datasets can be categorised into two categories: classification and regression. All figures are comparable with another set of three well-known kernel functions. For either category, GFLIB allows users to select their own parameter choices. Balanced parameters of the GF chromosome must be considered in order to maintain genetic diversity within the population of candidate solutions throughout the generations. On the other hand, the MATLAB GFLIB files tend to shorten the development time of the toolbox.
In this paper, GFLIB is compared with three well-known kernels. In future research, I intend to compare GFLIB with a GA and GP alone as well. I also intend to compare the toolbox with other kinds of hybrid methods, such as the hybrid decision tree/instance-based approach.
References
[1] Seyedali Mirjalili. Evolutionary Algorithms and Neural Networks: Theory and Applications. Springer International Publishing, June 2018.
[2] Sara Silva, Jonas Almeida. GPLAB - a genetic programming toolbox for MATLAB. In: Proc. of the Nordic MATLAB Conference, pp. 273-278, 2005.
[3] A. J. Chipperfield, P. J. Fleming. The MATLAB genetic algorithm toolbox. IEE Colloquium on Applied Control Techniques Using MATLAB, UK, 1995.
[4] Mohammad Mezher, Maysam Abbod. Genetic Folding: A New Class of Evolutionary Algorithms. pp. 279-284, 2010.
[5] Mohammad Mezher, Maysam Abbod. Genetic Folding: An Algorithm for Solving Multiclass SVM Problems. Applied Soft Computing, Elsevier. 41(2): 464-472, 2014.
[6] C. L. Blake, C. J. Merz. UCI Repository of Machine Learning Databases. University of California, Irvine, Department of Information and Computer Sciences, 1998.
[7] Chih-Chung Chang, Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology. 2(3): 1-27, 2011.
[8] Mohammad Mezher, Maysam Abbod. Genetic Folding: A New Class of Evolutionary Algorithms. October 2010.
[9] Mohammad Mezher, Maysam Abbod. A New Genetic Folding Algorithm for Regression Problems. Proceedings - 2012 14th International Conference on Modelling and Simulation, UKSim, pp. 46-51, 2012.
[10] R. A. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics. 7(2): 179-188, 1936.
[11] Statistics and Machine Learning Toolbox User's Guide. R2018b, The MathWorks, Inc., Natick, Massachusetts, United States.
DOI: https://doi.org/10.30564/aia.v1i1.619
Artificial Intelligence Advances
https://ojs.bilpublishing.com/index.php/aia
ARTICLE
Quantum Fast Algorithm Computational Intelligence PT I: SW / HW
Smart Toolkit
Ulyanov S.V.*
State University "Dubna", Universitetskaya Str.19, Dubna, Moscow Region, 141980, Russia
ARTICLE INFO ABSTRACT
Article history
Received: 12 March 2019
Accepted: 18 April 2019
Published Online: 30 April 2019
A new approach to the circuit implementation design of quantum algorithm gates for quantum massive parallel fast computing is presented. The main attention is focused on the development of a design method for fast quantum algorithm operators, such as superposition, entanglement and interference, which are in general time-consuming operations due to the number of products that have to be performed. A sophisticated smart toolkit with SW & HW support for a supercomputing accelerator of quantum algorithm simulation is described. A method for performing Grover's interference without product operations is introduced as a benchmark. The background of the developed information technology is the "Quantum / Soft Computing Optimizer" (QSCOptKB™) software based on a soft and quantum computational intelligence toolkit. Quantum genetic and quantum fuzzy inference algorithm gate design is considered. The quantum information technology of imperfect knowledge base self-organization design of fuzzy robust controllers for the guaranteed achievement of the control goal of an intelligent autonomous robot in unpredicted control situations is described.
Keywords:
Quantum algorithm gate
Superposition
Entanglement
Interference
Quantum simulator
*Corresponding Author:
Ulyanov S.V.,
State University "Dubna", Universitetskaya Str.19, Dubna, Moscow Region, 141980, Russia;
Email: ulyanovsv@mail.ru
1. Introduction: Role of Quantum Synergetic
Effects in AI and Intelligent Control Models
R. Feynman and Yu. Manin independently suggested, and correctly showed, that quantum computing can be effectively applied to the simulation and search of solutions of classically intractable quantum system problems using a quantum programmable computer (as a physical device). Recent research shows successful engineering application of end-to-end quantum computing information technologies (quantum sophisticated algorithms and quantum programming) in the search for solutions of algorithmically unsolved problems in classical dynamic intelligent control systems, artificial intelligence, intelligent cognitive robotics, etc.
Concrete developments are the cognitive "man-robot" interactions in collective multi-agent systems, the "brain-computer-device" interface supporting autistic children with robots for service use, and so on. These applications are examples of the successful application of efficient classical simulation of quantum control algorithms to the algorithmically unsolved problems of classical control system robustness in unpredicted control situations.
Related works. Many interesting results have been published on the fundamentals and applications of the quantum / classical hybrid approach to the design of different smart classical or quantum dynamic systems. For example, an error mitigation technique and classical post-processing can be conveniently applied, thus offering a hybrid quantum-classical algorithm for currently available noisy quantum processors [1]; the Quantum Triple Annealing Minimization (QTAM) algorithm utilizes the framework of simulated annealing, which is a stochastic point-to-point search method, where the quantum gates that act on the quantum states formulate a quantum circuit with a given circuit height and depth [2]. A new local fixed-point iteration plus global sequence acceleration optimization algorithm for general variational quantum circuit algorithms is described in [3].
The basic requirements for universal quantum computing have all been demonstrated with ions, and quantum algorithms using few-ion-qubit systems have been implemented [4]. Quantum computing is finding a vital application in providing speed-ups for machine learning problems, critical in the "big data" world. Machine learning already permeates many cutting-edge technologies and may become instrumental in advanced quantum technologies. Aside from quantum speed-up in data analysis, or classical machine learning optimization used in quantum experiments, quantum enhancements have also been (theoretically) demonstrated for interactive learning tasks, highlighting the potential of quantum-enhanced learning agents [5]. In [6] the system PennyLane is introduced as a Python 3 software framework for optimization and machine learning of quantum and hybrid quantum / classical computations. A plugin system makes the framework compatible with any gate-based quantum simulator or hardware; plugins are provided for Strawberry Fields, Rigetti Forest, Qiskit, and ProjectQ, allowing PennyLane optimizations to be run on publicly accessible quantum devices provided by Rigetti and IBM Q. On the classical front, PennyLane interfaces with accelerated machine learning libraries such as TensorFlow, PyTorch, and autograd. PennyLane can be used for the optimization of variational quantum eigensolvers, quantum approximate optimization, quantum machine learning models, and many other applications. The first industry-based and societally relevant applications will be as a quantum accelerator. This is based on the idea that any end-application contains multiple parts whose properties are better executed by a particular accelerator, which can be an FPGA, a GPU or a TPU; the quantum accelerator is added as an additional coprocessor. The formal definition of an accelerator is indeed a co-processor linked to the central processor that executes certain parts of the overall application much faster [7].
Limited quantum memory is one of the most important constraints for near-term quantum devices. Understanding whether a small quantum computer can simulate a larger quantum system, or execute an algorithm requiring more qubits than are available, is of both theoretical and practical importance and is discussed in [8]. One prominent platform for constructing a multi-qubit quantum processor involves superconducting qubits, in which information is stored in the quantum degrees of freedom of nanofabricated, anharmonic oscillators constructed from superconducting circuit elements. The requirements imposed by larger quantum processors have shifted the mindset within the community, from solely scientific discovery to the development of new, foundational engineering abstractions associated with the design, control, and readout of multi-qubit quantum systems. The result is the emergence of a new discipline termed quantum engineering, which serves to bridge the basic sciences, mathematics, and computer science with fields generally associated with traditional engineering [9, 10].
Moreover, new synergetic effects, defined and extracted from the measurement of quantum information (hidden in the classical control states of traditional controllers with time-dependent coefficient gain schedules), are the information resource for increasing control system robustness and guaranteeing the achievement of the control goal in hazard situations. The background of this synergetic effect is the creation of new knowledge from the experimental response signals of imperfect knowledge bases in unpredicted situations, using a quantum algorithm of knowledge self-organization in the form of quantum fuzzy inference. The background of the developed information technology is the "Quantum / Soft Computing Optimizer" (QSCOptKB™) software based on a soft and quantum computational intelligence toolkit.
Algorithmic constraints on mathematical models of data processing in the classical form of computing (based on the Church-Turing thesis and the background of classical physics laws) differ dramatically from the physical constraints on resource limitations in data information processing models based on quantum mechanical models, such as information transmission, information bounds on the extraction of knowledge, the amount of quantum accessible experimental information, quantum Kolmogorov complexity, the speed-up quantum limit of data processing, quantum channel capacity, etc. Exploring the meaning of Landauer's thesis that "information is physical" has prepared the background for changing, clarifying and expanding the Church-Turing thesis, and has introduced the R&D idea of exploring quantum computing and developing quantum computers for successfully solving many classically unsolved (intractable in the classical sense) problems.
The classification of quantum algorithms is demonstrated in Fig. 1.
Figure 1. Classification of Quantum Algorithms and Interrelations with Quantum Fuzzy Control (algorithm library: decision-making algorithms of Deutsch and Deutsch-Jozsa, search algorithms of Grover and Shor; design system: quantum genetic search algorithm, robust knowledge base design for fuzzy controllers on QFI, quantum fuzzy control, quantum fuzzy modelling system)
Quantum algorithms are in general random: the decision-making quantum algorithm of Deutsch-Jozsa and the quantum search algorithms (QSA) of Shor and Grover are examples of successful applications of quantum effects and constraints arising from the introduction of new classes of computational basis quantum operators, namely superposition, entanglement and interference, which are absent in classical computational models. These effects give the possibility to introduce new types of computation: quantum massive parallel computing using the superposition operator; the operator of entanglement (super-correlation, or quantum oracle), which creates the possibility of searching for a "good" (in general unknown) solution; and the operator of quantum interference, which helps to extract the searched "good" solutions with maximal amplitude probability. All of these operators are reversible; the classical irreversible operator of measurement (for example, a coin toss) extracts the result of the quantum algorithm computation. Note that the quantum effects described above are absent in classical models of computation, which demonstrates the effectiveness of quantum constraints in classical models of computation.
Figure 2 demonstrates the computing analogy between soft and quantum algorithms and the operators that are used in quantum soft computing information technology.
Figure 2. Interrelations between Soft and Quantum Operators in Genetic and Quantum Algorithms (the main problem of global optimization is approached classically by GA structures, with the fitness function as criterion and the operators selection, crossover and mutation (reproduction) acting on binary-coded classical states |0⟩ = [1 0]^T and |1⟩ = [0 1]^T, and quantumly by quantum search algorithms, with minimum entropy production as criterion: generation and creation of superposition states (1/√2)(|0⟩ ± |1⟩) from the initial position |0⟩, generation and creation of entanglement states, superposition of solutions and oracle measurements, disentanglement, and interference realized by one-qubit rotation gates, controlled-NOT two-qubit gates and the quantum Fourier transform; the general solution space of size 2^N is narrowed to the searched solution, with expert decision making closing the loop)
From the point of view of quantum programming on a quantum computer, there currently exists no general methodology of quantum computing and simulation of dynamic systems, but many proposals of quantum simulators have been developed (see, for example, the large list of quantum simulators available at [https://quantiki.org/wiki/list-qc-simulators]).
Remark. The purpose of this article is concerned with the problem of discovering new QAs. As with D-Wave processors, supercomputing processes in a quantum computer can be described as a synergetic union of hybrid quantum / classical HW and quantum SW with support of quantum programming.
Remark. To understand more clearly the fundamental capabilities and limitations of quantum computation, we are to discover efficient QAs for interesting engineering problems such as intelligent cognitive control systems.
One of the most important open problems in computer science is to estimate the possibility of quantum speed-up in the search for solutions of computational problems.
Oracular, or black-box, problems are the first examples of problems that can be solved faster with a quantum computer than with a classical computer. The computer in the black-box model is given access to an oracle (a black box) that can be queried to acquire information about the problem. The computational goal is to find the solution to the problem using as few queries to the oracle as possible [11-13].
1.1 Goal and Problem Solving
This article considers the possibility of designing a family of quantum decision-making and search algorithms (QAs) (see Fig. 1) that form the background of quantum computational intelligence for solving the problems of big and mining data, deep quantum machine learning (based on quantum neural networks), global optimization in intelligent quantum control (using quantum genetic algorithms), etc. (see details in Pt II).
1.2 Method of Solution and Smart Toolkit
The presented method and relative hardware implement matrix and algorithmic forms of the quantum operators that are used in a QA (the entanglement or oracle operator, and the interference operator, as in the second and third steps of QA implementation), increasing the computational speed-up with respect to the corresponding SW realization of traditional and new QSAs. A high-level structure of a generic entanglement block that uses logic gates as analogue elements is described. A method for performing Grover interference without products is introduced [14, 15].
QUANTUM ALGORITHM ACCELERATOR COMPUTING: SW / HW SUPPORT
A. General Structure of Quantum Algorithm
The problem solved by a QA can be stated in symbolic form:
Input: a function $f : \{0,1\}^n \to \{0,1\}^m$.
Problem: find a certain property of the function f.
A given function f is the map of one logical state into another, and the QA estimates qualitative properties of the function f.
A general description of a QA is given in Fig. 3 (physically, the type of operator $U_F$ describes the qualitative properties of the function f). Figure 4 shows the steps of a QA, including almost all of the described qualitative peculiarities of the function f and the physical interpretation of the applied quantum operators. The structure of a QA is outlined in the scheme diagram of Fig. 5.
Figure 3. General Description of the QAG (the n input qubits |0⟩ pass through Hadamard blocks H and the m qubits |x⟩ through blocks S to form the superposition; the entanglement operator U_F and the interference operator INT follow, repeated k times, before bit-wise measurement of the output)
Figure 4. General Structure of a QA (the classical input codes the qualitative properties of the function f into a problem-oriented operator, the quantum oracle as black box; the Hadamard transformation creates the superposition and the quantum Fourier transformation performs the interference, so that the QAG design realizes $|\psi_{fin}\rangle = \left[(\text{Interference})(\text{Quantum oracle})(\text{Superposition})\right]|\psi_{initial}\rangle$ as quantum massive parallel computing; SCO denotes the quantum KB optimizer producing the QC output answer)
Figure 5. Scheme Diagram of the QA Structure (the encoder maps f → F → U_F at the binary-string level; the quantum block applies U_F to basis vectors in a complex Hilbert space; the decoder interprets the result via map tables and interpretation spaces to produce the output answer)
As mentioned above, a QA estimates (without numerical computing) the qualitative properties of the function f. Thus with QAs we can study qualitative properties of the function f without quantitative estimation of its values. For example, Fig. 6 represents the general approach to Grover's QAG design.
Figure 6. Circuit and Quantum Gate Representation of
Grover’s QSA
As a termination criterion, a minimum-entropy-based method is adopted [13].
The structure of the QAG in Fig. 3 is defined in general form as follows:

$QAG = \left[\left(Int \otimes I^{\otimes m}\right) \cdot U_F\right]^{h+1} \cdot \left(H^{\otimes n} \otimes S^{\otimes m}\right)$ (1)

where I is the identity operator and S is equal to I or H, depending on the problem description.
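For small registers, Eq. (1) can be assembled directly by Kronecker products. The following MATLAB sketch (our illustration; the helper name qag and its argument list are assumptions, not part of the described toolkit) builds the gate from its three blocks:

function G = qag(Int, UF, S, n, m, h)
% QAG  Assemble the quantum algorithm gate of Eq. (1):
%   G = [(Int (x) I^m) * U_F]^(h+1) * (H^n (x) S^m)
% Int -- 2^n x 2^n interference operator; UF -- 2^(n+m) x 2^(n+m)
% entanglement operator; S -- 2 x 2 (I or H); h -- repetition count.
% Illustrative sketch for small n and m only (dense matrices).
H  = [1 1; 1 -1] / sqrt(2);
Hn = 1;  for k = 1:n, Hn = kron(Hn, H); end    % H^{(x)n}
Sm = 1;  for k = 1:m, Sm = kron(Sm, S); end    % S^{(x)m}
G  = (kron(Int, eye(2^m)) * UF)^(h+1) * kron(Hn, Sm);
end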
The design of fast algorithms to simulate most known QAs on classical computers [15-17] with a computational intelligence toolkit is as follows: 1) a matrix-based approach; 2) model representations of quantum operators in fast QAs; 3) an algorithmic approach, where matrix elements are calculated on demand (illustrated below); 4) a problem-oriented approach, with which we succeeded in running Grover's algorithm with up to 64 and more qubits with Shannon entropy calculation (up to 1024 qubits without the termination condition); 5) quantum algorithms with a reduced number of operators (entanglement-free QAs, and so on).
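As an illustration of the algorithmic approach 3), a single element of the Walsh-Hadamard operator can be computed on demand, without ever storing the 2^n × 2^n matrix, anticipating Eq. (2) below (a minimal MATLAB sketch of ours; indices are zero-based):

% On-demand element of the Walsh-Hadamard operator (no matrix stored):
% [H^n]_(i,j) = (-1)^(i*j) / 2^(n/2), with i*j the bitwise inner product.
n = 6;  i = 13;  j = 42;
parity = mod(sum(bitget(bitand(i, j), 1:n)), 2);   % parity of i*j
hij = (-1)^parity / 2^(n/2)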
Remark. In this article we briefly describe the main blocks [13-17] in Fig. 6: i) unified operators; ii) problem-oriented operators; iii) benchmarks of QA simulation on classical computers; and iv) quantum control algorithms based on quantum fuzzy inference (QFI) and the quantum genetic algorithm (QGA) as new types of QSA (see more details in Part II of this article).
Let us consider matrix-based and problem-oriented approaches to simulating most known QAs on classical computers and a small quantum computer.
I. Quantum operator description: SW&HW smart toolkit support
We consider, from the simulation viewpoint, the structure of the quantum operators of superposition, entanglement and interference [14,16,18,19,23-26] in the matrix-based approach.
Superposition operators of QAs
The superposition operator consists, in general form, of combinations of tensor products of the Hadamard operator H with the identity operator I:

$H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$

The superposition operator of most QAs can be expressed (see Fig. 3 and Eq. (1)) as

$Sp = \left(\bigotimes_{i=1}^{n} H\right) \otimes \left(\bigotimes_{i=1}^{m} S\right),$

where n and m are the numbers of inputs and outputs respectively. The numbers of outputs m, as well as the structures of the corresponding superposition and interference operators, are presented in [12, 13] for different QAs.
The elements of the Walsh-Hadamard operator can be obtained as follows:

$\left[H^{\otimes n}\right]_{i,j} = \frac{1}{2^{n/2}}(-1)^{i*j} = \frac{1}{2^{n/2}}\begin{cases} 1, & \text{if } i*j \text{ is even} \\ -1, & \text{if } i*j \text{ is odd} \end{cases}$ (2)

where $i, j = 0, 1, \ldots, 2^n - 1$ and $i*j$ denotes the bitwise inner product of the binary codes of i and j. The elements can also be obtained by simple replication according to the rule presented in Eq. (2).
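The replication rule is the Kronecker-product recursion $H^{\otimes n} = H \otimes H^{\otimes (n-1)}$; the following minimal MATLAB check (ours) confirms that the replicated matrix agrees with the element formula of Eq. (2):

% Build H^{(x)n} by simple replication and check one element, Eq. (2).
n  = 3;
H  = [1 1; 1 -1] / sqrt(2);
Hn = 1;
for k = 1:n
    Hn = kron(Hn, H);                          % replication step
end
i = 5;  j = 6;                                 % zero-based indices
par = mod(sum(bitget(bitand(i, j), 1:n)), 2);  % parity of i*j
assert(abs(Hn(i+1, j+1) - (-1)^par / 2^(n/2)) < 1e-12)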
Interference operators of main QAs
The interference operator for Grover's algorithm [18, 19] can be written as a block matrix:

$Int^{Grover} = D_n \otimes I, \qquad \left[D_n \otimes I\right]_{i,j} = \begin{cases} \left(\frac{1}{2^{n-1}} - 1\right) I, & i = j \\ \frac{1}{2^{n-1}}\, I, & i \neq j \end{cases}$ (3)

where $i, j = 0, \ldots, 2^n - 1$ and $D_n$ refers to the diffusion operator with elements $\left[D_n\right]_{i,j} = \frac{1}{2^{n-1}} - \delta_{ij}$. Note that with a bigger number of qubits the gain coefficient $1/2^{n-1}$ becomes smaller.
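A direct construction of the diffusion operator of Eq. (3) in MATLAB (our illustrative sketch for small n) makes the shrinking gain coefficient visible:

% Diffusion operator D_n of Eq. (3): all entries 1/2^(n-1), minus the
% identity on the diagonal; the interference block is Int = D_n (x) I.
n  = 3;
N  = 2^n;
Dn = ones(N) / 2^(n-1) - eye(N);
IntGrover = kron(Dn, eye(2));
disp(1 / 2^(n-1))      % gain coefficient: 0.25 for n = 3, shrinking with n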
Entanglement operators of main QAs
Operators of entanglement are in general a part of the QA in which the information about the function being analyzed is coded as an "input-output" relation. In the general approach for coding binary functions into corresponding entanglement gates, an arbitrary binary function is considered as

$f : \{0,1\}^n \to \{0,1\}^m, \qquad f(x_0, \ldots, x_{n-1}) = (y_0, \ldots, y_{m-1}).$

First, the irreversible function f is transferred into a reversible function F as follows: $F : \{0,1\}^{n+m} \to \{0,1\}^{n+m}$ and

$F(x_0, \ldots, x_{n-1}, y_0, \ldots, y_{m-1}) = \left(x_0, \ldots, x_{n-1},\; f(x_0, \ldots, x_{n-1}) \oplus (y_0, \ldots, y_{m-1})\right),$

where ⊕ denotes addition modulo 2. This transformation creates a unitary quantum operator that performs the same transformation. With the reversible function F it is possible to design an entanglement operator matrix according to the following rule:

$\left[U_F\right]_{ij} = 1 \iff F(j) = i, \qquad i, j \in \{0\ldots0, \ldots, 1\ldots1\}$ (binary strings of length n + m).

The resulting entanglement operator is a diagonal block matrix of the form $U_F = \mathrm{diag}\left(M_0, M_1, \ldots, M_{2^n - 1}\right)$. Each block $M_i$, $i = 0, \ldots, 2^n - 1$, can be obtained as

$M_i = \bigotimes_{k=0}^{m-1} \begin{cases} I, & \text{iff } F(i,k) = 0 \\ C, & \text{iff } F(i,k) = 1 \end{cases}$ (4)

and consists of m tensor products of I or C operators, where C stands for the NOT operator. Note that the entanglement operator (4) is a sparse matrix; owing to this property, the simulation of the entanglement operation is accelerated.
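The block rule (4) translates directly into code. The following MATLAB sketch (our illustration; the example function f marking a single input is an assumption) builds U_F as a diagonal block matrix of I and C factors:

% Entanglement operator U_F of Eq. (4) as a diagonal block matrix: block
% M_i is a tensor product of I (output bit kept) and C = NOT (output bit
% flipped) factors selected by the k-th output bit of f on input i.
n = 2;  m = 1;
f = @(i, k) double(i == 2);        % example: one output bit marking x = 2
I = eye(2);  C = [0 1; 1 0];
UF = [];
for i = 0:2^n - 1
    Mi = 1;
    for k = 1:m
        if f(i, k), Mi = kron(Mi, C); else, Mi = kron(Mi, I); end
    end
    UF = blkdiag(UF, Mi);          % U_F = diag(M_0, ..., M_{2^n-1})
end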
II. QA computing accelerator: SW&HW support
Figure 7 shows the structure of the intelligent quantum computing accelerator.
Figure 7. Intelligent Quantum Soft Computing Accelerator Structure (a PC software package with S.C. Optimizer, Q.S.C. Optimizer, Q.G.S. Algorithm, Q.G. Design, Grover's gate, Shor's gate and general-purpose tools; a G.A. controller with selection, crossover and mutation; a quantum gate controller with superposition, entanglement and interference operators; the Q.C. accelerator HW; and the user's control system)
The HW of the quantum computing accelerator is based on a standard silicon element background. The QA structure implementation for HW and MatLab is demonstrated in Fig. 8 (see Fig. 23).
Figure 8. QA Structure Presentation for HW (a) and MatLab (b) Implementations (input → intelligent computation operators: superposition, entanglement, pre-interference and interference → digital computation of Shannon entropy → stop criterion → output; the background of the HW implementation)
Different structures of QA can be realized as shown in Table 1 below.
Table 1. Quantum Gate Types for QA Structure Design
Symbolic form of the QAG (superposition, entanglement, interference): $\left[\left(Int \otimes I^{\otimes m}\right) \cdot U_F\right]^{h+1} \cdot \left(H^{\otimes n} \otimes S^{\otimes m}\right)$
Deutsch-Jozsa (D.-J.): m = 1, S = H (x = 1), Int = $H^{\otimes n}$, k = 1, h = 0; gate $\left(H^{\otimes n} \otimes I\right) \cdot U_F^{D.-J.} \cdot H^{\otimes(n+1)}$
Simon (Sim): m = n, S = I (x = 0), Int = $H^{\otimes n}$, k = O(n), h = 0; gate $\left(H^{\otimes n} \otimes I^{\otimes n}\right) \cdot U_F^{Sim} \cdot \left(H^{\otimes n} \otimes I^{\otimes n}\right)$
Shor (Shr): m = n, S = I (x = 0), Int = $QFT_n$, k = O(Poly(n)), h = 0; gate $\left(QFT_n \otimes I^{\otimes n}\right) \cdot U_F^{Shr} \cdot \left(H^{\otimes n} \otimes I^{\otimes n}\right)$
Grover (Gr): m = 1, S = H (x = 1), Int = $D_n$, k = 1, h = $O(2^{n/2})$; gate $\left[\left(D_n \otimes I\right) \cdot U_F^{Gr}\right]^h \cdot H^{\otimes(n+1)}$
1.3 Information Analysis of QA and a Criterion for Solution of the QSA-termination Problem
The communication capacity gives an index of the efficiency of a quantum computation [19]. The measure of Shannon information entropy is used for optimization of the termination problem of Grover's QSA. Information analysis of Grover's QSA gives a lower bound on the necessary amount of entanglement for the search of a successful result and on the computational time: any QSA that uses the quantum oracle calls $\{O_s\}$, with $O_s = I - 2|s\rangle\langle s|$, must call the oracle at least

$T \geq \frac{1 - P_e}{\pi}\sqrt{N} + \frac{1}{2\pi}\log N$ (5)

times to achieve a probability of error $P_e$ [20].
The information intelligent measure $\mathfrak{I}_T(\psi)$ of the QA state $\psi$ is [12, 21]:

$\mathfrak{I}_T(\psi) = 1 - \frac{S_T^{Sh}(\psi) - S_T^{VN}(\psi)}{T}$ (6)

with respect to the qubits in T and to the basis $B = \{|i_1\rangle \otimes \cdots \otimes |i_n\rangle\}$. The measure (6) is minimal (i.e., 0) when $S_T^{Sh}(\psi) = T$ and $S_T^{VN}(\psi) = 0$; it is maximal (i.e., 1) when $S_T^{Sh}(\psi) = S_T^{VN}(\psi)$. Thus the intelligence of the QA state is maximal if the gap between the Shannon and the von Neumann entropy for the chosen result qubit is minimal.
The information QA-intelligence measure (6) and the interrelation between the information measures in Table 1, $S_T^{Sh}(\psi) \geq S_T^{VN}(\psi)$, are used together with the entropic relations of the step-by-step natural majorization principle for the solution of the QA-termination problem [12]. From Eq. (6) we can see that (for pure states)

$\max_{\psi} \mathfrak{I}_T(\psi) \longleftrightarrow \min_{\psi}\left[S_T^{Sh}(\psi) - S_T^{VN}(\psi)\right] \Longrightarrow \min S_T^{Sh}(\psi), \quad S_T^{VN}(\psi) = 0,$ (7)

i.e., from Eq. (6) the principle of Shannon entropy minimum follows.
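A minimal MATLAB sketch (ours) of the two entropies entering Eq. (6), computed for a small pure state taken over the full register as the chosen T (the example state is an assumption; entropies in bits):

% Shannon vs. von Neumann entropy entering the measure of Eq. (6).
psi = [1; 0; 0; 1] / sqrt(2);          % example 2-qubit pure state
T   = 2;                               % number of qubits considered
p   = abs(psi).^2;                     % measurement distribution in basis B
Ssh = -sum(p(p > 0) .* log2(p(p > 0)));            % Shannon entropy
rho = psi * psi';                      % density matrix (pure state)
lam = eig(rho);  lam = lam(lam > 1e-12);
Svn = -sum(lam .* log2(lam));                      % von Neumann entropy (0 here)
Imeas = 1 - (Ssh - Svn) / T            % Eq. (6): 0.5 for this state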
Figure 9 shows the digital block of Shannon entropy minimum calculation and the main idea of the termination criterion based on this entropy minimum [13, 14].
Figure 9. Digital Block of Shannon Entropy Minimum Calculation (a) and MatLab (b) Implementations (scheme background for SW implementation: search space of solutions → intelligent computation operators → information stopping criteria → measurement of result, with SW additional functions)
The number of iterations of the QA is defined during the calculation process of the minimum-entropy search, as the following sketch illustrates.
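The sketch below (our MATLAB illustration; the qubit count n = 5 and the marked item are assumptions) simulates Grover iterations with the gate of Eq. (7) and terminates at the first minimum of the Shannon entropy of the input-register distribution:

% Grover search with minimum-Shannon-entropy termination.
n = 5;  N = 2^n;  marked = 11;                 % marked item: an assumption
H   = [1 1; 1 -1] / sqrt(2);
Hn1 = 1;  for k = 1:n+1, Hn1 = kron(Hn1, H); end      % H^{(x)(n+1)}
Dn  = ones(N) / 2^(n-1) - eye(N);                     % diffusion
UF  = eye(2*N);                                       % oracle: C-block on 'marked'
UF(2*marked+1:2*marked+2, 2*marked+1:2*marked+2) = [0 1; 1 0];
G   = kron(Dn, eye(2)) * UF;
e   = zeros(2*N, 1);  e(2) = 1;                       % input |0...0>|1>
psi = Hn1 * e;                                        % superposition step
Sprev = Inf;
for h = 1:ceil(pi/4 * sqrt(N)) + 2
    psi = G * psi;
    p = sum(reshape(abs(psi).^2, 2, N), 1)';          % input-register distribution
    S = -sum(p(p > 0) .* log2(p(p > 0)));             % Shannon entropy
    if S > Sprev, break, end                          % past the minimum: stop
    Sprev = S;
end
[~, imax] = max(p);  found = imax - 1                 % equals 'marked'

For n = 5 the entropy reaches its minimum near the theoretical optimum of about (π/4)√N ≈ 4 iterations, and the measurement returns the marked item.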
We now consider the structure of the HW implementation of the main quantum operators.
Figure 10 shows the structure of the superposition and interference operator simulation.
Figure 10. Computation of Superposition and Interference Operators (superposition: the common part $H^{\otimes n}$ is extended to $H^{\otimes(n+1)} = H \otimes H^{\otimes n}$ for Grover and Deutsch-Jozsa and to $H^{\otimes n} \otimes I^{\otimes n}$ for Shor; interference: $QFT_n$ at phase 0 reduces to $H^{\otimes n} \otimes I$ for Shor, while Grover uses $D_n \otimes I$ and Deutsch-Jozsa uses $H \otimes I$; the controller spans three levels, realized in software (levels 1-2) and hardware (level 3))
The superposition state is created by application of the Hadamard matrix to a column vector:

$H\begin{pmatrix}1\\0\end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix}1\\1\end{pmatrix} = \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right), \qquad H\begin{pmatrix}0\\1\end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix}1\\-1\end{pmatrix} = \frac{1}{\sqrt{2}}\left(|0\rangle - |1\rangle\right).$

According to this rule of quantum computing, the superposition modeling circuit is developed [16].
Figure 11 shows the superposition modeling circuit. The first operations needed are H|0⟩, H|0⟩ and H|1⟩; neglecting the factor $1/2^{0.5}$, the required products of elements taking values in {0, ±1} can be performed via AND gates: 1·1 = (1∧1) = 1, 1·(−1) = −(1∧1) = −1, 1·0 = (1∧0) = 0.
Figure 11. Superposition (Qubit) Modeling Circuit
Qubit simulation circuits with tensor products are shown in Fig. 12; note that no multipliers are introduced, since the 2-qubit and 3-qubit superposition vectors are obtained by replicating the 1-qubit pattern.
Figure 12. Qubits Simulation Circuits with Tensor Product
Figure 13 shows the computation of the entanglement operators.
Figure 13. The Computation of Entanglement Operators (PC → superposition → entanglement → interference within the quantum gate hardware accelerator; entanglement operators of quantum algorithms: a - Deutsch-Jozsa's; b - Grover's; c - Shor's; the blocks are built from I = [1 0; 0 1], C = [0 1; 1 0] and their tensor products I⊗I, C⊗I, I⊗C, C⊗C)
Figure 14 shows the entanglement creation circuit.
Figure 14. The Entanglement Creation Circuit (idea: to avoid encoding steps by acting directly on the entanglement output vector via the function f; the superposition outputs y1...y8 are mapped to the entanglement outputs g1...g8 by couples of XOR gates selected by f(x) on the inputs 00, 01, 10, 11)
Thus it is possible to obtain the entanglement output G = U_F × Y without calculating the matrix product, using only knowledge of the corresponding rows of the diagonal U_F matrix (see Fig. 13).
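A minimal MATLAB sketch (ours; the example function f is an assumption) of this product-free evaluation: because U_F is block diagonal with blocks I or C, applying it amounts to swapping the pair of amplitudes in every block i with f(i) = 1, exactly as the XOR-gate couples of Fig. 14 do:

% Product-free entanglement output G = U_F * Y: swap the amplitude pair
% of every block i with f(i) = 1 (block C), leave the rest (block I).
n = 2;  N = 2^n;
f = @(i) i == 2;                               % example function: an assumption
Y = kron(ones(N, 1) / sqrt(N), [1; -1] / sqrt(2));   % superposition output
G = Y;
for i = 0:N-1
    if f(i)
        G([2*i+1, 2*i+2]) = Y([2*i+2, 2*i+1]); % XOR-gate couple: swap the pair
    end
end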
Finally, the output vector G can be written as follows (Fig. 15):

$g_i = \begin{cases} \dfrac{1}{2^{n/2}}(-1)^{f(x_j)}, & \text{if } i = 2j + 1 \\ 0, & \text{elsewhere} \end{cases}$

Figure 15. Equivalent Form of the Output Vector G (example of U_F for n = 2: the blocks $M_i \in \{I \otimes I,\, I \otimes C,\, C \otimes I,\, C \otimes C\}$ are selected by f(x) on the inputs 00, 01, 10, 11)
Figure 16 shows the entanglement circuit realization.
Figure 16. Entanglement Circuit Realization (the binary function on inputs 000-111 is set through connectors and Max333 Maxim analogue switches between 5V and 0V levels, jumpers J5-J13)
Figure 17 shows the circuit realization of the interference operator according to the scheme in Fig. 10.
Figure 17. Interference Circuit Realization (I-b, pre-interference: given the output $V = [v_1\, v_2\, \ldots\, v_{2^{n+1}}]$ of the entanglement block, TL081/TL084 op-amps evaluate the weighted sums $\frac{1}{2^{n-1}}\sum_j v_{2j-1}$ for odd indices and $\frac{1}{2^{n-1}}\sum_j v_{2j}$ for even indices, the even branch not being implemented since it equals minus the odd one; I-c, interference: the i-th element processing unit $Int_i$ subtracts $v_i$ to produce the interference output $y_i$)
Let us briefly consider applications of the QAG design approach in highly structured QSAs, and in AI, informatics, computer science and intelligent control problems (see Part II).
SIMULATION OF QA-COMPUTING ON A CLASSICAL COMPUTER
We discuss the general outline of Grover's QA using the quantum algorithm gate (QAG)

$QAG^{Gr} = \left[\left(D_n \otimes I\right) \cdot U_F\right]^{h} \cdot H^{\otimes(n+1)}$ (7)

The general design method of QAGs is developed in [13, 14] and is briefly described here.
Figure 18a represents the QAG of Grover's algorithm (7) as a control system, and Fig. 18b describes the general structure scheme of Grover's QSA (see Fig. 1 and Table 1) [13].
Figure 18. General Structure Scheme of Grover QSA: (a) the QAG as a control system: superposition $H^{\otimes(n+1)}$ of the initial states (information source), entanglement $U_F$ (quantum oracle distinguishing marked from unmarked states), interference $D_n \otimes I$, and POV (positive operator-valued) measurement, with a local control feedback, a global information feedback minimizing $(S^{Sh} - S^{vN})$, an information comparator and a decision-making feed-forward producing the answer; (b) the circuit scheme: the output is $\Phi = \left[\left(D_n \otimes I\right) \cdot U_F\right]^h \cdot H^{\otimes(n+1)}$, with $H = \frac{1}{\sqrt{2}}[1\; 1;\; 1\; -1]$, basis qubits $|\varphi\rangle = c_0|0\rangle + c_1|1\rangle$, and $d_{ij} = 1/2^{n-1} - 1$ for $i = j$, $d_{ij} = 1/2^{n-1}$ for $i \neq j$
The Hadamard gates (Step 1) are the basic components for the superposition operation, the operator $U_F$ (Step 2) performs the entanglement operation, and $D_n$ (Step 3) is the diffusion matrix related to the interference operation. Our purpose is to realize classical circuits (i.e. circuits composed of classical gates AND, NAND, XOR, etc.) that simulate the quantum operations of Grover's QSA. To this aim, all quantum operators must be expressed in terms of functions easily and efficiently described by classical components. When we try to make the HW components that perform these basic operations according to the classical scheme, we encounter two main difficulties.
High-level gate design of Grover's QSA (model-based approach)
In this section we present a new model-based HW implementation of the functional steps of Grover's QSA from a high-level gate design point of view. According to the high-level scheme of Eq. (7) introduced in Fig. 4, the proposed circuit can be divided into two main parts.
Part I (analogue): step-by-step calculation of output values, divided into the following subparts: I-a, superposition; I-b, entanglement; I-c, pre-interference (for the vector approach); I-d, interference.
Part II (digital): entropy evaluation, vector storing for iterations and output visualization. This part also provides the initial superposition of the basis vectors |0⟩ and |1⟩.
Figure 19 shows a general structure scheme of the HW realization of the Grover QSA circuits, which itself can be considered a classical prototype of an intelligent quantum control system.
Figure 19. A General HW-scheme of the Grover’s QSA
Example. The most interesting novelty involves the structure of the interference: in fact, the generic element $v_i$ (interference output) can be written as a function of the $g_i$ (entanglement output) as follows:

$v_i = \begin{cases} \dfrac{1}{2^{n-1}}\displaystyle\sum_{j=1}^{2^n} g_{2j-1} - g_i, & \text{for } i \text{ odd} \\ \dfrac{1}{2^{n-1}}\displaystyle\sum_{j=1}^{2^n} g_{2j} - g_i, & \text{for } i \text{ even} \end{cases}$ (8)
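The following MATLAB sketch (ours) implements Eq. (8) with two running sums and one subtraction per element, and checks the result against the full matrix product $(D_n \otimes I) \cdot g$:

% Interference without products, Eq. (8): two running sums replace the
% matrix product; the assertion checks against (D_n (x) I) * g.
n = 3;  N = 2^n;
g = randn(2*N, 1);                       % stand-in for the entanglement output
sodd  = sum(g(1:2:end)) / 2^(n-1);       % (1/2^(n-1)) * sum_j g_{2j-1}
seven = sum(g(2:2:end)) / 2^(n-1);       % (1/2^(n-1)) * sum_j g_{2j}
v = zeros(2*N, 1);
v(1:2:end) = sodd  - g(1:2:end);         % i odd
v(2:2:end) = seven - g(2:2:end);         % i even
Dn = ones(N) / 2^(n-1) - eye(N);
assert(norm(v - kron(Dn, eye(2)) * g) < 1e-10)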
Figures 20a and 20b show the Simulink schematic design and the circuit realization of the superposition, entanglement and interference operator blocks of the Grover QAG.
Figure 20a. Simulink Scheme of the 3-qubit Grover Search System (analogue part I on the main board: I-a superposed input, I-b entanglement, I-c pre-interference; digital part II on the CPLD board: stop criterion, minimum of Shannon entropy)
Figure 20b. Prototype Scheme Circuit of the Grover QAG (superposition, entanglement, pre-interference and interference blocks)
Referring to Fig. 19, the pre-interference operation evaluates a weighted sum of the odd (even) output elements of the entanglement, while the interference itself uses this contribution in order to provide (by means of the difference with $g_i$) the respective $v_i$. This simple but powerful result in Eq. (8) has several consequences.
Figure 21 shows the experimental HW realization of Grover's quantum search algorithm for three qubits.
Figure 21. HW Realization of Grover QSA (main board, CPLD board, entire board)
 
300003-World Science Day For Peace And Development.pptx
300003-World Science Day For Peace And Development.pptx300003-World Science Day For Peace And Development.pptx
300003-World Science Day For Peace And Development.pptx
 
chemical bonding Essentials of Physical Chemistry2.pdf
chemical bonding Essentials of Physical Chemistry2.pdfchemical bonding Essentials of Physical Chemistry2.pdf
chemical bonding Essentials of Physical Chemistry2.pdf
 
Module for Grade 9 for Asynchronous/Distance learning
Module for Grade 9 for Asynchronous/Distance learningModule for Grade 9 for Asynchronous/Distance learning
Module for Grade 9 for Asynchronous/Distance learning
 
Conjugation, transduction and transformation
Conjugation, transduction and transformationConjugation, transduction and transformation
Conjugation, transduction and transformation
 
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdfPests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
 
Bacterial Identification and Classifications
Bacterial Identification and ClassificationsBacterial Identification and Classifications
Bacterial Identification and Classifications
 
module for grade 9 for distance learning
module for grade 9 for distance learningmodule for grade 9 for distance learning
module for grade 9 for distance learning
 

Artificial Intelligence Advances | Vol.1, Iss.1 April 2019

ARTICLE
To Perform Road Signs Recognition for Autonomous Vehicles Using Cascaded Deep Learning Pipeline
Riadh Ayachi 1, Yahia ElFahem Said 1,2,*, Mohamed Atri 1
1. Laboratory of Electronics and Microelectronics (EμE), Faculty of Sciences of Monastir, University of Monastir, Tunisia
2. Electrical Engineering Department, College of Engineering, Northern Border University, Arar, Saudi Arabia
*Corresponding Author: Yahia ElFahem Said, Electrical Engineering Department, College of Engineering, Northern Border University, Arar, Saudi Arabia; Email: said.yahia1@gmail.com
DOI: https://doi.org/10.30564/aia.v1i1.569

ARTICLE INFO
Article history: Received 26 February 2019; Accepted 6 April 2019; Published Online 30 April 2019
Keywords: Traffic signs classification; Autonomous vehicles; Artificial intelligence; Deep learning; Convolutional Neural Networks; CNN; Image understanding

ABSTRACT
An autonomous vehicle is a vehicle that can guide itself without human intervention. It is capable of sensing its environment and moving with little or no human input. This kind of vehicle has become a concrete reality and may pave the way for future systems in which computers take over the art of driving. Advanced artificial intelligence control systems interpret sensory information to identify appropriate navigation paths, as well as obstacles and relevant road signs. In this paper, we introduce an intelligent road sign classifier to help autonomous vehicles recognize and understand road signs. The classifier is based on an artificial intelligence technique: a deep learning model, the Convolutional Neural Network (CNN). The CNN is a widely used deep learning model for pattern recognition problems such as image classification and object detection, and it has been applied successfully to computer vision because its way of processing images resembles human visual decision making. The proposed pipeline was trained and tested using two different datasets. The proposed CNNs achieved high performance in road sign classification, with a validation accuracy of 99.8% and a testing accuracy of 99.6%. The proposed method can easily be implemented for real-time applications.

1. Introduction
In recent years, the number of road accidents has been increasing sharply. According to the U.S. National Safety Council [13], more than 40,000 people died in car accidents in 2017. The main causes of accidents are disregard of road rules and speed limits. Automated driving technologies have been developed and have reached significant results. Autonomous vehicles are proposed as a solution to make roads safer by taking over control: a vehicle guided by artificial intelligence will not make the errors of judgment that a human driver does. A traffic sign classifier is a key feature for developing autonomous vehicles. It provides a global overview of the road rules, which the vehicle uses to decide how to react in a given situation.

Generally, an autonomous vehicle is composed of a large number of sensors and cameras. The visual information provided by the cameras can be used to recognize road signs. To process this visual information, a well-known deep learning model, the Convolutional Neural Network (CNN) [1], is proposed.
CNNs are widely used in image processing tasks such as object recognition, image classification [2] and object localization [3]. CNNs have been used successfully to solve computer vision tasks [4] because of their power in processing visual context in a way that mimics the biological visual system, where every neuron in the network is applied to a restricted region of the receptive field [5]. All the neurons of the network then overlap to cover the entire receptive field, so features from the whole receptive field are shared everywhere in the network with little extra effort. The major advantage of Convolutional Neural Networks is their ability to learn directly from the image [6], unlike other classification algorithms, which need hand-crafted features to learn from.

For a human, recognizing and classifying a traffic sign is an easy task and the classification will be essentially correct; for an artificial system, it is a hard task that needs a lot of computational effort. In many countries, the shape and color of the same road sign differ. Figure 1 illustrates the stop sign in different countries. In addition, a road sign can look different because of environmental factors such as rain, sun and dust. These challenges must be handled successfully to build a robust road sign classifier with minimal error.

Figure 1. Stop Sign in Different Countries

In this paper, we propose a pipeline based on a data preprocessing algorithm and a deep learning model to recognize and classify traffic signs. The data preprocessing pipeline is composed of five stages. First, data loading and augmentation are performed. Then, all the images are resized and shuffled. All the images are then transformed to a single gray-scale channel. After that, we apply a local histogram equalization [8,9,10]. Finally, we normalize the images before feeding them to the proposed convolutional neural network.

As the CNN model, we propose two different networks. The first one is a 14-layer subset of the VGGNet model [12], developed by the Visual Geometry Group (VGG) at the University of Oxford; VGGNet was the first runner-up of the classification task in the ILSVRC2014 challenge [32] and the winner of the localization task. The second one is the Deep Residual Network, ResNet [11], arguably the most groundbreaking work in the computer vision and deep learning community of the last few years: it makes it possible to train networks of hundreds or even thousands of layers that still achieve compelling performance.

By testing the proposed networks, we achieve high performance in both validation and testing. The best performance was achieved using the 34-layer ResNet architecture, with a validation accuracy of 99.8% and a testing accuracy of 99.6%. With an inference speed of more than 40 frames per second, the pipeline can also be implemented for real-time applications.

The remainder of the paper is organized as follows. Related works on traffic sign classification are presented in Section 2. Section 3 describes the proposed pipeline to recognize and classify road signs. In Section 4, experiments and results are detailed. Finally, Section 5 concludes the paper.

2. Related Works
The need for a robust traffic sign classifier has become an important benchmark problem, and many research works have been presented in the literature [14,15,36].
Ohgushi et al. [16] introduced a traffic sign classifier based on color information, with Bags of Features (BoF) as a feature extractor and a support vector machine (SVM) as a classifier. The proposed method struggles to recognize traffic signs in real conditions, especially when the sign is intensely illuminated or partially occluded.

Some research investigated the detection of traffic signs without performing the classification step [17,18]. Wu et al. [17] proposed a method to detect only round traffic signs on Chinese roads. Other researchers focused on both detecting and recognizing the traffic sign [19], but the proposed method likewise detects only round signs and cannot handle other sign shapes.

A three-step method to detect and recognize traffic signs was proposed by Wali et al. [20]. The first step is data preprocessing, the second detects the existence of a sign, and the third classifies it. For the detection step they apply color segmentation with shape matching, and for the classification step they use an SVM classifier. The proposed method achieves 95.71% accuracy. Lai et al. [21] introduced a traffic sign recognition method for smart phones. They used color detection to perform color space segmentation, and a shape recognition method based on template matching that computes similarity.
An optical character recognition (OCR) module was also applied inside the shape border to decide on the sign class. The proposed method was limited to red traffic signs only. Gecer et al. [38] proposed color-blob-based COSFIRE filters to recognize traffic signs. The method is based on a Combination of Shifted Filter Responses, which computes the responses of different filters in different regions of each channel of the color space (i.e., RGB). It achieves 98.94% accuracy on the GTSRB dataset.

Virupakshappa et al. [22] used a machine learning method that combines the bag-of-visual-words technique with Speeded Up Robust Features (SURF) for feature extraction, then feeds the features to an SVM classifier to recognize the traffic signs. The proposed method achieves an accuracy of 95.2%. A system based on a BoW descriptor enhanced with a spatial histogram was used by Shams et al. [23] to improve the classification process, again with an SVM classifier.

Lin et al. [24] introduced a two-stage fuzzy inference model: they detect traffic signs in video frames, then apply the fuzzy inference model to classify the signs. The method provides high performance only on prohibitory and warning signs. In [25], Yin et al. presented a technique for real-time processing based on the Hough transform to localize the sign in the image, then use the rotation-invariant binary pattern (RIBP) descriptor to extract features; artificial neural networks serve as the classification method.

A cascaded Convolutional Neural Network model was introduced by Rachmadi et al. [26] to classify Japanese road signs. The proposed method achieves a performance of 97.94% and can be implemented for real-time processing with a speed of less than 20 ms per image. The method of Sermanet et al. [39] is based on a multi-scale convolutional neural network. It introduces a new connection scheme that skips layers, with different down-sampling ratios in the pooling layers of skip connections than in those of non-skip connections. The method reaches 99.1% accuracy. Cireşan et al. [37] used a committee of CNNs trained in parallel on differently preprocessed data. It uses an arbitrary number of CNNs, each composed of seven layers: an input layer, two convolution layers, two max-pooling layers and two fully connected layers. The prediction is obtained by averaging the outputs of all the CNNs. The technique further boosts the classification accuracy, to 99.4%. The use of convolutional neural networks has thus improved classification accuracy compared with classic machine learning techniques.

In recent years, several vehicle manufacturers have developed techniques to perform traffic sign classification. For example, BMW announced the integration of a traffic sign classifier in the BMW 5 series, and other manufacturers are working to deploy similar technologies [27]. Volkswagen implemented a traffic sign classifier in the Audi A8 [28]. All the existing research on traffic sign classification confirms the importance of this technology for autonomous cars.

3. Proposed Method
As mentioned above, many traffic sign classification techniques have been proposed.
Our method focuses on the data preprocessing technique, both to enhance image quality and to reduce the number of features the convolutional neural network must learn, so that real-time implementation remains feasible. As shown in Figure 2, the preprocessing technique contains five phases: data loading and augmentation, image resizing and shuffling [29], gray scaling, local histogram equalization [30] and data normalization.

In the first phase, we load the data and generate new examples using a data augmentation technique. Data augmentation is applied to maximize the amount of training data; it is also used at test time, generating additional points of view of the tested image to ensure a better prediction. In the second phase, we resize all the images to height*width*3, where 3 denotes the three channels of the color space. The images are then shuffled to avoid minibatches of highly correlated examples, so the training algorithm draws a different minibatch on each iteration. In the third phase, we perform gray scaling to reduce the number of channels, so the images are scaled to height*width*1. As a result of gray scaling, the number of filters learned in the convolutional neural network is reduced, and training and inference times shrink accordingly. In the fourth phase, we apply local histogram equalization [31] to enhance image contrast by spreading out the most frequent intensity values. This usually increases the global contrast of the images and allows areas of lower local contrast to gain a higher contrast. The fifth phase is data normalization, a simple process that puts all examples on the same data scale, ensuring an equal representation of all features. The preprocessing pipeline is an important stage for enhancing the data injected into the network during both training and testing; a minimal code sketch follows.
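The paper does not publish preprocessing code, so the following Python sketch is only an illustration of phases two to five, assuming OpenCV and NumPy; the function name and the CLAHE parameter values (clip limit, tile size) are our own choices, not the authors'.

```python
import cv2
import numpy as np

def preprocess(images, size=96):
    """Resize, gray-scale, locally equalize and normalize a list of
    HxWx3 uint8 images (phases 2-5 of the pipeline sketched above)."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # tile-based local equalization
    out = []
    for img in images:
        img = cv2.resize(img, (size, size))           # resize to size x size x 3
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # 3 channels -> 1 channel
        eq = clahe.apply(gray)                        # local histogram equalization
        out.append(eq.astype(np.float32) / 255.0)     # normalize pixels to [0, 1]
    batch = np.stack(out)[..., np.newaxis]            # shape (N, size, size, 1)
    order = np.random.permutation(len(batch))         # shuffle to decorrelate minibatches;
    return batch[order]                               # apply the same order to the labels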
Figure 2. Data Preprocessing

The second part of our method is the Convolutional Neural Network (CNN). Generally, a convolutional neural network is a feedforward neural network used to solve computer vision tasks. A CNN usually contains six types of layers: an input layer, convolution layers, nonlinear layers, pooling layers, fully connected layers and an output layer. Figure 3 illustrates a CNN architecture.

The complete proposed pipeline is composed of a data preprocessing stage and a convolutional neural network for traffic sign classification. It can be summarized by the pseudocode in Algorithm 1.

Algorithm 1: proposed pipeline for traffic signs classification
  Train input: images, labels
  Test input: images
  Output: image classes
  Mode: choose the mode (training or testing)
  Batch size: choose a batch size (number of images per batch)
  Image size: choose the image size
  Number of batches: choose a number of batches
  If mode = training:
    For batch in range(number of batches):
      Load the data (images and labels)
      Apply data augmentation
      Resize the images
      Shuffle the images
      Apply local histogram equalization
      Normalize the images
      Feed the images into the convolutional neural network
      Initialize the CNN parameters (load weights from a pretrained model)
      Compute the mapping function
      Generate the output
      Repeat
        Compute the loss function (difference between output class and input label)
        Optimize the CNN parameters (apply the backpropagation algorithm)
      Until output class = input label
      Choose the next batch
  Else (mode = testing):
    Load the data (images)
    Apply data augmentation
    Resize the images
    Apply local histogram equalization
    Normalize the images
    Feed the images into the convolutional neural network
    Load parameters from the trained model
    Compute the mapping function
    Generate the output

Figure 3. Convolutional Neural Network Architecture

The first CNN we use is VGGNet [12]. VGGNet has two main architectures: VGG16, a 16-layer CNN, and VGG19, a 19-layer CNN. The VGGNet architectures are presented in Figure 4. VGGNet achieved a top-5 error of 7.32% in the ILSVRC2014 classification challenge [32]. In our work we use only 14 layers of VGGNet, keeping the first 10 layers and the last 4 layers; in the third block we use just 2 convolutional layers and a pooling layer.

Figure 4. VGGNet Architecture

The second CNN we explore is ResNet [11], which introduced a revolutionary architecture that accelerates the convergence of very deep neural networks (more than 20 layers) by using residual blocks instead of the classic plain blocks of VGGNet. An illustration of the residual block is shown in Figure 5. ResNet won the ILSVRC2015 classification contest [32], achieving a top-5 validation error of 3.57% [11]. To perform traffic sign classification, we choose the ResNet 34 architecture. Figure 5 presents the structure of ResNet 34, a 34-layer CNN with residual blocks. A residual block is an accumulation of the input and the output of the block.

VGGNet and ResNet are trained to classify natural images into the 1,000 classes of ImageNet [32]. To adapt them to the traffic sign classifier, the transfer learning technique is applied: the output layers of these architectures are replaced by a new layer containing the traffic sign classes.
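As a rough illustration of the residual block just described (the block input accumulated with its output), here is a minimal TensorFlow/Keras sketch. The 3*3 kernels with zero padding and the 7*7 stride-2 first layer follow this paper's descriptions, but the filter count and overall layer configuration are assumptions, not the authors' exact network.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    """Plain residual block: two 3x3 convolutions whose output is
    accumulated with the block input (identity shortcut)."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)  # 3x3 kernels, zero padding
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.Add()([shortcut, y])          # accumulate input and output of the block
    return layers.Activation("relu")(y)

inputs = tf.keras.Input(shape=(96, 96, 1))   # gray-scaled 96x96 input (see Section 4)
x = layers.Conv2D(64, 7, strides=2, padding="same",
                  activation="relu")(inputs) # 7x7 kernels, stride 2, as described in Section 4
x = residual_block(x, 64)
model = tf.keras.Model(inputs, x)
```

The identity shortcut is what lets gradients flow through very deep stacks, which is the convergence benefit the paper attributes to ResNet.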
Transfer learning is a well-known technique in deep learning that reuses an existing architecture for a new task by freezing some layers and fine-tuning the others, or retraining them from scratch. It is used to speed up the training process and to improve the performance of the deep learning architecture; a sketch of this step appears below.
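The authors' code is not published, so the following is a hedged Keras sketch of the transfer-learning step: a stock VGG16 backbone stands in for their 14-layer VGG variant, the 43-way output layer matches the GTSRB classes, and the choice of which layers to freeze is illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Stand-in backbone pretrained on ImageNet. Note: ImageNet weights expect
# 3-channel input; the paper's gray-scale pipeline would require adapting
# the first layer, which is omitted here for simplicity.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(96, 96, 3))

for layer in base.layers[:10]:
    layer.trainable = False              # freeze early layers, fine-tune the rest

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dropout(0.5)(x)               # dropout probability reported in Section 4
outputs = layers.Dense(43, activation="softmax")(x)  # 43 GTSRB traffic sign classes
model = tf.keras.Model(base.input, outputs)
```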
Using the transfer learning technique allows the pre-trained weights to serve as a starting point for optimizing the existing architecture on the new task.

Figure 5. ResNet34 Structure

Another advantage of transfer learning is the possibility of training the deep learning model on a small amount of data while still achieving high performance.

4. Experiments and Results
In this work, two datasets were used to train and evaluate the networks. The first is the German traffic sign dataset GTSRB [34], a large multi-class benchmark for traffic sign classification. The dataset has a training directory and a testing directory, each containing 43 traffic sign classes, with more than 50,000 total images of traffic signs in real conditions. Figure 6 shows the classes of the German traffic sign dataset. The second is the Belgian traffic sign dataset BTSC [35], which provides separate training and testing data covering 62 traffic sign classes, with more than 4,000 images of real traffic signs on Belgian roads.

Figure 6. The German Traffic Signs Dataset Classes

In all our experiments, the networks are developed using the TensorFlow deep neural network framework. Training is performed on a desktop with an Intel i7 processor and an Nvidia GTX960 GPGPU. To achieve good performance, we vary the configuration, manipulating the image size, the batch size, the dropout probability and the learning algorithm (optimizer). We start by resizing the images to 32*32, using a large batch size (1024), a dropout probability of 0.25 and stochastic gradient descent as the learning algorithm, and we train the network. The final image size was determined after testing several values (32*32, 64*64, 96*96 and 128*128); after several tests, we settled on the best configuration, which resizes the images to 96*96 and uses a minibatch of 256, a dropout probability of 0.5 and the Adam optimizer. The Adam optimizer is an extension of stochastic gradient descent that converges better and faster; it adapts per-parameter learning rates during training, which greatly reduces the need for manual learning-rate tuning. A sketch of this training configuration follows.
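The best configuration reported above (96*96 inputs, minibatch of 256, dropout of 0.5, Adam) maps onto a Keras training call along these lines; `model`, `train_x` and `train_y` refer to the earlier sketches and are assumptions, as is the validation split.

```python
import tensorflow as tf

# `model` is the classifier from the previous sketches; `train_x`/`train_y`
# are the preprocessed images and their integer labels (assumptions).
model.compile(
    optimizer=tf.keras.optimizers.Adam(),    # adaptive per-parameter step sizes
    loss="sparse_categorical_crossentropy",  # labels encoded as integers 0..42
    metrics=["accuracy"],
)
history = model.fit(train_x, train_y,
                    batch_size=256,          # minibatch size from the best configuration
                    epochs=20,               # assumption: the paper does not state epochs
                    validation_split=0.1,    # assumption: held-out validation fraction
                    shuffle=True)
```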
Figure 7. The Belgium Dataset Classes

In the data preprocessing pipeline, the data is prepared for training and testing the model. First, the data is loaded and the data augmentation technique is applied; Figure 8 shows an example of the data generated by the proposed augmentation. Second, the data is resized and shuffled to produce mixed minibatches. The images are then transformed to the gray-scale color space; Figure 9 illustrates an example of the gray-scaled images.

Figure 8. Data Augmentation
Figure 9. Gray Scaling

Local histogram equalization is then applied to balance the image contrast; Figure 10 presents images after applying it. Finally, the data is normalized and fed to the convolutional neural network; an example of the normalized data is presented in Figure 11.

Figure 10. Local Histogram Equalization
Figure 11. Normalized Gray Images and the Original Color Images

In the training process, the data is injected into the CNN architectures and the parameters are optimized. In ResNet 34, the first convolution layer performs feature extraction and down-sampling at the same time, using 7*7 kernels with a stride of 2 to incorporate features from a larger receptive field. Figure 12 presents the output feature maps of the first ResNet 34 convolution layer. The residual blocks perform feature extraction using two convolutional layers with 3*3 kernels and zero padding. The input and the output of each residual block are accumulated, which keeps the parameter count under control as the network deepens. Figure 13 presents the output feature maps of the first ResNet 34 residual block.

Figure 12. Features Maps of the First ResNet34 Convolution Layer
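Feature maps like those in Figure 12 can be extracted by building a sub-model that stops at the layer of interest. This sketch assumes the Keras model from the earlier sketches; the layer index and `sample_batch` are illustrative, not the authors' code.

```python
import tensorflow as tf
import matplotlib.pyplot as plt

# Sub-model from the network input to its first convolution layer
# (layers[0] is the input layer in the functional model above).
first_conv = tf.keras.Model(model.input, model.layers[1].output)
fmaps = first_conv.predict(sample_batch)   # sample_batch: a few preprocessed images

# Plot the first 16 feature maps of the first image.
for i in range(16):
    plt.subplot(4, 4, i + 1)
    plt.imshow(fmaps[0, :, :, i], cmap="gray")
    plt.axis("off")
plt.show()
```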
Figure 13. Features Maps of the First ResNet34 Residual Block

One way to visualize CNN performance is the corresponding confusion matrix, which shows where the classification model gets confused when it makes predictions. Figure 14 shows the confusion matrix of the ResNet on the GTSRB dataset.

Figure 14. Confusion Matrix of ResNet34

Table 1. Accuracy (%) of the proposed architectures on both datasets

Architecture       GTSRB   BTSC
VGG (12 layers)    99.3    98.3
ResNet 34          99.6    98.8

Table 1 summarizes the accuracy obtained on the testing data by the models trained on the GTSRB and BTSC datasets. The best performance is obtained on the GTSRB dataset using the ResNet 34 architecture, which confirms the value of the residual block: it improves network performance without any explosion in complexity, even in a very deep convolutional neural network. The results on the BTSC dataset are lower because of the lack of data: that dataset contains only 4,965 images, divided into training and testing data. The results on the GTSRB dataset show that the proposed traffic sign classifier outperforms human accuracy, which is 98.32%. Most of the false negatives are caused by images that are totally or partially damaged after the data preprocessing; Figure 15 illustrates an example of such damaged images.

Figure 15. Damaged Images after Preprocessing

Table 2. Inference speed of each architecture

Architecture       Frames/second
VGG (12 layers)    57
ResNet 34          43

Table 2 summarizes the number of images processed per second by each architecture. Real-time implementation requires a balance between accuracy and speed. Our best proposed CNN achieves an accuracy of 99.621%, which compares favorably with human accuracy and outperforms state-of-the-art models in the traffic sign classification task.

Table 3 presents a comparison between our architectures and other architectures and methods tested on the GTSRB dataset.
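A confusion matrix like Figure 14, and top-5 softmax probabilities like those visualized later in Figure 16, can be computed along the following lines with scikit-learn and TensorFlow; `model`, `test_x` and `test_y` are assumptions carried over from the earlier sketches.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import confusion_matrix

probs = model.predict(test_x)            # softmax outputs, shape (N, 43)
preds = np.argmax(probs, axis=1)

cm = confusion_matrix(test_y, preds)     # rows: true class, columns: predicted class
print("per-class accuracy:", cm.diagonal() / cm.sum(axis=1))

# Top-5 probabilities for one test image (cf. Figure 16).
top5 = tf.math.top_k(probs[0], k=5)
for p, c in zip(top5.values.numpy(), top5.indices.numpy()):
    print(f"class {c}: {p:.3f}")
```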
Table 3. Accuracy comparison on the GTSRB dataset

Architecture                              Accuracy (%)
Committee of CNNs [37]                    99.4
Color-blob-based COSFIRE filters [38]     98.9
Sermanet [39]                             99.1
Proposed VGG (12 layers)                  99.3
Proposed ResNet 34                        99.6

As reported in Table 3, our proposed ResNet 34 architecture outperforms state-of-the-art methods in traffic sign classification. Our architectures can also be implemented easily for real-time applications: a real-time application needs at least 25 frames per second, and as reported in Table 2, the slowest architecture processes 43 frames per second. Moreover, all the proposed architectures outperform human accuracy on the traffic sign classification benchmark.

To make the system useful for real-world applications and interpretable by humans, we deployed the ResNet 34 architecture in a traffic sign classification application where images are labeled with human-understandable labels. In both training and testing, labels were encoded as integers; for example, labels were encoded in the range 0 to 42 for the GTSRB dataset. The testing images were collected from the web and do not belong to the datasets. The top-5 probabilities of the softmax layer were visualized: Figure 16 presents an example of the top-5 probabilities and the corresponding input images. The classifier performs well on these new images, which demonstrates its generalization power.

Figure 16. ResNet34 Softmax Probabilities

5. Conclusion
Traffic sign classification was and remains an important application for autonomous cars. Cars need real-time, embedded solutions, which is why we must balance speed and accuracy. In this paper, we proposed an artificial intelligence technique based on a deep learning model, the Convolutional Neural Network, for the traffic sign classification benchmark. The reported results show that the proposed solutions can be implemented effectively for real-time applications and provide good accuracy, outperforming human performance. The proposed architectures can be further optimized for embedded implementation.

Conflicts of Interest: The authors declare no conflict of interest.

References
[1] O'Shea, K., & Nash, R. An introduction to convolutional neural networks. arXiv preprint arXiv:1511.08458, 2015.
[2] Ciresan, D. C., Meier, U., Masci, J., Maria Gambardella, L., & Schmidhuber, J. Flexible, high performance convolutional neural networks for image classification. In IJCAI Proceedings-International Joint Conference on Artificial Intelligence, 2011, 22(1): 1237.
[3] Tompson, J., Goroshin, R., Jain, A., et al. Efficient object localization using convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015: 648-656.
[4] Simonyan, K., & Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[5] Simard, P. Y., Steinkraus, D., & Platt, J. C. Best practices for convolutional neural networks applied to visual document analysis. IEEE, 2003: 958.
[6] LeCun, Y., Jackel, L. D., Bottou, L., Cortes, C., Denker, J. S., Drucker, H., ... & Vapnik, V. Learning algorithms for classification: A comparison on handwritten digit recognition. Neural networks: the statistical mechanics perspective, 1995, 261: 276.
[7] Zhu, H., Chan, F. H., & Lam, F. K. Image contrast enhancement by constrained local histogram equalization. Computer Vision and Image Understanding, 1999, 73(2): 281-290.
[8] Kim, J. Y., Kim, L. S., & Hwang, S. H. An advanced contrast enhancement using partially overlapped sub-block histogram equalization. IEEE Transactions on Circuits and Systems for Video Technology, 2001, 11(4): 475-484.
[9] Stark, J. A. Adaptive image contrast enhancement using generalizations of histogram equalization. IEEE Transactions on Image Processing, 2000, 9(5): 889-896.
[10] He, K., Zhang, X., Ren, S., & Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 770-778.
[11] Sercu, T., Puhrsch, C., Kingsbury, B., & LeCun, Y. Very deep multilingual convolutional neural networks for LVCSR. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, IEEE, 2016: 4955-4959.
[12] U.S. vehicle deaths topped 40,000 in 2017, National Safety Council estimates. https://www.usatoday.com/story/money/cars/2018/02/15/national-safety-council-traffic-deaths/340012002
[13] Vitabile, S., Pollaccia, G., Pilato, G., & Sorbello, F. Road signs recognition using a dynamic pixel aggregation technique in the HSV color space. In Proceedings of the 11th International Conference on Image Analysis and Processing, Palermo, Italy, 2001: 572-577.
[14] Zeng, Y., Lan, J., Ran, B., Wang, Q., & Gao, J. Restoration of motion-blurred image based on border deformation detection: A traffic sign restoration model. PLoS ONE, 2015, 10: e0120885.
[15] Ohgushi, K., & Hamada, N. Traffic sign recognition by bags of features. In Proceedings of the TENCON 2009 IEEE Region 10 Conference, Singapore, 2009: 1-6.
[16] Wu, J., Si, M., Tan, F., & Gu, C. Real-time automatic road sign detection. In Proceedings of the Fifth International Conference on Image and Graphics (ICIG '09), Xi'an, China, 2009: 540-544.
[17] Belaroussi, R., Foucher, P., Tarel, J.P., Soheilian, B., Charbonnier, P., & Paparoditis, N. Road sign detection in images: A case study. In Proceedings of the 20th International Conference on Pattern Recognition (ICPR), Istanbul, Turkey, 2010: 484-488.
[18] Shoba, E., & Suruliandi, A. Performance analysis on road sign detection, extraction and recognition techniques. In Proceedings of the 2013 International Conference on Circuits, Power and Computing Technologies (ICCPCT), Nagercoil, India, 2013: 1167-1173.
[19] Wali, S.B., Hannan, M.A., Hussain, A., & Samad, S.A. An automatic traffic sign detection and recognition system based on colour segmentation, shape matching, and SVM. Mathematical Problems in Engineering, 2015.
[20] Lai, C.H., & Yu, C.C. An efficient real-time traffic sign recognition system for intelligent vehicles with smart phones. In Proceedings of the 2010 International Conference on Technologies and Applications of Artificial Intelligence, Hsinchu, Taiwan, 2010: 195-202.
[21] Virupakshappa, K., Han, Y., & Oruklu, E. Traffic sign recognition based on prevailing bag of visual words representation on feature descriptors. In Proceedings of the 2015 IEEE International Conference on Electro/Information Technology (EIT), Dekalb, IL, USA, 2015: 489-493.
[22] Shams, M.M., Kaveh, H., & Safabakhsh, R. Traffic sign recognition using an extended bag-of-features model with spatial histogram. In Proceedings of the 2015 Signal Processing and Intelligent Systems Conference (SPIS), Tehran, Iran, 2015: 189-193.
[23] Lin, C.-C., & Wang, M.-S. Road sign recognition with fuzzy adaptive pre-processing models. Sensors, 2012: 6415.
[24] Yin, S., Ouyang, P., Liu, L., Guo, Y., & Wei, S. Fast traffic sign recognition with a rotation invariant binary pattern-based feature. Sensors, 2015: 2161-2180.
[25] Rachmadi, R. F., Komokata, Y., Ichimura, K., & Koutaki, G. Road sign classification system using cascade convolutional neural network, 2017.
[26] Continental. Traffic Sign Recognition, 2017. http://www.contionline.com/generator/www/de/en/continental/automotive/general/chassis/safety/hidden/verkehrszeichenerkennung_en.html
[27] Choi, Y., Han, S.I., Kong, S.-H., & Ko, H. Driver status monitoring systems for smart vehicles using physiological sensors: A safety enhancement system from automobile manufacturers. IEEE Signal Processing Magazine, 2016: 22-34.
[28] Dean, J., & Ghemawat, S. MapReduce: simplified data processing on large clusters. Communications of the ACM, 2008, 51(1): 107-113.
[29] Kim, J. Y., Kim, L. S., & Hwang, S. H. An advanced contrast enhancement using partially overlapped sub-block histogram equalization. IEEE Transactions on Circuits and Systems for Video Technology, 2001, 11(4): 475-484.
[30] Abdullah-Al-Wadud, M., Kabir, M. H., Dewan, M. A. A., & Chae, O. A dynamic histogram equalization for image contrast enhancement. IEEE Transactions on Consumer Electronics, 2007, 53(2).
[31] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., & Fei-Fei, L. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
[32] Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[33] Stallkamp, J., Schlipsing, M., Salmen, J., & Igel, C. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural Networks, 2012, ISSN 0893-6080. http://www.sciencedirect.com/science/article/pii/S0893608012000457
[34] Timofte, R., Zimmermann, K., & van Gool, L. Multi-view traffic sign detection, recognition, and 3D localisation. IEEE Workshop on Applications of Computer Vision, WACV, 2009.
[35] Larsson, F., & Felsberg, M. Using Fourier descriptors and spatial models for traffic sign recognition. In Proceedings of the 17th Scandinavian Conference on Image Analysis, SCIA, LNCS 6688, 2011: 238-249.
[36] Cireşan, D., et al. Multi-column deep neural network for traffic sign classification. Neural Networks, 2012, 32: 333-338.
[37] Gecer, B., Azzopardi, G., & Petkov, N. Color-blob-based COSFIRE filters for object recognition. Image and Vision Computing, 2017, 57: 165-174.
[38] Sermanet, P., & LeCun, Y. Traffic sign recognition with multi-scale convolutional networks. In Neural Networks (IJCNN), the 2011 International Joint Conference on. IEEE, 2011: 2809-2813.
ARTICLE
GFLIB: an Open Source Library for Genetic Folding Solving Optimization Problems
Mohammad A. Mezher*
Dept. of Computer Science, Fahd Bin Sultan University, Tabuk, KSA
*Corresponding Author: Mohammad A. Mezher, Dept. of Computer Science, Fahd Bin Sultan University, Tabuk, KSA; Email: mmezher@fbsu.edu.sa
DOI: https://doi.org/10.30564/aia.v1i1.608

ARTICLE INFO
Article history: Received 8 March 2019; Accepted 16 April 2019; Published Online 30 April 2019
Keywords: GF toolbox; GF algorithm; Evolutionary algorithms; Classification; Regression; Optimization; LIBSVM

ABSTRACT
This paper presents GFLIB, a Genetic Folding MATLAB toolbox for supervised learning problems. In essence, the goal of GFLIB is to build a concise model of supervised learning, as a free open source MATLAB toolbox for performing classification and regression. GFLIB is specifically designed around the features traditionally used to evolve mathematical models in applications. The toolbox suits all kinds of users, from those who use GFLIB as a "black box" to advanced researchers who want to generate and test new functionalities and parameters of the GF algorithm. The toolbox and its documentation are freely available for download at: https://github.com/mohabedalgani/gflib.git

1. Introduction
All evolutionary algorithms (EAs) [1] are biologically inspired, building on the "survival of the fittest" concept of Darwinian evolution. The GF algorithm is one member of the EA family; it is used to solve complicated problems by randomly producing populations of computer programs. Every computer program (chromosome) undergoes a number of natural adjustments, called crossover and mutation, to create a brand-new population. These operators are iterated to generate the fittest chromosomes, which are evaluated with one of the performance measurements. Unlike other members of the EA family, GF uses a simple floating-point system for genes when formulating GF chromosomes.

Certainly, there are quite a number of open source evolutionary algorithm toolboxes for MATLAB [2,3], but none specific to the genetic folding algorithm. GFLIB aims to provide such a free open source toolbox that can be used and extended by others. Accordingly, the GFLIB toolbox was designed from scratch to ensure code reusability and clarity.
The end result is a toolbox that handles a wide variety of machine learning problems and offers a fast, easy way to try different datasets over a distinct range of parameters. GFLIB was examined on different MATLAB versions and computer systems, namely version R2017b for Windows and R2017a for Mac.

This standalone toolbox offers options that help users and researchers decide on the training and testing datasets, the number of folds for cross-validation, the mathematical operators, the crossover and mutation operators and rates, the kernel types, and various other GF parameters. Furthermore, the toolbox can present results in different formats and figures, such as the ROC curve, structural complexity, fitness values and mean square errors. In other words, the aim of building a standalone supervised learning toolbox is to spread the GF algorithm across datasets that fall within classification and regression problems.

2. GFLIB Structure
2.1 Previous Version of the GF Toolbox Structure
The first version of GFLIB relied completely [4] on the GP Toolbox [2]. That toolbox contained GF algorithms for supervised classification and regression problems, but it followed the structure designed for GP. At that time, the GF toolbox lacked its own fully functioning encoding and decoding mechanisms, since it was intended to be integrated with the GP Toolbox; its development was therefore oriented toward optimizing and integrating the existing GP toolbox. The implementation was done in MATLAB with the GP package, and the idea was to encode and decode using the GF tree, wherein GFLIB was built using the GF mechanism shown in [5].

2.2 Current Version of the GFLIB Toolbox Structure
Although the main GF structure is demonstrated in detail in [4,5], this paper focuses on the structure of the GFLIB toolbox itself. GFLIB is a research MATLAB project that is essentially intended to offer the user a complete toolbox without requiring knowledge of how the GF algorithm works on a specific dataset. The recently developed GFLIB toolbox additionally grants researchers full control in comparison with other well-known evolutionary algorithms. The variety of options that GFLIB presents makes it an important tool for researchers, students and experts who are interested in testing their own datasets.

2.2.1 Data Structures
GFLIB provides an easy way to add a dataset in text format. The text files may be found in both the UCI dataset repository [6] and the LIBSVM dataset collection [7]. GFLIB mainly supports the .txt data type, in the same style as the UCI datasets.

The main data structures in the GFLIB Toolbox are genotypes and phenotypes, which represent the GF chromosomes. The chromosomes are the central structure in the algorithm. A GF chromosome consists of three parts: the index number of the gene in the chromosome, which represents the father, and the two pointers inside the gene, which represent the children. The GF chromosome structure thus encodes an entire population in single floating-point numbers of the format lc.rc, where lc is the left-child number and rc is the right-child number.
Phenotypes are stored in a structure with a determined number of populations. The ith population pop(i) consists of chromstr and chromnum: chromstr holds the operator names and chromnum holds the GF encoding numbers, and together they represent the lc and rc values. The root operator and GF number must be scalar. In all of these GF structures, each GF number corresponds to a particular gene, for either the right-child chromosome or the left-child chromosome respectively.

In general, the purpose of the encoding and decoding process of a GF chromosome is to give it an arithmetic interpretation. GF encodes an arithmetic operation by dividing it into left and right sides; each side is divided further into other valid genes to formulate a GF chromosome. The encoding process depends on the number of operands each arithmetic operation uses. At first, a two-operand operator term (e.g., the minus operator) is placed at the first gene, referring to other operators repeatedly until terminals are reached. The operator types called by a father gene are: two children (two operands), one child (one operand) and no child (terminal).

To decode a chromosome, take the first gene, which has two divisions (children): the left child and the right child. Repeatedly, for each father, the children are called until a kernel function is represented. For example, the decoding/encoding process [4,5,8,9] executes the folding father operator (e.g., plus) over the left child (minus) and the right child (multiply). This folding mechanism is what defines the algorithm known as Genetic Folding; a small illustrative sketch follows.
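GFLIB itself is MATLAB; purely to make the father/children walk concrete, the following Python sketch decodes a toy chromosome matching the plus/minus/multiply example above. The gene table and helper name are our own illustration, not the toolbox's API.

```python
# Each gene is (operator, lc, rc); child index 0 means "no child" (a terminal).
# This mirrors the example in the text: plus folds over minus and multiply.
genes = {
    1: ("plus", 2, 3),     # father gene: left child = gene 2, right child = gene 3
    2: ("minus", 4, 5),
    3: ("multiply", 6, 7),
    4: ("x", 0, 0), 5: ("y", 0, 0),   # terminals
    6: ("x", 0, 0), 7: ("y", 0, 0),
}

def decode(i):
    """Recursively unfold a gene into an arithmetic expression string."""
    op, lc, rc = genes[i]
    if lc == 0 and rc == 0:           # terminal: no children to call
        return op
    return f"({decode(lc)} {op} {decode(rc)})"

print(decode(1))   # ((x minus y) plus (x multiply y))
```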
The three datasets used here for comparative analysis are the Iris dataset (a multi-class classification problem), the Heart dataset (a binary classification problem) and the Housing dataset (a regression problem). The Iris dataset was made by the biologist Ronald Fisher, who used it in 1936 as an example of linear discriminant analysis. There are 50 samples from each of 3 species of Iris (Iris setosa, Iris virginica and Iris versicolor), and four features were measured for every sample: the length and the width of the sepals and petals, in centimetres [10]. The second dataset is the Heart dataset (the part obtained from the Cleveland Clinic Foundation), using a subset of 14 attributes. The purpose is to detect heart disease in a patient, with an integer label from 0 (no presence) to 4 [6]. The last dataset, for testing regression problems, is the Housing dataset, which records the median value of house prices along with 13 other parameters that could potentially be related to housing prices. The aim is to fit a linear regression model estimating the median price of owner-occupied homes in Boston.

2.2.2 GFLIB Toolbox Structure
The toolbox provides algorithms such as SVC, SVR and the Genetic Folding algorithm, through easy-to-use MATLAB files that take as input the basic parameters of each algorithm. For example, for regression problems there is a file (regress.m) in which to enter the kernel type, the number of folds, the population size and the maximum number of generations. The parameters users can set, uniform across classification and regression problems, are listed in Table 1.

Table 1. List of parameters in GFLIB

Name        Definition               Values
mutprob     mutation probability     a float value
crossprob   crossover probability    a float value
maxgen      max generations          an integer value
popsize     population size          an integer value
type        problem type             multi, binary, regress
data        dataset                  *.txt
kernel      kernel type              rbf, linear, polynomial, gf
crossval    cross-validation         an integer value
oplist      operators and operands   'Plus_s', 'Minus_s', 'Plus_v', 'Minus_v', 'Sine', 'Cosine', 'Tanh', 'Log', 'x', 'y'
oplimit     length of chromosome     an integer value

The main directory of GFLIB contains a set of .m files serving the main purposes of GFLIB, described in detail in this section, and the following subdirectories:
• @data, which contains the @binary folder for binary classification datasets, @multi for multi-class classification datasets and @regress for regression datasets. Use this folder to manipulate or add datasets.
• @libsvm, whose functions are discussed in [7]; it is integrated into the toolbox to play the SVM role.
• The binary, multi and regress files, which form the basic entry point for each problem type respectively.

The figures listed in Table 2 are designed and integrated into the GFLIB toolbox for the sake of comparison with other algorithms and toolboxes.

Table 2. List of GFLIB figures shown in the toolbox

Name                   Type
Population Diversity   Fitness distribution vs. generation
Accuracy               Accuracy value vs. generation
Structure Complexity   Tree depth/size vs. generation
Tree Structure         GP tree structure
GF Chromosome          GF chromosome structure

In the developed GFLIB toolbox, the focus was on applying supervised learning to the real-world problems shown in Table 3, using LIBSVM as described in Figure 1. The particular dataset to be used is determined by the user through the path, and the GF algorithm runs accordingly.
Once the user decides on the GF parameters, the right GF algorithm (classification or regression) runs accordingly. The toolbox was built on GF structs (chromosomes) that implement the core of the GF encoding and decoding mechanisms. Here, the major functions of the GFLIB Toolbox are outlined:

(1) Population representation and initialisation: genpop, initpop. The GFLIB Toolbox supports a floating-point chromosome representation. A floating-point GF chromosome is created by the Toolbox function initpop, and genpop builds a vector describing the populations and figure statistics.

(2) Fitness assignment: calcfitnes, kernel, kernelvalue. The fitness function transforms the raw objective function values of the equations found by the GF algorithm into non-negative values; kernelvalue is used repeatedly for all individuals in the population, kernel. The Toolbox supports both the libsvm package [7] and the fitrsvm function [11] in MATLAB. Using both, GFLIB could successfully generate models capable of fitting the abalone dataset: the result of libsvm (via the svmtrain function) was used along with svmpredict to predict under the different input parameters. The GF algorithm includes eight arithmetic operators in the toolbox.
The arithmetic operators shown in Table 1 take either one operand (sine, cosine, tanh and log) or two operands (plus, minus).

(3) Genetic Folding operators: crossover, mutation. GFLIB supports two types of operators by dividing the population into two equal halves, each of which undergoes one type of operator. The GFLIB operators are one-point crossover, two-point crossover and swap mutation.

(4) Selection operators: selection. This function selects a given number of individuals from the current population according to their fitness and returns a row of structs with their indices. Currently, the roulette wheel selection method is used in GFLIB. Selection methods in particular must balance solution quality against genetic diversity.

(5) Performance figures: genpop. The figures included to demonstrate the performance of the GF algorithm are the ROC curve (binary problems only), the expression tree, fitness values, population diversity, accuracy versus complexity, and structure complexity. GFLIB also includes well-known kernel functions so that comparisons can be drawn easily. The file also prints the best GF chromosome in two formats: gene numbers and operator strings.

Figure 1. GFLIB Life Cycle

3. GF Algorithm Using Generative Models
Genetic Folding (GF) [4,5,8,9] is a novel algorithm based on a folding mechanism inspired by the RNA sequence. GF can represent an NP problem by a simple array of floating-point numbers instead of a complex tree structure. First, GF randomly generates an initial population composed of basic mathematical operations. Then, valid chromosomes (expressions) are evaluated: GF assigns a fitness value to every chromosome according to the fitness function being developed. Chromosomes are then selected by roulette wheel, after which the fittest chromosomes are subjected to the genetic operators to generate a new population independently. In every population, the chromosomes also pass through a filter that tests their validity. The genetic operators generate a new population for the next generation, and the entire procedure is repeated until the optimum chromosome (kernel) is achieved; the sketch below condenses this loop.
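The loop just described (random initialization, fitness evaluation, roulette-wheel selection, crossover and mutation, repeated until the fittest kernel emerges) can be condensed into a generic Python sketch. GFLIB is MATLAB, so every name below is illustrative, and the population initializer, fitness function and operators are passed in as assumptions.

```python
import random

def evolve(init_population, fitness, crossover, mutate,
           max_gen=20, cross_rate=0.5, mut_rate=0.1):
    """Generic sketch of the GF evolutionary loop described in Section 3."""
    pop = init_population()
    for _ in range(max_gen):
        scores = [fitness(c) for c in pop]           # evaluate valid chromosomes
        total = sum(scores)
        def roulette():                              # fitness-proportional selection
            r, acc = random.uniform(0, total), 0.0
            for c, s in zip(pop, scores):
                acc += s
                if acc >= r:
                    return c
            return pop[-1]
        nxt = []
        while len(nxt) < len(pop):
            if random.random() < cross_rate:
                child = crossover(roulette(), roulette())
            else:
                child = roulette()
            if random.random() < mut_rate:
                child = mutate(child)                # swap mutation in GFLIB
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)                     # fittest chromosome (kernel)
```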
Table 3. Classification Datasets Used in the GFLIB

Name                    Type    Size
Credit Approval         Binary  690×15
Statlog German Credit   Binary  1000×20
Heart Scale             Binary  270×13
Ionosphere              Binary  351×34
Sonar Scale             Binary  208×60
Spam                    Binary  4601×57
Iris Scale              Multi   150×4
Zoo                     Multi   101×18

The list of parameter values used in the experiments for both binary and multi-classification problems is shown in Table 4.
Table 4. List of Classification Parameter Values

Name       Definition
mutprob    0.1
crossprob  0.5
maxgen     20
popsize    50
type       Binary, multi
data       In Table 3
kernel     GF, rbf, linear, polynomial
crossval   10-fold
oplist     'Plus_s', 'Minus_s', 'Multi_s', 'Plus_v', 'Minus_v', 'x', 'y'
oplimit    20 (length of chromosome)

Figure 2. Classification Fitness Values (fitness/accuracy in % per generation; maximum 97.30, average 82.38, median 97.30, best-so-far 100.00)

The best chromosome string found using GFLIB for the iris dataset is:

Plus_s Sine Plus_v X Y Sine Sine Y Y X X

and the best GF chromosome number formed for the above-mentioned string was:

2.3 4.5 6.7 0.4 0.5 8.9 10.11 0.8 0.9 0.10 0.11

The maximum fitness (accuracy) found using GFLIB over all generations for the iris dataset was 100.00%.

4.2 GFLIB for Regression Problems

For all figure types except the ROC curve, the experiment ran the algorithm for 20 generations with 10-fold cross validation. The best performance is the smallest mean square error of the objective function obtained over all function evaluations. A population of 50 was used at each combination of a half-size for mutation, a half-size for crossover, a mutation rate of 0.1, and a crossover rate of 0.5. Thus, for each generation, 20 combinations of operators are tried to form a valid GF chromosome. The GF operator rates are shown in Table 6.

Table 5 lists the regression datasets included in GFLIB, together with a brief description of their dimensionality.

Table 5. Regression Datasets Used in the GFLIB

Name     Type        Size
Abalone  Regression  4177×8
Housing  Regression  506×13
MPG      Regression  392×6

Table 6 shows the list of parameters and values used to run a regression test on the Housing dataset:

Table 6. List of Regression Parameter Values

Name       Definition
mutprob    0.1
crossprob  0.5
maxgen     20
popsize    50
type       regression
data       In Table 5
kernel     GF, rbf, linear, polynomial
crossval   10-fold
oplist     'Plus_s', 'Minus_s', 'Multi_s', 'Plus_v', 'Minus_v', 'x', 'y'
oplimit    20 (length of chromosome)

The best performance found had an MSE value of 0.000121 over all generations, as shown in Figure 3. Figures 4 and 5 demonstrate the variety of results produced by the GFLIB toolbox. The population diversity figure plots, as dots, the highest and lowest fitness values found in each population. The structure complexity figure plots the folding depth of the best GF chromosome found in each generation. The size of each folding is counted from the number of calls made by the first number formulated in a GF chromosome.

A GF chromosome structure has been well defined to represent the structural folding of a GF chromosome. The GF chromosome is then extracted and arranged as a tree structure of real numbers. The GF encoding part of the toolbox is used to evolve the tree structure of a program, whereas the GF decoding part of the toolbox is applied to determine the string of the structural chromosome. Experimental results have shown the promise of the developed approach.
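To make the encoding concrete, the sketch below (ours, assuming one plausible reading of the gene format, not GFLIB's actual decoder) turns the best iris chromosome above into a nested expression: gene i stores a pair "l.r" of child positions, a leading 0 marks a terminal, and one-operand operators use only the left index.

```matlab
% Decode the best iris chromosome into an expression string.
ops   = {'Plus_s','Sine','Plus_v','x','y','Sine','Sine','y','y','x','x'};
genes = {'2.3','4.5','6.7','0.4','0.5','8.9','10.11','0.8','0.9','0.10','0.11'};
disp(decode(1, ops, genes))   % -> Plus_s(Sine(x), Plus_v(Sine(y), Sine(x)))

function s = decode(i, ops, genes)
    lr = sscanf(genes{i}, '%d.%d');                 % child indices l and r
    if lr(1) == 0                                   % terminal gene: x or y
        s = ops{i};
    elseif any(strcmpi(ops{i}, {'Sine','Cosine','Tanh','Log'}))
        s = sprintf('%s(%s)', ops{i}, decode(lr(1), ops, genes));
    else                                            % two-operand operator
        s = sprintf('%s(%s, %s)', ops{i}, ...
                    decode(lr(1), ops, genes), decode(lr(2), ops, genes));
    end
end
```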
Figure 3. Regression Fitness Value (MSE per generation; maximum 0.000086, average 0.000082, median 0.000086, best-so-far 0.000121)

Figure 4. GFLIB Toolbox Run on the Iris Multiclassification Dataset: (a) population diversity; (b) GF tree structure (Plus_s Sine x y Plus_v Sine y y Sine x x); (c) GF structure complexity (maximum depth 4, best-so-far depth 4, best-so-far size 11)
Figure 5. GFLIB Toolbox Run on the Housing Regression Dataset: (a) population diversity; (b) GF tree structure (Minus_s Minus_s y Minus_v x x Log Minus_s x Cosine Sine); (c) structure complexity (maximum depth 4, best-so-far depth 4, best-so-far size 11)

The best GF string found using GFLIB for the Housing dataset is:

Minus_s Minus_s Log Y Minus_v Minus_s Sine X X X Cosine

and the best GF number formed for the above-mentioned string was:

2.3 4.5 6.7 0.4 8.9 10.11 0.7 0.8 0.9 0.10 0.11

5. Conclusion

The GFLIB toolbox, built in MATLAB, is presented for users and researchers interested in solving real NP problems. The key feature of this toolbox is the structure of the GF chromosome and the encoding and decoding processes included in the toolbox. In the GFLIB toolbox, eleven well-known UCI datasets are studied and implemented with their relative performance analysis: ROC curve, fitness values, structural analysis, tree structure, and population diversity. These datasets can be categorised into two categories: classification and regression. All figures are comparable with another set of three well-known kernel functions. For either category, GFLIB allows users to select their own parameters. Balanced parameters of the GF chromosome must be considered in order to maintain genetic diversity within the population of candidate solutions throughout the generations. On the other hand, the MATLAB GFLIB files tend to shorten the development time of the toolbox.

In this paper, GFLIB is compared with three well-known kernels. In future research, I intend to compare GFLIB with a GA and GP alone as well. I also intend to compare the toolbox with other kinds of hybrid methods, such as hybrid decision tree/instance methods.

References

[1] Seyedali Mirjalili. Evolutionary Algorithms and Neural Networks: Theory and Applications. Springer International Publishing, June 2018.
[2] Sara Silva and Jonas Almeida. GPLAB - a genetic programming toolbox for MATLAB. In Proc. of the Nordic MATLAB Conference, pp. 273-278, 2005.
[3] A. J. Chipperfield and P. J. Fleming. The MATLAB genetic algorithm toolbox. IEE Colloquium on Applied Control Techniques Using MATLAB, UK, 1995.
[4] Mohammad Mezher and Maysam Abbod. Genetic Folding: A New Class of Evolutionary Algorithms. pp. 279-284, 2010.
[5] Mohammad Mezher and Maysam Abbod. Genetic Folding: An Algorithm for Solving Multiclass SVM Problems. Applied Soft Computing, Elsevier, 41(2): 464-472, 2014.
[6] C. L. Blake and C. J. Merz. UCI Repository of Machine Learning Databases. University of California, Irvine, Department of Information and Computer Sciences, 1998.
[7] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(3): 1-27, 2011.
[8] Mohammad Mezher and Maysam Abbod. Genetic Folding: A New Class of Evolutionary Algorithms. October 2010.
[9] Mohammad Mezher and Maysam Abbod. A New Genetic Folding Algorithm for Regression Problems. Proceedings of the 2012 14th International Conference on Modelling and Simulation (UKSim), pp. 46-51, 2012.
[10] R. A. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7(2): 179-188, 1936.
[11] Statistics and Machine Learning Toolbox User's Guide, R2018b. The MathWorks, Inc., Natick, Massachusetts, United States.
Artificial Intelligence Advances
https://ojs.bilpublishing.com/index.php/aia

ARTICLE

Quantum Fast Algorithm Computational Intelligence PT I: SW / HW Smart Toolkit

Ulyanov S.V.*
State University "Dubna", Universitetskaya Str. 19, Dubna, Moscow Region, 141980, Russia

DOI: https://doi.org/10.30564/aia.v1i1.619

ARTICLE INFO

Article history
Received: 12 March 2019
Accepted: 18 April 2019
Published Online: 30 April 2019

ABSTRACT

A new approach to the circuit implementation design of quantum algorithm gates for massively parallel fast quantum computing is presented. The main attention is focused on the development of a design method for fast quantum algorithm operators, such as superposition, entanglement and interference, which are in general time-consuming operations due to the number of products that have to be performed. A SW & HW supported sophisticated smart toolkit for a supercomputing accelerator of quantum algorithm simulation is described. A method for performing Grover's interference without product operations is introduced as a Benchmark. The background of the developed information technology is the "Quantum / Soft Computing Optimizer" (QSCOptKB™) software based on the soft and quantum computational intelligence toolkit. Quantum genetic and quantum fuzzy inference algorithm gate design is considered. The quantum information technology of imperfect knowledge base self-organization design of fuzzy robust controllers for the guaranteed achievement of an intelligent autonomous robot's control goal in unpredicted control situations is described.

Keywords: Quantum algorithm gate; Superposition; Entanglement; Interference; Quantum simulator

*Corresponding Author: Ulyanov S.V., State University "Dubna", Universitetskaya Str. 19, Dubna, Moscow Region, 141980, Russia; Email: ulyanovsv@mail.ru

1. Introduction: Role of Quantum Synergetic Effects in AI and Intelligent Control Models

R. Feynman and Yu. Manin independently suggested, and correctly showed, that quantum computing can be effectively applied to the simulation and search of solutions of classically intractable quantum-system problems using a programmable quantum computer (as a physical device). Recent research shows successful engineering applications of end-to-end quantum computing information technologies (such as sophisticated quantum algorithms and quantum programming) in the search for solutions of algorithmically unsolved problems in classical dynamic intelligent control systems, artificial intelligence, intelligent cognitive robotics, etc. Concrete developments are the cognitive "man-robot" interactions in collective multi-agent systems, the "brain-computer-device" interface supporting autistic children with robots for service use, and so on. These applications are examples of the successful application of efficient classical simulation of quantum control algorithms to the algorithmically unsolved problems of classical control system robustness in unpredicted control situations.

Related works. Many interesting results have been published on the fundamentals and applications of the quantum / classical hybrid approach to the design of different smart classical or quantum dynamic systems.
For example, an error mitigation technique and classical post-processing can be conveniently applied, thus offering a hybrid quantum-classical algorithm for currently available noisy quantum processors [1]; the Quantum Triple Annealing Minimization (QTAM) algorithm utilizes the framework of simulated annealing, a stochastic point-to-point search method, in which the quantum gates acting on the quantum states formulate a quantum circuit with a given circuit height and depth [2]. A new local fixed-point iteration plus global sequence acceleration optimization algorithm for general variational quantum circuit algorithms is described in [3]. The basic requirements for universal quantum computing have all been demonstrated with ions, and quantum algorithms using few-ion-qubit systems have been implemented [4].

Quantum computing is finding a vital application in providing speed-ups for machine learning problems, critical in the "big data" world. Machine learning already permeates many cutting-edge technologies and may become instrumental in advanced quantum technologies. Aside from quantum speed-up in data analysis, or classical machine learning optimization used in quantum experiments, quantum enhancements have also been (theoretically) demonstrated for interactive learning tasks, highlighting the potential of quantum-enhanced learning agents [5]. In [6], PennyLane is introduced: a Python 3 software framework for the optimization and machine learning of quantum and hybrid quantum / classical computations. A plugin system makes the framework compatible with any gate-based quantum simulator or hardware, and plugins are provided for Strawberry Fields, Rigetti Forest, Qiskit, and ProjectQ, allowing PennyLane optimizations to be run on publicly accessible quantum devices provided by Rigetti and IBM Q. On the classical front, PennyLane interfaces with accelerated machine learning libraries such as TensorFlow, PyTorch, and autograd. PennyLane can be used for the optimization of variational quantum eigensolvers, quantum approximate optimization, quantum machine learning models, and many other applications.

The first industry-based and societally relevant applications will come in the form of a quantum accelerator. This is based on the idea that any end application contains multiple parts, and the properties of these parts are better executed by a particular accelerator, which can be an FPGA, a GPU or a TPU. The quantum accelerator is added as an additional coprocessor. The formal definition of an accelerator is indeed a co-processor linked to the central processor that executes certain parts of the overall application much faster [7].

Limited quantum memory is one of the most important constraints for near-term quantum devices. Understanding whether a small quantum computer can simulate a larger quantum system, or execute an algorithm requiring more qubits than available, is of both theoretical and practical importance, and is discussed in [8]. One prominent platform for constructing a multi-qubit quantum processor involves superconducting qubits, in which information is stored in the quantum degrees of freedom of nanofabricated, anharmonic oscillators constructed from superconducting circuit elements.
The requirements imposed by larger quantum processors have shifted the mindset within the community, from solely scientific discovery to the development of new, foundational engineering abstractions associated with the design, control, and readout of multi-qubit quantum systems. The result is the emergence of a new discipline termed quantum engineering, which serves to bridge the basic sciences, mathematics, and computer science with fields generally associated with traditional engineering [9, 10].

Moreover, new synergetic effects, defined and extracted from the measurement of quantum information (hidden in the classical control states of traditional controllers with time-dependent coefficient gain schedules), are the information resource for increasing control system robustness and guaranteeing the achievement of the control goal in hazard situations. The background of this synergetic effect is the creation of new knowledge from the experimental response signals of imperfect knowledge bases in unpredicted situations, using a quantum algorithm of knowledge self-organization in the form of quantum fuzzy inference. The background of the developed information technology is the "Quantum / Soft Computing Optimizer" (QSCOptKB™) software based on the soft and quantum computational intelligence toolkit.

Algorithmic constraints on mathematical models of data processing in the classical form of computing (based on the Church-Turing thesis and on the background of classical physics laws) differ dramatically from the physical constraints on resource limitation in data processing models based on quantum mechanical models, such as information transmission, information bounds on the extraction of knowledge, the amount of quantum accessible experimental information, quantum Kolmogorov complexity, the speed-up quantum limit of data processing, quantum channel capacity, etc. Exploring the meaning of Landauer's thesis that "information is physical" has prepared the background for changing, clarifying and expanding the Church-Turing thesis, and introduced the R&D idea of exploring quantum computing and developing quantum computers for successfully solving many classically unsolved (intractable in the classical sense) algorithmic problems.

The classification of quantum algorithms is demonstrated in Fig. 1.
Figure 1. Classification of Quantum Algorithms and Interrelations with Quantum Fuzzy Control (algorithm library: decision making (Deutsch's, Deutsch-Jozsa's) and searching (Grover's, Shor's); quantum genetic search algorithm; design system: robust knowledge base design for fuzzy controllers on QFI, quantum fuzzy control, quantum fuzzy modelling system)

Quantum algorithms are in general random: the decision-making quantum algorithms of Deutsch-Jozsa and the quantum search algorithms (QSA) of Shor and Grover are examples of the successful application of quantum effects and constraints arising from the introduction of new classes of computational-basis quantum operators (superposition, entanglement and interference) that are absent in classical computational models. These effects make it possible to introduce new types of computation: massively parallel quantum computing using the superposition operator; the operator of entanglement (super-correlation, or the quantum oracle), which creates the possibility of searching for a "good" (in general unknown) solution; and the operator of quantum interference, which helps extract the searched "good" solutions with maximal probability amplitude. All of these operators are reversible; the classical irreversible measurement operator extracts the result of the quantum algorithm computation. Note that the quantum effects described above are absent in classical models of computation, and they demonstrate the effectiveness of quantum constraints in classical models of computation.

Figure 2 demonstrates the computing analogy between soft and quantum algorithms and the operators used in quantum soft computing information technology.

Figure 2. Interrelations between Soft and Quantum Operators in Genetic and Quantum Algorithms (GA operations: selection, crossover, mutation over binary-coded classical states with search space $2^n$; quantum operations: generation of superposition states such as $\frac{1}{\sqrt{2}}(|0\rangle \pm |1\rangle)$, generation of entanglement states, superposition of solutions with oracle measurements and disentanglement, built from the quantum Fourier transform, one-qubit rotation gates and the controlled-NOT two-qubit gate; analogies between the fitness function and minimum entropy production, and between GA global optimization and quantum search)

From the point of view of programming a quantum computer, there currently exists no general methodology for quantum computing and the simulation of dynamic systems, but many proposals for quantum simulators have been developed (see, for example, the large list of quantum simulators available at [https://quantiki.org/wiki/list-qc-simulators]).

Remark. The purpose of this article is concerned with the problem of discovering new QAs. As with the D-Wave processor, supercomputing processes in a quantum computer can be described as a synergetic union of hybrid quantum / classical HW and quantum SW with quantum soft support of quantum programming.
Remark. To understand more clearly the fundamental capabilities and limitations of quantum computation, we are to discover efficient QAs for interesting engineering problems such as intelligent cognitive control systems.

One of the most important open problems in computer science is to estimate the possibility of quantum speed-up in the search for solutions of computational problems.

Oracular, or black-box, problems are the first examples of problems that can be solved faster with a quantum computer than with a classical computer. In the black-box model, the computer is given access to an oracle (or black box) that can be queried to acquire information about the problem. The computational goal is to find the solution to the problem using as few queries to the oracle as possible [11-13].

1.1 Goal and Problem Solving

This article considers the possibility of designing a family of quantum decision-making and search algorithms (QAs)
(see Fig. 1) that form the background of quantum computational intelligence for solving the problems of Big & Mining Data, deep quantum machine learning (based on quantum neural networks), global optimization in intelligent quantum control (using quantum genetic algorithms), etc. (see details in Pt II).

1.2 Method of Solution and Smart Toolkit

The presented method and the related hardware implement matrix and algorithmic forms of the quantum operators used in a QA (the entanglement, or oracle, operator and the interference operator, as in the second and third steps of QA implementation), increasing the computational speed-up with respect to the corresponding SW realization of traditional and new QSAs. A high-level structure of a generic entanglement block that uses logic gates as analogue elements is described. A method for performing Grover interference without products is introduced [14, 15].

QUANTUM ALGORITHM ACCELERATOR COMPUTING: SW / HW SUPPORT

A. General Structure of Quantum Algorithm

The problem solved by a QA can be stated in symbolic form:

Input: A function $f: \{0,1\}^n \to \{0,1\}^m$
Problem: Find a certain property of the function f

A given function f is a map of one logical state into another, and the QA estimates qualitative properties of the function f. A general description of a QA is demonstrated in Fig. 3 (physically, the type of the operator $U_F$ describes the qualitative properties of the function f). Figure 4 shows the steps of a QA, including most of the described qualitative peculiarities of the function f and the physical interpretation of the applied quantum operators. The scheme diagram in Fig. 5 outlines the structure of a QA.

Figure 3. General Description of QAG (inputs |x> and |0>; superposition via Hadamard gates; entanglement via U_F; interference via INT; the superposition-entanglement-interference block is repeated k times before measurement)

Figure 4. General Structure of QA (the quantum oracle as a black box coding the qualitative properties of the function; QAG design from the Hadamard transformation, the problem-oriented operator and the quantum Fourier transformation; $|\psi_{fin}\rangle = [\text{Interference}] [\text{Quantum oracle}] |\psi_{initial}\rangle$; quantum massively parallel computing)

Figure 5. Scheme Diagram of QA Structure (encoder f → F, F → U_F at the binary-string level; quantum block acting on basis vectors in complex Hilbert space; decoder with map table and interpretation spaces; answer output)

As mentioned above, a QA estimates (without numerical computing) the qualitative properties of the function f. Thus, with QAs we can study qualitative properties of the function f without quantitative estimation of the function values. For example, Fig. 6 represents the general approach to Grover's QAG design.

Figure 6. Circuit and Quantum Gate Representation of Grover's QSA
As a termination condition criterion, a minimum-entropy-based method is adopted [13].

The structure of a QAG in Fig. 3 is defined in general form as follows:

$QAG = \left[ \left( Int \otimes {}^{m}I \right) \cdot U_F \right]^{h+1} \cdot \left( {}^{n}H \otimes {}^{m}S \right)$ (1)

where $I$ is the identity operator and $S$ is equal to $I$ or $H$, depending on the problem description.

The fast algorithm designs for simulating most of the known QAs on classical computers [15-17] and the computational intelligence toolkit are the following: 1) the matrix-based approach; 2) model representations of quantum operators in fast QAs; 3) the algorithmic approach, where matrix elements are calculated on demand; 4) the problem-oriented approach, with which we succeeded in running Grover's algorithm with 64 and more qubits with Shannon entropy calculation (up to 1024 qubits without the termination condition); 5) quantum algorithms with a reduced number of operators (entanglement-free QAs, and so on).

Remark. In this article we briefly describe the main blocks [13-17] of Fig. 6: i) unified operators; ii) problem-oriented operators; iii) Benchmarks of QA simulation on classical computers; and iv) quantum control algorithms based on quantum fuzzy inference (QFI) and the quantum genetic algorithm (QGA) as new types of QSA (see more details in Part II of this article).

Let us consider the matrix-based and problem-oriented approaches to simulating most of the known QAs on classical computers and on a small quantum computer.

I. Quantum operator description: SW&HW smart toolkit support

From the simulation viewpoint, we consider the structure of the quantum operators superposition, entanglement and interference [14, 16, 18, 19, 23-26] in the matrix-based approach.

Superposition operators of QAs. The superposition operator consists in general form of a combination of tensor products of the Hadamard operator $H$ with the identity operator $I$:

$H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$

The superposition operator of most QAs can be expressed (see Fig. 3 and Eq. (1)) as

$Sp = \left( \bigotimes_{i=1}^{n} H \right) \otimes \left( \bigotimes_{i=1}^{m} S \right)$

where n and m are the numbers of inputs and outputs respectively. The numbers of outputs m, as well as the structures of the corresponding superposition and interference operators for different QAs, are presented in [12, 13].

The elements of the Walsh-Hadamard operator can be obtained as

$\left[ {}^{n}H \right]_{ij} = \frac{(-1)^{i*j}}{2^{n/2}} = \frac{1}{2^{n/2}} \begin{cases} 1, & \text{if } i*j \text{ is even} \\ -1, & \text{if } i*j \text{ is odd} \end{cases}$ (2)

where $i, j = 0, 1, \ldots, 2^{n}-1$. Its elements can be obtained by simple replication according to the rule presented in Eq. (2).

Interference operators of the main QAs. The interference operator for Grover's algorithm [18, 19] is written as a block matrix:

$Int^{Grover} = D_n \otimes I, \quad \left[ D_n \otimes I \right]_{i,j} = \begin{cases} \left( \frac{1}{2^{n-1}} - 1 \right) I, & i = j \\ \frac{1}{2^{n-1}} I, & i \neq j \end{cases}$ (3)

where $i, j = 0, \ldots, 2^{n}-1$ and $D_n$ refers to the diffusion operator with elements $\left[ D_n \right]_{ij} = 2^{1-n} - \delta_{ij}$ [4, 8]. Note that with a bigger number of qubits, the gain coefficient becomes smaller.

Entanglement operators of the main QAs. Operators of entanglement are in general form part of the QA, and the information about the function being analyzed is coded as an "input-output" relation.
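The following sketch (ours, assumed from Eqs. (2)-(3) rather than taken from the authors' toolkit) builds the n-qubit Walsh-Hadamard operator by replication and the Grover interference operator $D_n \otimes I$ in MATLAB:

```matlab
% Building nH by repeated Kronecker products and the Grover
% interference operator Dn (x) I from the diffusion matrix Dn.
n  = 3;
H1 = [1 1; 1 -1] / sqrt(2);            % elementary Hadamard gate
Hn = 1;
for k = 1:n                            % nH = H (x) H (x) ... (x) H
    Hn = kron(Hn, H1);
end

Dn  = 2^(1-n) * ones(2^n) - eye(2^n);  % diffusion: [Dn]_ij = 2^(1-n) - delta_ij
Int = kron(Dn, eye(2));                % interference operator Dn (x) I, Eq. (3)
```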
In the general approach for coding binary functions into the corresponding entanglement gates, an arbitrary binary function is considered as

$f: \{0,1\}^n \to \{0,1\}^m$, such that $f(x_0, \ldots, x_{n-1}) = (y_0, \ldots, y_{m-1})$.

First, the irreversible function f is transformed into a reversible function F as follows:

$F: \{0,1\}^{n+m} \to \{0,1\}^{n+m}$,
$F(x_0, \ldots, x_{n-1}, y_0, \ldots, y_{m-1}) = \left( x_0, \ldots, x_{n-1},\; f(x_0, \ldots, x_{n-1}) \oplus (y_0, \ldots, y_{m-1}) \right)$,

where $\oplus$ denotes addition modulo 2. This transformation creates a unitary quantum operator that performs the corresponding transformation. With the reversible function F it is possible to design the entanglement operator matrix according to the following rule:

$\left[ U_F \right]_{ij} = 1 \iff F(B_j) = B_i$,

where $B_i, B_j$ run over the binary strings of length $n+m$ ($B$ denotes binary coding). The resulting entanglement operator is a diagonal block matrix of the form

$U_F = \mathrm{diag}\left( M_0, M_1, \ldots, M_{2^n - 1} \right)$.

Each block $M_i$, $i = 0, \ldots, 2^n - 1$, can be obtained as

$M_i = \bigotimes_{k=0}^{m-1} \begin{cases} I, & \text{iff } F(i,k) = 0 \\ C, & \text{iff } F(i,k) = 1 \end{cases}$ (4)

and consists of m tensor products of the $I$ or $C$ operators, where $C$ stands for the NOT operator.
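A minimal sketch of this construction (assumed from Eq. (4); the oracle below is a hypothetical example, not from the paper) for the case $m = 1$:

```matlab
% Assembling the entanglement operator U_F as a diagonal block
% matrix: one 2x2 block (I or C) per input value of the function f.
n = 3;  m = 1;
f = @(x) double(x == 5);           % example oracle: marks the input x = 5
I = eye(2);  C = [0 1; 1 0];       % identity and NOT gates

UF = zeros(2^(n+m));
for i = 0:2^n - 1
    if f(i) == 0, Mi = I; else, Mi = C; end
    r = i*2^m + 1;                 % top-left corner of block M_i
    UF(r:r+2^m-1, r:r+2^m-1) = Mi;
end
```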
Note that the entanglement operator (4) is a sparse matrix, and thanks to this property the simulation of the entanglement operation is accelerated.

II. QA computing accelerator: SW&HW support

Figure 7 shows the structure of the intelligent quantum computing accelerator.

Figure 7. Intelligent Quantum Soft Computing Accelerator Structure (PC software package: S.C. Optimizer, Q.S.C. Optimizer, Q.G.S. Algorithm, Q.G. Design, Grover's Gate, Shor's Gate, general purpose; G.A. accelerator HW with a selection, crossover and mutation controller; Q.C. accelerator HW with superposition, entanglement and interference operators under a quantum gate controller; user's control system)

The HW of the quantum computing accelerator is based on standard silicon elements. The QA structure implementation for HW and MATLAB is demonstrated in Fig. 8 (see Fig. 23).

Figure 8. QA Structure Presentation for HW (a) and MATLAB (b) Implementations (input, superposition, entanglement, pre-interference, interference and output stages, with digital computation of Shannon entropy as the stop criterion)

Different structures of QA can be realized as shown in Table 1 below.

Table 1. Quantum Gate Types for QA Structure Design (general symbolic form of the QAG: $\left[ (Int \otimes {}^{m}I) \cdot U_F \right]^{h+1} \cdot ({}^{n}H \otimes {}^{m}S)$, i.e. interference, entanglement, superposition):

- Deutsch-Jozsa (D.-J.): m = 1, S = H (x = 1), Int = ${}^{n}H$, k = 1, h = 0; gate $({}^{n}H \otimes I) \cdot U_F \cdot {}^{n+1}H$
- Simon (Sim): m = n, S = I (x = 0), Int = ${}^{n}H$, k = O(n), h = 0; gate $({}^{n}H \otimes {}^{n}I) \cdot U_F \cdot ({}^{n}H \otimes {}^{n}I)$
- Shor (Shr): m = n, S = I (x = 0), Int = $QFT_n$, k = O(Poly(n)), h = 0; gate $(QFT_n \otimes {}^{n}I) \cdot U_F \cdot ({}^{n}H \otimes {}^{n}I)$
- Grover (Gr): m = 1, S = H (x = 1), Int = $D_n$, k = 1, h = O($2^{n/2}$); gate $[(D_n \otimes I) \cdot U_F]^{h+1} \cdot {}^{n+1}H$

1.3 Information Analysis of QA and Criterion for Solution of the QSA-termination Problem

The communication capacity gives an index of the efficiency of a quantum computation [19]. The measure of Shannon information entropy is used for the optimization of the termination problem of Grover's QSA. The information analysis of Grover's QSA, based on Eq. (5), gives a lower bound on the necessary amount of entanglement for the search of a successful result and on the computational time: any QSA that uses quantum oracle calls $\{O_s\}$ of the form $O_s = I - 2|s\rangle\langle s|$ must call the oracle at least

$T \geq \frac{1 - 2P_e}{\pi}\sqrt{N} + \frac{1}{\pi}\log N$ (5)

times to achieve a probability of error $P_e$ [20].

The information intelligence measure $\mathfrak{I}_T(\psi)$ of the QA state $\psi$ is [12, 21]:

$\mathfrak{I}_T(\psi) = 1 - \frac{S_T^{Sh}(\psi) - S_T^{VN}(\psi)}{T}$ (6)

with respect to the qubits in T and to the basis $B = \{ |i_1\rangle \otimes \cdots \otimes |i_n\rangle \}$. The measure (6) is minimal (i.e., 0) when $S_T^{Sh}(\psi) = T$ and $S_T^{VN}(\psi) = 0$; it is maximal (i.e., 1) when $S_T^{Sh}(\psi) = S_T^{VN}(\psi)$. Thus the intelligence of the QA state is maximal if the gap between the Shannon and the von Neumann entropy for the chosen result qubit is minimal.
The information QA-intelligence measure (6), the interrelations between the information measures in Table 1, and the entropic relation $S_T^{Sh}(\psi) \geq S_T^{VN}(\psi)$ are used together with the step-by-step natural majorization principle for the solution of the QA-termination problem [12].
From Eq. (6) we can see that (for pure states)

$\max_{\psi} \mathfrak{I}_T(\psi) \Rightarrow \min_{\psi} \left[ S_T^{Sh}(\psi) - S_T^{VN}(\psi) \right] \Rightarrow \min_{\psi} S_T^{Sh}(\psi), \quad S_T^{VN}(\psi) = 0$ (7)

i.e., from Eq. (6) the principle of Shannon entropy minimum follows.

Figure 9 shows the digital block of the Shannon entropy minimum calculation and the main idea of the termination criterion based on this entropy minimum [13, 14].

Figure 9. Digital Block of Shannon Entropy Minimum Calculation (a) and MATLAB (b) Implementations (scheme background for the SW implementation: search space of solutions, intelligent computation operators, information stopping criteria, measurement of the result, additional SW functions)

The number of QA iterations is defined during the calculation process of the minimum-entropy search.

The structure of the HW implementation of the main quantum operators. Figure 10 shows the structure of the superposition and interference operator simulation.

Figure 10. Computation of Superposition and Interference Operators (superposition: ${}^{n+1}H$ for Grover and Deutsch-Jozsa, ${}^{n}H \otimes {}^{n}I$ for Shor, with ${}^{n}H$ as the common part; interference: $QFT_n \otimes {}^{n}I$ for Shor, $D_n \otimes I$ for Grover, $H \otimes I$ for Deutsch-Jozsa; three levels shared between software and hardware under a controller)

The superposition state is created by applying the Hadamard matrix to a basis column vector, e.g.

$H|1\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix} = \frac{1}{\sqrt{2}}\left( |0\rangle - |1\rangle \right)$.

According to this rule of quantum computing, the superposition modeling circuit was developed [16]. Figure 11 shows the superposition modeling circuit.

Figure 11. Superposition (Qubit) Modeling Circuit (the first operations needed are H|0>, H|0> and H|1>; neglecting the factor $1/\sqrt{2}$, the direct products can be performed via AND gates, since $1 \cdot 1 = 1 \wedge 1 = 1$, $1 \cdot (-1) = -(1 \wedge 1) = -1$ and $1 \cdot 0 = 1 \wedge 0 = 0$; the final superposition state is built from the blocks $[1\; -1]^T$)

Qubit simulation circuits with tensor products are shown in Fig. 12. Note that no multipliers are introduced.

Figure 12. Qubit Simulation Circuits with Tensor Products (2-qubit and 3-qubit superposition vectors obtained by tensor products $A \otimes [1\; 1]^T$ or $A \otimes [0\; 1]^T$)
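A sketch of the same preparation on a classical computer (ours, following Figs. 11-12) builds the Grover input superposition ${}^{n+1}H|0\ldots01\rangle$ with Kronecker products:

```matlab
% Preparing the (n+1)-qubit Grover input superposition with kron.
H    = [1 1; 1 -1] / sqrt(2);
ket0 = [1; 0];  ket1 = [0; 1];

psi = H * ket0;                % first input qubit in state H|0>
for k = 2:3                    % remaining input qubits (n = 3)
    psi = kron(psi, H * ket0);
end
psi = kron(psi, H * ket1);     % ancilla qubit H|1> = (|0> - |1>)/sqrt(2)
```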
Figure 13 shows the computation of the entanglement operators.

Figure 13. The Computation of Entanglement Operators (quantum gate hardware accelerator with superposition, entanglement and interference blocks; the entanglement operators of (a) Deutsch-Jozsa's, (b) Grover's and (c) Shor's algorithms are assembled from the blocks $I$, $C$ and their tensor products $I \otimes I$, $C \otimes I$, $I \otimes C$, $C \otimes C$)

The idea is to avoid the encoding steps by acting directly on the entanglement output vector via the function f. The output of the entanglement can then be realized by using couples of XOR gates. Figure 14 shows the entanglement creation circuit.

Figure 14. The Entanglement Creation Circuit (superposition output $y_1, \ldots, y_8$ mapped by the values f(x) on the inputs 00, 01, 10, 11 to the entanglement output $g_1, \ldots, g_8$)

Thus it is possible to obtain the output of the entanglement $G = U_F \times Y$ without calculating the matrix product, using only the knowledge of the corresponding row of the diagonal $U_F$ matrix (see Fig. 13). Finally, the output vector G can be written as follows (Fig. 15):

$g_{2i+j} = \begin{cases} \dfrac{(-1)^{f(x_i)+j}}{2^{(n+1)/2}}, & j \in \{0, 1\} \\ 0, & \text{elsewhere} \end{cases}$

Figure 15. Equivalent Form of the Output Vector G (example of $U_F$ for n = 2: the blocks $M_i$ equal $I \otimes I$, $I \otimes C$, $C \otimes I$ or $C \otimes C$ according to the values f(x) on the inputs 00, 01, 10, 11)
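A sketch of this shortcut (ours, assumed from the relation above; the oracle is again a hypothetical example) fills the entanglement output directly from the values $f(x_i)$, with no matrix product:

```matlab
% Computing G = UF * Y for the Grover input state without the
% matrix product, using only the marked/unmarked values f(x_i).
n = 3;
f = @(i) double(i == 5);                 % hypothetical oracle marking x = 5
g = zeros(2^(n+1), 1);
for i = 0:2^n - 1
    for j = 0:1
        g(2*i + j + 1) = (-1)^(f(i) + j) / 2^((n+1)/2);
    end
end
```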
Figure 16 shows the entanglement circuit realization.

Figure 16. Entanglement Circuit Realization (the binary function on the inputs 000-111 is set through the connectors J5-J13 at 5 V / 0 V; Max333 Maxim analogue switches)

Figure 17 shows the circuit realization of the interference operator according to the scheme in Fig. 10. Consider the output $V = [v_1\; v_2\; \ldots\; v_i\; \ldots\; v_{2^{n+1}}]$ of the entanglement block. If Y is the interference output vector, its elements $y_i$ are

$y_i = \begin{cases} \dfrac{1}{2^{n-1}} \sum_{j\ \mathrm{odd}} v_j - v_i, & \text{for } i \text{ odd} \\ \dfrac{1}{2^{n-1}} \sum_{j\ \mathrm{even}} v_j - v_i, & \text{for } i \text{ even} \end{cases}$

The pre-interference stage evaluates the weighted sums with TL081 op-amps and the interference stage uses TL084 op-amps; the even branch is not implemented separately, since the even weighted sum equals the negative of the odd one.

Figure 17. Interference Circuit Realization (pre-interference and interference stages; per-element processing unit $Int_i: v_i \to y_i$)

Let us briefly consider applications of the QAG design approach in highly structured QSAs and in AI, informatics, computer science and intelligent control problems (see Part II).

SIMULATION OF QA-COMPUTING ON CLASSICAL COMPUTER

We discuss the general outline of Grover's QA using the quantum gate (QAG)

$QAG^{Gr} = \left[ (D_n \otimes I) \cdot U_F \right]^{h} \cdot \left( {}^{n+1}H \right)$ (7)

The general method of QAG design was developed in [13, 14] and is briefly described here. Figure 18a represents the QAG of Grover's algorithm (7) as a control system, and Fig. 18b describes the general structure scheme of Grover's QSA (see Fig. 1 and Table 1) [13].

Figure 18. General Structure Scheme of Grover QSA: (a) the QAG (7) as an intelligent control system (superposition ${}^{n+1}H$ of the initial unmarked states as information source, the entanglement $U_F$ as quantum oracle marking the states, the interference $D_n \otimes I$ as wise controller, POV (positive operator-valued) measurement, and $\min(S^{Sh} - S^{VN})$ termination in the information feedback loops); (b) the circuit: Step 1 superposition, Step 2 entanglement, Step 3 interference, repeated h times, with output $\Phi = [(D_n \otimes I) \cdot U_F]^h \cdot ({}^{n+1}H)$ and diffusion elements $d_{ij} = 2^{1-n} - \delta_{ij}$

The Hadamard gates (Step 1) are the basic components of the superposition operation, the operator $U_F$ (Step 2) performs the entanglement operation, and $D_n$ (Step 3) is the diffusion matrix related to the interference operation. Our purpose is to realize classical circuits (i.e., circuits composed of classical gates AND, NAND, XOR, etc.) that simulate the quantum operations of Grover's QSA.
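Before turning to the circuits, a matrix-based simulation of this gate on a classical computer can serve as a reference. The sketch below (ours, assembled from Eqs. (3), (4) and (7); the oracle and the 10-iteration cap are illustrative assumptions) iterates the Grover gate until the Shannon entropy of the measurement distribution stops decreasing, i.e., the termination criterion of Fig. 9:

```matlab
% Matrix-based Grover QSA simulation with minimum-entropy stop.
n   = 3;                                  % input qubits
H1  = [1 1; 1 -1]/sqrt(2);
Hn1 = 1; for k = 1:n+1, Hn1 = kron(Hn1, H1); end

f  = @(i) double(i == 5);                 % hypothetical oracle marking x = 5
I2 = eye(2); C = [0 1; 1 0];
UF = zeros(2^(n+1));                      % entanglement operator, Eq. (4)
for i = 0:2^n - 1
    blk = I2; if f(i), blk = C; end
    r = 2*i + 1; UF(r:r+1, r:r+1) = blk;
end
Int = kron(2^(1-n)*ones(2^n) - eye(2^n), I2);   % (Dn (x) I), Eq. (3)

psi = zeros(2^(n+1), 1); psi(2) = 1;      % initial state |0...0>|1>
psi = Hn1 * psi;                          % Step 1: superposition
Smin = inf;
for h = 1:10
    psi = Int * (UF * psi);               % Steps 2-3: oracle + interference
    p   = psi.^2;                         % measurement probabilities
    S   = -sum(p(p > 0) .* log2(p(p > 0)));
    if S > Smin, break; end               % stop one step past the entropy minimum
    Smin = S;
end
```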
To this aim, all quantum operators must be expressed in terms of functions easily and efficiently described by classical components. When we try to make the HW components that perform these basic operations according to the classical scheme, we encounter two main difficulties.

High-level gate design of Grover's QSA (model-based approach)

In this section we present a new model-based HW implementation of the functional steps of Grover's QSA from a high-level gate design point of view. According to the high-level scheme in Eq. (7) introduced in Fig. 4, the proposed circuit can be divided into two main parts.

Part I (Analogue): step-by-step calculation of the output values. This part is divided into the following subparts: I-a: Superposition; I-b: Entanglement; I-c: Pre-Interference (for the vector approach); I-d: Interference.

Part II (Digital): entropy evaluation, vector storing for the iterations, and output visualization. This part also provides the initial superposition of the basis vectors $|0\rangle$ and $|1\rangle$.

Figure 19 shows a general structure scheme of the HW realization of the Grover QSA circuits, which itself can be considered a classical prototype of an intelligent quantum control system.

Figure 19. A General HW Scheme of the Grover QSA

Example. The most interesting novelty involves the structure of the interference: in fact, the generic element $v_i$ (interference output) can be written as a function of $g_i$ (entanglement output) as follows:

$v_i = \begin{cases} \dfrac{1}{2^{n-1}} \sum_{j\ \mathrm{odd}}^{2^{n+1}-1} g_j - g_i, & \text{for } i \text{ odd} \\[2ex] \dfrac{1}{2^{n-1}} \sum_{j\ \mathrm{even}}^{2^{n+1}} g_j - g_i, & \text{for } i \text{ even} \end{cases}$ (8)

Figures 20a and 20b show the Simulink schematic design and the circuit realization of the superposition, entanglement and interference operator blocks of Grover's QAG.

Figure 20a. Simulink Scheme of the 3-qubit Grover Search System (analogue part: superposed input, entanglement, pre-interference and interference on the main board; digital part: stop criterion based on the minimum of Shannon entropy on the CPLD board)

Figure 20b. Prototype Scheme Circuit of Grover's QAG (superposition, entanglement, pre-interference and interference blocks)

Referring to Fig. 19, the pre-interference operation evaluates a weighted sum of the odd (even) output elements of the entanglement, while the interference itself uses this contribution to provide (by means of the difference with $g_i$) the respective $v_i$. This simple (but powerful) result in Eq. (8) has several consequences.

Figure 21 shows the experimental HW evolution of Grover's quantum search algorithm for three qubits.

Figure 21. HW Realization of Grover QSA (main board, CPLD board, and entire board)
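Eq. (8) is also what makes a product-free SW implementation possible. A minimal sketch (ours, assumed from Eq. (8), not the authors' HW design):

```matlab
% Interference output computed without any matrix product, as in the
% pre-interference/interference stages: g is the entanglement output
% vector of length 2^(n+1).
function v = interferenceNoProduct(g, n)
    odd  = g(1:2:end);                         % elements g1, g3, ... (odd i)
    even = g(2:2:end);                         % elements g2, g4, ... (even i)
    v          = zeros(size(g));
    v(1:2:end) = sum(odd)  / 2^(n-1) - odd;    % Eq. (8), i odd
    v(2:2:end) = sum(even) / 2^(n-1) - even;   % Eq. (8), i even
end
```

For the 3-qubit example above, interferenceNoProduct(g, 3) reproduces the product Int*g of the matrix-based sketch while needing only two running sums instead of a $2^{n+1} \times 2^{n+1}$ matrix.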