The fingerprint is one kind of biometric. Such unique biometric data must be processed efficiently and securely, and the problem becomes more complicated as the data grow. This work processes fingerprint image data with a memetic algorithm, a simple and reliable algorithm. To obtain the best result, we run the algorithm in a parallel environment by exploiting the multi-threading features of the processor. We propose a high-performance computing memetic algorithm (HPCMA) to process a dataset of 7,200 fingerprint images, divided into fifteen specimens according to their image specifications so as to capture the detail of each image; combining specimens generates new data variations. The algorithm runs on two operating systems, Windows 7 and Windows 10, and we measure the influence of data size on the processing time, speedup, and efficiency of HPCMA using simple linear regression. The results show that data size explains more than 90% of the variation in processing time, more than 30% of the variation in speedup, and more than 19% of the variation in efficiency.
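The metrics named in this abstract have standard definitions: speedup is serial time over parallel time, and efficiency is speedup per thread. A minimal sketch of computing them, plus the simple linear regression of processing time against data size, is shown below; all measurement values are invented for illustration and are not the paper's data.

```python
# Illustrative sketch (not the authors' code): parallel metrics and a
# one-variable least-squares fit of processing time vs. data size.

def speedup(t_serial, t_parallel):
    """Speedup S = T1 / Tp."""
    return t_serial / t_parallel

def efficiency(s, n_threads):
    """Efficiency E = S / p for p threads."""
    return s / n_threads

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    b = num / den
    return my - b * mx, b

# Hypothetical measurements: dataset size (images) vs. processing time (s).
sizes = [480, 960, 1920, 3840, 7200]
times = [12.0, 23.5, 46.8, 93.1, 175.0]
a, b = fit_line(sizes, times)
print(f"time = {a:.2f} + {b:.4f} * size")

s = speedup(175.0, 52.0)   # hypothetical serial vs. 4-thread times
print(f"speedup = {s:.2f}, efficiency = {efficiency(s, 4):.2f}")
```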
Face recognition for presence system by using residual networks-50 architectu... (IJECEIAES)
A presence system records individual attendance in a company, school, or institution. There are several types of presence system, including manual systems using signatures, systems using fingerprints, and systems using face recognition technology. A face recognition presence system applies a biometric system to the process of recording attendance. In this research we used one of the convolutional neural network (CNN) architectures that won the ImageNet Large Scale Visual Recognition Competition (ILSVRC) in 2015, namely the Residual Networks-50 architecture (ResNet-50), for face recognition. Our contribution is to determine the effectiveness of the ResNet architecture under different hyperparameter configurations. The hyperparameters include the number of hidden layers, the number of units in each hidden layer, the batch size, and the learning rate. Because hyperparameters are selected empirically and the value of each one affects the final accuracy, we tried 22 configurations (experiments) to obtain the best accuracy. The best model reached an accuracy of 99%.
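A sweep over the hyperparameters listed here (hidden layers, units, batch size, learning rate) is typically enumerated as a Cartesian product of candidate values. The sketch below illustrates that shape only; the value ranges and the `train_and_evaluate` stub are hypothetical, not the paper's actual 22 configurations.

```python
# Hypothetical hyperparameter grid; in the paper, 22 hand-chosen
# configurations were tried rather than a full product like this.
from itertools import product

grid = {
    "hidden_layers": [1, 2],
    "units": [128, 256],
    "batch_size": [16, 32],
    "learning_rate": [1e-3, 1e-4],
}

def train_and_evaluate(cfg):
    # Placeholder: in practice this would train a ResNet-50-based model
    # with the given head configuration and return validation accuracy.
    return 0.0

configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(configs), "configurations to try")
best = max(configs, key=train_and_evaluate)
```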
Hybrid deep learning model using recurrent neural network and gated recurrent... (IJECEIAES)
This paper proposes a new hybrid deep learning model for heart disease prediction using a recurrent neural network (RNN) combined with multiple gated recurrent units (GRU), long short-term memory (LSTM), and the Adam optimizer. The proposed model achieved an outstanding accuracy of 98.6876%, the highest among existing RNN models. It was developed in Python 3.7 by integrating an RNN with multiple GRUs running on Keras with TensorFlow as the deep learning backend, supported by various Python libraries. Recent existing models using an RNN have reached an accuracy of 98.23%, and a deep neural network (DNN) has reached 98.5%. The common drawbacks of the existing models are low accuracy due to an overly complex network build-up, a high number of redundant neurons in the network, and the imbalance of the Cleveland dataset. Experiments with various customized models showed that the proposed RNN with multiple GRUs and the synthetic minority oversampling technique (SMOTE) reached the best performance level. This is the highest reported accuracy for an RNN on the Cleveland dataset and is promising for early heart disease prediction.
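The SMOTE technique mentioned here rebalances a dataset by synthesizing new minority-class samples through interpolation between existing minority samples. The sketch below shows only that core idea in bare-bones form; it is not the implementation the authors used (real SMOTE interpolates towards one of the sample's k nearest minority neighbours, whereas this toy picks a random partner).

```python
# Bare-bones illustration of the SMOTE idea: synthesize minority-class
# samples by linear interpolation between pairs of minority samples.
import random

def smote_like(minority, n_new, seed=0):
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        b = rng.choice(minority)   # crude stand-in for a k-NN neighbour pick
        gap = rng.random()         # position along the segment a -> b
        synthetic.append([ai + gap * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

minority = [[0.2, 1.1], [0.3, 0.9], [0.25, 1.0]]   # made-up feature vectors
new_samples = smote_like(minority, n_new=5)
print(len(minority) + len(new_samples), "minority samples after oversampling")
```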
Efficient Image Compression Technique using Clustering and Random Permutation (IJERA Editor)
Multimedia data compression is challenging because of the possibility of data loss and the large amount of storage the data require. Minimizing storage and transmitting these data properly both call for compression. In this dissertation we propose a block-based DWT image compression technique using a genetic algorithm and an HCC code matrix. The HCC code matrix is separated into two sets, redundant and non-redundant, which generate similar patterns of block coefficients. The similar block coefficients are generated by particle swarm optimization, which selects the optimal blocks of the DWT transform function. For the experiments we used standard images such as Lena, Barbara, and Cameraman, each at a resolution of 256*256. The images were obtained from Google.
An Enhanced trusted Image Storing and Retrieval Framework in Cloud Data Stora... (IJERA Editor)
Today’s image capturing technologies produce high-definition images that are heavy on memory, which has pushed many users towards cloud storage. Cloud computing is a service-based technology, and one of its services is Data Storage as a Service (DSaaS). Two parties are involved in this service, the cloud service provider (CSP) and the user: the user stores vital data on the cloud over the internet (for example, with Dropbox). A bigger question, however, is the user's trust in the CSP, since the data are stored on remote devices the user knows nothing about; in such a situation the CSP has to establish trustworthiness with the customer. In this paper we address this insecurity issue with a well-defined trusted image storing and retrieval framework (TISR) using a compressed sensing methodology.
COMPLETE END-TO-END LOW COST SOLUTION TO A 3D SCANNING SYSTEM WITH INTEGRATED... (ijcsit)
3D reconstruction is a computer vision technique with a wide range of applications in areas such as object recognition, city modelling, virtual reality, physical simulations, video games, and special effects. Previously, performing a 3D reconstruction required specialized hardware; such systems were often very expensive and available only for industrial or research purposes. With the rising availability of high-quality, low-cost 3D sensors, it is now possible to design inexpensive, complete 3D scanning systems. The objective of this work was to design an acquisition and processing system that performs 3D scanning and reconstruction of objects seamlessly. In addition, the goal included making the 3D scanning process fully automated by building a turntable and integrating it with the software, so the user can perform a full 3D scan with a press of a few buttons in our dedicated graphical user interface. Three main steps take us from point cloud acquisition to the finished reconstructed 3D model. First, the system acquires point cloud data of a person or object using an inexpensive camera sensor. Second, it aligns and converts the acquired point cloud data into a watertight mesh of good quality. Third, it exports the reconstructed model to a 3D printer to obtain a proper 3D print of the model.
This survey proposes a novel joint data-hiding and compression scheme (JDHC) for digital images using side-match vector quantization (SMVQ) and image inpainting. In the JDHC scheme, image compression and data hiding are combined into a single module. On the client side, the data are hidden and compressed using a sub-codebook over the remaining blocks of the image, i.e., all blocks except those in the leftmost column and topmost row. The data hiding and compression follow raster scanning order, block by block on a row basis. Vector quantization is used together with SMVQ and image inpainting on complex blocks to control distortion and error injection. The receiver side is based on two methods. The first divides the received image into a series of blocks, and the receiver recovers the hidden data and the original image according to the index values in the segmented blocks. The second uses edge-based harmonic inpainting to restore the original image if any loss has occurred.
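The raster-order traversal described here, which skips the leftmost column and topmost row of blocks (those serve as the seeds for side-match prediction), can be sketched as a simple generator. The block size and image dimensions below are invented for illustration.

```python
# Sketch of the raster-order block traversal described above: blocks in
# the first row and first column are left untouched, and all remaining
# blocks are visited row by row.
def residual_blocks(width, height, bs):
    """Yield (x, y) of each block except those in the top row / left column."""
    for by in range(bs, height, bs):
        for bx in range(bs, width, bs):
            yield bx, by

coords = list(residual_blocks(16, 16, 4))
print(len(coords))   # 9 of the 16 blocks in a 16x16 image with 4x4 blocks
```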
A survey on context aware system & intelligent Middleware's (IOSR Journals)
Abstract: The context-aware (or sentient) system is one of the most profound concepts in ubiquitous computing. In cloud systems and in distributed computing, building a context-aware system is a difficult task, and the programmer should use a generic programming framework. On the basis of a layered conceptual design, we introduce context-aware systems together with context-aware middleware. On the basis of the presented systems, we analyze different approaches to context-aware computing. A distributed system has many components, and these components must interact with each other because many applications require it. Plenty of context middleware has been built, but each offering gives only a partial solution. In this paper we analyze different middleware and their comprehensive application to context caching.
Keywords: Context-aware system, Context-aware middleware, Context cache
Information Upload and retrieval using SP Theory of Intelligence (INFOGAIN PUBLICATION)
Cloud computing has become an important part of today’s technology, and storing data on the cloud is of high importance as the need for virtual space to hold massive amounts of data has grown over the years. However, upload and download times are limited by processing time, so this issue must be addressed to handle large data and their processing. Another common problem is deduplication. As cloud services grow at a rapid rate, increasingly large volumes of data are stored on remote cloud servers, but many of the remotely stored files are duplicates, because the same file is uploaded by different users at different locations. A recent survey by EMC says about 75% of the digital data present on the cloud are duplicate copies. To overcome these two problems, in this paper we use the SP theory of intelligence with lossless compression of information, which makes big data smaller and thus reduces the problems of storing and managing large amounts of data.
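The deduplication problem described here is commonly attacked by content hashing: files with identical bytes produce the same digest, so only one physical copy needs to be stored. The sketch below shows that general idea only; the paper's SP theory approach works differently (via lossless compression and pattern matching), and the in-memory dictionary here merely stands in for remote storage.

```python
# Content-addressed deduplication sketch: identical uploads collapse to
# one stored copy, keyed by their SHA-256 digest.
import hashlib

store = {}   # digest -> content (stands in for remote cloud storage)

def upload(content: bytes) -> str:
    digest = hashlib.sha256(content).hexdigest()
    if digest not in store:       # store only the first copy
        store[digest] = content
    return digest                 # every uploader gets the same handle

h1 = upload(b"quarterly-report.pdf bytes")
h2 = upload(b"quarterly-report.pdf bytes")   # same file, different user
assert h1 == h2 and len(store) == 1
```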
A Survey of Machine Learning Techniques for Self-tuning Hadoop Performance (IJECEIAES)
The Apache Hadoop framework is an open-source implementation of MapReduce for processing and storing big data. However, getting the best performance from it is a big challenge because of its large number of configuration parameters. In this paper, the critical issues of the Hadoop system, big data, and machine learning are highlighted, and an analysis of some machine learning techniques applied so far to improving Hadoop performance is presented. Then, a promising machine learning technique using a deep learning algorithm is proposed for improving Hadoop system performance.
The advent of big data has brought new processing and storage challenges, which are often solved by distributed processing. Distributed systems are inherently dynamic and unstable, so it is realistic to expect that some resources will fail during use. Load balancing and task scheduling are important steps in determining the performance of parallel applications, hence the need to design load balancing algorithms adapted to grid computing. In this paper, we propose a dynamic and hierarchical load balancing strategy at two levels: intra-scheduler load balancing, to avoid using the large-scale communication network, and inter-scheduler load balancing, to regulate the load of the whole system. The strategy improves the average response time of CLOAK-Reduce application tasks with minimal communication. We focus on three performance indicators, namely the response time, process latency, and running time of MapReduce tasks.
Weeds detection efficiency through different convolutional neural networks te... (IJECEIAES)
Preserving the environment has become a priority and a subject receiving more and more attention. This is particularly important in precision agriculture, where pesticide and herbicide use has become more controlled. In this study, we evaluate the ability of deep learning (DL) and convolutional neural network (CNN) technology to detect weeds in several types of crops, using perspective and proximity images, to enable localized and ultra-localized herbicide spraying in the region of Beni Mellal in Morocco. We studied weed detection with six recent CNNs known for their speed and precision, namely VGGNet (16 and 19), GoogLeNet (Inception V3 and V4), and MobileNet (V1 and V2). The first experiment used the CNN architectures trained from scratch and the second their pre-trained versions. The results showed that Inception V4 achieved the highest precision on the mixed image sets, at 99.41% when trained from scratch and 99.51% when pre-trained, and that MobileNet V2 was the fastest and lightest, with a size of only 14 MB.
A Comprehensive review of Conversational Agent and its prediction algorithm (vivatechijri)
The use of conversational bots is increasing exponentially. Conversational bots can be described as platforms that chat with people using artificial intelligence. Recent advances have made AI capable of learning from data and producing output. This learning can be performed using various machine learning algorithms; machine learning involves constructing algorithms that learn from data and predict outcomes. This paper reviews the efficiency of the different machine learning algorithms used in conversational bots.
The huge volume of text documents available on the internet has made it difficult for specific users to find valuable information. Efficient applications for extracting knowledge of interest from textual documents are therefore vitally important. This paper addresses the problem of responding to user queries by fetching the most relevant documents from a clustered set of documents. For this purpose, a cluster-based information retrieval framework is proposed, to design and develop a system for analysing and extracting useful patterns from text documents. In this approach, a pre-processing step is first performed to find frequent and high-utility patterns in the data set, and a vector space model (VSM) is then used to represent the dataset. The system was implemented in two main phases. In phase 1, a clustering analysis process groups the documents into several clusters; in phase 2, an information retrieval process ranks the clusters against the user query in order to retrieve the relevant documents from the specific clusters deemed relevant. The results are evaluated with the recall and precision (P@5, P@10) of the retrieved results: P@5 was 0.660 and P@10 was 0.655.
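The P@5 and P@10 figures reported above are precision-at-k: the fraction of the top-k retrieved documents that are relevant to the query. A minimal sketch follows; the document IDs and relevance judgements are invented for illustration.

```python
# Precision-at-k: how many of the first k ranked documents are relevant.
def precision_at_k(retrieved, relevant, k):
    top_k = retrieved[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

# Hypothetical ranked result list and relevance set.
retrieved = ["d3", "d7", "d1", "d9", "d2", "d8", "d5", "d4", "d6", "d0"]
relevant = {"d3", "d1", "d2", "d5", "d4", "d6"}
print(precision_at_k(retrieved, relevant, 5))   # 3 of the top 5 are relevant
```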
DATA COMPRESSION USING NEURAL NETWORKS IN BIO-MEDICAL SIGNAL PROCESSING (cscpconf)
The heart is one of the vital parts of the human body, maintaining its life line. In this paper, an efficient composite method has been developed for data compression of ECG signals. ECG waveforms reflect most of the heart parameters closely related to the heart's mechanical pumping and can therefore be used to infer cardiac health. After carrying out detailed studies of different data compression algorithms, we used the back-propagation algorithm to train artificial neural networks. Twelve significant features are extracted from an electrocardiogram (ECG), and the features of the samples are used as input to the neural network. Finally, the samples in the database are trained and tested using the back-propagation algorithm. The efficiency is observed to be 99.5%. Dual three-layer neural networks with only a few units in the hidden layer are used, with the input signals also serving as the supervision signals. Back-propagation is used for the learning process.
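A three-layer network trained with plain back-propagation, as described above, can be sketched in a few lines of NumPy. The XOR toy data below stands in for the twelve ECG features purely for illustration; the layer sizes, learning rate, and epoch count are arbitrary choices, not the paper's configuration.

```python
import numpy as np

# Tiny three-layer network trained with plain backpropagation on squared
# error, echoing the "few hidden units" setup described above.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output
sig = lambda z: 1 / (1 + np.exp(-z))

losses = []
for _ in range(5000):
    h = sig(X @ W1 + b1)                  # forward pass
    out = sig(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    d_out = (out - y) * out * (1 - out)   # backpropagate the error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```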
Synchronization of the GPS Coordinates Between Mobile Device and Oracle Datab... (idescitation)
This article describes the architecture and implementation of a module for synchronizing GPS data between a mobile device and a central database system. The data exchange process is inspired by the SAMD algorithm. The article presents a solution for the individual system components in sequence, with special attention paid to the data exchange format; the processing of the exchanged data is also described in detail. The resulting solution was deployed and tested in a real production environment.
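The abstract does not reproduce the exchange format itself, so the record layout below is purely a hypothetical illustration of what serializing a GPS fix for device-to-database synchronization might look like; the field names and values are invented.

```python
# Hypothetical GPS exchange record (illustrative only; not the SAMD
# format described in the article).
import json
from datetime import datetime, timezone

def to_exchange_record(lat, lon, device_id, fixed_at):
    return json.dumps({
        "device": device_id,
        "lat": lat,
        "lon": lon,
        "fixed_at": fixed_at.isoformat(),   # timezone-aware timestamp
    }, sort_keys=True)

rec = to_exchange_record(49.19, 16.61, "dev-42",
                         datetime(2024, 1, 5, 12, 0, tzinfo=timezone.utc))
print(rec)
```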
A modified algorithm for estimating the limits of the dual problem solution, with determination of the branching order, is proposed for solving the tasks of providing cyber security and protection of information in information and communication transport systems (ICTS). Effective influence of the prior branching order determination
Support Vector Machine–Based Prediction System for a Football Match Result (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
A Parallel Computing-a Paradigm to achieve High Performance (AM Publications)
Over the last few years there have been rapid changes in the computing field. Today we use the latest upgraded systems, which provide faster output and high performance, and users expect computing simply to deliver correct and fast results. Many techniques improve system performance; today's widely used method is parallel computing, which encompasses foundational and theoretical aspects, systems, languages, architectures, tools, and applications. It addresses all classes of parallel-processing platforms, including concurrent, multithreaded, multicore, accelerated, multiprocessor, cluster, and supercomputer platforms. This paper gives an overview of parallel processing to show how parallel computing can improve system performance.
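The multicore speedup this review discusses can be illustrated by running the same CPU-bound workload serially and on a process pool. The chunk sizes below are arbitrary, and timings vary by machine, so no specific speedup is claimed.

```python
# Same workload, one core vs. many: a small multicore illustration.
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """Count primes below `limit` by trial division (deliberately CPU-bound)."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [5_000] * 4                                # four equal work units

    serial = [count_primes(c) for c in chunks]          # one core

    with ProcessPoolExecutor() as pool:                 # spread across cores
        parallel = list(pool.map(count_primes, chunks))

    assert serial == parallel   # same answers, potentially less wall time
```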
A CLOUD BASED ARCHITECTURE FOR WORKING ON BIG DATA WITH WORKFLOW MANAGEMENT (IJwest)
Real environments yield collections of noisy and vague data, called big data. Middleware has been developed to work on such data and is now very widely used. The challenge of working with big data lies in its processing and management: an integrated management system is required to integrate data from multiple sensors and maximize the chance of meeting the target. This is in a situation where the system has constant time constraints on processing and real-time decision-making processes. A reliable data fusion model must meet this requirement and steadily let the user monitor the data stream. With the widespread use of workflow interfaces, this requirement can be addressed, but working with big data remains challenging. We provide a multi-agent cloud-based architecture as a higher-level vision for solving this problem. The architecture enables big data fusion through a workflow management interface. The proposed system is capable of self-repair in the presence of risks, and its risk is low.
COMPLETE END-TO-END LOW COST SOLUTION TO A 3D SCANNING SYSTEM WITH INTEGRATED...ijcsit
3D reconstruction is a technique used in computer vision which has a wide range of applications in
areas like object recognition, city modelling, virtual reality, physical simulations, video games and
special effects. Previously, to perform a 3D reconstruction, specialized hardwares were required.
Such systems were often very expensive and was only available for industrial or research purpose.
With the rise of the availability of high-quality low cost 3D sensors, it is now possible to design
inexpensive complete 3D scanning systems. The objective of this work was to design an acquisition and
processing system that can perform 3D scanning and reconstruction of objects seamlessly. In addition,
the goal of this work also included making the 3D scanning process fully automated by building and
integrating a turntable alongside the software. This means the user can perform a full 3D scan only by
a press of a few buttons from our dedicated graphical user interface. Three main steps were followed
to go from acquisition of point clouds to the finished reconstructed 3D model. First, our system
acquires point cloud data of a person/object using inexpensive camera sensor. Second, align and
convert the acquired point cloud data into a watertight mesh of good quality. Third, export the
reconstructed model to a 3D printer to obtain a proper 3D print of the model.
This survey propose a Novel Joint Data-Hiding and
Compression Scheme (JDHC) for digital images using side match
vector quantization (SMVQ) and image in painting. In this
JDHC scheme image compression and data hiding scheme are
combined into a single module. On the client side, the data should
be hided and compressed in sub codebook such that remaining
block except left and top most of the image. The data hiding and
compression scheme follows raster scanning order i.e. block by
block on row basis. Vector Quantization used with SMVQ and
Image In painting for complex block to control distortion and
error injection. The receiver side process is based on two
methods. First method divide the received image into series of
blocks the receiver achieve hided data and original image
according to the index value in the segmented block. Second
method use edge based harmonic in painting is used to get
original image if any loss in the image.
A survey on context aware system & intelligent Middleware’sIOSR Journals
Abstract: Context aware system or Sentient system is the most profound concept in the ubiquitous computing.
In the cloud system or in distributed computing building a context aware system is difficult task and
programmer should use more generic programming framework. On the basis of layered conceptual design, we
introduce Context aware systems with Context aware middleware’s. On the basis of presented system we will
analyze different approaches of context aware computing. There are many components in the distributed system
and these components should interact with each other because it is the need of many applications. Plenty
Context middleware’s have been made but they are giving partial solutions. In this paper we are giving analysis
of different middleware’s and comprehensive application of it in context caching.
Keywords: Context aware system, Context aware Middleware’s, Context Cache
Information Upload and retrieval using SP Theory of IntelligenceINFOGAIN PUBLICATION
In today’s technology Cloud computing has become an important aspect and storing of data on cloud is of high importance as the need for virtual space to store massive amount of data has grown during the years. However time taken for uploading and downloading is limited by processing time and thus need arises to solve this issue to handle large data and their processing. Another common problem is de duplication. With the cloud services growing at a rapid rate it is also associated by increasing large volumes of data being stored on remote servers of cloud. But most of the remote stored files are duplicated because of uploading the same file by different users at different locations. A recent survey by EMC says about 75% of the digital data present on cloud are duplicate copies. To overcome these two problems in this paper we are using SP theory of intelligence using lossless compression of information, which makes the big data smaller and thus reduces the problems in storage and management of large amounts of data.
A Survey of Machine Learning Techniques for Self-tuning Hadoop Performance IJECEIAES
The Apache Hadoop framework is an open source implementation of MapReduce for processing and storing big data. However, to get the best performance from this is a big challenge because of its large number configuration parameters. In this paper, the concept of critical issues of Hadoop system, big data and machine learning have been highlighted and an analysis of some machine learning techniques applied so far, for improving the Hadoop performance is presented. Then, a promising machine learning technique using deep learning algorithm is proposed for Hadoop system performance improvement.
The advent of Big Data has seen the emergence of new processing and storage challenges. These challenges are often solved by distributed processing. Distributed systems are inherently dynamic and unstable, so it is realistic to expect that some resources will fail during use. Load balancing and task scheduling is an important step in determining the performance of parallel applications. Hence the need to design load balancing algorithms adapted to grid computing. In this paper, we propose a dynamic and hierarchical load balancing strategy at two levels: Intrascheduler load balancing, in order to avoid the use of the large-scale communication network, and interscheduler load balancing, for a load regulation of our whole system. The strategy allows improving the average response time of CLOAK-Reduce application tasks with minimal communication. We first focus on the three performance indicators, namely response time, process latency and running time of MapReduce tasks.
Weeds detection efficiency through different convolutional neural networks te...IJECEIAES
The preservation of the environment has become a priority and a subject that is receiving more and more attention. This is particularly important in the field of precision agriculture, where pesticide and herbicide use has become more controlled. In this study, we propose to evaluate the ability of the deep learning (DL) and convolutional neural network (CNNs) technology to detect weeds in several types of crops using a perspective and proximity images to enable localized and ultra-localized herbicide spraying in the region of Beni Mellal in Morocco. We studied the detection of weeds through six recent CNN known for their speed and precision, namely, VGGNet (16 and 19), GoogLeNet (Inception V3 and V4) and MobileNet (V1 and V2). The first experiment was performed with the CNNs architectures from scratch and the second experiment with their pre-trained versions. The results showed that Inception V4 achieved the highest precision with a rate of 99.41% and 99.51% on the mixed image sets and for its version from scratch and its pre-trained version respectively, and that MobileNet V2 was the fastest and lightest with its size of 14 MB.
A Comprehensive review of Conversational Agent and its prediction algorithmvivatechijri
There is an exponential increase in the use of conversational bots: platforms that can chat with people using artificial intelligence. Recent advances have made AI capable of learning from data and producing an output. This learning can be performed using various machine learning algorithms, which are constructed so that they can learn from data and predict outcomes. This paper reviews the efficiency of the different machine learning algorithms used in conversational bots.
The huge volume of text documents available on the internet has made it difficult to find valuable information for specific users, so efficient applications for extracting knowledge of interest from textual documents are vitally important. This paper addresses the problem of responding to user queries by fetching the most relevant documents from a clustered set of documents. For this purpose, a cluster-based information retrieval framework is proposed to design and develop a system for analysing and extracting useful patterns from text documents. In this approach, a pre-processing step is first performed to find frequent and high-utility patterns in the data set, and a vector space model (VSM) is then used to represent the dataset. The system was implemented in two main phases: in phase 1, a clustering analysis process groups documents into several clusters, while in phase 2, an information retrieval process ranks the clusters according to the user query in order to retrieve the relevant documents from the clusters deemed relevant to the query. The results are evaluated using recall and precision (P@5, P@10): P@5 was 0.660 and P@10 was 0.655.
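As a toy illustration of the retrieval side of this pipeline, a vector space model with TF-IDF weighting and cosine similarity can be sketched in a few lines of Python. The corpus, query, and tokenization below are invented for illustration; the paper's actual system adds frequent/high-utility pattern mining and clustering on top of this.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))          # document frequency
    idf = {t: math.log(n / df[t]) for t in df}             # inverse doc freq
    return [{t: tf * idf[t] for t, tf in Counter(d).items()} for d in docs], idf

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(u.get(t, 0.0) * w for t, w in v.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Invented toy corpus and query.
docs = [
    "big data cluster processing".split(),
    "document clustering for retrieval".split(),
    "cooking recipes and kitchen tips".split(),
]
vecs, idf = tfidf_vectors(docs)
query = "document retrieval".split()
qvec = {t: tf * idf.get(t, 0.0) for t, tf in Counter(query).items()}

# Rank documents by similarity to the query (most relevant first).
ranking = sorted(range(len(docs)), key=lambda i: cosine(qvec, vecs[i]), reverse=True)
```

In the paper's framework the same similarity score would first rank clusters, and only documents of the top clusters would be scored, which is what makes the approach cheaper than scoring the whole collection.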
DATA COMPRESSION USING NEURAL NETWORKS IN BIO-MEDICAL SIGNAL PROCESSINGcscpconf
The heart is one of the vital organs of the human body, maintaining its life line. In this paper, an efficient composite method has been developed for data compression of ECG signals. ECG waveforms reflect most of the heart parameters closely related to the mechanical pumping of the heart and can therefore be used to infer cardiac health. After carrying out detailed studies of different data compression algorithms, we used the back-propagation algorithm to train the artificial neural networks. Twelve significant features are extracted from an electrocardiogram (ECG) and used as input to the neural network, and the samples in the database are trained and tested using the back-propagation algorithm. The efficiency is observed to be 99.5%. Dual three-layer neural networks with only a few units in the hidden layer are used, and the input signals also serve as the supervised (target) signals of the networks, with back-propagation used for the learning process.
Synchronization of the GPS Coordinates Between Mobile Device and Oracle Datab...idescitation
The article describes the architecture and implementation of a module for synchronizing GPS data between a mobile device and a central database system. The data-exchange process is inspired by the SAMD algorithm. The article sequentially presents the solution for the individual system components, with special attention paid to the exchange data format. The processing of the exchanged data is also described in detail. The resulting solution was deployed and tested in a real production environment.
A modified algorithm for estimating the limits of the dual problem solution with
the branching order determination for solving the tasks of providing cyber security
and protection of information in information and communication transport systems
(ICTS) is proposed. Effective influence of the prior branching order determination
Support Vector Machine–Based Prediction System for a Football Match Resultiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
A Parallel Computing-a Paradigm to achieve High PerformanceAM Publications
Over the last few years there have been rapid changes in the computing field. Today, we use the latest upgraded systems, which provide faster output and higher performance; the user's view of computing is simply to get correct and fast results. There are many techniques that improve system performance, and today's most widely used computing method is parallel computing. Parallel computing spans foundational and theoretical aspects, systems, languages, architectures, tools, and applications, and addresses all classes of parallel-processing platforms, including concurrent, multithreaded, multicore, accelerated, multiprocessor, cluster, and supercomputer systems. This paper gives an overview of parallel processing to show how parallel computing can improve system performance.
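The core performance metrics of parallel computing discussed in papers like this one (and in the HPCMA study this page relates to) are speedup and efficiency; a minimal sketch makes them concrete. The timing numbers below are invented for illustration, not measurements from any system in this review.

```python
# Classic parallel-performance metrics: for p workers,
#   speedup    S(p) = T1 / Tp
#   efficiency E(p) = S(p) / p
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, workers):
    return speedup(t_serial, t_parallel) / workers

# Toy wall-clock times in seconds (invented numbers for illustration).
t1 = 100.0                               # serial run
timings = {2: 55.0, 4: 30.0, 8: 20.0}    # parallel runs with p workers
metrics = {p: (speedup(t1, tp), efficiency(t1, tp, p)) for p, tp in timings.items()}
```

Efficiency below 1.0 reflects communication and scheduling overhead; it is exactly this drop-off that load-balancing and scheduling work tries to minimize.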
A CLOUD BASED ARCHITECTURE FOR WORKING ON BIG DATA WITH WORKFLOW MANAGEMENTIJwest
In real environments there are collections of noisy and vague data, called Big Data, and middleware has been developed, and is now very widely used, to work on such data. The challenge of working on Big Data is its processing and management: an integrated management system is required to integrate data from multiple sensors and maximize target success, in a situation where the system has hard time constraints for processing and real-time decision-making. A reliable data fusion model must meet this requirement and steadily let the user monitor the data stream. With the widespread use of workflow interfaces, this requirement can be addressed, but working with Big Data remains challenging. We provide a multi-agent cloud-based architecture as a higher-level vision to solve this problem. The architecture provides Big Data fusion through a workflow management interface, and the proposed system is capable of self-repair in the presence of risks, keeping its risk low.
SEAMLESS AUTOMATION AND INTEGRATION OF MACHINE LEARNING CAPABILITIES FOR BIG ...ijdpsjournal
The paper aims at proposing a solution for designing and developing a seamless automation and
integration of machine learning capabilities for Big Data with the following requirements: 1) the ability to
seamlessly handle and scale very large amount of unstructured and structured data from diversified and
heterogeneous sources; 2) the ability to systematically determine the steps and procedures needed for
analyzing Big Data datasets based on data characteristics, domain expert inputs, and data pre-processing
component; 3) the ability to automatically select the most appropriate libraries and tools to compute and
accelerate the machine learning computations; and 4) the ability to perform Big Data analytics with high
learning performance, but with minimal human intervention and supervision. The whole focus is to provide
a seamless automated and integrated solution which can be effectively used to analyze Big Data with high-frequency and high-dimensional features from different types of data characteristics and different
application problem domains, with high accuracy, robustness, and scalability. This paper highlights the
research methodologies and research activities that we propose to be conducted by the Big Data
researchers and practitioners in order to develop and support seamless automation and integration of
machine learning capabilities for Big Data analytics.
Grid computing can involve a lot of computational tasks, which require trustworthy computational nodes. Load balancing in grid computing is a technique that optimizes the overall process of assigning computational tasks to processing nodes. Grid computing is a form of distributed computing, but it differs from conventional distributed computing in that it tends to be heterogeneous, more loosely coupled, and geographically dispersed. Optimizing this process means maximizing overall resource utilization, with a balanced load on each processing unit, while decreasing the overall time. Evolutionary algorithms such as genetic algorithms have been studied for implementing load balancing across grid networks, but the problem with these genetic algorithms is that they are quite slow when a large number of tasks needs to be processed. In this paper we give a novel approach using parallel genetic algorithms to enhance the overall performance and optimization of managing load balancing across the grid nodes.
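As a hedged baseline for the load-balancing objective described above (not the paper's parallel genetic algorithm), a simple longest-processing-time-first heuristic that always assigns the next task to the least loaded node can be sketched as follows; the task run times are invented.

```python
import heapq

def lpt_schedule(task_times, n_nodes):
    """Longest-processing-time-first: assign each task (longest first) to the
    currently least loaded node. Returns (assignment, makespan)."""
    heap = [(0.0, node) for node in range(n_nodes)]   # (load, node id)
    heapq.heapify(heap)
    assignment = {}
    for tid, t in sorted(enumerate(task_times), key=lambda x: -x[1]):
        load, node = heapq.heappop(heap)              # least loaded node
        assignment[tid] = node
        heapq.heappush(heap, (load + t, node))
    return assignment, max(load for load, _ in heap)

tasks = [5, 3, 8, 2, 7, 4]                 # invented task run times
assignment, makespan = lpt_schedule(tasks, 2)
```

A genetic (or parallel genetic) algorithm searches over assignments like this one, using the makespan or a related fitness to evolve better schedules; the heuristic above is merely a cheap starting point.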
High Performance Computing for Satellite Image Processing and Analyzing – A ...Editor IJCATR
High-performance computing (HPC) is a recently developed technology in the field of computer science, which evolved to meet increasing demands for processing speed and for analysing/processing huge data sets. HPC brings together several technologies such as computer architecture, algorithms, programs, and system software under one canopy to solve advanced complex problems quickly and effectively. It is a crucial element today for gathering and processing the large amounts of satellite (remote sensing) data that are the need of the hour. In this paper, we review recent developments in HPC technology (parallel, distributed, and cluster computing) for satellite data processing and analysis. We attempt to discuss the fundamentals of HPC for satellite data processing and analysis in a way which is easy to understand without much previous background, and we sketch the various HPC approaches (parallel, distributed, and cluster computing) and subsequent satellite data processing and analysis methods such as geo-referencing, image mosaicking, image classification, image fusion, and morphological/neural approaches for hyperspectral satellite data. Collectively, these works deliver a snapshot, tables, and algorithms of the recent developments in those sectors, and offer a thoughtful perspective on the potential and the promising challenges of satellite data processing and analysis using HPC paradigms.
An Architecture for Simplified and Automated Machine Learning IJECEIAES
Recently, machine learning has been adopted by businesses to analyze their vast data in order to make strategic decisions. However, knowledge of machine learning and technical skills are usually required to prepare data and perform machine learning tasks; this obstacle prevents smaller businesses without technical knowledge from utilizing machine learning. In this paper, we propose an architecture for a simplified and automated machine learning process, currently supporting the data classification task. The architecture includes a method for characterizing datasets, which allows machine learning model and hyperparameter selection to be simplified and automated based on historical execution configurations. Users simply upload their datasets via a web browser, and the system determines the possible models and their hyperparameter configurations for the users to choose from. A prototype shows the feasibility of the proposed architecture. Although the accuracy is still limited by the small execution history and the cleanliness of the input datasets, the architecture can minimize user involvement in the machine learning process, so that non-technical users can perform data classification through a web browser without installing any software.
Design and implementation of microprocessor trainer bus systemIJARIIT
This paper presents part of a microprocessor trainer system. The system has six modules, all connected on the bus paths; modules such as the direct memory access (DMA) module, the I/O module, and the memory modules are attached to the bus. The bus has four groups of lines: address, data, control (memory read/write and I/O read/write), and power. The address bus and data bus are 16 bits wide. Several microcontrollers are used in this design: a PIC16F877 is used in the DMA module and the I/O module, a 74LS573 is used as a latch, a 74LS244 as a bus driver, and a 74LS255 as a bus transceiver, while a PIC18F452 is used in the CPU module. Each type of bus has its own requirements and properties.
Similar to The influence of data size on a high-performance computing memetic algorithm in fingerprint dataset (20)
Square transposition: an approach to the transposition process in block cipherjournalBEEI
The transposition process is needed in cryptography to create a diffusion effect in the data encryption standard (DES) and advanced encryption standard (AES) algorithms, the standard information security algorithms of the National Institute of Standards and Technology. The problem with the DES and AES algorithms is that their transposition index values form patterns rather than random values. This condition makes it easier for a cryptanalyst to look for relationships between ciphertexts, because some processes are predictable. This research designs a transposition algorithm called square transposition. Each process uses an 8 × 8 square as a place to insert and retrieve 64 bits. The determination of the pairing of the input scheme and the retrieval scheme, which have unequal flows, is an important factor in producing a good transposition. The square transposition can generate random, non-patterned indices, so that transposition can be done better than in DES and AES.
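The insert/retrieve idea behind square transposition can be illustrated with a toy 8 × 8 block that is filled row-wise and read column-wise. This is a plain columnar transposition for illustration only; the paper's actual square transposition uses different, deliberately unequal insertion and retrieval schemes.

```python
def transpose_block(bits):
    """Toy 8x8 transposition: insert 64 bits row-wise, retrieve column-wise."""
    assert len(bits) == 64
    square = [bits[r * 8:(r + 1) * 8] for r in range(8)]       # row-wise insert
    return [square[r][c] for c in range(8) for r in range(8)]  # column-wise read

def invert_block(bits):
    """Undo the toy transposition (write column-wise back, read row-wise)."""
    square = [[None] * 8 for _ in range(8)]
    i = 0
    for c in range(8):
        for r in range(8):
            square[r][c] = bits[i]
            i += 1
    return [b for row in square for b in row]

block = [i % 2 for i in range(64)]   # invented 64-bit test pattern
```

Any pair of insertion/retrieval schemes defines a fixed permutation of the 64 bit positions; the quality criterion in the paper is that this permutation's index sequence looks random rather than patterned.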
Hyper-parameter optimization of convolutional neural network based on particl...journalBEEI
Deep neural networks have accomplished enormous progress in tackling many problems. More specifically, the convolutional neural network (CNN) is a category of deep networks that has become a dominant technique in computer vision tasks. Although these deep neural networks are highly effective, the ideal structure is still an issue that needs a lot of investigation. A deep CNN model is usually designed manually through trials and repeated tests, which enormously constrains its application. Many hyper-parameters of a CNN can affect the model performance: the depth of the network, the number of convolutional layers, and the number of kernels and their sizes. It may therefore be a huge challenge to design an appropriate CNN model that uses optimized hyper-parameters and reduces the reliance on manual involvement and domain expertise. In this paper, a design architecture method for CNNs is proposed that utilizes the particle swarm optimization (PSO) algorithm to learn the optimal CNN hyper-parameter values. In the experiments, we used the Modified National Institute of Standards and Technology (MNIST) database of handwritten digits. The experiments showed that our proposed approach can find an architecture that is competitive with state-of-the-art models, with a testing error of 0.87%.
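A minimal PSO loop of the kind used for such hyper-parameter searches can be sketched as follows. The 2-D "validation error" surrogate below is an invented smooth stand-in; in the paper's setting the objective would be the validation error of a trained CNN, and the dimensions would encode its hyper-parameters.

```python
import random

def pso(objective, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over a box-bounded search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # per-particle best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]      # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp to the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Invented surrogate over (log learning rate, number of kernels); minimum at (-3, 32).
surrogate = lambda p: (p[0] + 3.0) ** 2 + (p[1] - 32.0) ** 2 / 100.0
best, best_err = pso(surrogate, [(-6.0, 0.0), (8.0, 128.0)])
```

In practice the expensive part is each `objective` call (a full training run), which is why the swarm size and iteration count are kept small.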
Supervised machine learning based liver disease prediction approach with LASS...journalBEEI
In this contemporary era, the use of machine learning techniques is increasing rapidly in the field of medical science for detecting various diseases such as liver disease (LD). Around the globe, a large number of people die because of this deadly disease. By diagnosing the disease at a primary stage, early treatment can help cure the patient. In this research paper, a method is proposed to diagnose LD using supervised machine learning classification algorithms, namely logistic regression, decision tree, random forest, AdaBoost, KNN, linear discriminant analysis, gradient boosting and support vector machine (SVM). We also deploy a least absolute shrinkage and selection operator (LASSO) feature selection technique on the dataset to suggest the attributes most highly correlated with LD. The predictions made by the algorithms with 10-fold cross-validation (CV) are evaluated in terms of accuracy, sensitivity, precision and F1-score. It is observed that the decision tree algorithm has the best performance, with accuracy, precision, sensitivity and F1-score values of 94.295%, 92%, 99% and 96% respectively with the inclusion of LASSO. Furthermore, a comparison with recent studies is shown to prove the significance of the proposed system.
A secure and energy saving protocol for wireless sensor networksjournalBEEI
The research domain of wireless sensor networks (WSN) has been extensively studied due to innovative technologies and research directions that address the usability of WSNs under various schemes. This domain permits dependable tracking of a diversity of environments for both military and civil applications. The key management mechanism is a primary protocol for keeping the privacy and confidentiality of the data transmitted among different sensor nodes in WSNs. Since the nodes are small, they are intrinsically limited by inadequate resources such as battery lifetime and memory capacity. The proposed secure and energy-saving protocol (SESP) for wireless sensor networks has a significant impact on the overall network lifetime and energy dissipation. To encrypt sent messages, the SESP uses the concept of public-key cryptography. It depends on the sensor nodes' identities (IDs) to prevent messages from being replayed, so that the security goals of authentication, confidentiality, integrity, availability, and freshness are achieved. Finally, simulation results show that the proposed approach produces better energy consumption and network lifetime: sensors are dead after 900 rounds in the proposed SESP protocol, while in the low-energy adaptive clustering hierarchy (LEACH) scheme, the sensors are dead after 750 rounds.
Plant leaf identification system using convolutional neural networkjournalBEEI
This paper proposes a leaf identification system using a convolutional neural network (CNN). The proposed system can identify five types of local Malaysian leaves: acacia, papaya, cherry, mango and rambutan. Using CNNs from deep learning, the network is trained for image classification on a database of leaf images captured by mobile phone. ResNet-50 is the architecture used for neural network image classification and for training the network for leaf identification. The recognition of leaf photographs requires several steps, starting with image pre-processing, feature extraction, plant identification, matching and testing, and finally extracting the results, all achieved in MATLAB. The testing sets of the system consist of three types of images: white background, noise added, and random background images. Finally, an interface for the leaf identification system was developed as the end software product using MATLAB App Designer. As a result, the accuracy achieved for each training set on the five leaf classes is above 98%, so the recognition process was successfully implemented.
Customized moodle-based learning management system for socially disadvantaged...journalBEEI
This study aims to develop a Moodle-based LMS with customized learning content and a modified user interface to facilitate pedagogical processes during the covid-19 pandemic, and to investigate how teachers of socially disadvantaged schools perceived its usability and technology acceptance. A co-design process was conducted with two activities: 1) a need-assessment phase using an online survey and interview sessions with the teachers, and 2) the development phase of the LMS. The system was evaluated by 30 teachers from socially disadvantaged schools for relevance to their distance-learning activities. We employed the computer software usability questionnaire (CSUQ) to measure perceived usability, and the technology acceptance model (TAM) with its 3 original variables (perceived usefulness, perceived ease of use, and intention to use) and 5 external variables (attitude toward the system, perceived interaction, self-efficacy, user interface design, and course design). The average CSUQ rating exceeded 5.0 on a 7-point scale, indicating that teachers agreed that the information quality, interaction quality, and user interface quality were clear and easy to understand. The TAM results concluded that the LMS design was judged to be usable, interactive, and well developed. Teachers reported an effective user interface that allows effective teaching operations and leads to prompt adoption of the system.
Understanding the role of individual learner in adaptive and personalized e-l...journalBEEI
The dynamic learning environment has emerged as a powerful platform in modern e-learning systems. The constantly changing learning situation has forced learning platforms to adapt and personalize their learning resources for students. Evidence suggests that adaptation and personalization of e-learning systems (APLS) can be achieved by utilizing learner modeling, domain modeling, and instructional modeling. In the APLS literature, questions have been raised about the role of the individual characteristics that are relevant for adaptation. With several options, a new problem arises: the attributes of students in APLS often overlap and are not related across studies. Therefore, this study proposes a list of learner-model attributes in dynamic learning to support adaptation and personalization. The study was conducted by exploring concepts from literature selected on best criteria. We then describe the important concepts in student modeling and provide definitions and examples of the data values that researchers have used. We also discuss the implementation of the selected learner model in providing adaptation in dynamic learning.
Prototype mobile contactless transaction system in traditional markets to sup...journalBEEI
One way to prevent and reduce the spread of the covid-19 pandemic is through physical distancing. This research aims to develop a prototype contactless transaction system using digital payment mechanisms and QR code technology to be applied in traditional markets. The method used in developing the electronic market system is a prototyping approach. QR codes and digital payments are applied as a solution to minimize the money-exchange contacts that are common in traditional markets. The results show that the system built was able to accelerate and facilitate the buying and selling transaction process in a traditional market environment. Alpha testing shows that all system functions run well, while beta testing shows that users can accept the system very well. The results also show acceptance of the usefulness of the system, as well as the optimism of its users about taking advantage of it both technologically and functionally, so it can be part of the digital transformation of the traditional market to an electronic market and one of the solutions for reducing the spread of the current covid-19 pandemic.
Wireless HART stack using multiprocessor technique with laxity algorithmjournalBEEI
The use of a real-time operating system (RTOS) is required for the demarcation of industrial wireless sensor network (IWSN) stacks. In the industrial world, a vast number of sensors are utilised to gather various types of data, and the data gathered by the sensors cannot be prioritised ahead of time because all of the information is equally essential. As a result, a protocol stack is employed to guarantee that data is acquired and processed fairly; in an IWSN, the protocol stack is implemented using an RTOS. The data collected from IWSN sensor nodes is processed using non-preemptive scheduling and the protocol stack, and then sent in parallel to the IWSN's central controller. The RTOS mediates between hardware and software. Packets must be sent at a certain time, and some packets may collide during transmission; this project is undertaken to get around such collisions. As a prototype, the project is divided into two parts: the first uses an RTOS with the LPC2148 as a master node, while the second serves as a standard data-collection node to which sensors are attached; any controller may be used in the second part, depending on the situation. WirelessHART allows the two nodes to communicate with each other.
Implementation of double-layer loaded on octagon microstrip yagi antennajournalBEEI
A double layer loaded on the octagon microstrip Yagi antenna (OMYA) at the 5.8 GHz industrial, scientific and medical (ISM) band is investigated in this paper. The double layer consists of two double-positive (DPS) substrates. The OMYA overlaid with the double-layer configuration was simulated, fabricated and measured, and good agreement was observed between the computed and measured gain results for this antenna. The comparison results show that a 2.5 dB improvement of the OMYA gain can be obtained by applying the double layer on top of the OMYA. Meanwhile, the measured bandwidth of the OMYA with the double layer is 14.6%. This indicates that the double layer can be used to increase the OMYA performance in terms of gain and bandwidth.
The calculation of the field of an antenna located near the human headjournalBEEI
In this work, a numerical calculation was carried out in one of the universal programs for automatic electrodynamic design. The calculation is aimed at obtaining numerical values of the specific absorption rate (SAR); it is the SAR value that can be used to determine the effect of a wireless device's antenna on biological objects, and the dipole parameters were selected for GSM1800. Investigation of the influence of the distance to a cell phone on the radiation absorbed in a person's head shows that the effect of electromagnetic radiation on the brain decreases by about three times. This is a very important result: the SAR value has decreased by almost three times, which is an acceptable result.
Exact secure outage probability performance of uplinkdownlink multiple access...journalBEEI
In this paper, we study uplink-downlink non-orthogonal multiple access (NOMA) systems, considering the secrecy performance at the physical layer. In the considered system model, the base station acts as a relay to allow two users on the left side to communicate with two users on the right side. Considering imperfect channel state information (CSI), the secrecy performance needs to be studied, since an eavesdropper wants to overhear the signals processed on the downlink. To provide a secrecy performance metric, we derive exact expressions for the secrecy outage probability (SOP) and evaluate the impact of the main parameters on the SOP. The important finding is that higher secrecy performance is achieved at high signal-to-noise ratio (SNR); moreover, the numerical results demonstrate that the SOP tends to a constant at high SNR. Finally, our results show that the power allocation factors and target rates are the main factors affecting the secrecy performance of the considered uplink-downlink NOMA systems.
Design of a dual-band antenna for energy harvesting applicationjournalBEEI
This report presents an investigation of how to improve a current dual-band antenna to obtain better antenna parameters for energy harvesting applications, and develops and validates a new design operating at 2.4 GHz and 5.4 GHz. At 5.4 GHz, more data can be transmitted than at 2.4 GHz; however, 2.4 GHz has a longer radiation range, so it can be used far from the antenna module, unlike 5 GHz, which has a short radiation range. The development of this project includes designing and testing the antenna using computer simulation technology (CST) 2018 software and vector network analyzer (VNA) equipment. In the design process, fundamental antenna parameters are measured and validated in order to identify the better antenna performance.
Transforming data-centric eXtensible markup language into relational database...journalBEEI
eXtensible markup language (XML) appeared internationally as the format for data representation over the web. Yet, most organizations are still utilising relational databases as their database solutions. As such, it is crucial to provide seamless integration via effective transformation between these database infrastructures. In this paper, we propose XML-REG to bridge these two technologies based on node-based and path-based approaches. The node-based approach is good to annotate each positional node uniquely, while the path-based approach provides summarised path information to join the nodes. On top of that, a new range labelling is also proposed to annotate nodes uniquely by ensuring the structural relationships are maintained between nodes. If a new node is to be added to the document, re-labelling is not required as the new label will be assigned to the node via the new proposed labelling scheme. Experimental evaluations indicated that the performance of XML-REG exceeded XMap, XRecursive, XAncestor and Mini-XML concerning storing time, query retrieval time and scalability. This research produces a core framework for XML to relational databases (RDB) mapping, which could be adopted in various industries.
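The range-labelling idea mentioned above can be sketched with interval labels: each node receives a (start, end) pair such that the ancestor test reduces to interval containment. This is a generic illustration of range labelling, not XML-REG's actual scheme (which additionally avoids re-labelling when new nodes are inserted); the tiny document below is invented.

```python
import xml.etree.ElementTree as ET

def range_label(root):
    """Assign (start, end) interval labels in document order so that
    a is an ancestor of b iff a.start < b.start and b.end <= a.end."""
    labels, counter = {}, [0]
    def visit(node):
        start = counter[0]
        counter[0] += 1
        for child in node:
            visit(child)
        labels[node] = (start, counter[0])
    visit(root)
    return labels

doc = ET.fromstring("<a><b><c/></b><d/></a>")   # toy document
labels = range_label(doc)
a, b, d = doc, doc.find("b"), doc.find("d")
c = b.find("c")

def is_ancestor(x, y):
    """Structural ancestor test using only the interval labels."""
    return labels[x][0] < labels[y][0] and labels[y][1] <= labels[x][1]
```

Stored in a relational table, the two label columns let SQL answer ancestor/descendant queries with simple range predicates instead of recursive joins, which is the point of such labelling schemes.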
Key performance requirement of future next wireless networks (6G)journalBEEI
Given the massive potential of 5G communication networks and their foreseeable evolution, what should there be in 6G that is not in 5G or its long-term evolution? 6G communication networks are expected to integrate terrestrial, aerial, and maritime communications into a robust network which would be faster and more reliable and could support a massive number of devices with ultra-low-latency requirements. This article presents a complete overview of potential 6G communication networks. The major contribution of this study is a broad overview of the key performance indicators (KPIs) of 6G networks, covering the latest progress in the principal areas of research, applications, and challenges.
Noise resistance territorial intensity-based optical flow using inverse confi...journalBEEI
This paper presents the use of the inverse confidential technique on a bilateral function with territorial intensity-based optical flow, to prove its effectiveness in noisy environments. In general, an image's motion vector is computed by the technique called optical flow, where sequences of images are used to determine the motion vector, but the accuracy of the motion vector is reduced when the source image sequence is corrupted by noise. This work proves that the inverse confidential technique on a bilateral function can increase the accuracy of motion vector determination by territorial intensity-based optical flow in noisy environments. We performed tests with several kinds of non-Gaussian noise on several standard image sequences, analyzing the resulting motion vectors in the form of the error vector magnitude (EVM) and comparing them with several noise resistance techniques in the territorial intensity-based optical flow method.
Modeling climate phenomenon with software grids analysis and display system i...journalBEEI
This study aims to model climate change based on rainfall, air temperature, pressure, humidity and wind with grADS software and create a global warming module. This research uses 3D model, define, design, and develop. The results of the modeling of the five climate elements consist of the annual average temperature in Indonesia in 2009-2015 which is between 29oC to 30.1oC, the horizontal distribution of the annual average pressure in Indonesia in 2009-2018 is between 800 mBar to 1000 mBar, the horizontal distribution the average annual humidity in Indonesia in 2009 and 2011 ranged between 27-57, in 2012-2015, 2017 and 2018 it ranged between 30-60, during the East Monsoon, the wind circulation moved from northern Indonesia to the southern region Indonesia. During the west monsoon, the wind circulation moves from the southern part of Indonesia to the northern part of Indonesia. The global warming module for SMA/MA produced is feasible to use, this is in accordance with the value given by the validate of 69 which is in the appropriate category and the response of teachers and students through a 91% questionnaire.
An approach of re-organizing input dataset to enhance the quality of emotion ...journalBEEI
The purpose of this paper is to propose an approach of re-organizing input data to recognize emotion based on short signal segments and increase the quality of emotional recognition using physiological signals. MIT's long physiological signal set was divided into two new datasets, with shorter and overlapped segments. Three different classification methods (support vector machine, random forest, and multilayer perceptron) were implemented to identify eight emotional states based on statistical features of each segment in these two datasets. By re-organizing the input dataset, the quality of recognition results was enhanced. The random forest shows the best classification result among three implemented classification methods, with an accuracy of 97.72% for eight emotional states, on the overlapped dataset. This approach shows that, by re-organizing the input dataset, the high accuracy of recognition results can be achieved without the use of EEG and ECG signals.
Parking detection system using background subtraction and HSV color segmentationjournalBEEI
Manual system vehicle parking makes finding vacant parking lots difficult, so it has to check directly to the vacant space. If many people do parking, then the time needed for it is very much or requires many people to handle it. This research develops a real-time parking system to detect parking. The system is designed using the HSV color segmentation method in determining the background image. In addition, the detection process uses the background subtraction method. Applying these two methods requires image preprocessing using several methods such as grayscaling, blurring (low-pass filter). In addition, it is followed by a thresholding and filtering process to get the best image in the detection process. In the process, there is a determination of the ROI to determine the focus area of the object identified as empty parking. The parking detection process produces the best average accuracy of 95.76%. The minimum threshold value of 255 pixels is 0.4. This value is the best value from 33 test data in several criteria, such as the time of capture, composition and color of the vehicle, the shape of the shadow of the object’s environment, and the intensity of light. This parking detection system can be implemented in real-time to determine the position of an empty place.
Quality of service performances of video and voice transmission in universal ...journalBEEI
The universal mobile telecommunications system (UMTS) has distinct benefits in that it supports a wide range of quality of service (QoS) criteria that users require in order to fulfill their requirements. The transmission of video and audio in real-time applications places a high demand on the cellular network, therefore QoS is a major problem in these applications. The ability to provide QoS in the UMTS backbone network necessitates an active QoS mechanism in order to maintain the necessary level of convenience on UMTS networks. For UMTS networks, investigation models for end-to-end QoS, total transmitted and received data, packet loss, and throughput providing techniques are run and assessed and the simulation results are examined. According to the results, appropriate QoS adaption allows for specific voice and video transmission. Finally, by analyzing existing QoS parameters, the QoS performance of 4G/UMTS networks may be improved.
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two type of water scarcity. One is physical. The other is economic water scarcity.
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxR&R Consult
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
block diagram and signal flow graph representation
The influence of data size on a high-performance computing memetic algorithm in fingerprint dataset
Bulletin of Electrical Engineering and Informatics
Vol. 10, No. 4, August 2021, pp. 2110~2118
ISSN: 2302-9285, DOI: 10.11591/eei.v10i4.2760
Journal homepage: http://beei.org
Priati Assiroj 1, Harco Leslie Hendric Spits Warnars 2, Edi Abdurachman 3, Achmad Imam Kistijantoro 4, Antoine Doucet 5

1,2,3 Computer Science Department, BINUS Graduate Program-Doctor of Computer Science, Bina Nusantara University, Jakarta 11480, Indonesia
4 School of Electrical Engineering and Informatics, Institut Teknologi Bandung, West Java 40132, Indonesia
5 Laboratoire L3i-Université de La Rochelle, Avenue Michel Crépeau, F-17 042 La Rochelle Cedex 1, France
Article history: Received Dec 31, 2020; Revised Apr 29, 2021; Accepted Jun 1, 2021

ABSTRACT
The fingerprint is one kind of biometric. This unique biometric data has to be processed properly and securely, and the problem becomes more complicated as data grows. This work processes fingerprint image data with a memetic algorithm, a simple and reliable algorithm. To achieve the best result, we run the algorithm in a parallel environment by utilizing the multi-thread feature of the processor. We propose a high-performance computing memetic algorithm (HPCMA) to process a 7200-image fingerprint dataset, which is divided into fifteen specimens based on the characteristics of the image specification to capture the detail of each image. A combination of each specimen generates a new data variation. The algorithm runs on two operating systems, Windows 7 and Windows 10; we then measure the influence of data size on the processing time, speedup, and efficiency of HPCMA with simple linear regression. The results show that data size influences processing time by more than 90%, speedup by more than 30%, and efficiency by more than 19%.
Keywords: Biometric recognition; Fingerprint identification; High performance computing; Memetic algorithm

This is an open access article under the CC BY-SA license.
Corresponding Author:
Priati Assiroj
Computer Science Department, Binus Graduate Program-Doctor of Computer Science
Bina Nusantara University
Jl. Raya Kebon Jeruk No.27, DKI Jakarta 11480, Indonesia
Email: priati@binus.ac.id
1. INTRODUCTION
Nowadays, the growth of data and information has pushed scientists and researchers from various fields into an era in which the required computation resources and data storage capacity exceed what is available. Scientists and researchers increasingly rely on computer systems in their research. This condition drives more effort toward building systems capable of large-scale computation to process big data.
Fingerprint identification has been an interesting research topic for two decades [1]. In this work, we use a memetic algorithm that runs on a parallel system to identify fingerprints. Parallel computation is a technique that uses several computer resources simultaneously, typically because the required computation is very large, as in processing big data or other large computation processes. In this model, a complex problem is divided into smaller parts that run in a parallel environment.
Fingerprint data has high complexity, which grows with the size of the fingerprint dataset, so identification needs a very fast process. The memetic algorithm [2] is an improvement of the evolutionary algorithm with a separate local search [3]. A memetic algorithm is a simple algorithm with reliable performance [4], [5] that generates high-quality solutions to real-world problems [6]-[8].
Speed is one reason for selecting an algorithm: the faster algorithm will be chosen over the slower one [9]. To process large-scale big data in a reasonable time, we need a high-performance computing system. Effective and efficient time to simulate, compute, and process is a must, while the quality and accuracy of the generated information must be maintained. The board of management of an organization needs fast, high-quality information to make decisions about the production process and about purchasing raw materials for the coming periods.
Generating high-quality information quickly requires a system with specific hardware that supports processing large-scale data with high performance, together with a client-server application and a distributed database accessible across all computers in a local or public network.
Advances in various fields of science require computer systems with high performance in speed and computing capacity. The implication is that personal computer and supercomputer technology advances rapidly. The main obstacles to supercomputers are the costs of procurement, operation, and maintenance, and the alternative is parallel processing. A parallel system distributes a work package to be processed by all computers in the system, so the investment cost can be reduced. Note that such a system is highly flexible in adapting to changes in computer technology, and users can customize it for their purposes. To obtain a fast computation process, one only needs to upgrade the processors and RAM of each computer, without the storage media; for applications that produce a lot of data, one only needs to upgrade the storage media.
There are two ways to achieve efficient computation time in a high-performance computing (HPC) system: first, produce a high-speed processor; second, run the application in a parallel environment with multiple processors. The first way is difficult for processor manufacturers because the lithography technique is almost reaching its limit. The newest processors are made with 45 nm fabrication technology, and if the feature size is reduced further, the processor's reliability will also be reduced. Therefore, the most promising way to improve computation speed is the parallel computation technique [10].
HPC is a method to address problems of high complexity related to workload and large amounts of data [11]. One of the techniques in HPC is parallel computation [10]. A parallel processing system is a group of connected computers working together as one integrated computer system to address the same problem toward one goal [12].
2. PARALLEL COMPUTING ARCHITECTURE
Based on the instruction and data streams, computers are categorized into four groups: single instruction stream, single data stream (SISD); single instruction stream, multiple data streams (SIMD); multiple instruction streams, single data stream (MISD); and multiple instruction streams, multiple data streams (MIMD) [13]. There are several styles of parallel programming:
2.1. Single program, multiple data (SPMD)
Data and the program are distributed to each processor and execution is scheduled. Each processor executes the same program but processes different data.
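As an illustration (not code from the paper), the SPMD style can be sketched in Python with threads, which matches the paper's use of the processor's multi-thread feature; the summing task and function names are assumptions for the sketch:

```python
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    # Same program on every worker: here, sum the assigned slice of data.
    return sum(chunk)

def spmd_sum(data, workers=4):
    # SPMD: distribute different slices of the data, run identical code on each.
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(work, chunks)  # scheduled execution per worker
    return sum(partials)
```

For example, `spmd_sum(list(range(100)))` splits the list over four workers that all run `work`, then combines the partial sums.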
2.2. Master-slave
One processor acts as the master and several processors act as slaves.
2.3. Multiple program, multiple data
Data and programs are distributed to each processor, and every processor executes a different program on different data. Parallel computation systems belong to the MIMD group, which can be divided into multi-processor systems and multi-computer systems. A multi-processor system is a parallel computing system based on a single memory used simultaneously. A multi-computer system is a parallel computer system with an independent processor and RAM in every computer. In this paper, we propose high-performance computing using a memetic algorithm (HPCMA) for fingerprint identification.
3. RELATED RESEARCH
Related work includes fingerprint identification research conducted by other authors. [14] identifies fingerprints in a big data framework with a distributed model. [15] states that a memetic algorithm can improve efficiency, reduce memory consumption, and better utilize system resources. In [16], the memetic algorithm is used for feature selection in handwritten word recognition. Moscato et al. [17] explained that a memetic algorithm can outperform the compared methods and generates high-quality solutions, even though it needs more computation time. Feng et al. [18] use a memetic algorithm to produce a treatment plan faster, and [19] proposes a memetic fingerprint matching algorithm (MFMA) without local matching for fingerprint matching; the MFMA significantly reduces the generations that have to be identified [19]. To design a memetic algorithm, the considered problem is treated as a specific optimization problem [20]. Assiroj et al. [21] use the original memetic algorithm to process a fingerprint dataset, and the algorithm works properly. The algorithm can also be parallelized: Mirsoleimani et al. [22] implement a parallel version on the graphics processing unit (GPU). This technique solves task scheduling problems for several multi-processing systems, as also conducted by [23]. An island model of parallel memetic algorithms with dynamic local search was proposed in [24]-[27].
4. METHOD
In this work, we propose the high-performance computing memetic algorithm (HPCMA) method: we run the original memetic algorithm in HPC mode. Figure 1 shows the HPCMA framework. According to Figure 1, we modify the original memetic algorithm to run in HPC as a parallel condition. We use this HPCMA framework to process the fingerprint image dataset through the following steps:
Figure 1. HPCMA framework
4.1. Local search in HPC mode
This process reads all files and folders of the image dataset, which has been divided into four groups. After the reading process, the algorithm converts all the image data: first it converts each image to a string array, and then it converts the string array to binary code. When the conversion is finished, the algorithm compares the number of converted items to the total number of fingerprint images; if the numbers match, the process continues to selection, otherwise it waits until the local search process is complete.
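The conversion steps above can be sketched as follows. This is a minimal illustration only: the paper does not specify the exact string or binary representation, so reading each image as raw bytes and the helper names are assumptions.

```python
def bytes_to_string_array(data: bytes) -> list:
    # First conversion: raw image bytes to an array of one-character strings.
    return [chr(b) for b in data]

def string_array_to_binary(chars: list) -> str:
    # Second conversion: string array to binary code, 8 bits per character.
    return "".join(f"{ord(c):08b}" for c in chars)

def local_search(images: dict) -> dict:
    # Convert every image, then proceed only when the count of converted
    # items matches the total number of images (the check in section 4.1).
    converted = {name: string_array_to_binary(bytes_to_string_array(raw))
                 for name, raw in images.items()}
    assert len(converted) == len(images), "wait until local search completes"
    return converted
```

For instance, a one-byte "image" with value 5 converts to the binary string "00000101".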
4.2. Selection in HPC mode
We randomly take 2% of the population as the sample. These 140 parent candidates are divided into two groups, male and female, and the number of selected candidates is compared to the number of data selection samples. If the numbers match, the process continues to crossover; otherwise it waits until the selection process is complete.
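A sketch of the selection step follows. The 2% sampling rate and the male/female split come from the paper; the alternating split rule and the rounding are assumptions (note that 2% of 7200 gives 144, while the paper reports 140 candidates).

```python
import random

def select_parents(population, rate=0.02, seed=None):
    # Randomly sample a fraction (2% in the paper) of the population.
    rng = random.Random(seed)
    k = max(2, int(len(population) * rate))
    candidates = rng.sample(population, k)
    # Divide the candidates into two groups ("male" and "female");
    # alternating assignment is an assumed split rule.
    male, female = candidates[::2], candidates[1::2]
    # Proceed only when the candidate count matches the sample size.
    assert len(male) + len(female) == k, "wait until selection completes"
    return male, female
```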
4.3. Crossover in HPC mode
Crossover is a mating process over all parent candidates to obtain new offspring. Each member of the male population is crossed with every member of the female population. The crossover loop runs until all members of both populations are crossed; the number of crossed data is then compared to the product of the male and female counts. If the numbers match, the process continues to mutation; otherwise it waits until crossover is complete.
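The mating loop can be sketched as below. The one-point crossover operator on binary strings is an assumption, since the paper only states that every male candidate is crossed with every female candidate; the completion check mirrors the text.

```python
def one_point_crossover(a: str, b: str, point: int = None) -> str:
    # Combine the head of one parent with the tail of the other
    # (assumed operator; the paper does not name one).
    point = len(a) // 2 if point is None else point
    return a[:point] + b[point:]

def crossover_all(male, female):
    # Cross every male candidate with every female candidate.
    offspring = [one_point_crossover(m, f) for m in male for f in female]
    # Continue only when the offspring count equals len(male) * len(female).
    assert len(offspring) == len(male) * len(female), "wait until crossover completes"
    return offspring
```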
4.4. Mutation in HPC mode
This is the final process of the memetic algorithm. Mutation reverses the binary code of the offspring generated by crossover: each 1 becomes 0 and each 0 becomes 1, so we obtain the newest, highest-quality offspring. When mutation finishes, the algorithm counts the mutated data and compares it to the offspring generated by crossover; if the numbers match, the process finishes, otherwise it waits until mutation is complete. In Figure 2, the left side (MA) is the memetic algorithm in its original condition, and the right side (HPCMA) is the memetic algorithm run in HPC mode utilizing the thread feature of the processors.
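The bit-reversal mutation and its completion check map directly onto a short function; this sketch simply restates section 4.4.

```python
def mutate_all(offspring):
    # Reverse every bit of each offspring: 1 becomes 0 and 0 becomes 1.
    flip = str.maketrans("01", "10")
    mutated = [child.translate(flip) for child in offspring]
    # Finish only when the mutated count matches the offspring count.
    assert len(mutated) == len(offspring), "wait until mutation completes"
    return mutated
```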
Figure 2. Illustration of MA to HPCMA (both sides show the same pipeline: read folders and files (local search with 4 criteria); convert image files to a string array; convert the string array to binary; select parent candidates from the total population; crossover, mating the parent candidates with each other; mutation of the data from crossover)
This work uses FVC2006 with 7200 fingerprint images, categorized into four characteristics: first, full-sized images; second, 60% size with dark color boundaries; third, 60% size with bright color boundaries; and fourth, 80% size with bright color boundaries and unclear images. We then build 15 specimens from combinations of the data: specimen 1 contains 7200 fingerprint images; specimens 2 to 5 contain 1800 images each; specimens 6 to 11 contain 3600 images each; and specimens 12 to 15 contain 5400 images each.
5. RESULT AND DISCUSSION
This work uses the 7200 synthetic fingerprint dataset from FVC2006, run on a computer with an Intel i5 2540M 2.6 GHz 4-core processor, 16 GB RAM, and a 500 GB SSD as the HPCMA machine, and a computer with an Intel i5 2430M 2.4 GHz 4-core processor, 8 GB RAM, and a 250 GB SSD as the database machine. Testing begins with data mapping and thread creation on each computer with different amounts of data. With more data to process and more threads created, the mapping time also grows.
We compare tests in two operating system environments, Windows 7 and Windows 10. The data are divided into fifteen specimens, each with its own character, to view the data holistically; we then measure the size of each specimen and measure the speedup and efficiency. The experimental results for each operating system are given below. Table 1 and Table 2 list the data size, speedup, and efficiency of HPCMA for each specimen on Windows 7 and Windows 10. Figure 3 visualizes the speedup of each specimen on Windows 7, and Figure 4 the speedup of each specimen on Windows 10.
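For reference, speedup and efficiency in parallel computing are conventionally defined as S = T_serial / T_parallel and E = S / p for p processors; the paper reports speedup in milliseconds, so its exact formula may differ from this conventional sketch.

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    # Conventional speedup: serial execution time divided by parallel time.
    return t_serial / t_parallel

def efficiency(t_serial: float, t_parallel: float, processors: int) -> float:
    # Efficiency normalizes the speedup by the number of processors.
    return speedup(t_serial, t_parallel) / processors
```

For example, a job taking 100 ms serially and 25 ms on 4 cores has a speedup of 4 and an efficiency of 1.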
Table 1. Experiment result in Windows 7
Specimen  Data size  Speed up (ms)  Efficiency
1         22.8 GB    249.0038057    10.37067904
2         0.237 GB   34.55627211    2.053812416
3         4.8 GB     294.3173384    13.56162175
4         4.3 GB     269.0168797    12.45388033
5         2.4 GB     160.8715959    7.943400165
6         8.8 GB     288.9740842    12.50843364
7         7.8 GB     266.42708      11.6195177
8         4.5 GB     162.3858895    7.275164187
9         11.4 GB    301.0793305    13.35765794
10        8 GB       200.1693822    9.272002684
11        7.6 GB     191.844378     8.976055725
12        17.2 GB    306.8103217    12.95318989
13        13 GB      241.1332293    10.30241264
14        15.7 GB    249.0009356    10.99598499
15        12.2 GB    224.7810952    9.629383921
Table 2. Experiment result in Windows 10
Specimen Data size Speed Up (ms) Efficiency
1 22.8 GB 241.3684533 9.843870006
2 0.237 GB 31.73842967 1.917808586
3 4.8 GB 274.3291009 12.76242446
4 4.3 GB 237.2079802 11.17275106
5 2.4 GB 152.7474748 7.587524869
6 8.8 GB 268.1426055 11.68635096
7 7.8 GB 238.2088843 10.37998272
8 4.5 GB 156.0196491 6.972254909
9 11.4 GB 271.8752708 12.19454521
10 8 GB 187.7135075 8.90476869
11 7.6 GB 149.3226829 7.231500221
12 17.2 GB 275.2537613 11.52186279
13 13 GB 218.8803572 9.308268217
14 15.7 GB 236.1300983 10.44332603
15 12.2 GB 206.8560388 8.757247622
Figure 3. Speed up of HPCMA on Windows 7 (speed up in ms plotted against specimen number)
Figure 4. Speed up of HPCMA on Windows 10
Figure 5 visualizes the efficiency of HPCMA for each specimen on Windows 7. The efficiency of HPCMA for specimen 1 is 10.37067904 and for specimen 2 is 2.053812416. The efficiency of HPCMA for specimens 3 to 15 is also displayed in Figure 5.
Figure 5. Efficiency on Windows 7
Figure 6 visualizes the efficiency of HPCMA for each specimen on Windows 10. The efficiency of HPCMA for specimen 1 is 9.843870006 and for specimen 2 is 1.917808586. The efficiency of HPCMA for specimens 3 to 15 is also displayed in Figure 6.
Figure 6. Efficiency on Windows 10
Figure 7 visualizes the influence of data size on processing time: a bigger data size needs a longer processing time, while a smaller data size is processed faster. Specimen 1, with a 22.8 GB data size, needs 72.904 seconds, while specimen 2, with 0.237 GB, needs only 8.347 seconds. The performance of HPCMA on Windows 7 and Windows 10 is very similar; for example, HPCMA processed specimen 1 in 72.904 seconds on Windows 7 and in 80.982 seconds on Windows 10, as shown in Figure 7.
Figure 7. Processing time of each specimen
6. CONCLUSION
In the simple linear regression, the influence of data size on HPCMA's processing time in Windows 10 is 0.937, or 93.7%; that is, data size determines 93.7% of the variation in processing time, and the remaining 6.3% depends on other variables. For Windows 7, data size influences HPCMA's processing time by 95.9%, and 4.1% depends on other variables. The influence of data size on HPCMA's efficiency in Windows 10 is 0.195, or 19.5%; data size influences efficiency by only 19.5%, and 80.5% depends on other variables. For Windows 7, data size influences efficiency by 19.3%, and 80.7% depends on other variables. The influence of data size on HPCMA's speedup on Windows 7 is 0.286, or 28.6%; data size influences speedup by only 28.6%, and 71.4% depends on other variables. For Windows 10, data size influences speedup by 31.7%, and 68.3% depends on other variables. In summary, data size strongly influences HPCMA's processing time on Windows 7 and Windows 10 (about 90%), influences speedup by about 30%, and has only a small influence on efficiency on either Windows 7 or Windows 10.
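The influence percentages above are coefficients of determination (R²) from simple linear regression of each metric on data size; a minimal pure-Python sketch, using illustrative numbers rather than the paper's data:

```python
def r_squared(x, y):
    # Fit y = a + b*x by least squares and return R^2, the share of the
    # variance in y that is explained by x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                 # slope
    a = my - b * mx               # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot
```

A perfectly linear relationship yields R² = 1, while an R² of 0.937 means 93.7% of the variation is explained by data size.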
ACKNOWLEDGEMENTS
This work is supported by the Research and Technology Transfer Office, Bina Nusantara University, as part of Bina Nusantara University's International Research Grant entitled MEMETIC ALGORITHM IN HIGH-PERFORMANCE COMPUTATION, contract number No.026/VR.RTT/IV/2020, contract date 6 April 2020.
REFERENCES
[1] A. K. Jain and J. Feng, "Latent Fingerprint Matching," in IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 33, no. 1, pp. 88-100, Jan. 2011, doi: 10.1109/TPAMI.2010.59.
[2] P. Moscato, “Memetic Algorithms: A Short Introduction,” New ideas in optimization, pp. 219-234, 1999.
[3] J. Lin and Y. Chen, "Analysis on the Collaboration Between Global Search and Local Search in Memetic
Computation," in IEEE Transactions on Evolutionary Computation, vol. 15, no. 5, pp. 608-623, Oct. 2011, doi:
10.1109/TEVC.2011.2150754.
[4] P. Merz and B. Freisleben, “Fitness Landscapes and Memetic Algorithm Design,” Electrical Engineering, pp. 1-19,
1999.
[5] Yew-Soon Ong, Meng-Hiot Lim, Ning Zhu and Kok-Wai Wong, "Classification of adaptive memetic algorithms: a
comparative study," in IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 36, no. 1,
pp. 141-152, Feb. 2006, doi: 10.1109/TSMCB.2005.856143.
[6] A. Caponio, G. L. Cascella, F. Neri, N. Salvatore and M. Sumner, "A Fast Adaptive Memetic Algorithm for Online
and Offline Control Design of PMSM Drives," in IEEE Transactions on Systems, Man, and Cybernetics, Part B
(Cybernetics), vol. 37, no. 1, pp. 28-41, Feb. 2007, doi: 10.1109/TSMCB.2006.883271.
Bulletin of Electr Eng & Inf ISSN: 2302-9285
The influence of data size on a high-performance computing … (Priati Assiroj)
[7] M. Gong, Z. Peng, L. Ma and J. Huang, "Global Biological Network Alignment by Using Efficient Memetic
Algorithm," in IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 13, no. 6, pp. 1117-
1129, 1 November 2016, doi: 10.1109/TCBB.2015.2511741.
[8] M. Urselmann, S. Barkmann, G. Sand and S. Engell, "A Memetic Algorithm for Global Optimization in Chemical
Process Synthesis Problems," in IEEE Transactions on Evolutionary Computation, vol. 15, no. 5, pp. 659-683, Oct.
2011, doi: 10.1109/TEVC.2011.2150753.
[9] V. Pachori, G. Ansari, and N. Chaudhary, “Improved performance of advance encryption standard using parallel
computing,” International Journal of Engineering Research and Applications, vol. 2, no. 1, pp. 967–971, 2012.
[10] P. Assiroj, A. L. Hananto, A. Fauzi and H. L. Hendric Spits Warnars, "High Performance Computing (HPC)
Implementation: A Survey," 2018 Indonesian Association for Pattern Recognition International Conference
(INAPR), 2018, pp. 213-217, doi: 10.1109/INAPR.2018.8627040.
[11] M. Abd Rahman and A. Mamat, “A Study of Image Processing in Agriculture Application under High Performance
Computing Environment,” International Journal of Computer Science and Telecommunications, vol. 3, no. 8, pp.
16-24, 2012.
[12] P. Assiroj et al., “The Form of High-Performance Computing: A Survey,” IOP Conference Series: Materials
Science and Engineering, vol. 662, no. 5, p. 052002, 2019.
[13] J. L. Hennessy and D. A. Patterson, "Computer Architecture: A Quantitative Approach," Fourth Edition, 2006.
[14] D. Peralta, I. Triguero, R. Sanchez-Reillo, F. Herrera, and J. M. Benitez, “Fast fingerprint identification for large
databases,” Pattern Recognition, vol. 47, no. 2, pp. 588-602, 2014, doi: 10.1016/j.patcog.2013.08.002.
[15] R. Welekar and N. V Thakur, "An Enhanced Approach to Memetic Algorithm Used for Character Recognition,"
Springer Singapore, vol. 768, pp. 593-602, 2019, doi: 10.1007/978-981-13-0617-4_57.
[16] M. Ghosh, S. Malakar, S. Bhowmik, R. Sarkar, and M. Nasipuri, “Memetic Algorithm Based Feature Selection for
Handwritten City Name Recognition,” Springer, vol. 775, pp. 599-613, 2017, doi: 10.1007/978-981-10-6430-2_47.
[17] P. Moscato, A. Mendes, and R. Berretta, “Benchmarking a memetic algorithm for ordering microarray data,”
BioSystems, vol. 88, no. 1-2, pp. 56-75, 2007, doi: 10.1016/j.biosystems.2006.04.005.
[18] L. Feng, A. H. Tan, M. H. Lim, and S. W. Jiang, “Band selection for hyperspectral images using probabilistic
memetic algorithm,” Soft Computing, vol. 20, no. 12, pp. 4685-4693, 2016, doi: 10.1007/s00500-014-1508-1.
[19] W. Sheng, G. Howells, M. Fairhurst, and F. Deravi, “A memetic fingerprint matching algorithm,” IEEE
Transactions on Information Forensics and Security, vol. 2, no. 3, pp. 402-412, Sept. 2007, doi: 10.1109/TIFS.2007.902681.
[20] W. Sheng, G. Howells, M. Fairhurst and F. Deravi, "A Memetic Fingerprint Matching Algorithm," in IEEE
Transactions on Information Forensics and Security, vol. 2, no. 3, pp. 402-412, Sept. 2007, doi:
10.1109/TIFS.2007.902681.
[21] P. Assiroj, H. L. H. S. Warnars, E. Abdurrachman, A. I. Kistijantoro, and A. Doucet, “Measuring memetic
algorithm performance on image fingerprints dataset,” Telkomnika (Telecommunication Computing Electronics and
Control), vol. 19, no. 1, pp. 96-104, 2021, doi: 10.12928/telkomnika.v19i1.16418.
[22] S. A. Mirsoleimani, A. Karami, and F. Khunjush, “A parallel memetic algorithm on GPU to solve the task
scheduling problem in heterogeneous environments,” GECCO 2013 - Proceedings of the 2013 Genetic and
Evolutionary Computation Conference, 2013, pp. 1181–1188, doi: 10.1145/2463372.2463518.
[23] R. Cheng and M. Gen, "Parallel machine scheduling problems using memetic algorithms," 1996 IEEE International
Conference on Systems, Man and Cybernetics. Information Intelligence and Systems (Cat. No.96CH35929), 1996,
pp. 2665-2670 vol.4, doi: 10.1109/ICSMC.1996.561355.
[24] J. Tang, M. H. Lim, and Y. S. Ong, “Adaptation for parallel memetic algorithm based on population entropy,”
GECCO 2006 - Genetic and Evolutionary Computation Conference, vol. 1, pp. 575-582, 2006, doi:
10.1145/1143997.1144100.
[25] M. Blocho and Z. J. Czech, "A Parallel Memetic Algorithm for the Vehicle Routing Problem with Time Windows,"
2013 Eighth International Conference on P2P, Parallel, Grid, Cloud and Internet Computing, 2013, pp. 144-151,
doi: 10.1109/3PGCIC.2013.28.
[26] A. Mendes, C. Cotta, V. Garcia, P. Franca and P. Moscato, "Gene ordering in microarray data using parallel
memetic algorithms," 2005 International Conference on Parallel Processing Workshops (ICPPW'05), 2005, pp.
604-611, doi: 10.1109/ICPPW.2005.34.
[27] E. Armstrong, G. Grewal, S. Areibi and G. Darlington, "An investigation of parallel memetic algorithms for VLSI
circuit partitioning on multi-core computers," CCECE 2010, 2010, pp. 1-6, doi: 10.1109/CCECE.2010.5575207.
BIOGRAPHIES OF AUTHORS
Priati Assiroj was born in Cirebon, Jawa Barat, Indonesia. She holds a Bachelor’s and a Master’s
degree in Computer Science. She received her Bachelor’s degree from STMIK Bani Saleh, Bekasi, in
2011 and her Master’s degree from STMIK LIKMI, Bandung, Indonesia, in 2016. From 2014 to 2016
she was a lecturer at Universitas Singaperbangsa Karawang, Indonesia, and from 2016 to 2019 she
was a lecturer in the Information Systems Department at Universitas Buana Perjuangan Karawang.
Since January 2019 she has been a lecturer at Politeknik Imigrasi, Ministry of Law and Human
Rights, Republic of Indonesia. She has been a doctoral student in Computer Science since March
2018 at the Bina Nusantara Graduate Program, Doctor of Computer Science, Bina Nusantara
University, Jakarta, Indonesia. Her research fields are data mining, high-performance
computing, and evolutionary algorithms.
Bulletin of Electr Eng & Inf, Vol. 10, No. 4, August 2021: 2110 – 2118
Harco Leslie Hendric Spits Warnars received a Ph.D. degree in Computer Science from
Manchester Metropolitan University. Since September 2015 he has been Head of the Information
Systems concentration in the Doctor of Computer Science department, Bina Nusantara University,
and works on research projects with his doctoral Computer Science students in areas such
as games and artificial intelligence, including data mining, machine learning, and decision
support system applications such as DSS, BI, dashboards, and data warehouses.
Edi Abdurrachman received a B.Sc. and a Master of Statistics in Applied Statistics from Bogor
Agricultural University, then received an M.Sc. and a Ph.D. in survey statistics and statistics from
Iowa State University, USA. He is currently a professor and dean of the Binus Graduate
Program, Doctor of Computer Science, Bina Nusantara University, Jakarta. His research
interests include statistics, survey statistics, applied statistics, and management information
systems. Mr. Abdurrachman’s awards and honors include the Mu Sigma Rho Society
(1985) and Best Lecturer, Binus University (2012). He is also a member of the American
Statistical Association, the International Association of Engineers (IAENG), and Gamma Sigma Beta,
and is a Vice President of the Asian Federation for Information Technology in Agriculture.
From 1980 to 2015 he was active in the Ministry of Agriculture in many directorial positions. He is
also active as a public speaker at national and international seminars.
Achmad I. Kistijantoro received the B.Eng. degree in informatics from the Institute of
Technology Bandung (ITB), Bandung, Indonesia, the master’s degree from TU Delft, Delft,
The Netherlands, and the Ph.D. degree from the University of Newcastle upon Tyne,
Newcastle upon Tyne, U.K. His current research interests include distributed systems,
parallel computation, and high-performance computation.
Antoine Doucet has been a Full Professor in computer science at the L3i laboratory of the University
of La Rochelle since 2014. He leads the research group in document analysis, digital contents,
and images (about 40 people) and is additionally the director of the ICT department of the
Vietnam-France University of Science and Technology of Hanoi. He is also the
principal investigator of the H2020 project NewsEye, running until 2021 and focusing on
augmenting access to historical newspapers across domains and languages. He further leads
the effort on semantic enrichment for low-resourced languages in the context of the H2020
project Embeddia. His main research interests lie in the fields of information retrieval, natural
language processing, and (text) data mining. The central focus of his work is on the
development of methods that scale to very large document collections and that do not require
prior knowledge of the data, hence are robust to noise (e.g., stemming from OCR) and
language-independent. Antoine Doucet has held a Ph.D. in computer science from the University
of Helsinki (Finland) since 2005, and a French research supervision habilitation (HDR) since
2012.