Artificial intelligence (AI) and machine learning (ML) now influence almost every part of daily life. Among the many AI and ML algorithms, the support vector machine (SVM) has become one of the most widely used for data mining, prediction, and related tasks across many domains. The SVM's performance depends heavily on the kernel function (KF); nonetheless, there is no universally accepted criterion for selecting the optimal KF for a specific domain. In this paper, we empirically investigate the effect of different KFs on SVM performance in various fields, and illustrate it through extensive experimental results. Our empirical results show that no single KF always achieves the highest accuracy and generalisation in all domains, although the Gaussian radial basis function (RBF) kernel is often the default choice. Moreover, when their KF parameters are optimised, the RBF and exponential RBF kernels outperform SVMs based on the linear and sigmoid KFs in terms of accuracy, while the linear KF remains the better fit for linearly separable datasets.
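The kind of kernel comparison the paper performs can be sketched with scikit-learn (illustrative only: synthetic data and default hyperparameters, rather than the tuned KF parameters the paper optimises):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for a domain dataset.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scores = {}
for kernel in ("linear", "rbf", "sigmoid", "poly"):
    clf = SVC(kernel=kernel, gamma="scale").fit(X_train, y_train)
    scores[kernel] = clf.score(X_test, y_test)
    print(kernel, round(scores[kernel], 3))
```

Running the same loop inside a grid search over `C` and `gamma` would mirror the paper's point that the RBF-family kernels only dominate once their parameters are optimised.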
IRJET-An Effective Strategy for Defense & Medical Pictures Security by Singul...IRJET Journal
The document proposes an effective strategy for encrypting medical and defense images using singular value decomposition (SVD) and fractional Fourier transform (FrFT). It discusses generating shares of an image by applying FrFT to the SVD components, specifically the S matrix of singular values. Multiple shares are created using a sharing matrix. The strategy aims to securely transmit images over unreliable networks. Quantitative analysis shows the approach encrypts images quickly, with high sensitivity to changes and resistance to differential attacks, making it effective for encrypting sensitive images.
RTL Implementation of image compression techniques in WSNIJECEIAES
Wireless sensor networks suffer from data redundancy, limited power, and high bandwidth demands when used for multimedia data; image compression methods mitigate these problems. The Non-negative Matrix Factorization (NMF) method is useful for approximating high-dimensional data with non-negative components, and its variant, Projective Non-negative Matrix Factorization (PNMF), is used for learning spatially localized visual patterns. Simulation results compare the SVD, NMF, and PNMF compression schemes. Compressed images are transmitted from ordinary nodes to the cluster head node and on to the base station, which performs the image restoration. Image quality, compression ratio, signal-to-noise ratio, and energy consumption are the essential metrics for compression performance. In this paper, the compression methods are designed in Matlab, and parameters such as PSNR and total node energy consumption are calculated. RTL schematics of the NMF, SVD, and PNMF methods are generated using Verilog HDL.
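The SVD compression scheme in the comparison can be sketched in numpy (illustrative: a random matrix stands in for a greyscale sensor image, and the retained rank k = 16 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))           # stand-in for a greyscale sensor image

# Truncated SVD: keep only the k largest singular values.
U, s, Vt = np.linalg.svd(img, full_matrices=False)
k = 16
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

mse = np.mean((img - approx) ** 2)
psnr = 10 * np.log10(1.0 / mse)      # peak pixel value is 1.0 here
# Storage: k columns of U, k rows of Vt, and k singular values.
ratio = img.size / (k * (img.shape[0] + img.shape[1] + 1))
print(f"rank-{k}: PSNR {psnr:.1f} dB, compression ratio {ratio:.1f}x")
```

The NMF and PNMF schemes factor the same matrix under a non-negativity constraint instead, which is what makes them suited to the localized visual patterns the abstract mentions.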
In recent years, deep learning has had a profound impact on machine learning and artificial intelligence. At the same time, algorithms for quantum computers have been shown to efficiently solve some problems that are intractable on conventional, classical computers. We show that quantum computing not only reduces the time required to train a deep restricted Boltzmann machine, but also provides a richer and more comprehensive framework for deep learning than classical computing and leads to significant improvements in the optimization of the underlying objective function. Our quantum methods also permit efficient training of full Boltzmann machines and multilayer, fully connected models and do not have well known classical counterparts.
IRJET- Optimization of Semantic Image Retargeting by using Guided Fusion NetworkIRJET Journal
This document presents a method for optimizing semantic image retargeting using a guided fusion network. It begins by discussing existing image retargeting methods and their limitations in preserving semantic meaning. The proposed method extracts three semantic components (foreground, action context, background) from images using deep learning modules. A classification guided fusion network is used to integrate the semantic component maps and generate a "semantic collage" importance map. This importance map is fed into an image carrier to generate a retargeted target image that better preserves semantic information from the original image. The method is evaluated on benchmark datasets and shows improved performance over existing baselines in preserving semantics during image retargeting.
A FUZZY INTERACTIVE BI-OBJECTIVE MODEL FOR SVM TO IDENTIFY THE BEST COMPROMIS...ijfls
A support vector machine (SVM) learns a decision surface separating two classes of input points. In several applications, some input points are misclassified or not fully allocated to either of the two classes. In this paper, a bi-objective quadratic programming model with fuzzy parameters is utilized, and different feature quality measures are optimized simultaneously. An α-cut is defined to transform the fuzzy model into a family of classical bi-objective quadratic programming problems, each of which is optimized with the weighting method. A major contribution of the proposed fuzzy bi-objective quadratic programming model is that different effective support vectors are obtained as the weighting values change. The experimental results show the effectiveness of the α-cut with the weighting parameters in reducing the misclassification between the two classes of input points. An interactive procedure identifies the best compromise solution from the generated efficient solutions. The main contribution of this paper includes constructing a utility function for measuring the degree of infection with coronavirus disease (COVID-19).
A Novel Algorithm for Watermarking and Image Encryption cscpconf
Digital watermarking is a method of copyright protection for audio, images, video, and text. We propose a new robust watermarking technique based on the contourlet transform and singular value decomposition. The paper also proposes a novel encryption algorithm for storing a signed double matrix as an RGB image. The entropy of the watermarked image and the correlation coefficient of the extracted watermark image are very close to their ideal values, confirming the correctness of the proposed algorithm. Experimental results also show the scheme's resilience against strong blurring attacks such as mean and Gaussian filtering, linear filtering (high-pass and low-pass), non-linear filtering (median), addition of a constant offset to the pixel values, and local exchange of pixels, demonstrating the security, effectiveness, and robustness of the proposed watermarking algorithm.
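A much-simplified sketch of the SVD side of such a scheme follows (singular-value perturbation only, omitting the contourlet transform and the RGB encryption step; `alpha` is an assumed embedding-strength parameter, not a value from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
host = rng.random((64, 64))        # host image (random stand-in)
mark = rng.random((64, 64))        # watermark image (random stand-in)
alpha = 0.05                       # assumed embedding strength

# Embed: perturb the host's singular values with the watermark's.
U, s, Vt = np.linalg.svd(host)
sw = np.linalg.svd(mark, compute_uv=False)
watermarked = U @ np.diag(s + alpha * sw) @ Vt

# Extract: recover the watermark's singular values from the difference.
sw_hat = (np.linalg.svd(watermarked, compute_uv=False) - s) / alpha
print("max recovery error:", np.abs(sw_hat - sw).max())
```

Because the singular values carry most of the image energy, small perturbations of them survive many filtering attacks, which is the intuition behind SVD-based robustness claims like those above.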
Image Segmentation Using Two Weighted Variable Fuzzy K MeansEditor IJCATR
Image segmentation is the first step in image analysis and pattern recognition: the process of dividing an image into regions such that each region is homogeneous. An accurate and effective segmentation algorithm is useful in many fields, especially medical imaging. This paper presents a new approach to image segmentation that applies the k-means algorithm with two-level variable weighting. Clustering algorithms are popular in image segmentation because they are intuitive and easy to implement; k-means and fuzzy k-means are among the most widely used algorithms in the literature, and many authors benchmark new proposals against them. This paper proposes a new clustering algorithm called TW-fuzzy k-means, an automated two-level variable-weighting clustering algorithm for segmenting objects, in which a variable weight is also assigned to each variable on the current partition of the data. It can be applied to general images as well as specific ones (e.g., medical and microscopic images). The proposed TW-fuzzy k-means algorithm provides better segmentation performance for various types of images: based on the results obtained, it gives better visual quality than several other clustering methods.
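The baseline pixel-clustering step that TW-fuzzy k-means extends can be sketched with scikit-learn (illustrative: plain k-means on a synthetic two-region image, without the paper's two-level variable weighting):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic greyscale image: dark background with a bright central square.
img = rng.normal(0.2, 0.05, (32, 32))
img[8:24, 8:24] = rng.normal(0.8, 0.05, (16, 16))

# Cluster pixels by intensity; each cluster label becomes a region.
pixels = img.reshape(-1, 1)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
seg = labels.reshape(img.shape)

# Background corner and bright centre should land in different clusters.
print(seg[0, 0], seg[16, 16])
```

The paper's contribution sits on top of this loop: weights per variable (and per variable group) are updated on each partition, so informative features dominate the distance computation.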
Fuzzy clustering Approach in segmentation of T1-T2 brain MRIIDES Editor
Segmentation is a difficult and challenging problem in magnetic resonance images, and it is considered important in computer vision and artificial intelligence. Many researchers have applied various techniques; however, fuzzy c-means (FCM) based algorithms are more effective than other methods. In this paper, we present a novel FCM algorithm for weighted bias (also called intensity inhomogeneity) estimation and segmentation of MRI. Normally, intensity inhomogeneities are attributed to imperfections in the radio-frequency coils or to problems with image acquisition. Our algorithm is formulated by modifying the objective function of the standard FCM, and it has the advantage that it can be applied at an early stage in automated data analysis. The paper further proposes a centre-knowledge method to reduce the running time of the proposed algorithm. The proposed method deals effectively with intensity inhomogeneities and image noise. We have compared our results with other reported methods: results on real MRI data show that our method outperforms standard FCM-based algorithms and other modified FCM-based techniques.
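The standard FCM update steps that the paper's modified objective builds on can be sketched in numpy (an illustrative implementation on synthetic 1-D data, not the authors' bias-corrected variant; `fuzzy_c_means` is a hypothetical helper name):

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50, seed=0):
    """Standard FCM on 1-D data: alternate membership and centre updates."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)       # memberships sum to 1 per point
    for _ in range(iters):
        # Centre update: fuzzily weighted means of the data.
        centres = (u ** m).T @ x / (u ** m).sum(axis=0)
        # Membership update: inverse-distance weighting, normalised.
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
    return centres, u

x = np.concatenate([np.full(50, 0.1), np.full(50, 0.9)])
centres, u = fuzzy_c_means(x)
print(np.sort(centres))
```

The paper's algorithm adds a bias-field term to the objective being minimised here, so the same alternating updates also estimate the intensity inhomogeneity.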
This paper presents a study of the efficiency and performance speedup achieved by applying Graphics Processing Units to face recognition solutions. We explore one possibility for parallelizing and optimizing a well-known face recognition algorithm, Principal Component Analysis (PCA) with Eigenfaces. In recent years, the Graphics Processing Unit (GPU) has been the subject of extensive research, and GPU computation speed has been increasing rapidly.
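The PCA-with-Eigenfaces pipeline the study parallelizes can be sketched serially in a few numpy lines (illustrative, using random stand-in data rather than a face dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((20, 32 * 32))            # 20 flattened 32x32 "faces"

# Eigenfaces = principal components of the mean-centred image matrix.
mean_face = faces.mean(axis=0)
centred = faces - mean_face
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
k = 5
eigenfaces = Vt[:k]                          # top-k components, (5, 1024)

# Project a probe image into eigenface space, then reconstruct it.
probe = faces[0]
weights = eigenfaces @ (probe - mean_face)
recon = mean_face + weights @ eigenfaces
print("reconstruction error:", np.linalg.norm(probe - recon))
```

Recognition then compares probe weights against stored weight vectors; the matrix products above are exactly the operations a GPU implementation would batch.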
RunPool: A Dynamic Pooling Layer for Convolution Neural NetworkPutra Wanda
Deep learning (DL) has achieved significant performance in computer vision problems, mainly in automatic feature extraction and representation. However, it is not easy to determine the best pooling method for a given case study: the pooling type that works best in image-processing cases may not be optimal for other tasks, and in a dynamic neural network architecture it is not practically possible to find one proper pooling technique for all layers. This is the primary reason a fixed pooling choice cannot be applied to dynamic, multidimensional datasets. To address these limitations, an optimal pooling method is needed as a better option than max pooling and average pooling. We therefore introduce a dynamic pooling layer called RunPool for training convolutional neural network (CNN) architectures. RunPool pooling is proposed to regularize the neural network by replacing the deterministic pooling functions. In the final section, we test the proposed pooling layer on classification problems with an online social network (OSN) dataset.
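For contrast with the proposed stochastic layer, the two deterministic pooling functions RunPool is meant to replace can be sketched in numpy (illustrative; `pool2d` is a hypothetical helper name):

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    """Non-overlapping 2-D pooling over an (H, W) feature map."""
    h, w = x.shape
    blocks = x[: h - h % size, : w - w % size]
    blocks = blocks.reshape(h // size, size, w // size, size)
    # Reduce over each size x size block.
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.mean(axis=(1, 3))

fm = np.arange(16, dtype=float).reshape(4, 4)
print(pool2d(fm, mode="max"))   # [[ 5.  7.] [13. 15.]]
print(pool2d(fm, mode="avg"))   # [[ 2.5  4.5] [10.5 12.5]]
```

A RunPool-style layer would instead sample which value each block contributes, making the pooled output stochastic during training, analogous to dropout's regularizing effect.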
This document discusses parallel processing and compound image compression techniques. It examines the computational complexity and quantitative optimization of various image compression algorithms like BTC, DCT, DWT, DTCWT, SPIHT and EZW. The performance is evaluated in terms of coding efficiency, memory usage, image quality and quantity. Block Truncation Coding and Discrete Cosine Transform compression methods are described in more detail.
Improved Parallel Algorithm for Time Series Based Forecasting Using OTIS-MeshIDES Editor
Forecasting always plays an important role in business, technology, and many other areas, helping organizations increase profits, reduce lost sales, and plan production more efficiently. A parallel algorithm for forecasting on OTIS-Mesh was reported recently [9]; it requires 5(√n − 1) electronic steps and 4 optical steps. In this paper we present an improved parallel algorithm for time-series short-term forecasting using OTIS-Mesh. Using the same number of I/O ports as in [9], this parallel algorithm requires 5(√n − 1) electronic steps and 1 optical step, and is shown to be an improvement over the parallel algorithm for time-series forecasting using OTIS-Mesh [9].
Improved anti-noise attack ability of image encryption algorithm using de-noi...TELKOMNIKA JOURNAL
Information security is considered one of the important issues of the information age, used to protect secret information during transmission in practical applications. Many image-encryption schemes related to information security have been applied; such approaches can be categorized into two domains, frequency and spatial. The presented work develops an encryption technique based on a conventional watermarking system that combines singular value decomposition (SVD), the discrete cosine transform (DCT), and the discrete wavelet transform (DWT). The suggested DWT-DCT-SVD method is more robust than other conventional approaches, and a DWT-based de-noising step further improves robustness against Gaussian noise attacks. MSE and peak signal-to-noise ratio (PSNR) are the performance measures underlying this study's results, which show that the algorithm is highly robust against Gaussian noise attacks.
Stochastic Computing Correlation Utilization in Convolutional Neural Network ...TELKOMNIKA JOURNAL
In recent years, many applications have been implemented in embedded systems and mobile Internet of Things (IoT) devices that typically have constrained resources, smaller power budget, and exhibit "smartness" or intelligence. To implement computation-intensive and resource-hungry Convolutional Neural Network (CNN) in this class of devices, many research groups have developed specialized parallel accelerators using Graphical Processing Units (GPU), Field-Programmable Gate Arrays (FPGA), or Application-Specific Integrated Circuits (ASIC). An alternative computing paradigm called Stochastic Computing (SC) can implement CNN with low hardware footprint and power consumption. To enable building more efficient SC CNN, this work incorporates the CNN basic functions in SC that exploit correlation, share Random Number Generators (RNG), and is more robust to rounding error. Experimental results show our proposed solution provides significant savings in hardware footprint and increased accuracy for the SC CNN basic functions circuits compared to previous work.
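The bitstream arithmetic underlying SC, and the correlation effect such circuits exploit, can be sketched in a few lines (an illustrative numpy simulation of stochastic bitstreams, not a hardware model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                        # bitstream length; precision grows with n

def stream(p):
    """Encode a probability p in [0, 1] as a random Bernoulli bitstream."""
    return rng.random(n) < p

a, b = 0.6, 0.5
# Independent streams: a bitwise AND gate multiplies the encoded values.
independent = (stream(a) & stream(b)).mean()

# Fully correlated streams (shared random source): AND yields min(a, b)
# instead, which is the kind of correlation effect SC designs can exploit.
shared = rng.random(n)
correlated = ((shared < a) & (shared < b)).mean()

print(f"independent AND ~ {independent:.3f} (a*b = 0.30)")
print(f"correlated  AND ~ {correlated:.3f} (min(a,b) = 0.50)")
```

A single AND gate thus implements either a multiplier or a min function depending on stream correlation, which is why sharing RNGs, as the paper does, changes both hardware cost and function.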
Mining at scale with latent factor models for matrix completionFabio Petroni, PhD
PhD Thesis
F. Petroni:
"Mining at scale with latent factor models for matrix completion."
Sapienza University of Rome, 2016.
Abstract: "Predicting which relationships are likely to occur between real-world objects is a key task for several applications. For instance, recommender systems aim to predict unknown relationships between users and items, and exploit this information to provide personalized suggestions for items likely to be of use to a specific user. Matrix completion techniques aim to solve this task by identifying and leveraging the latent factors that triggered the creation of known relationships in order to infer missing ones.
This problem, however, is made challenging by the size of today's datasets. One way to handle such large-scale data in a reasonable amount of time is to distribute the matrix completion procedure over a cluster of commodity machines. However, current approaches lack efficiency and scalability since, for instance, they do not minimize communication or ensure a balanced workload in the cluster.
A further aspect of matrix completion techniques we investigate is how to improve their prediction performance. This can be done, for instance, by considering the context in which relationships were captured. However, incorporating generic contextual information into a matrix completion algorithm is a challenging task.
In the first part of this thesis, we study distributed matrix completion solutions and address the above issues by examining input-slicing techniques based on graph partitioning algorithms. In the second part of the thesis, we focus on context-aware matrix completion techniques, providing solutions that work both (i) when the revealed entries in the matrix have multiple values and (ii) when they all have the same value."
Message Sequence Chart (MSC) is a graphical trace language for describing the communication behaviour of distributed systems. To build a Petri net model of a complex system, this paper introduces a strategy for constructing the Petri net model from an MSC and gives an algorithm for translating the MSC model into a Petri net model. By translating the MSC model we obtain the Petri net model and can improve it; analysis of the Petri net model proves it to be safe and reliable.
F. Petroni, L. Querzoni, R. Beraldi, M. Paolucci:
"LCBM: Statistics-Based Parallel Collaborative Filtering."
In: Proceedings of the 17th International Conference on Business Information Systems (BIS), 2014.
Abstract: "In the last ten years, recommendation systems evolved from novelties into powerful business tools, deeply changing the internet industry. Collaborative Filtering (CF) is today a widely adopted strategy for building recommendation engines. The most advanced CF techniques (i.e., those based on matrix factorization) provide high-quality results, but may incur prohibitive computational costs when applied to very large data sets. In this paper we present Linear Classifier of Beta distributions Means (LCBM), a novel collaborative filtering algorithm for binary ratings that is (i) inherently parallelizable and (ii) provides results whose quality is on par with state-of-the-art solutions (iii) at a fraction of the computational cost."
11 construction productivity and cost estimation using artificial Vivan17
This chapter discusses using artificial neural networks (ANNs) to estimate construction project productivity and costs. ANNs can learn from previous examples to predict outputs like cost and schedule based on input data. The chapter provides an overview of ANNs and examples of their use in construction cost and duration estimation. It then presents a framework for developing ANNs for productivity and cost predictions, and provides a detailed case study applying ANNs to estimate productivity of precast installation activities. The case study ANN was able to predict installation times with an average error of around 20%, demonstrating the potential of ANNs for aiding construction cost and schedule estimates.
1) The document proposes GASGD, a distributed asynchronous stochastic gradient descent algorithm for matrix completion via graph partitioning.
2) GASGD aims to efficiently exploit computer cluster architectures by mitigating load imbalance, minimizing communication, and tuning synchronization frequency among computing nodes.
3) The key contributions of GASGD are a graph partitioning-based input slicing solution for load balancing, reducing shared data usage by leveraging characteristics of bipartite power-law datasets, and introducing a synchronization parameter to trade off communication cost and convergence rate.
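The sequential SGD core that GASGD distributes can be sketched in numpy (illustrative, on synthetic low-rank data; the learning rate, regularisation, and epoch count are arbitrary choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, rank = 30, 40, 3

# Ground-truth low-rank matrix; observe a random 50% of its entries.
P_true = rng.normal(size=(n_users, rank))
Q_true = rng.normal(size=(n_items, rank))
R = P_true @ Q_true.T
mask = rng.random(R.shape) < 0.5
obs = [(i, j, R[i, j]) for i, j in zip(*np.nonzero(mask))]

# Plain SGD over observed entries; GASGD partitions `obs` across nodes
# and periodically synchronises the shared factor rows.
P = rng.normal(scale=0.1, size=(n_users, rank))
Q = rng.normal(scale=0.1, size=(n_items, rank))
lr, reg = 0.02, 0.01
for epoch in range(60):
    rng.shuffle(obs)
    for i, j, r in obs:
        err = r - P[i] @ Q[j]
        P[i] += lr * (err * Q[j] - reg * P[i])
        Q[j] += lr * (err * P[i] - reg * Q[j])

rmse = np.sqrt(np.mean([(r - P[i] @ Q[j]) ** 2 for i, j, r in obs]))
print(f"training RMSE: {rmse:.3f}")
```

Each update touches one row of P and one of Q, so slicing the observation graph well (the paper's partitioning contribution) directly determines how little the workers must communicate.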
An approach for color image compression of BMP and TIFF images using DCT and DWTIAEME Publication
This document summarizes a research paper that compares image compression using the Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT). It finds that DWT performs better than DCT in terms of Mean Square Error and Peak Signal to Noise Ratio. The paper analyzes compression of BMP and TIFF color images using DCT and DWT. It converts color images to grayscale, then compresses the grayscale images using DWT. DWT decomposes images into different frequency components and scales, allowing for better image compression compared to DCT.
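The DCT side of the comparison can be sketched with SciPy (assuming `scipy` is available; the 8x8 low-frequency cut-off is an arbitrary illustration, and the DWT side would need a wavelet library in addition):

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
img = rng.random((32, 32))                  # stand-in for a grayscale image

# DCT-based compression: keep only the top-left (low-frequency) block.
coeffs = dctn(img, norm="ortho")
kept = np.zeros_like(coeffs)
kept[:8, :8] = coeffs[:8, :8]               # retain 1/16 of the coefficients
recon = idctn(kept, norm="ortho")

mse = np.mean((img - recon) ** 2)
psnr = 10 * np.log10(1.0 / mse)             # peak pixel value is 1.0
print(f"PSNR keeping 8x8 of 32x32 DCT coefficients: {psnr:.1f} dB")
```

The DWT version replaces the single global transform with a multi-scale decomposition, which is what lets it allocate coefficients to both coarse structure and local detail, the basis of the paper's PSNR advantage.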
F. Petroni, L. Querzoni, K. Daudjee, S. Kamali and G. Iacoboni:
"HDRF: Stream-Based Partitioning for Power-Law Graphs."
In: Proceedings of the 24th ACM International Conference on Information and Knowledge Management (CIKM), 2015.
Abstract: "Balanced graph partitioning is a fundamental problem that is receiving growing attention with the emergence of distributed graph-computing (DGC) frameworks. In these frameworks, the partitioning strategy plays an important role since it drives the communication cost and the workload balance among computing nodes, thereby affecting system performance.
However, existing solutions only partially exploit a key characteristic of natural graphs commonly found in the real world: their highly skewed power-law degree distributions.
In this paper, we propose High-Degree (are) Replicated First (HDRF), a novel streaming vertex-cut graph partitioning algorithm that effectively exploits skewed degree distributions by explicitly taking into account vertex degree in the placement decision. We analytically and experimentally evaluate HDRF on both synthetic and real-world graphs and show that it outperforms all existing algorithms in partitioning quality."
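The greedy placement rule HDRF describes can be sketched as a simplified streaming partitioner (illustrative only: `hdrf_partition` and its scoring terms are a loose reading of the replicate-high-degree-first idea, not the authors' algorithm or constants):

```python
from collections import defaultdict

def hdrf_partition(edges, k, lam=1.0):
    degree = defaultdict(int)        # running (partial) vertex degrees
    replicas = defaultdict(set)      # vertex -> partitions holding a replica
    load = [0] * k                   # edges assigned to each partition
    assignment = []
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        theta_u = degree[u] / (degree[u] + degree[v])
        theta_v = 1.0 - theta_u
        max_load, min_load = max(load), min(load)
        best, best_score = 0, float("-inf")
        for p in range(k):
            # Replication term: reward partitions already holding an
            # endpoint, preferring to replicate the higher-degree vertex.
            c_rep = ((2.0 - theta_u) if p in replicas[u] else 0.0) + \
                    ((2.0 - theta_v) if p in replicas[v] else 0.0)
            # Balance term: reward lightly loaded partitions.
            c_bal = lam * (max_load - load[p]) / (1.0 + max_load - min_load)
            if c_rep + c_bal > best_score:
                best, best_score = p, c_rep + c_bal
        replicas[u].add(best)
        replicas[v].add(best)
        load[best] += 1
        assignment.append(best)
    return assignment, replicas

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (0, 4)]
assignment, replicas = hdrf_partition(edges, k=2)
print(assignment)
```

In a power-law graph the few high-degree vertices end up replicated across partitions while the many low-degree vertices stay local, which is what keeps the average replication factor low.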
Survey on clustering based color image segmentation and novel approaches to f...eSAT Journals
Abstract — Segmentation is an important image processing technique that helps analyze an image automatically; applications involving detection or recognition of objects in images often include a segmentation step. This paper describes two unsupervised clustering-based color image segmentation techniques, K-means clustering and Fuzzy C-means (FCM) clustering, and presents the advantages and disadvantages of both algorithms. K-means takes less computation time than FCM, which produces results close to those of K-means; on the other hand, in FCM each pixel of an image can have membership in more than one cluster, which is not the case for K-means and is an advantage of the FCM method. Color images contain a wide variety of information and are more complicated than grayscale images; although color image segmentation is a challenging task, it provides a path to image analysis in practical application fields. Secondly, some novel FCM variants for better image segmentation are also discussed, such as SFCM (Spatial FCM) and THFCM (Thresholding FCM). The basic FCM algorithm does not take the spatial information of the image into consideration; SFCM focuses on spatial detail, introducing a spatial function into the FCM membership function and then operating on the available spatial information. THFCM is another approach, focused on thresholding: its main task is to find a discerning cluster that acts as an automatic threshold. These two approaches show how better segmentation results can be obtained.
This document summarizes the key points of a research paper on regularized graph convolutional neural networks (RGCNN) for point cloud segmentation. Specifically:
1) RGCNN directly processes raw point clouds without voxelization or other preprocessing. It constructs graphs based on point coordinates and normals, performs graph convolutions to learn features, and adaptively updates the graphs during learning.
2) RGCNN leverages spectral graph theory to treat point cloud features as graph signals, defines convolutions via Chebyshev polynomial approximation, and regularizes learning with a graph-signal smoothness prior.
3) Experiments on ShapeNet show RGCNN achieves competitive segmentation performance with lower complexity than state-of-the-art methods.
The objective of this paper is to present a hybrid approach for edge detection. Under this technique, edge detection is performed in two phases: in the first phase, the Canny algorithm is applied for image smoothing, and in the second phase a neural network detects the actual edges. A neural network is a useful tool for edge detection, as it is a non-linear network with built-in thresholding capability. A neural network can be trained with the back-propagation technique using a few training patterns, but the most important and difficult part is to identify a correct and proper training set.
Efficient Block Classification of Computer Screen Images for Desktop Sharing ...DR.P.S.JAGADEESH KUMAR
The document presents a neural network approach for efficient block classification of computer screen images for desktop sharing. It segments screen images into text/graphics and picture/background blocks using discrete wavelet transform coefficients and statistical features of 8x8 blocks. The neural network is trained to classify blocks into the two classes and minimize classification errors. Experimental results show the approach achieves over 95% accuracy on various test images with minimal training time and iterations. Future work could aim to further improve classification accuracy.
This document summarizes the results of applying the Successive Survivable Routing (SSR) algorithm to different network models to calculate backup capacity requirements. The SSR algorithm provides an approximate solution to the Spare Capacity Assignment problem of finding the minimum backup capacity needed to deploy equivalent failure-disjoint recovery paths. The document finds that enhanced versions of SSR that incorporate capacity giveback and state-dependent restoration can reduce backup capacity requirements by an average of 12% compared to the standard SSR algorithm. These SSR techniques were applied to reference networks, realistic networks from internet topology data, and synthetic networks generated by the BRITE topology generator.
GENERALIZED LEGENDRE POLYNOMIALS FOR SUPPORT VECTOR MACHINES (SVMS) CLASSIFIC...IJNSA Journal
In this paper, we introduce a set of new kernel functions derived from the generalized Legendre polynomials to obtain more robust and higher support vector machine (SVM) classification accuracy. The generalized Legendre kernel functions measure how similar two given vectors are by mapping their inner product into a higher-dimensional space. The proposed kernel functions satisfy Mercer's condition and orthogonality properties, reaching the optimal result with a low number of support vectors (SVs). Therefore, the new set of Legendre kernel functions can be used in classification applications as effective substitutes for those generally used, such as the Gaussian, polynomial and wavelet kernel functions. The suggested kernel functions are evaluated against current kernels such as the Gaussian, polynomial, wavelet and Chebyshev kernels by application to various non-separable datasets with several attributes. The suggested kernel functions give competitive classification outcomes in comparison with the other kernel functions. Thus, on the basis of the test outcomes, we show that the suggested kernel functions are generally more robust to kernel parameter changes and reach the minimal SV number for classification.
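One common way to build such a kernel is to expand each coordinate in Legendre polynomials via the standard three-term recurrence and take the product over dimensions. This is a sketch of that common construction, not the paper's exact generalized-Legendre kernel, and inputs are assumed to be scaled into [-1, 1]:

```python
# Sketch of a Legendre-polynomial kernel for SVMs:
# K(x, z) = prod_d sum_{k<=n} P_k(x_d) P_k(z_d).

def legendre(n, x):
    """Evaluate P_0..P_n at x via the three-term recurrence
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)."""
    p = [1.0, x]
    for k in range(1, n):
        p.append(((2 * k + 1) * x * p[k] - k * p[k - 1]) / (k + 1))
    return p[: n + 1]

def legendre_kernel(x, z, n=3):
    """Product over dimensions of the truncated Legendre expansion."""
    k = 1.0
    for xd, zd in zip(x, z):
        k *= sum(px * pz for px, pz in zip(legendre(n, xd), legendre(n, zd)))
    return k

print(legendre(3, 0.5))                          # P_0..P_3 evaluated at 0.5
print(legendre_kernel([0.2, -0.4], [0.3, 0.1]))  # symmetric in its arguments
```

Because each factor is an inner product of feature vectors (P_0(x_d), ..., P_n(x_d)), the kernel is symmetric and positive semi-definite, which is the Mercer property the abstract mentions.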
Support Vector Machine Optimal Kernel SelectionIRJET Journal
This document discusses selecting the optimal kernel for support vector machines (SVMs) based on different datasets. It provides background on SVMs and how their performance depends on the kernel function used. The document evaluates 4 kernel types (linear, polynomial, radial basis function (RBF), sigmoid) on 3 datasets: heart disease data, digit recognition data, and social network ads data. For each dataset and kernel combination, it reports accuracy, sensitivity, specificity, and kappa statistic metrics from implementing SVMs in R. The linear and RBF kernels generally performed best, with RBF working best for datasets with larger numbers of features like digit recognition data.
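The four kernel families the study compares can be written out directly. The hyperparameter names (gamma, degree, coef0) follow common SVM libraries, and the values below are illustrative defaults, not those used in the paper:

```python
# The four SVM kernel types evaluated: linear, polynomial, RBF, sigmoid.
import math

def linear(x, z):
    return sum(a * b for a, b in zip(x, z))

def poly(x, z, degree=3, gamma=1.0, coef0=1.0):
    return (gamma * linear(x, z) + coef0) ** degree

def rbf(x, z, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def sigmoid(x, z, gamma=0.5, coef0=0.0):
    return math.tanh(gamma * linear(x, z) + coef0)

x, z = [1.0, 2.0], [2.0, 0.5]
print(linear(x, z))   # 3.0
print(rbf(x, x))      # 1.0: RBF similarity of a point with itself is always 1
```

The RBF kernel's locality (its value decays with squared distance at a rate set by gamma) is one reason it tends to work well on high-dimensional data such as the digit recognition set mentioned above.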
This paper presents a study of the efficiency and performance speedup achieved by applying Graphics Processing Units for Face Recognition Solutions. We explore one of the possibilities of parallelizing and optimizing a well-known Face Recognition algorithm, Principal Component Analysis (PCA) with Eigenfaces. In recent years, the Graphics Processing Units (GPU) has been the subject of extensive research and the computation speed of GPUs has been rapidly increasing.
RunPool: A Dynamic Pooling Layer for Convolution Neural NetworkPutra Wanda
Deep learning (DL) has achieved significant performance in computer vision problems, mainly in automatic feature extraction and representation. However, it is not easy to determine the best pooling method for a given case study: experts can choose the best type of pooling for an image processing case, yet that choice might not be optimal for other tasks, which runs against the philosophy of DL. In a dynamic neural network architecture, it is not practically possible to hand-pick a proper pooling technique for the layers, which is the primary reason a fixed pooling cannot be applied to dynamic and multidimensional datasets. To deal with these limitations, an optimal pooling method is needed as a better option than max pooling and average pooling. Therefore, we introduce a dynamic pooling layer called RunPool to train the convolutional neural network (CNN) architecture. RunPool pooling is proposed to regularize the neural network, replacing the deterministic pooling functions. In the final section, we test the proposed pooling layer on classification problems with an online social network (OSN) dataset.
This document discusses parallel processing and compound image compression techniques. It examines the computational complexity and quantitative optimization of various image compression algorithms like BTC, DCT, DWT, DTCWT, SPIHT and EZW. The performance is evaluated in terms of coding efficiency, memory usage, image quality and quantity. Block Truncation Coding and Discrete Cosine Transform compression methods are described in more detail.
Improved Parallel Algorithm for Time Series Based Forecasting Using OTIS-MeshIDES Editor
Forecasting always plays an important role in business, technology and many other areas, helping organizations to increase profits, reduce lost sales and plan production more efficiently. A parallel algorithm for forecasting on OTIS-Mesh was reported recently [9]; it requires 5(√n − 1) electronic steps and 4 optical steps. In this paper we present an improved parallel algorithm for time series short-term forecasting using OTIS-Mesh. This parallel algorithm requires 5(√n − 1) electronic steps and 1 optical step using the same number of I/O ports as considered in [9], and is shown to be an improvement over the parallel algorithm for time series forecasting using OTIS-Mesh [9].
Improved anti-noise attack ability of image encryption algorithm using de-noi...TELKOMNIKA JOURNAL
Information security is considered one of the important issues in the information age, used to preserve secret information throughout transmission in practical applications. For image encryption, many security-related schemes have been applied; such approaches can be categorized into two domains, the frequency domain and the spatial domain. The presented work develops an encryption technique based on a conventional watermarking system using singular value decomposition (SVD), discrete cosine transform (DCT), and discrete wavelet transform (DWT) together. The suggested DWT-DCT-SVD method has high robustness in comparison with other conventional approaches, and the approach is further enhanced for robustness against Gaussian noise attacks using a DWT-based denoising approach. The mean squared error (MSE) and the peak signal-to-noise ratio (PSNR) are the performance measures on which this study's results are based; they show that the algorithm used in this study has high robustness against Gaussian noise attacks.
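The two performance measures the study relies on, MSE and PSNR, follow directly from their definitions for 8-bit images; the pixel values below are made-up illustration data:

```python
# MSE and PSNR computed from their definitions for 8-bit images (MAX = 255).
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """PSNR in dB: 10 * log10(peak^2 / MSE); infinite for identical images."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * math.log10(peak * peak / e)

orig = [52, 55, 61, 66, 70, 61, 64, 73]
noisy = [54, 55, 60, 68, 69, 62, 64, 70]
print(mse(orig, noisy))                # 2.5
print(round(psnr(orig, noisy), 2))     # 44.15
```

Higher PSNR means the decrypted or de-noised image is closer to the original, which is how robustness against noise attacks is quantified in such studies.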
Stochastic Computing Correlation Utilization in Convolutional Neural Network ...TELKOMNIKA JOURNAL
In recent years, many applications have been implemented in embedded systems and mobile Internet of Things (IoT) devices that typically have constrained resources, smaller power budget, and exhibit "smartness" or intelligence. To implement computation-intensive and resource-hungry Convolutional Neural Network (CNN) in this class of devices, many research groups have developed specialized parallel accelerators using Graphical Processing Units (GPU), Field-Programmable Gate Arrays (FPGA), or Application-Specific Integrated Circuits (ASIC). An alternative computing paradigm called Stochastic Computing (SC) can implement CNN with low hardware footprint and power consumption. To enable building more efficient SC CNN, this work incorporates the CNN basic functions in SC that exploit correlation, share Random Number Generators (RNG), and is more robust to rounding error. Experimental results show our proposed solution provides significant savings in hardware footprint and increased accuracy for the SC CNN basic functions circuits compared to previous work.
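The core trick of unipolar stochastic computing mentioned above is that a value p in [0, 1] becomes a bitstream whose fraction of 1s is p, and a single AND gate multiplies two uncorrelated streams. This is a toy software simulation; real SC hardware uses shared RNGs and correlation management, which is what the paper exploits:

```python
# Unipolar stochastic computing: encode values as random bitstreams and
# multiply with a single AND gate (valid for uncorrelated streams).
from random import Random

def to_stream(p, n, rng):
    """Encode probability p as an n-bit stream (each bit is 1 w.p. p)."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_mul(s1, s2):
    """AND gate: for uncorrelated unipolar streams, P(out=1) = p1 * p2."""
    return [a & b for a, b in zip(s1, s2)]

def value(stream):
    """Decode a stream back to a value: the fraction of 1s."""
    return sum(stream) / len(stream)

rng = Random(0)                       # fixed seed so the sketch is repeatable
a = to_stream(0.5, 4096, rng)
b = to_stream(0.25, 4096, rng)
print(round(value(sc_mul(a, b)), 2))  # ~ 0.5 * 0.25 = 0.125
```

The hardware footprint savings come from exactly this: a multiplier collapses to one gate, at the cost of stream length (precision) and sensitivity to correlation between streams.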
Mining at scale with latent factor models for matrix completionFabio Petroni, PhD
PhD Thesis
F. Petroni:
"Mining at scale with latent factor models for matrix completion."
Sapienza University of Rome, 2016.
Abstract: "Predicting which relationships are likely to occur between real-world objects is a key task for several applications. For instance, recommender systems aim at predicting the existence of unknown relationships between users and items, and exploit this information to provide personalized suggestions for items to be of use to a specific user. Matrix completion techniques aim at solving this task, identifying and leveraging the latent factors that triggered the creation of known relationships to infer missing ones.
This problem, however, is made challenging by the size of today's datasets. One way to handle such large-scale data in a reasonable amount of time is to distribute the matrix completion procedure over a cluster of commodity machines. However, current approaches lack efficiency and scalability since, for instance, they do not minimize communication or ensure a balanced workload in the cluster.
A further aspect of matrix completion techniques we investigate is how to improve their prediction performance. This can be done, for instance, considering the context in which relationships have been captured. However, incorporating generic contextual information within a matrix completion algorithm is a challenging task.
In the first part of this thesis, we study distributed matrix completion solutions, and address the above issues by examining input slicing techniques based on graph partitioning algorithms. In the second part of the thesis, we focus on context-aware matrix completion techniques, providing solutions that work both (i) when the revealed entries in the matrix have multiple values and (ii) when they all have the same value."
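The latent-factor idea at the heart of matrix completion can be sketched with plain stochastic gradient descent on a tiny synthetic rating matrix. The rank, learning rate, and regularization below are assumed values and the data is made up, so this illustrates the basic technique rather than the thesis's distributed algorithms:

```python
# Latent-factor matrix completion by SGD: fit user/item factor vectors so
# their dot products match the known ratings, then predict a missing entry.
from random import Random

ratings = {(0, 0): 5.0, (0, 1): 3.0, (1, 0): 4.0, (1, 2): 1.0,
           (2, 1): 4.0, (2, 2): 5.0}                    # (user, item) -> value
users, items, rank = 3, 3, 2
rng = Random(42)
P = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(users)]
Q = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(items)]

def predict(u, i):
    return sum(P[u][k] * Q[i][k] for k in range(rank))

lr, reg = 0.05, 0.02
for _ in range(2000):                                    # SGD epochs
    for (u, i), r in ratings.items():
        err = r - predict(u, i)
        for k in range(rank):                            # regularized step
            pu, qi = P[u][k], Q[i][k]
            P[u][k] += lr * (err * qi - reg * pu)
            Q[i][k] += lr * (err * pu - reg * qi)

rmse = (sum((r - predict(u, i)) ** 2
            for (u, i), r in ratings.items()) / len(ratings)) ** 0.5
print(round(rmse, 3))           # small: known entries are well reconstructed
print(round(predict(0, 2), 2))  # inferred value for an unobserved entry
```

Distributing this loop is where the thesis's input-slicing and graph-partitioning techniques come in: each machine must update only its slice of P and Q while keeping communication low and the load balanced.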
Message Sequence Chart (MSC) is a graphical trace language for describing the communication behaviour of distributed systems. In order to build a Petri net model of a complex system, this paper introduces a strategy for building a Petri net model from an MSC, and gives an algorithm for translating an MSC model into a Petri net model. We can obtain the Petri net model and improve it by translating the MSC model; by analyzing the Petri net model, the system is proved to be safe and reliable.
F. Petroni, L. Querzoni, R. Beraldi, M. Paolucci:
"LCBM: Statistics-Based Parallel Collaborative Filtering."
In: Proceedings of the 17th International Conference on Business Information Systems (BIS), 2014.
Abstract: "In the last ten years, recommendation systems evolved from novelties to powerful business tools, deeply changing the internet industry. Collaborative Filtering (CF) represents today a widely adopted strategy to build recommendation engines. The most advanced CF techniques (i.e. those based on matrix factorization) provide high-quality results, but may incur prohibitive computational costs when applied to very large data sets. In this paper we present Linear Classifier of Beta distributions Means (LCBM), a novel collaborative filtering algorithm for binary ratings that is (i) inherently parallelizable and (ii) provides results whose quality is on par with state-of-the-art solutions (iii) at a fraction of the computational cost."
11 construction productivity and cost estimation using artificial Vivan17
This chapter discusses using artificial neural networks (ANNs) to estimate construction project productivity and costs. ANNs can learn from previous examples to predict outputs like cost and schedule based on input data. The chapter provides an overview of ANNs and examples of their use in construction cost and duration estimation. It then presents a framework for developing ANNs for productivity and cost predictions, and provides a detailed case study applying ANNs to estimate productivity of precast installation activities. The case study ANN was able to predict installation times with an average error of around 20%, demonstrating the potential of ANNs for aiding construction cost and schedule estimates.
1) The document proposes GASGD, a distributed asynchronous stochastic gradient descent algorithm for matrix completion via graph partitioning.
2) GASGD aims to efficiently exploit computer cluster architectures by mitigating load imbalance, minimizing communication, and tuning synchronization frequency among computing nodes.
3) The key contributions of GASGD are a graph partitioning-based input slicing solution for load balancing, reducing shared data usage by leveraging characteristics of bipartite power-law datasets, and introducing a synchronization parameter to trade off communication cost and convergence rate.
An approach for color image compression of bmp and tiff images using dct and dwtIAEME Publication
This document summarizes a research paper that compares image compression using the Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT). It finds that DWT performs better than DCT in terms of Mean Square Error and Peak Signal to Noise Ratio. The paper analyzes compression of BMP and TIFF color images using DCT and DWT. It converts color images to grayscale, then compresses the grayscale images using DWT. DWT decomposes images into different frequency components and scales, allowing for better image compression compared to DCT.
F. Petroni, L. Querzoni, K. Daudjee, S. Kamali and G. Iacoboni:
"HDRF: Stream-Based Partitioning for Power-Law Graphs."
In: Proceedings of the 24th ACM International Conference on Information and Knowledge Management (CIKM), 2015.
Abstract: "Balanced graph partitioning is a fundamental problem that is receiving growing attention with the emergence of distributed graph-computing (DGC) frameworks. In these frameworks, the partitioning strategy plays an important role since it drives the communication cost and the workload balance among computing nodes, thereby affecting system performance.
However, existing solutions only partially exploit a key characteristic of natural graphs commonly found in the real world: their highly skewed power-law degree distributions.
In this paper, we propose High-Degree (are) Replicated First (HDRF), a novel streaming vertex-cut graph partitioning algorithm that effectively exploits skewed degree distributions by explicitly taking into account vertex degree in the placement decision. We analytically and experimentally evaluate HDRF on both synthetic and real-world graphs and show that it outperforms all existing algorithms in partitioning quality."
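HDRF's placement rule can be sketched as a streaming loop: for each incoming edge, score every partition, and when an endpoint must be replicated, prefer replicating the higher-degree vertex. The scoring below is a heavy simplification of the paper's replication-plus-balance formula, not a faithful reimplementation:

```python
# Simplified HDRF-style streaming vertex-cut partitioner: high-degree
# vertices are the ones preferentially replicated (cut) across partitions.

def hdrf_partition(edges, k, lam=1.0):
    degree, replicas, load = {}, {}, [0] * k
    assignment, maxload = [], 1
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1      # partial degrees seen so far
        degree[v] = degree.get(v, 0) + 1
        du, dv = degree[u], degree[v]
        best, best_score = 0, float("-inf")
        for p in range(k):
            rep = 0.0
            if p in replicas.get(u, set()):   # u already on p: stronger pull
                rep += 1.0 + dv / (du + dv)   #   when u is the LOW-degree end
            if p in replicas.get(v, set()):
                rep += 1.0 + du / (du + dv)
            bal = lam * (maxload - load[p]) / maxload   # favor light partitions
            if rep + bal > best_score:
                best, best_score = p, rep + bal
        replicas.setdefault(u, set()).add(best)
        replicas.setdefault(v, set()).add(best)
        load[best] += 1
        maxload = max(maxload, load[best])
        assignment.append(best)
    rf = sum(len(s) for s in replicas.values()) / len(replicas)
    return assignment, rf                     # rf = average replication factor

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (0, 4)]
assignment, rf = hdrf_partition(edges, k=2)
print(assignment, round(rf, 2))
```

On power-law graphs this policy keeps the many low-degree vertices intact and concentrates replication on the few hubs, which is the source of HDRF's partitioning-quality advantage.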
Survey on clustering based color image segmentation and novel approaches to f...eSAT Journals
AMAZON STOCK PRICE PREDICTION BY USING SMLTIRJET Journal
This document discusses predicting the price of Amazon (AMZN) stock using machine learning algorithms. It evaluates KNN, SVR, elastic net, and linear regression, and finds that linear regression has the highest accuracy based on precision, recall, and F1-score metrics. The document provides details on each algorithm and compares their performance on this stock price prediction task. Linear regression builds a linear model to predict the dependent variable (the stock price) from the independent variables and is shown to outperform the other supervised learning algorithms for this specific prediction problem.
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...gerogepatton
A support vector machine (SVM) learns the decision surface from two different classes of input points; in several applications, some of the input points are misclassified. In this paper a bi-objective quadratic programming model is utilized, and different feature quality measures are optimized simultaneously using the weighting method to solve the bi-objective quadratic programming problem. An important contribution of the proposed bi-objective quadratic programming model is that different efficient support vectors are obtained as the weighting values change. The numerical examples give evidence of the effectiveness of the weighting parameters in reducing the misclassification between the two classes of input points. An interactive procedure is added to identify the best compromise solution from the generated efficient solutions.
This document summarizes a research paper that proposes a novel architecture for implementing a 1D lifting integer wavelet transform (IWT) using residue number system (RNS). The key aspects covered are:
1) RNS offers advantages over binary representations for digital signal processing by avoiding carry propagation. A ROM-based approach is proposed for RNS division.
2) The lifting scheme for discrete wavelet transforms is summarized, including split, predict, and update stages.
3) A novel RNS-based architecture is proposed using three main blocks - split, predict, and update - that repeat at each decomposition level. Pipelined implementations of the predict and update blocks are detailed.
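The split, predict, and update stages listed above can be sketched for the simplest (Haar) integer lifting step in conventional arithmetic; the paper's contribution is carrying these same stages out in RNS, which is not reproduced here:

```python
# Integer Haar wavelet transform via the lifting scheme: split the signal
# into even/odd samples, predict the odd from the even, update the even.

def haar_lift(signal):
    even, odd = signal[::2], signal[1::2]                # split
    detail = [o - e for e, o in zip(even, odd)]          # predict
    approx = [e + d // 2 for e, d in zip(even, detail)]  # update (integer)
    return approx, detail

def haar_unlift(approx, detail):
    even = [a - d // 2 for a, d in zip(approx, detail)]  # undo update
    odd = [e + d for e, d in zip(even, detail)]          # undo predict
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

sig = [9, 7, 3, 5, 6, 10, 2, 6]
approx, detail = haar_lift(sig)
print(approx, detail)                       # [8, 4, 8, 4] [-2, 2, 4, 4]
assert haar_unlift(approx, detail) == sig   # perfect reconstruction
```

Because every stage is an in-place integer add or subtract, lifting maps naturally onto carry-free RNS arithmetic: each stage can be computed independently per residue channel, which is what motivates the proposed architecture's three repeating blocks.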
Comparison Between Levenberg-Marquardt And Scaled Conjugate Gradient Training...CSCJournals
The document compares the Levenberg-Marquardt and Scaled Conjugate Gradient algorithms for training a multilayer perceptron neural network for image compression. It finds that while both algorithms performed comparably in terms of accuracy and speed, the Levenberg-Marquardt algorithm achieved slightly better accuracy as measured by average training accuracy and mean squared error, while the Scaled Conjugate Gradient algorithm was faster as measured by average training iterations. The document compresses a standard test image called Lena using both algorithms and analyzes the results.
An optimized deep learning model for optical character recognition applications IJECEIAES
The convolutional neural networks (CNN) are among the most utilized neural networks in various applications, including deep learning. In recent years, the continuing extension of CNN into increasingly complicated domains has made its training process more difficult. Thus, researchers adopted optimized hybrid algorithms to address this problem. In this work, a novel chaotic black hole algorithm-based approach was created for the training of CNN to optimize its performance via avoidance of entrapment in the local minima. The logistic chaotic map was used to initialize the population instead of using the uniform distribution. The proposed training algorithm was developed based on a specific benchmark problem for optical character recognition applications; the proposed method was evaluated for performance in terms of computational accuracy, convergence analysis, and cost.
A Fuzzy Interactive BI-objective Model for SVM to Identify the Best Compromis...ijfls
This document summarizes a research paper that proposes a fuzzy bi-objective support vector machine (SVM) model to identify infected COVID-19 patients. The model uses SVM classification with two objectives - maximizing margin between classes and minimizing misclassification errors. An α-cut transforms the fuzzy model into a classical bi-objective problem solved using weighting methods. This generates multiple efficient solutions. An interactive process then identifies the best compromise based on minimizing the number of support vectors in each class. The model constructs a utility function to measure COVID-19 infection levels based on the SVM classification.
The effect of gamma value on support vector machine performance with differen...IJECEIAES
This document summarizes a research paper that investigated the effect of gamma value on support vector machine (SVM) performance with different kernel functions. It studied how changing the gamma value impacted the classification accuracy of SVM when using polynomial, radial basis function (RBF), and sigmoid kernels on various datasets. The results showed that the effect of gamma value was uneven across the three kernel functions and datasets. Polynomial and sigmoid kernels were more influenced by gamma, while RBF kernel performance was more stable with different gamma values, exhibiting only slight changes in accuracy. In general, the gamma value and kernel function used affected SVM's classification performance, and their interaction with the dataset needed to be considered.
COMPARISON OF WAVELET NETWORK AND LOGISTIC REGRESSION IN PREDICTING ENTERPRIS...ijcsit
Enterprise financial distress or failure prediction includes bankruptcy prediction, financial distress, corporate performance prediction and credit risk estimation. The aim of this paper is to use wavelet networks in non-linear combination prediction to address the shortcomings of the ARMA (Auto-Regressive Moving Average) model: the ARMA model needs to estimate the values of all parameters in the model, which requires a large amount of computation. To this end, the paper provides an extensive review of wavelet networks and logistic regression, discussing the wavelet neural network structure, the wavelet network training algorithm, and the accuracy and error rates (classification accuracy, Type I error, and Type II error). The main research contribution is a proposed business failure prediction model (a wavelet network model and a logistic regression model). The empirical research compares the wavelet network and logistic regression on training and forecasting samples; the results show that the wavelet network model is highly accurate and that, in overall prediction accuracy, Type I error and Type II error, the wavelet network model is better than the logistic regression model.
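The evaluation quantities the paper reports (accuracy, Type I error, Type II error) come straight from a confusion matrix. The counts below are made-up illustration values, with "distressed firm" as the positive class:

```python
# Accuracy, Type I error, and Type II error from confusion-matrix counts.

def rates(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    type1 = fp / (fp + tn)   # Type I error: healthy firm flagged as distressed
    type2 = fn / (fn + tp)   # Type II error: distressed firm missed
    return accuracy, type1, type2

acc, t1, t2 = rates(tp=40, fp=5, tn=45, fn=10)
print(acc, t1, t2)   # 0.85 0.1 0.2
```

In failure prediction the two error types carry very different costs (a missed bankruptcy is usually worse than a false alarm), which is why the paper reports them separately rather than only overall accuracy.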
IRJET- Digital Image Forgery Detection using Local Binary Patterns (LBP) and ...IRJET Journal
This document proposes a method to detect digital image forgeries using local binary patterns (LBP) and histogram of oriented gradients (HOG). It extracts LBP features from the input image, then applies HOG to the LBP features. These combined features are classified using a support vector machine (SVM) as authentic or tampered. Testing on CASIA datasets achieved detection rates of 92.3% for CASIA-1 and 96.1% for CASIA-2, outperforming other existing methods. The method is effective at forgery detection while having reduced time complexity.
COMPARATIVE PERFORMANCE ANALYSIS OF RNSC AND MCL ALGORITHMS ON POWER-LAW DIST...acijjournal
Cluster analysis of graph related problems is an important issue now-a-day. Different types of graph
clustering techniques are appeared in the field but most of them are vulnerable in terms of effectiveness
and fragmentation of output in case of real-world applications in diverse systems. In this paper, we will
provide a comparative behavioural analysis of RNSC (Restricted Neighbourhood Search Clustering) and
MCL (Markov Clustering) algorithms on Power-Law Distribution graphs. RNSC is a graph clustering
technique using stochastic local search. RNSC algorithm tries to achieve optimal cost clustering by
assigning some cost functions to the set of clusterings of a graph. This algorithm was implemented by A.
D. King only for undirected and unweighted random graphs. Another popular graph clustering
algorithm MCL is based on stochastic flow simulation model for weighted graphs. There are plentiful
applications of power-law or scale-free graphs in nature and society. Scale-free topology is stochastic i.e.
nodes are connected in a random manner. Complex network topologies like World Wide Web, the web of
human sexual contacts, or the chemical network of a cell etc., are basically following power-law
distribution to represent different real-life systems. This paper uses real large-scale power-law
distribution graphs to conduct the performance analysis of RNSC behaviour compared with Markov
clustering (MCL) algorithm. Extensive experimental results on several synthetic and real power-law
distribution datasets reveal the effectiveness of our approach to comparative performance measure of
these algorithms on the basis of cost of clustering, cluster size, modularity index of clustering results and
normalized mutual information (NMI).
Efficient Feature Selection for Fault Diagnosis of Aerospace System Using Syn...IRJET Journal
This document summarizes a research paper that proposes a two-level feature extraction method using both syntax and semantic algorithms to improve fault diagnosis of aerospace systems. At the syntax level, an improved Chi-squared statistic called ICHI is used to select features and address issues with unbalanced data sets. At the semantic level, a topic modeling approach called PLDA that incorporates prior domain knowledge into LDA is used to further extract features. The extracted features from both levels are then combined to boost the performance of support vector machine classification, especially for minority fault categories.
This document provides a 3-paragraph summary of support vector machines (SVMs). It begins with a brief history of SVMs from the 1990s work of Russian mathematicians Vapnik and Chervonenkis. It then explains how SVMs find the optimal separating hyperplane for classification problems, including methods for non-linear and non-separable data using kernel functions and cost functions. It concludes by noting applications of SVMs include text recognition, face detection, gene expression analysis, and music information retrieval tasks.
A VBA Based Computer Program for Nonlinear FEA of Large Displacement 2D Beam ...Sreekanth Sivaraman
This document describes a VBA-based computer program developed in Microsoft Excel for performing nonlinear finite element analysis of large displacement 2D beam structures. The program uses 2-node beam elements with 6 degrees of freedom per node and employs a combined corotational-total Lagrangian formulation. Polynomial operations are used to generate stiffness and force matrices, making the program adaptable to different element formulations. The program is equipped to handle large displacements, rotations, and strains. Its accuracy is demonstrated through benchmark tests from NAFEMS.
Machine learning in Dynamic Adaptive Streaming over HTTP (DASH)Eswar Publications
Recently machine learning has been introduced into the area of adaptive video streaming. This paper explores a novel taxonomy that includes six state of the art techniques of machine learning that have been applied to Dynamic Adaptive Streaming over HTTP (DASH): (1) Q-learning, (2) Reinforcement learning, (3) Regression, (4) Classification, (5) Decision Tree learning, and (6) Neural networks.
The hybrid evolutionary algorithm for optimal planning of hybrid wobanIAEME Publication
This document discusses the hybrid evolutionary algorithm for optimal planning of a hybrid wireless-optical broadband access network (WOBAN). It proposes using two hybrid evolutionary algorithms to solve the optimization problems of optimally placing wireless base stations and optical network units in the WOBAN architecture. The first algorithm combines multicriteria genetic algorithm and hill climbing algorithm to solve the base station placement problem, while the second combines multicriteria genetic algorithm and a different hill climbing algorithm to solve the optical network unit placement problem. The hybrid evolutionary algorithms are shown to be effective techniques for obtaining good optimal solutions for placement in the complex WOBAN network model.
The hybrid evolutionary algorithm for optimal planning of hybrid woban (1)iaemedu
This document summarizes a research paper on developing hybrid evolutionary algorithms to solve optimal placement problems in hybrid wireless-optical broadband access networks (WOBANs). It proposes using two hybrid evolutionary algorithms (HEAs) - one combining genetic algorithm and hill climbing for wireless base station placement, and another combining genetic algorithm and modified hill climbing for optical network unit placement. The HEAs aim to find global optimal solutions for placement to minimize cost while maximizing coverage, as traditional methods may find only local optima. The document outlines the network architecture, problem formulation, related work, and proposed HEAs for solving the multi-objective placement optimization problems in WOBANs.
Review and comparison of tasks scheduling in cloud computingijfcstjournal
Recently, there has been a dramatic increase in the popularity of cloud computing systems that rent
computing resources on-demand, bill on a pay-as-you-go basis, and multiplex many users on the same
physical infrastructure. It is a virtual pool of resources which are provided to users via Internet. It gives
users virtually unlimited pay-per-use computing resources without the burden of managing the underlying
infrastructure. One of the goals is to use the resources efficiently and gain maximum profit. Scheduling is a
critical problem in Cloud computing, because a cloud provider has to serve many users in Cloud
computing system. So scheduling is the major issue in establishing Cloud computing systems. The
scheduling algorithms should order the jobs in a way where balance between improving the performance
and quality of service and at the same time maintaining the efficiency and fairness among the jobs. This
paper introduces and explores some of the methods provided for in cloud computing has been scheduled.
Finally the waiting time and time to implement some of the proposed algorithm is evaluated
Similar to An empirical assessment of different kernel functions on the performance of support vector machines (20)
Remote sensing and GIS application for monitoring drought vulnerability in In...riyaniaes
Agricultural drought is one of the hydrometeorological disasters that cause significant losses because it affects food stocks. In addition, agricultural droughts, impact the physical and socio-economic development of the community. Remote sensing technology is used to monitor agricultural droughts spatially and temporally for minimizing losses. This study reviewed the literatures related to remote sensing and GIS for monitoring drought vulnerability in Indonesia. The study was conducted on an island-scale on Java Island, a provincial-scale in East Java and Bali, and a district-scale in Indramayu and Kebumen. The dominant method was the drought index, which involves variable land surface temperature (LST), vegetation index, land cover, wetness index, and rainfall. Each study has a strong point and a weak point. Low-resolution satellite imagery has been used to assess drought vulnerability. At the island scale, it provides an overview of drought conditions, while at the provincial scale, it focuses on paddy fields and has little detailed information. In-situ measurements at the district scale detect meteorological drought accurately, but there were limitations in the mapping unit's detailed information. Drought mapping using GIS and remote sensing at the district scale has detailed spatial information on climate and physiographic aspects, but it needs temporal data monitoring.
Comparative review on information and communication technology issues in educ...riyaniaes
The use of information and communication technology is very beneficial in the education sector because it can enhance the quality of education. However, the implementation of ICT in the education sector of developed and developing countries is a challenging task. This paper explains the comparative study of ICT issues in the education sector of developed and developing countries. In particular, we compare issues between Pakistan and high-tech countries. Our study reveals the fact that the education sector is facing numerous ICT problems that are based on culture, finance, management, infrastructure, lack of training, lack of equipment, teacher’s refusal, and ethical issues. At the end of this paper, various issues faced by the implementation of ICT in the education sector of Pakistan have been categorized into various types, namely, infrastructure, lack of IT professionals, lack of high-speed internet and equipment. Our research is based on five key research questions related to ICT issues. We used a mixed approach where the results of this study can be used as a set of guidelines to help make the learning environment technology-oriented, fast, planned, and productive. Future directions are also given at the end of this paper.
The development and implementation of an android-based saving and loan cooper...riyaniaes
The savings and loans cooperative are a cooperative that has an important purpose in its reach and services, such as the Permata Ngijo Savings and Loans Cooperative. Activities within the cooperative itself include managing member data as well as savings and loan transactions. In systems that are still manual, such as Microsoft Excel, there is a risk of inaccurate data and takes a long time. This manual system is still less effective because the number of members and transactions continues to grow. Therefore, an application is needed as a more effective data processing and information system for members that is easier to access. This study aims to build an android-based savings and loan application using the hypertext preprocessor (PHP) programming language and my structured query language (MySQL) as data storage. Based on the android application that has been made, it shows that the black box functionality of the application runs as planned, the success rate for users reaches 90%.
Online medical consultation: covid-19 system using software object-oriented a...riyaniaes
The internet has been a source of medical information, it has been used for online medical consultation (OMC). OMC is now offered by many providers internationally with diverse models and features. In OMC, consultations and treatments are available 24/7. The covid-19 pandemic across-the-board, many people unable to go to hospital or clinic because the spread of the virus. This paper tried to answer two research questions. The first one on how the OMC can help the patients during covid-19 pandemic. A literature review was conducted to answer the first research question. The second one on how to develop system in OMC related to covid-19 pandemic. The system was developed by Visual Studio 2019 using software object-oriented approach. Online expert review was conducted within 6 experts from health and academic industry to verify the model. Also, the system was validated by 11 users from heath and academic industry to confirm its usability. The statistical package for social science (SPSS 22) was used to analyze the collected data. The result of expert review confirmed that covid-19 system can help the patients. Also, the validity of the system was confirmed by 11 users from health and academic industry.
Successful factors determining the significant relationship between e-governa...riyaniaes
Every government's major objective is to provide the greatest services in order to establish efficiency and quality of performance. Syria's government has understood how critical it is to go in the direction of information technology. However, there are gaps and poor links across government sectors, which has tainted the image of Syrian e-governance. As a result, one of the main aims of this study is to figure out what factors impact Syrians' acceptance of the e-government system. A total of 600 questionnaires were delivered to Syrian individuals as part of a survey. The data was analysed using the structural equation model (SEM) using AMOS version 21.0. User intention to utilise an e-government system was shown to be influenced by performance expectations, effort expectations, system flexibility, citizens-centricity, and facilitating conditions. Assurance, responsiveness, reliability, tangibles, and empathy are five fundamental factors that have a major impact on government operation excellence. Behavioural Intention is being utilised as a mediator between the government operation excellence (GOE) initiative and the e-government platform.
Non-linear behavior of root and stem diameter changes in monopodial orchidriyaniaes
Precision agriculture aims to maximize yield with optimum resources. Vast majority of natural systems are acknowledged as complex and non-linear. However, prior to formulation of precise models, linearity tests are performed to validate plant behavior. This study has presented proof that the water uptake system in monopodial orchid is indeed non-linear. The change in physical growth of root and stem due to temperature and relative humidity factors are observed. The work focused on Ascocenda Fuchs Harvest Moon x (V. Chaophraya x Boots) orchid hybrid. Three complementary methods are presented: linearity tests through 1) regression fitting; 2) scatter plots; and 3) cross-correlation function tests. Root diameter, stem diameter, temperature, and relative humidity are logged at 15 minutes interval for a duration of 71 days. The polynomial equations derived for root diameter and stem diameter changes attained strong regression coefficients. The non-linear behavior is further confirmed by the scatter plots where no linear associations are present between the independent and dependent variables. Subsequently, the cross-correlation function tests conducted on temperature-root diameter, temperature-stem diameter, relative humidity-root diameter, and relative humidity-stem diameter combinations also revealed weak correlation. Despite using different techniques, the behavior of physical changes has been consistently proven to be non-linear.
Cucumber disease recognition using machine learning and transfer learningriyaniaes
Cucumber is grown, as a cash crop besides it is one of the main and popular vegetables in Bangladesh. As Bangladesh's economy is largely dependent on the agricultural sector, cucumber farming could make economic and productivity growth more sustainable. But many diseases diminish the situation of cucumber. Early detection of disease can help to stop disease from spreading to other healthy plants and also accurate identifying the disease will help to reduce crop losses through specific treatments. In this paper, we have presented two approaches namely traditional machine learning (ML) and CNN-based transfer learning. Then we have compared the performance of the applied techniques to find out the most appropriate techniques for recognizing cucumber diseases. In our ML approach, the system involves five steps. After collecting the image, pre-processing is done by resizing, filtering, and contrast-enhancing. Then we have compared various ML algorithms using k-means based image segmentation after extracted 10 relevant features. Random forest gives the best accuracy with 89.93% in the traditional ML approach. We also studied and applied CNN-based transfer learning to investigate the further improvement of recognition performance. Lastly, a comparison among various transfer learning models such as InceptionV3, MobileNetV2, and VGG16 has been performed. Between these two approaches, MobileNetV2 achieves the highest accuracy with 93.23%.
Using particle swarm optimization to solve test functions problemsriyaniaes
In this paper the benchmarking functions are used to evaluate and check the particle swarm optimization (PSO) algorithm. However, the functions utilized have two dimension but they selected with different difficulty and with different models. In order to prove capability of PSO, it is compared with genetic algorithm (GA). Hence, the two algorithms are compared in terms of objective functions and the standard deviation. Different runs have been taken to get convincing results and the parameters are chosen properly where the Matlab software is used. Where the suggested algorithm can solve different engineering problems with different dimension and outperform the others in term of accuracy and speed of convergence.
Android mobile application for wildfire reporting and monitoringriyaniaes
Peat fires cause major environmental problems in Central Kalimantan Province, Indonesia and threaten human health and effect the social-economic sector. The lack of peat fire detection systems is one factor that causing these reoccurring fires. Therefore, in this study, we develop an Android mobile platform application and a web-based application to support the citizen-volunteers who want to contribute wildfires reports, and the decision-makers who wish to collect, visualize, and evaluate these wildfires reports. In this paper, the global navigation satellite system (GNSS) and a global position system (GPS) sensor from a smartphone’s camera, is a useful tool to show the potential fire and smoke’s close-range location. The exchangeable image (EXIF) file image and GPS metadata captured by a mobile phone can store and supply raw observation to our devices and sent it to the data center through global internet communication. This work’s results are the proposed application easy-to-use to monitoring potential peat fire by location and data activity. This paper focuses on developing an application for the mobile platform for peat fire reporting and a web-based application to collect peat fire location for decision-makers. Our main objective is to detect the potential and spread of fire in peatlands as early as possible by utilizing community reports using smartphones.
An effective classification approach for big data with parallel generalized H...riyaniaes
Advancements in information technology is contributing to the excessive rate of big data generation recently. Big data refers to datasets that are huge in volume and consumes much time and space to process and transmit using the available resources. Big data also covers data with unstructured and structured formats. Many agencies are currently subscribing to research on big data analytics owing to the failure of the existing data processing techniques to handle the rate at which big data is generated. This paper presents an efficient classification and reduction technique for big data based on parallel generalized Hebbian algorithm (GHA) which is one of the commonly used principal component analysis (PCA) neural network (NN) learning algorithms. The new method proposed in this study was compared to the existing methods to demonstrate its capabilities in reducing the dimensionality of big data. The proposed method in this paper is implemented using Spark Radoop platform.
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...PriyankaKilaniya
Energy efficiency has been important since the latter part of the last century. The main object of this survey is to determine the energy efficiency knowledge among consumers. Two separate districts in Bangladesh are selected to conduct the survey on households and showrooms about the energy and seller also. The survey uses the data to find some regression equations from which it is easy to predict energy efficiency knowledge. The data is analyzed and calculated based on five important criteria. The initial target was to find some factors that help predict a person's energy efficiency knowledge. From the survey, it is found that the energy efficiency awareness among the people of our country is very low. Relationships between household energy use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and energy efficiency technology options is found to be associated with household use of energy conservation practices. Household characteristics also influence household energy use behavior. Younger household cohorts are more likely to adopt energy-efficient technologies and energy conservation practices and place primary importance on energy saving for environmental reasons. Education also influences attitudes toward energy conservation in Bangladesh. Low-education households indicate they primarily save electricity for the environment while high-education households indicate they are motivated by environmental concerns.
Supermarket Management System Project Report.pdfKamal Acharya
Supermarket management is a stand-alone J2EE using Eclipse Juno program.
This project contains all the necessary required information about maintaining
the supermarket billing system.
The core idea of this project to minimize the paper work and centralize the
data. Here all the communication is taken in secure manner. That is, in this
application the information will be stored in client itself. For further security the
data base is stored in the back-end oracle and so no intruders can access it.
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Digital Twins Computer Networking Paper Presentation.pptxaryanpankaj78
A Digital Twin in computer networking is a virtual representation of a physical network, used to simulate, analyze, and optimize network performance and reliability. It leverages real-time data to enhance network management, predict issues, and improve decision-making processes.
Road construction is not as easy as it seems to be, it includes various steps and it starts with its designing and
structure including the traffic volume consideration. Then base layer is done by bulldozers and levelers and after
base surface coating has to be done. For giving road a smooth surface with flexibility, Asphalt concrete is used.
Asphalt requires an aggregate sub base material layer, and then a base layer to be put into first place. Asphalt road
construction is formulated to support the heavy traffic load and climatic conditions. It is 100% recyclable and
saving non renewable natural resources.
With the advancement of technology, Asphalt technology gives assurance about the good drainage system and with
skid resistance it can be used where safety is necessary such as outsidethe schools.
The largest use of Asphalt is for making asphalt concrete for road surfaces. It is widely used in airports around the
world due to the sturdiness and ability to be repaired quickly, it is widely used for runways dedicated to aircraft
landing and taking off. Asphalt is normally stored and transported at 150’C or 300’F temperature
Determination of Equivalent Circuit parameters and performance characteristic...pvpriya2
Includes the testing of induction motor to draw the circle diagram of induction motor with step wise procedure and calculation for the same. Also explains the working and application of Induction generator
SENTIMENT ANALYSIS ON PPT AND Project template_.pptx
An empirical assessment of different kernel functions on the performance of support vector machines
Bulletin of Electrical Engineering and Informatics
Vol. 10, No. 6, December 2021, pp. 3403-3411
ISSN: 2302-9285, DOI: 10.11591/eei.v10i6.3046
Journal homepage: http://beei.org
An empirical assessment of different kernel functions on the
performance of support vector machines
Isaac Kofi Nti, Owusu Nyarko-Boateng, Felix Adebayo Adekoya, Benjamin Asubam Weyori
Department of Computer Science and Informatics, University of Energy and Natural Resources, Sunyani, Ghana
Article Info
Article history: Received Apr 16, 2021; Revised Sep 11, 2021; Accepted Oct 22, 2021

ABSTRACT
Artificial intelligence (AI) and machine learning (ML) have influenced every part of our day-to-day activities in this era of technological advancement, making living on earth more comfortable. Among the many AI and ML algorithms, the support vector machine (SVM) has become one of the most widely used for data mining, prediction and other AI and ML tasks in several domains. The SVM's performance depends significantly on the kernel function (KF); nonetheless, there is no universally accepted ground for selecting an optimal KF for a specific domain. In this paper, we empirically investigate the effect of different KFs on SVM performance in various fields, and illustrate that performance through extensive experimental results. Our empirical results show that no single KF is always suitable for achieving high accuracy and generalisation in all domains, although the Gaussian radial basis function (RBF) kernel is often the default choice. Moreover, if the KF parameters of the RBF and exponential RBF kernels are optimised, they outperform the linear and sigmoid KF-based SVM methods in terms of accuracy. Finally, the linear KF is more suitable for linearly separable datasets.
Keywords:
Gaussian radial basis function
Kernel function
Machine learning
Support vector machine
This is an open access article under the CC BY-SA license.
Corresponding Author:
Isaac Kofi Nti
Department of Computer Science and Informatics
University of Energy and Natural Resources
Sunyani, Ghana
Email: isaac.nti@uenr.edu.gh
1. INTRODUCTION
In the 21st century, artificial intelligence (AI) and its sub-disciplines such as machine learning (ML),
data mining, deep learning and expert systems have experienced a rebirth following parallel developments in
computer power, theoretical understanding and vast amounts of data. These techniques have taken centre
stage in the technology industry, assisting in solving several thought-provoking computing, software
engineering, and operations research issues [1], [2]. They have impacted demand forecasting, financial
analysis, supply chain planning, computer vision, big data analytics, customer engagement, business domain
knowledge, education and many more domains. Several ML algorithms like decision trees (DTs), neural
networks (NN), K-nearest neighbour (KNN), naïve Bayes (NB), random forest (RF) and support vector
machine (SVM) have achieved real-world application success [3]. Among these algorithms, the SVM has
been widely used recently, in domains such as finance [4], [5], engineering [6] and healthcare [7]-[9].
The SVM is one of the most robust supervised ML algorithms based on statistical learning theory
(STL) developed by Vapnik [10]. The SVM employs the risk minimisation theory to establish the best
separation hyperplane in multi-dimensional space to classify a bipartite outcome [11]. Initially, the SVM was
designed for binary classification [12]; however, of late, the SVM is applicable for both classification and
regression ML tasks. The performance of the SVM has been compared with other ML algorithms, such as
Bayesian logistic regression, and decision tree [13], [14], random forest [15], [16], neural network [17], [18]
and k-nearest neighbours [19], [20]. Notwithstanding variations in the experimental outcomes, the SVM is comparable to more traditional models in many of these studies, while other studies [13], [14], [16], [19] reported that the SVM convincingly outperformed several conventional models.
When training an SVM model, there is a need to choose a KF and its associated parameters, and this is
one of the biggest challenges to users of the SVM [5], [11], [21]. The KF permits the SVM (a linear machine) to
transform the feature space and act as a non-linear model. The KF parameters regulate the shape of the
separating margin used to classify a set of features. An accurate selection of these parameters can significantly
improve the prediction accuracy of the SVM [4], [22]. However, the availability of numerous KFs makes it
challenging to select the appropriate one for a specific domain. Unfortunately, several researchers (especially
beginners) adopt the default SVM configuration without examining the parameters it uses (e.g., the KF).
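The practical consequence of the kernel choice can be seen in a small sketch. The data and settings below are illustrative assumptions, not the paper's experiments; scikit-learn's `SVC` is used as one common SVM implementation, with all other parameters left at their defaults.

```python
# Sketch: how the kernel choice alone changes SVM accuracy (illustrative data,
# not the paper's experiments; uses scikit-learn's SVC with default C and gamma).
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = {}
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = SVC(kernel=kernel).fit(X_tr, y_tr)      # only the kernel varies
    scores[kernel] = clf.score(X_te, y_te)
print(scores)
```

On this deliberately non-linear dataset the RBF kernel typically beats the linear one; on a linearly separable dataset the ranking can reverse, which is exactly the selection problem the paper studies.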
Nonetheless, selecting all these parameters is necessary before using the SVM in a specific task since
they are task-dependent. Besides, there is no universal accepted technique for choosing the appropriate KF and
its parameters in a particular domain to attain high generalisation [12]. Additionally, the optimal regularisation
parameter (C) is pivotal to obtain accurate outcomes [4], [17]. Hence, the current study presents a
comprehensive comparative analysis of several KFs on SVM performance for heart disease detection, exchange
rate, and weather prediction. Furthermore, we aim to make the KF generalised for a specific domain.
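One common way to optimise the KF parameters and the regularisation parameter C together is a cross-validated grid search, sketched below. The dataset and grid values are illustrative assumptions, not the paper's settings.

```python
# Sketch: tuning the RBF kernel's C and gamma by cross-validated grid search
# (illustrative dataset and grid, not the paper's settings).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))   # scale features first
grid = {"svc__C": [0.1, 1, 10, 100],
        "svc__gamma": [1e-3, 1e-2, 1e-1, "scale"]}
search = GridSearchCV(pipe, grid, cv=5).fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

The same search can be repeated per kernel, which mirrors the paper's observation that a tuned RBF kernel tends to outperform untuned linear or sigmoid kernels.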
Based on the above discussions, this study seeks to answer the question: which KF and which of its parameters are suitable for achieving higher generalisation of the SVM on a given dataset? We hypothesised that no single KF and its associated parameters are suitable for all domain applications. The rest of this paper is organised as follows: section 2 presents a basic overview of the SVM, the various types of KF and the study framework; section 3 presents the empirical results and discussions; and section 4 gives the study conclusions and future work.
2. RESEARCH METHOD
2.1. Machine learning
ML is a subfield of AI centred on creating computer algorithms capable of analysing data with
intrinsic patterns, learning from them, and improving their accuracy over time without being explicitly
programmed. According to [23], ML can be defined graphically, as shown in Figure 1: given some class of
tasks (T), a computer program is said to learn from experience (E) with respect to a performance measure (P)
if its performance at T, as measured by P, improves with E. Typically, ML is divided into four areas [24],
i.e., 1) supervised learning (SL), 2) unsupervised learning (UL), 3) evolutionary learning (EL) and
4) reinforcement learning (RL). Among these four techniques, studies [3], [25], [26] show that SL is the most
commonly used by researchers and professionals due to the availability of already labelled datasets and SL's
lower computational cost than UL, RL and EL.
Figure 1. ML definition (a computer-program learning algorithm relating task (T), experience (E) and performance (P))
Several SL algorithms exist, e.g., KNN, linear regression, RF, logistic regression, gradient boosted
trees (GBT), NN, SVM, DT and NB. However, since the current study centres on the SVM, we briefly
present its overview in the subsequent section.
2.2. Support vector machine
The SVM algorithm was developed at AT&T Bell Laboratories [10] to accurately classify binary
dependent features using a unique hyperplane (H). The SVM maps input features (v) to a higher-dimensional
space where the best separating hyperplane is created. The hyperplane serves as a borderline between two
classes, created by maximising the margin between the support vectors of both classes [19]. Typically, the
form of the hyperplane differs with the feature dimension, e.g., in one dimension H is a point, and in two
dimensions H is a straight line, while for dimensions greater than two, H is a plane. Figure 2 shows a simple
SVM [27].
3. Bulletin of Electr Eng & Inf ISSN: 2302-9285
An empirical assessment of different kernel functions on the performance … (Isaac Kofi Nti)
3405
The H with the most significant margin between the classes is considered the best. The margin is the greatest
width of the slab parallel to H that contains no interior data points. The data points nearest to the separating H
are called support vectors (see Figure 2).
Figure 2. A sketch of the SVM [27]
Typically, the optimal H for p-dimensional features, from observations x = {x_1, x_2, …, x_n} with class
labels y_i ∈ {+1, −1}, i = 1, 2, …, n, is estimated as defined in (1) using a weight vector (W^T) of the features
and a scalar intercept (β_0). Based on this H, the decision boundary between the two class labels is computed
using (2) and (3) as defined in [12], [19].

f(x) = W^T x_i + β_0 (1)

y_i(W^T x_i + β_0) ≥ +1 if y_i = +1 (2)

y_i(W^T x_i + β_0) ≤ −1 if y_i = −1, ∀ i = 1, 2, …, n (3)

where W^T is the weight vector and β_0 is the bias value.
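To make (1)-(3) concrete, the following minimal sketch (not the authors' code; the weight vector and bias are invented toy values) evaluates the decision function and classifies a point by the side of the hyperplane it falls on:

```python
import numpy as np

def decision(w, b, x):
    """Evaluate f(x) = W^T x + beta_0, as in Eq. (1)."""
    return float(np.dot(w, x) + b)

def classify(w, b, x):
    """Assign class +1 or -1 according to the sign of the decision function."""
    return 1 if decision(w, b, x) >= 0 else -1

# Toy 2-D hyperplane x1 + x2 - 1 = 0 (w and b are illustrative values only).
w, b = np.array([1.0, 1.0]), -1.0
print(classify(w, b, np.array([2.0, 2.0])))  # prints 1 (above the line)
print(classify(w, b, np.array([0.0, 0.0])))  # prints -1 (below the line)
```

A trained SVM simply learns w and β_0 from the support vectors rather than having them supplied by hand.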
The SVM employs a mathematical device known as the kernel trick to map the data into a
higher-dimensional space implicitly, replacing the dot product in that space with a kernel evaluation. Usually,
KFs are grouped into two classes, namely (i) rotation-invariant kernels and (ii) translation-invariant kernels.
Table 1 shows the kernel functions for setting up an SVM model as defined in [12].
Table 1. Types of KFs for setting up an SVM model [12]
Kernel | Definition
1. Linear | K(x_i, x_j) = 1 + x_i^T x_j
2. Polynomial | K(x_i, x_j) = (1 + x_i^T x_j)^p
3. Radial basis function (RBF) | K(x_i, x_j) = exp(−γ‖x_i − x_j‖²)
4. Gaussian RBF | K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²))
5. Exponential RBF | K(x_i, x_j) = exp(−‖x_i − x_j‖ / (2σ²))
6. Sigmoid | K(x_i, x_j) = tanh(k x_i^T x_j − δ)
7. Spline | K(u, v) = 1 + uv + ∫₀¹ (u − t)₊ (v − t)₊ dt
8. ANOVA spline | K(u, v) = 1 + k(u¹, v¹) + k(u², v²) + k(u¹, v¹) k(u², v²)
9. Additive | K(u, v) = Σ_i K_i(u, v)
10. Tensor product | K(u, v) = ∏_{m=1}^{n} K_m(u_m, v_m)
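As an illustration of how the entries in Table 1 translate into code, the NumPy sketch below implements three of the kernels (hypothetical helpers not tied to any dataset in this study; gamma and sigma are free parameters the user must tune):

```python
import numpy as np

def linear_kernel(xi, xj):
    """Linear kernel from Table 1: K(xi, xj) = 1 + xi^T xj."""
    return 1.0 + float(xi @ xj)

def rbf_kernel(xi, xj, gamma=1.0):
    """RBF kernel: K(xi, xj) = exp(-gamma * ||xi - xj||^2)."""
    return float(np.exp(-gamma * np.sum((xi - xj) ** 2)))

def gaussian_rbf_kernel(xi, xj, sigma=1.0):
    """Gaussian RBF: K(xi, xj) = exp(-||xi - xj||^2 / (2 * sigma^2))."""
    return float(np.exp(-np.sum((xi - xj) ** 2) / (2.0 * sigma ** 2)))

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(linear_kernel(x, y))  # 1.0 for orthogonal vectors (dot product is 0)
print(rbf_kernel(x, x))     # 1.0: the RBF family peaks when xi = xj
```

Each function returns the similarity of two feature vectors; an SVM evaluates such a function between a query point and every support vector.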
Lately, the SVM has gained popularity among SL algorithms for solving real-world problems, like
handwriting analysis, facial analysis and more, specifically in pattern classification, outlier detection and
regression-based applications [28], [29]. However, one critical challenge faced by SVM users is choosing the
appropriate KF and its associated parameters, such as the penalty parameter (C) and KF parameters like the
gamma (γ) of the RBF kernel. Studies [4], [17], [28] affirm that this is a vital step in managing a learning task
with the SVM since it has a substantial effect on its accuracy. Thus, the primary problems accompanying
setting up an SVM model are how to select the KF and its allied parameter values for a specific domain
application [4], [17], [28], and this paper attempts to find the best settings for these SVM parameters in three
different domains.
4. ISSN: 2302-9285
Bulletin of Electr Eng & Inf, Vol. 10, No. 6, December 2021 : 3403 – 3411
3406
Specifically, we examine how the SVM's efficiency in predicting diabetes, fake banknotes, flower
species and wheat species is affected by different KFs and other parameters. Thus, we aim to make the KFs
generalised for a specific domain. It is anticipated that this study's outcome will serve as a base for several
researchers (precisely beginners) to adopt the appropriate SVM parameters for a particular domain
application with confidence.
2.3. Study framework
Figure 3 shows the experimental framework of this study. Five (5) different datasets were
downloaded from various sources for this study; see Table 2 for details.
Figure 3. Study framework (the five datasets WSD, IFD, PIDD, BKN and WD are preprocessed, split into train (80%) and test (20%) sets, modelled with the SVM under the linear, polynomial, RBF, exponential RBF and sigmoid kernels, and evaluated)
Table 2. Used datasets details
Dataset | Size (n × m) | Description | Source
Wheat seeds dataset (WSD) | 210 × 8 | Contains measures of different wheat species; the aim is to predict the seed species from given seed features. | A
Iris flowers dataset (IFD) | 150 × 5 | Contains measured features of the iris flower; the aim is to predict the flower species from the measured characteristics. | A
Pima indians diabetes dataset (PIDD) | 768 × 9 | Meant for predicting the onset of diabetes within 5 years. | B
Banknotes (BKN) | 1372 × 5 | Aims at predicting the authenticity of a given banknote from extracted features. | A
Weather data (WD) | 14668 × 5 | Aims at predicting the maximum temperature using some weather parameters. | C
A=https://archive.ics.uci.edu; B=https://raw.githubusercontent.com; C=https://power.larc.nasa.gov; n=number of records; m=number of features
Firstly, we preprocessed each dataset separately to free it from missing values, outliers and data
inconsistencies. We then normalised each dataset to the range [0, 1] using the min-max function in (4).

x′ = (x − x_min) / (x_max − x_min) (4)

where x′ is the normalised value, x is the value to be normalised, and x_min and x_max are the minimum and
maximum values of the dataset.
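A one-line NumPy version of (4) (an illustrative sketch; the study's actual preprocessing scripts are not reproduced here):

```python
import numpy as np

def min_max_normalise(x):
    """Scale values to [0, 1] via Eq. (4): x' = (x - x_min) / (x_max - x_min)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

print(min_max_normalise([10, 20, 30]))  # [0.  0.5 1. ]
```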
Each normalised dataset was divided into a training set (80%) and a testing set (20%). From the
different kernels discussed in Table 1, we adopted the five (5) most commonly used, namely: (i) linear,
(ii) polynomial, (iii) radial basis function (RBF), (iv) exponential RBF and (v) sigmoid. Based on these five
kernels, we set up an SVM model using different parameters (see Table 3). The aim is to make the kernels
generalised for every dataset. 10-fold cross-validation was used for generalisation. We evaluated the
performance of the SVM based on (i) the confusion matrix and (ii) accuracy for classification analysis, and
the root-mean-square error (RMSE) for regression analysis. The Scikit-learn library was used to implement
the SVM model. The experiment was conducted on Google Colab [30], a free online platform for building
ML models on powerful hardware options like GPUs and TPUs. Table 3 shows the details of the parameters
used in modelling the SVM.
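The setup just described can be sketched with Scikit-learn roughly as follows (synthetic data stands in for the study's datasets; note that the exponential RBF is not a built-in Scikit-learn kernel and would have to be supplied as a custom callable):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in data; the real datasets (Table 2) are downloaded separately.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# 80%/20% train/test split, as in the study framework.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Compare the built-in kernels with 10-fold cross-validation on the training set.
for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = SVC(kernel=kernel, C=5)
    scores = cross_val_score(clf, X_train, y_train, cv=10)
    print(f"{kernel}: mean CV accuracy = {scores.mean():.3f}")
```

The same loop, pointed at each of the five datasets in turn, reproduces the shape of the experiment (classification accuracy here; an `SVR` with RMSE would replace `SVC` for the weather regression task).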
Table 3. Used parameters
Parameter | Description
Kernel | The kernel type used by the algorithm
Penalty parameter (C) | Regularisation parameter; the regularisation strength is inversely proportional to C
Degree (d) | Degree of the polynomial ('poly') kernel function, ignored by all other kernels
Gamma (γ) | Kernel coefficient for the 'RBF', 'polynomial' and 'sigmoid' kernels
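Rather than fixing C, d and γ by hand, these parameters can be searched over a grid; the sketch below (an illustrative use of Scikit-learn's GridSearchCV on the iris data, reusing the C and γ values this study compares) shows the idea:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Grid over the parameter values compared in this study for the RBF kernel.
param_grid = {"C": [5, 100000], "gamma": [2, 5]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10)
search.fit(X, y)
print(search.best_params_)  # the best (C, gamma) pair found by 10-fold CV
```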
3. EXPERIMENTAL RESULTS AND DISCUSSION
Table 4 shows the experimental results of the different KFs on the several datasets. The results show
that no single KF is suitable for all domains, which confirms arguments in the literature [4], [17], [28] that
selecting the correct KF is a vital step in setting up an SVM model. An incorrect choice of KF will lead to
abysmal results from the model. Hence, the random selection of a KF is not always optimal for achieving
high SVM generalisation. The outcome suggests that the linear kernel is more suitable for a small dataset.
Also, the SVM is not a parametric model, so its complexity increases as the training dataset's size increases.
Table 4 shows that optimally parameterised kernels effectively enhance the SVM's prediction
performance and generalisability. Likewise, the same kernel function gave different results on different
datasets (see Table 4); this suggests that a given dataset's statistical properties can effectively inform
the choice of a KF and its parameters.
Figures 4-9 show the confusion matrix plot for each KF and its parameters on the different datasets;
the results affirm Table 4. From Table 4 and Figures 4-9, it can be inferred that the choice of an SVM KF
highly depends on the problem at hand, i.e., on what one is attempting to model. Therefore, the selection of a
specific KF should follow from the kind of information one intends to extract from the dataset. The outcome
(see Table 4) affirms that correct tuning of the KF parameters in the RBF, sigmoid and exponential RBF KFs
increases SVM accuracy compared with the linear kernel.
Table 4. Comparing different kernels on datasets
Kernel function (associated parameters) | IFD | PIDD | WSD | BKN | WD
Linear (C=5) | 80 | 76.6 | 95 | 99 | 0.22
Linear (C=100000) | 80 | 75 | 95 | 99 | 0.22
Polynomial (C=5, d=2) | 76.7 | 64.9 | 68 | 75 | 1.02
Polynomial (C=100000, d=3) | 66.7 | 68 | 57 | 87 | 1.01
RBF (C=5, γ=5) | 70 | 70.8 | 92 | 100 | 0.19
RBF (C=100000, γ=2) | 73.3 | 70 | 88 | 100 | 0.16
Sigmoid (C=5, k=0.5, δ=0) | 43.3 | 70.2 | 90 | 75 | 1668.3
Sigmoid (C=100000, k=2, δ=4) | 43.3 | 70 | 88 | 75 | 1700.1
Exponential RBF (C=5, σ=2) | 76.7 | 77.4 | 93 | 99.6 | 0.2
Exponential RBF (C=100000, σ=5) | 73.3 | 77.38 | 93 | 99.62 | 0.19
(IFD, PIDD, WSD and BKN report classification accuracy in %; WD reports regression RMSE)
Figure 4. Confusion matrix of SVM with different kernels on IFD dataset
Figure 5. Confusion matrix of SVM with different kernels on PIDD dataset
Figure 6. Confusion matrix of SVM with different kernels on PIDD dataset
Figure 7. Confusion Matrix of SVM with different kernels on PIDD dataset
Figure 8. Confusion matrix of SVM with different kernels on BKN dataset
Figure 9. Confusion matrix of SVM with different kernels on BKN dataset
4. CONCLUSION
The SVM is one of the machine learning algorithms that has received much attention from the research
community and professionals. Its success has been seen in every sector of our day-to-day activities. On the other
hand, one primary concern with its implementation is selecting the most appropriate kernel function and fine-
tuning its associated parameters. Several KFs can be used for different applications, but the most suitable
depends on the problem at hand (domain area). This paper reviewed and examined the effect of the linear,
polynomial, RBF, sigmoid and exponential RBF kernel functions on the SVM algorithm's performance. We
assessed these KFs' performance on the SVM using the confusion matrix and accuracy for classification tasks
and the RMSE for regression tasks. We observed that no single KF is suitable for all classification or
regression problems. Also, an SVM with optimised kernel parameters for the RBF and exponential RBF KFs is
more likely to outperform the linear and sigmoid kernel-based SVM methods in terms of accuracy. Since KF
performance varies with the problem at hand, this study suggests a fused kernel method as a way forward.
Hence, a systematic method and optimisation techniques such as the genetic algorithm (GA) and particle swarm
optimisation (PSO) can be used to build a unified kernel. Therefore, the challenge of selecting the correct
kernels and the best logical method of merging these kernels is a future direction.
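As a hint of what a fused kernel might look like (purely illustrative; the paper does not specify a fusion scheme, and the mixing weight alpha and coefficient gamma below are invented), a convex combination of two valid kernels is itself a valid kernel and can be passed to Scikit-learn as a custom callable:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

def fused_kernel(A, B, alpha=0.5, gamma=0.1):
    """alpha * linear + (1 - alpha) * RBF; a convex combination of
    positive semi-definite kernels is itself a valid kernel."""
    linear = A @ B.T
    sq_dists = (np.sum(A ** 2, axis=1)[:, None]
                + np.sum(B ** 2, axis=1)[None, :] - 2.0 * (A @ B.T))
    rbf = np.exp(-gamma * sq_dists)
    return alpha * linear + (1.0 - alpha) * rbf

X, y = load_iris(return_X_y=True)
clf = SVC(kernel=fused_kernel).fit(X, y)  # custom Gram-matrix callable
print(round(clf.score(X, y), 3))          # training accuracy
```

A GA or PSO, as suggested above, could then search over alpha and the per-kernel parameters instead of fixing them by hand.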
ACKNOWLEDGEMENTS
We express our gratitude to everyone who contributed to the development of this paper.
REFERENCES
[1] S. Natale and A. Ballatore, “Imagining the thinking machine: Technological myths and the rise of artificial
intelligence,” Convergence, vol. 26, no. 1, pp. 3-18, 2020, doi: 10.1177/1354856517715164.
[2] E. Farrow, “To augment human capacity-Artificial intelligence evolution through causal layered analysis,” Futures,
vol. 108, pp. 61–71, 2019, doi: 10.1016/j.futures.2019.02.022.
[3] I. K. Nti, A. F. Adekoya and B. A. Weyori, “A systematic review of fundamental and technical analysis of stock
market predictions,” Artificial Intelligence Review, vol. 53, no. 4, pp. 3007-3057, 2019, doi: 10.1007/s10462-019-
09754-z.
[4] I. K. Nti, A. F. Adekoya and B. A. Weyori, “Efficient Stock-Market Prediction Using Ensemble Support Vector
Machine,” Open Computer Science, vol. 10, no. 1, pp. 153-163, Jul. 2020, doi: 10.1515/comp-2020-0199.
[5] H. Jiang, W.-K. Ching, K. F. C. Yiu and Y. Qiu, “Stationary Mahalanobis kernel SVM for credit risk evaluation,”
Applied Soft Computing, vol. 71, pp. 407-417, Oct. 2018, doi: 10.1016/j.asoc.2018.07.005.
[6] M. Elangovan, V. Sugumaran, K. I. Ramachandran and S. Ravikumar, “Effect of SVM kernel functions on
classification of vibration signals of a single point cutting tool,” Expert Systems with Applications, vol. 38, no. 12,
pp. 15202–15207, 2011, doi: 10.1016/j.eswa.2011.05.081.
[7] C. Venkatesan, P. Karthigaikumar, A. Paul, S. Satheeskumaran and R. Kumar, "ECG Signal Preprocessing and
SVM Classifier-Based Abnormality Detection in Remote Healthcare Applications," in IEEE Access, vol. 6, pp.
9767-9773, 2018, doi: 10.1109/ACCESS.2018.2794346.
[8] K. Harimoorthy and M. Thangavelu, “Multi-disease prediction model using improved SVM-radial bias technique in
healthcare monitoring system,” Journal of Ambient Intelligence and Humanized Computing, vol. 12, pp. 3715-
3723, 2020, doi: 10.1007/s12652-019-01652-0.
[9] M. Kathuria and S. Gambhir, “Critical Condition Detection Using Lion Hunting Optimizer and SVM Classifier in a
Healthcare WBAN,” International Journal of E-Health and Medical Communications, vol. 11, no. 1, pp. 52-68,
2020, doi: 10.4018/IJEHMC.2020010104.
[10] V. N. Vapnik, "An overview of statistical learning theory," in IEEE Transactions on Neural Networks, vol. 10, no.
5, pp. 988-999, Sept. 1999, doi: 10.1109/72.788640.
[11] M. E. Matheny, F. S. Resnic, N. Arora and L. Ohno-Machado, “Effects of SVM parameter optimisation on
discrimination and calibration for post-procedural PCI mortality,” Journal of Biomedical Informatics, vol. 40, no. 6,
pp. 688-697, 2007, doi: 10.1016/j.jbi.2007.05.008.
[12] R. Sangeetha and B. Kalpana, “A Comparative Study and Choice of an Appropriate Kernel for Support Vector
Machines,” in Communications in Computer and Information Science, vol. 101, pp. 549-553, 2010, doi:
10.1007/978-3-642-15766-0_93.
[13] V.-H. Nhu et al., “Comparison of Support Vector Machine, Bayesian Logistic Regression, and Alternating
Decision Tree Algorithms for Shallow Landslide Susceptibility Mapping along a Mountainous Road in the West of
Iran,” Applied Sciences, vol. 10, no. 15, p. 5047, 2020, doi: 10.3390/app10155047.
[14] A. E. Minarno, W. A. Kusuma and H. Wibowo, “Performance Comparisson Activity Recognition using Logistic
Regression and Support Vector Machine,” in 2020 3rd International Conference on Intelligent Autonomous
Systems (ICoIAS), Feb. 2020, pp. 19-24, doi: 10.1109/ICoIAS49312.2020.9081858.
[15] A. Sabat-Tomala, E. Raczko and B. Zagajewski, “Comparison of Support Vector Machine and Random Forest
Algorithms for Invasive and Expansive Species Classification Using Airborne Hyperspectral Data,” Remote
Sensing, vol. 12, no. 3, p. 516, Feb. 2020, doi: 10.3390/rs12030516.
[16] C. Aroef, Y. Rivan and Z. Rustam, “Comparing random forest and support vector machines for breast cancer
classification,” TELKOMNIKA Telecommunication Computer Electron and Control, vol. 18, no. 2, p. 815, 2020,
doi: 10.12928/telkomnika.v18i2.14785.
[17] I. K. Nti, A. F. Adekoya and B. A. Weyori, “A comprehensive evaluation of ensemble learning for stock-market
prediction,” Journal of Big Data vol. 7, no. 1, p. 20, 2020, doi: 10.1186/s40537-020-00299-5.
[18] J. Horak, J. Vrbka and P. Suler, “Support Vector Machine Methods and Artificial Neural Networks Used for the
Development of Bankruptcy Prediction Models and their Comparison,” Journal of Risk and Financial
Management, vol. 13, no. 3, p. 60, 2020, doi: 10.3390/jrfm13030060.
[19] M. Yaqoob, F. Iqbal and S. Zahir, “Comparing predictive performance of k-nearest neighbors and support vector
machine for predicting ischemic heart disease,” Research Journal in Advanced Sciences, vol. 1, no. 2, pp. 49-60,
2020.
[20] S. Shabani et al., “Modeling Pan Evaporation Using Gaussian Process Regression K-Nearest Neighbors Random
Forest and Support Vector Machines; Comparative Analysis,” Atmosphere, vol. 11, no. 1, p. 66, 2020, doi:
10.3390/atmos11010066.
[21] I. S. Al-Mejibli, D. H. Abd, J. K. Alwan and A. J. Rabash, "Performance Evaluation of Kernels in Support Vector
Machine," 2018 1st Annual International Conference on Information and Sciences (AiCIS), 2018, pp. 96-101, doi:
10.1109/AiCIS.2018.00029.
[22] B. Peng, L. Peng, G. Dao-tian and L. Yan, “Study on SVM Calibration Model Parameter for Mixed Gas,” Procedia
Engineering, vol. 15, pp. 3642-3645, 2011, doi: 10.1016/j.proeng.2011.08.682.
[23] T. Mitchell, "Machine Learning," 1st ed. McGraw Hill, 1997.
[24] R. V. Molina, “Choosing the Right Kernel: a Meta-Learning Approach to Kernel Selection in Support Vector
Machines,” University of Houston, 2015.
[25] I. K. Nti, A. F. Adekoya, B. A. Weyori and O. Nyarko-Boateng, “Applications of artificial intelligence in
engineering and manufacturing: a systematic review,” Journal of Intelligent Manufacturing, pp. 1-29, 2021, doi:
10.1007/s10845-021-01771-6.
[26] I. K. Nti, M. Teimeh, O. Nyarko-Boateng and A. F. Adekoya, “Electricity load forecasting: a systematic review,”
Journal of Electrical Systems and Information Technology, vol. 7, no. 13, pp. 1-19, Dec. 2020, doi:
10.1186/s43067-020-00021-8.
[27] C. Savas and F. Dovis, "Comparative Performance Study of Linear and Gaussian Kernel SVM Implementations for
Phase Scintillation Detection," 2019 International Conference on Localization and GNSS (ICL-GNSS), 2019, pp. 1-
6, doi: 10.1109/ICL-GNSS.2019.8752635.
[28] S. Sarafrazi and H. Nezamabadi-pour, “Facing the classification of binary problems with a GSA-SVM hybrid
system,” Mathematical and Computer Modelling, vol. 57, no. 1-2, pp. 270-278, 2013, doi:
10.1016/j.mcm.2011.06.048.
[29] L. C. Padierna, M. Carpio, A. Rojas-Domínguez, H. Puga and H. Fraire, “A novel formulation of orthogonal
polynomial kernel functions for SVM classifiers: The Gegenbauer family,” Pattern Recognition, vol. 84, pp. 211-
225, Dec. 2018, doi: 10.1016/j.patcog.2018.07.010.
[30] E. Bisong, "Building Machine Learning and Deep Learning Models on Google Cloud Platform," Google
Colaboratory, Berkeley, CA: Apress, 2019, pp. 59-64.
BIOGRAPHIES OF AUTHORS
Isaac Kofi Nti holds an HND in Electrical & Electronic Engineering from Sunyani Technical
University, a B.Sc. in Computer Science from Catholic University College, and an M.Sc. in
Information Technology from Kwame Nkrumah University of Science and Technology. Mr
Nti is a Lecturer at the Department of Computer Science and Informatics, University of Energy
and Natural Resources (UENR), Sunyani, Ghana, and is currently a Ph.D. candidate in the
same department. His research interests include artificial intelligence, energy system
modelling, intelligent information systems, social and sustainable computing, business
analytics, and data privacy and security. Email: Isaac.nti@uenr.edu.gh. ORCID iD:
https://orcid.org/0000-0001-9257-4295
Adebayo Felix Adekoya holds a B.Sc. (1994), M.Sc. (2002), and Ph.D. (2010) in Computer
Science, an MBA in Accounting & Finance (1998), and a Postgraduate Diploma in Teacher
Education (2004). In addition, he has about twenty-five (25) years of experience as a
lecturer, researcher and administrator at higher educational institutions in Nigeria and
Ghana. A. F. Adekoya is an Associate Professor of Computer Science. Currently, he serves as
the Dean of the School of Sciences and the Acting Pro-VC of the University of Energy and Natural
Resources, Sunyani, Ghana. His research interests include artificial intelligence, business &
knowledge engineering, intelligent information systems, and social and sustainable computing.
ORCID iD: https://orcid.org/0000-0002-5029-2393
Owusu Nyarko-Boateng is a Ph.D. Computer Science candidate and a lecturer at the
Department of Computer Science and Informatics, University of Energy and Natural
Resources, Ghana. He holds an HND in Electrical & Electronics Engineering, a B.Sc. in Computer
Science, a PGDE, and an M.Sc. in Information Technology. He has in-depth experience in
telecommunications transmission systems, including fiber-optic cable deployment for long-
haul and short-distance (FTTx) links, and 2G BTS, WCDMA (3G) and 4G (LTE) plant installations
and configurations. His research areas include machine learning, artificial intelligence,
computer networks and data communications, network security, fiber-optic technologies,
modelling of transmission systems, 5G & 6G technologies, expert systems, and computational
intelligence for data communications.
Benjamin Asubam Weyori received his Ph. D. and M. Phil. in Computer Engineering from
the Kwame Nkrumah University of Science and Technology (KNUST), Ghana, in 2016 and
2011, respectively. He obtained his Bachelor of Science in Computer Science from the
University for Development Studies (UDS), Tamale, Ghana, in 2006. He is currently a Senior
Lecturer and the Head of the Department of Computer Science and Informatics, the University
of Energy and Natural Resources (UENR) in Ghana. His main research interests include
artificial intelligence, computer vision (image processing), machine learning, and web
engineering. ORCID iD: https://orcid.org/0000-0001-5422-425