This paper presents a new algorithm for solving large-scale global optimization problems, based on a hybridization of simulated annealing and the Nelder-Mead algorithm. The new algorithm, called the simulated Nelder-Mead algorithm with random variables updating (SNMRVU), starts from a randomly generated initial solution, which is divided into partitions. A neighborhood zone is generated, a random number of partitions is selected, and the variable-updating process generates trial neighbor solutions. This process helps SNMRVU explore the region around the current iterate. The Nelder-Mead algorithm is applied in the final stage to improve the best solution found so far and to accelerate convergence. The performance of SNMRVU is evaluated on 27 scalable benchmark functions and compared with four algorithms. The results show that SNMRVU is promising and produces high-quality solutions at low computational cost.
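As a rough illustration of the partitioned-update idea, the sketch below runs a plain simulated annealing loop that perturbs only one randomly chosen partition of the variables per iteration (the final Nelder-Mead refinement stage is omitted). The benchmark function, partition count, step size, and cooling schedule are illustrative assumptions, not the published SNMRVU settings.

```python
import math
import random

def sphere(x):
    # Simple separable benchmark: global minimum 0 at the origin.
    return sum(v * v for v in x)

def sa_partitioned(f, dim=10, partitions=5, iters=2000, t0=10.0, cooling=0.995, seed=1):
    rng = random.Random(seed)
    x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    fx = f(x)
    best, fbest = list(x), fx
    t = t0
    size = dim // partitions          # variables per partition
    for _ in range(iters):
        # Pick a random partition and perturb only its variables.
        p = rng.randrange(partitions)
        y = list(x)
        for i in range(p * size, (p + 1) * size):
            y[i] += rng.gauss(0.0, 0.5)
        fy = f(y)
        # Metropolis acceptance rule: always take improvements,
        # sometimes take worse moves while the temperature is high.
        if fy < fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

best, fbest = sa_partitioned(sphere)
print(fbest)  # far below the ~80 expected for a random start
```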
Hand gesture recognition using discrete wavelet transform and hidden Markov models (TELKOMNIKA JOURNAL)
Computer-vision-based gesture recognition is an important part of human-computer interaction, but it still falls short on several points: image brightness, recognition time, and accuracy. The goal of this research was therefore to build a hand gesture recognition system with good performance using the discrete wavelet transform and hidden Markov models. The first step is pre-processing: the image is resized to 128x128 pixels and the skin color is segmented. The second step is feature extraction with the discrete wavelet transform, which yields a feature vector for the image. The last step is gesture classification with hidden Markov models, which selects the gesture whose model gives the highest probability for the feature matrix obtained in the feature extraction step. The system reached 72% accuracy using 150 training images and 100 test images covering five gestures. The new finding in this experiment is the effect of acquisition and pre-processing: accuracy increased by 14 percentage points over the 58% obtained on Sebastien's dataset, an improvement driven by the brightness and contrast values.
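The feature-extraction step can be sketched with one level of a 2-D discrete wavelet transform. The Haar wavelet used below is an assumption (the abstract does not name the wavelet family); the approximation band, flattened, would serve as the feature vector.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT. Returns the approximation band (ll)
    and the horizontal, vertical, and diagonal detail bands."""
    img = img.astype(float)
    # Rows: average / difference of adjacent pixel pairs.
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Columns: repeat the pairing on both row outputs.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)   # stand-in for a 128x128 hand image
ll, lh, hl, hh = haar_dwt2(img)
feature_vector = ll.ravel()           # half-size in each dimension
print(ll.shape)                       # prints (2, 2)
```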
It is rather surprising that in software engineering, standard measurement units have yet to be widely accepted and used; every other engineering discipline has its own. By and large, effort is the most commonly used parameter for measuring software initiatives. The problem, of course, is that effort is not an independent variable: it depends on who is doing the work and how it is done. This presentation looks at an approach that has been used to convert the large amount of effort data usually collected in an organization into something that can meaningfully be used for estimation and comparison purposes.
Implementations of decision support systems for various purposes can now help policy makers obtain the best alternative from a set of predefined criteria. One of the methods used in decision support systems is VIKOR (Vise Kriterijumska Optimizacija I Kompromisno Resenje). In this research, the VIKOR method produced the best results through a computationally efficient and easily understood process. It is hoped that these results will help other parties develop models and solutions of their own.
Principal component analysis: application in finance (Igor Hlivka)
Principal component analysis is a useful multivariate time-series method for examining the drivers of change in an entire dataset. The main advantage of PCA is dimensionality reduction: a large set of data is transformed into a few principal factors that explain the majority of the variability in the group. PCA has found many applications in finance, in both risk and yield-curve analytics.
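The dimensionality-reduction claim can be demonstrated in a few lines: for strongly correlated series, the first principal factor captures almost all the variance. The toy "returns" data below is an illustrative assumption.

```python
import numpy as np

# Toy data: 200 observations of 4 series driven by one common factor.
rng = np.random.default_rng(0)
base = rng.standard_normal((200, 1))
data = np.hstack([base + 0.1 * rng.standard_normal((200, 1)) for _ in range(4)])

# PCA via the eigendecomposition of the covariance matrix.
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]           # sort components by variance
explained = eigvals[order] / eigvals.sum()  # fraction of variance explained
print(explained[0])  # close to 1: one factor explains nearly everything
```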
An Elementary Introduction to Kalman Filtering (Ashish Sharma)
Kalman filtering is a state estimation technique used in many application areas, such as spacecraft navigation, motion planning in robotics, signal processing, and wireless sensor networks, because of its ability to extract useful information from noisy data and its small computational and memory requirements [12, 20, 27-29]. Recent work has used Kalman filtering in controllers for computer systems [5, 13, 14, 23]. Although many introductions to Kalman filtering are available in the literature [1-4, 6-11, 17, 21, 25, 29], they are usually focused on particular applications, such as robot motion or state estimation in linear systems, making it difficult to see how to apply Kalman filtering to other problems. Other presentations derive Kalman filtering as an application…
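The core predict/correct loop is compact enough to show in full. Below is a minimal scalar Kalman filter estimating a constant signal from noisy readings; the noise variances `q` and `r`, the true value, and the seed are illustrative assumptions.

```python
import random

def kalman_1d(measurements, q=1e-5, r=0.01, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a constant state observed in noise."""
    x, p = x0, p0                 # state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                 # predict: uncertainty grows slightly
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # correct with the measurement residual
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

random.seed(42)
true_value = 1.25
zs = [true_value + random.gauss(0.0, 0.1) for _ in range(200)]
est = kalman_1d(zs)
print(est[-1])  # converges close to the true value 1.25
```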
A Novel Cosine Approximation for High-Speed Evaluation of DCT (CSCJournals)
This article presents a novel cosine approximation for high-speed evaluation of the DCT (Discrete Cosine Transform) using Ramanujan ordered numbers. The proposed method uses Ramanujan ordered numbers to convert the angles of the cosine function to integers. These angles are then evaluated with a 4th-degree polynomial that approximates the cosine function with an approximation error on the order of 10^-3. The evaluation of the cosine function is explained through the computation of the DCT coefficients. High-speed evaluation at the algorithmic level is measured in terms of the computational complexity of the algorithm. The proposed cosine approximation increases the adder count by 13.6%, but it avoids floating-point multipliers and requires (N/2)log2 N shifts and (3N/2)log2 N - N + 1 additions to evaluate the N-point DCT coefficients, thereby improving the speed of computation.
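The 10^-3 error figure for a 4th-degree polynomial is easy to sanity-check. The least-squares fit below is an assumption for illustration (the paper's exact polynomial is not given), but it confirms that degree 4 suffices on [0, pi/2].

```python
import numpy as np

# Fit a 4th-degree polynomial to cos(x) on [0, pi/2] and measure the
# worst-case approximation error over a dense grid.
x = np.linspace(0.0, np.pi / 2, 1001)
coeffs = np.polyfit(x, np.cos(x), 4)      # degree-4 least-squares fit
approx = np.polyval(coeffs, x)
max_err = np.max(np.abs(approx - np.cos(x)))
print(max_err)  # below 1e-3, consistent with the claimed error order
```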
Different analytical and numerical methods are commonly used to solve transient heat conduction problems. Here, the Alternating Direction Implicit (ADI) scheme was adopted to solve for the temperature variation within an infinitely long bar of square cross-section; the bottom-right quadrant of the cross-section was selected. The surface of the bar was maintained at a constant temperature, and the temperature variation within the bar was evaluated over a time frame. The equation governing the two-dimensional heat conduction was solved by iterative schemes to capture the time variation. The problem was also modelled in the COMSOL Multiphysics software to validate the ADI analysis, and a graphical comparison of the COMSOL Multiphysics results with those of the ADI iterative scheme showed a high level of agreement.
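The physical setup can be reproduced in miniature. The sketch below uses an explicit finite-difference step rather than the paper's ADI scheme, purely because it is shorter; grid size, diffusivity, and step sizes are assumptions. The interior starts hot, the surface is held at a constant temperature, and the interior relaxes toward it.

```python
import numpy as np

# 2-D transient conduction on a square cross-section, surface fixed at 0.
# Explicit FTCS update (stable because alpha*dt/dx**2 = 0.2 <= 0.25).
n, alpha, dx, dt = 21, 1.0, 1.0, 0.2
T = np.zeros((n, n))
T[1:-1, 1:-1] = 100.0                  # hot interior, cold boundary
for _ in range(500):
    lap = (T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
           - 4.0 * T[1:-1, 1:-1]) / dx**2
    T[1:-1, 1:-1] += alpha * dt * lap  # interior update; boundary unchanged
print(T.max())  # the interior has cooled most of the way to the surface
```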
A simple finite element solver for thermo-mechanical problems - margonari eng... (Scilab)
In this paper we would like to show how it is possible to develop a simple but effective finite element solver to deal with thermo-mechanical problems. In many engineering situations it is necessary to solve heat conduction problems, both steady and unsteady state, to estimate the temperature field inside a medium and, at the same time, compute the induced strain and stress states.
To solve such problems, many commercial software tools are available. They provide user-friendly interfaces and flexible solvers, which can also take into account very complicated boundary conditions, such as radiation, and nonlinearities of any kind, allowing the user to model reality in an accurate and reliable way.
However, there are some situations in which the problem to be solved requires only simple, standard modeling: in these cases a light, dedicated software tool able to give reliable solutions can be sufficient. Two other desirable features of such software are access to the source code, so that new tools can easily be programmed, and, last but not least, being free of cost and licensing. This turns out to be very useful when dealing with optimization problems.
With these considerations in mind, we used the Scilab platform and gmsh (both open-source codes) to show that it is possible to build tailored software tools able to solve standard but complex problems quite efficiently.
Algebraic Fault Attack on the SHA-256 Compression Function (IJORCS)
The cryptographic hash function SHA-256 is a member of the SHA-2 hash family, which was proposed in 2000 and standardized by NIST in 2002 as a successor of SHA-1. Although a differential fault attack on the SHA-1 compression function has been proposed, it seems hard to adapt directly to SHA-256. In this paper, an efficient algebraic fault attack on the SHA-256 compression function is proposed under the word-oriented random fault model. The attack exploits the automatic tool STP, which constructs binary expressions for the word-based operations in the SHA-256 compression function and then invokes a SAT solver to solve the equations. Simulation of the new attack needs about 65 fault injections to recover the chaining value and the input message block, taking about 200 seconds on average. Moreover, based on the attack on the SHA-256 compression function, an almost universal forgery attack on HMAC-SHA-256 is presented. Our algebraic fault analysis is generic and automatic, and can be applied to other ARX-based primitives.
License plate recognition is one of the core technologies in intelligent traffic control. In this paper, a new, tunable algorithm is proposed that can detect multiple license plates in high-resolution images. The algorithm targets the current Iranian plate and some European plates, characterized by a blue region and their geometric shape. It achieves good speed because it avoids heavy pre-processing operations, such as image-enhancement filters, edge detection, and noise removal, in its early stages. The method adapts to its model, the blue section of the plate, and when several plates appear in one image it successfully detects them all. We evaluated the method on two Persian single-vehicle license plate datasets, obtaining correct recognition rates of 99.33% and 99%, respectively. We further tested the algorithm on a Persian multiple-vehicle license plate dataset and achieved a 98% accuracy rate, with approximately 99% accuracy in the character recognition stage.
Using Virtualization Technique to Increase Security and Reduce Energy Consump... (IJORCS)
This paper presents an approach to creating a secure environment on an internet-based virtual computing platform and to reducing energy consumption in green cloud computing. The proposed approach constantly checks the accuracy of stored data through a central control service inside the network environment, and checks system security by isolating individual virtual machines within a common virtual environment. The approach was simulated on two types of Virtual Machine Manager (VMM), the Quick EMUlator (QEMU) and Xen HVM (Hardware Virtual Machine), and the simulation outputs in VMInsight show that when a service is used singly, its performance overhead increases. As a secure system, the proposed approach can recognize malicious behaviors and assure service security through operational integrity measurement. Moreover, system efficiency was evaluated in terms of energy consumption on five applications (defragmentation, compression, Linux boot, decompression, and kernel boot). The conclusion is that to secure a multi-tenant environment, supervisors would otherwise have to install a separate security monitoring system for each virtual machine (VM), carrying a heavy management workload, whereas the proposed approach can watch over all VMs with just one virtual machine acting as supervisor.
Enhancement of DES Algorithm with Multi State Logic (IJORCS)
The principal goal in designing any encryption algorithm must be security against unauthorized access or attacks. The Data Encryption Standard (DES) is a symmetric-key algorithm used to secure data. Enhanced DES algorithms work by increasing the key length, using a more complex S-box design, increasing the number of states in which the information is represented, or combining these. Increasing the key length increases the number of key combinations, making a brute-force attack harder for the intruder. A more complex S-box design produces a better avalanche effect. And as the number of states in which the information is represented increases, it becomes harder for the intruder to recover the actual information. The proposed algorithm replaces the predefined XOR operation applied during the 16 rounds of the standard algorithm with a new operation, called a "hash function", that depends on two keys: one key is used in the F function, and the other consists of a combination of 16 states (0, 1, 2, ..., 14, 15) instead of the ordinary 2-state key (0, 1). This replacement adds a new level of protection and more robustness against breaking methods.
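To make the "16 states instead of 2" idea concrete, the sketch below combines 4-bit nibbles with a key digit modulo 16, the base-16 analogue of a bitwise XOR. This is only an illustration of multi-state keyed mixing; the paper's exact "hash function" construction is not specified in the abstract.

```python
# Keyed mixing over 16 states: each data nibble (0-15) is shifted by a
# key nibble modulo 16. Like XOR in base 2, the step is invertible.
def mix_nibbles(data, key):
    return [(d + k) % 16 for d, k in zip(data, key)]

def unmix_nibbles(data, key):
    return [(d - k) % 16 for d, k in zip(data, key)]

plain = [0x3, 0xA, 0xF, 0x0, 0x7]
key = [0x9, 0x2, 0xE, 0x5, 0xB]
cipher = mix_nibbles(plain, key)
print(unmix_nibbles(cipher, key) == plain)  # prints True: fully reversible
```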
Help the Genetic Algorithm to Minimize the Urban Traffic on Intersections (IJORCS)
Controlling the traffic lights at intersections is one of the main issues in achieving optimal traffic flow: intersections regulate the flow of vehicles and eliminate conflicting streams. Traffic modeling and simulation are widely used in industry, since studying a model of a system before building it is economical and affordable. The aim of this article is an intelligent way to control traffic. The first stage of the project collects statistical data at the intersection (the cycle time of each light and how long vehicles wait at a red light); the next stage searches this data for optimal values. The optimization of the parameters is performed by a genetic algorithm: the GA encodes the timings as binary variables (over the range indicated by the initial data), starts with an initial population, produces new generations with the mutation and crossover operators, and finally selects the members with the best fitness values as the solution set. The optimal output was modeled and implemented with Petri nets in the CPN Tools software. The results indicate that the project improves the performance of intersection traffic control systems, and that evolutionary methods such as genetic algorithms, applied to data collected at intersections, can reduce the waiting time behind red lights and determine an appropriate cycle.
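The GA pipeline described above (binary coding over a data-derived range, crossover, mutation, fitness selection) can be sketched on a toy version of the problem. The waiting-time model, demand weights, and all GA settings below are stand-in assumptions, not the paper's collected data.

```python
import random

random.seed(3)
ARRIVALS = [0.30, 0.20, 0.25, 0.25]   # relative demand on four approaches

def decode(bits):
    # Each 8-bit gene maps to a green time in [10, 70] seconds.
    genes = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return [10 + 60 * int("".join(map(str, g)), 2) / 255 for g in genes]

def waiting_cost(bits):
    # Stand-in model: each approach waits while its light is red,
    # weighted by its share of the demand.
    greens = decode(bits)
    cycle = sum(greens)
    return sum(a * (cycle - g) for a, g in zip(ARRIVALS, greens))

def ga(pop_size=40, length=32, gens=60, pm=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=waiting_cost)
        parents = pop[:pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < pm else g for g in child]
            children.append(child)
        pop = parents + children
    return min(pop, key=waiting_cost)

best = ga()
print(waiting_cost(best))  # approaches the model's minimum of 30
```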
A COMPARATIVE ANALYSIS OF FUZZY BASED HYBRID ANFIS CONTROLLER FOR STABILIZATI... (ijscmcjournal)
This paper presents a comparative study of the highly non-linear, complex, multivariable inverted pendulum (IP) on a cart using different soft-computing techniques. First, a fuzzy logic controller was designed using triangular and trapezoidal membership functions (MFs); the trapezoidal fuzzy controller shows better results than the triangular one. Second, an adaptive neuro-fuzzy inference system (ANFIS) controller was used to optimize the results obtained from the trapezoidal fuzzy controller. Finally, the study illustrates the effect of the MF shape on the performance parameters of the IP system. The results show that the ANFIS controller provides better results than both fuzzy controllers.
FPGA Implementation of FIR Filter using Various Algorithms: A Retrospective (IJORCS)
This paper reviews FPGA implementations of finite impulse response (FIR) filters with low cost and high performance. Its key contribution is a detailed analysis of hardware implementations of FIR filters using different algorithms, i.e., Distributed Arithmetic (DA), DA with Offset Binary Coding (DA-OBC), Common Sub-expression Elimination (CSE), and sum-of-powers-of-two (SOPOT), that use fewer resources without affecting the performance of the original FIR filter.
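All of the hardware variants above compute the same underlying operation. As a reference point, here is the direct-form FIR convolution y[n] = sum_k h[k] x[n-k]; the 3-tap coefficients are illustrative.

```python
def fir(x, taps):
    """Direct-form FIR filter: y[n] = sum_k taps[k] * x[n-k]."""
    out = []
    for n in range(len(x)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * x[n - k]
        out.append(acc)
    return out

taps = [0.25, 0.5, 0.25]           # simple 3-tap low-pass smoother
impulse = [1.0, 0.0, 0.0, 0.0]
print(fir(impulse, taps))          # prints [0.25, 0.5, 0.25, 0.0]
```

The impulse response reproduces the tap values, which is the property the DA and SOPOT schemes exploit: the multiplications by fixed coefficients can be replaced by table lookups or shift-and-add networks.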
Stochastic Approximation and Simulated Annealing (SSA KPI)
AACIMP 2010 Summer School lecture by Leonidas Sakalauskas. "Applied Mathematics" stream. "Stochastic Programming and Applications" course. Part 8.
More info at http://summerschool.ssa.org.ua
A HYBRID COA/ε-CONSTRAINT METHOD FOR SOLVING MULTI-OBJECTIVE PROBLEMS (ijfcstjournal)
In this paper, a hybrid method for solving multi-objective problems is presented. The proposed method combines the ε-constraint technique with the Cuckoo optimization algorithm: first the multi-objective problem is transformed into a single-objective problem using the ε-constraint, then the Cuckoo optimization algorithm optimizes the problem in each task, and finally the optimized Pareto frontier is drawn. The advantages of this method are its high accuracy and the dispersion of its Pareto frontier. To test the efficiency of the suggested method, many test problems were solved with it. Comparing its results with those of other similar methods shows that the Cuckoo algorithm is well suited for solving multi-objective problems.
A Self-Tuned Simulated Annealing Algorithm using Hidden Markov Model (IJECEIAES)
The simulated annealing (SA) algorithm is a well-known probabilistic heuristic. It mimics the annealing process in metallurgy to approximate the global minimum of an optimization problem. SA has many parameters which need to be tuned manually when applied to a specific problem, and the tuning can be difficult and time-consuming. This paper aims to overcome this difficulty with a self-tuning approach based on a machine learning technique, the Hidden Markov Model (HMM). The main idea is to let the SA adapt its own cooling law at each iteration according to the search history. Experiments on many benchmark functions show the efficiency of this approach compared with the classical one.
Hybrid Multi-Gradient Explorer Algorithm for Global Multi-Objective Optimization (eArtius, Inc.)
A Hybrid Multi-Gradient Explorer (HMGE) algorithm for global multi-objective optimization of objective functions over a multi-dimensional domain is presented. The proposed hybrid algorithm relies on genetic variation operators to create new solutions, but in addition to a standard random mutation operator, HMGE uses a gradient mutation operator, which improves convergence. Thus, random mutation helps find the global Pareto frontier, and gradient mutation accelerates convergence to it. In this way the HMGE algorithm combines the advantages of both gradient-based and GA-based optimization techniques: it is as fast as the pure gradient-based MGE algorithm, and it is able to find the global Pareto frontier, as genetic algorithms (GA) do. HMGE employs the Dynamically Dimensioned Response Surface Method (DDRSM) to calculate gradients. DDRSM dynamically identifies the most significant design variables and builds local approximations based only on those variables. This makes it possible to estimate gradients at the cost of 4-5 model evaluations without significant loss of accuracy. As a result, HMGE efficiently optimizes highly non-linear models with dozens or hundreds of design variables and with multiple Pareto fronts. HMGE is 2-10 times more efficient than the most advanced commercial GAs.
Stochastic Analysis of Van der Pol Oscillator Model Using Wiener-Hermite Expansion (IJERA Editor)
We study a model related to the Van der Pol oscillator under an external stochastic excitation described by a white noise process. This study is limited to finding the Gaussian behavior of the stochastic solution processes related to the model. Applying the Wiener-Hermite expansion, a deterministic system is generated to describe the Gaussian solution parameters (mean and variance). The deterministic system solution is approximated by applying the multi-step differential transform method, and the results are compared with the NDSolve Mathematica 10 package. Some case studies are considered to illustrate comparisons of the obtained results related to the Gaussian behavior parameters.
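For reference, the model this abstract analyzes is commonly written as the Van der Pol equation driven by white noise; under a first-order Wiener-Hermite expansion, the Gaussian parameters it reports (mean and variance) come directly from the first two kernels. The symbols ε (damping), σ (noise intensity), and the first-order truncation below are generic WHE notation, not taken verbatim from the paper:

```latex
\ddot{x}(t) - \varepsilon\left(1 - x^{2}(t)\right)\dot{x}(t) + x(t) = \sigma\, n(t),
\qquad n(t)\ \text{Gaussian white noise},
```

with the first-order expansion and its Gaussian parameters

```latex
x(t) \approx x^{(0)}(t) + \int_{0}^{t} x^{(1)}(t;t_{1})\, H^{(1)}(t_{1})\,\mathrm{d}t_{1},
\qquad
\mathbb{E}[x(t)] = x^{(0)}(t),
\qquad
\operatorname{Var}[x(t)] = \int_{0}^{t} \bigl(x^{(1)}(t;t_{1})\bigr)^{2}\,\mathrm{d}t_{1}.
```

The deterministic system mentioned in the abstract is the pair of equations obtained for x⁽⁰⁾ and x⁽¹⁾ after projecting onto these kernels.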
Adapted Branch-and-Bound Algorithm Using SVM With Model Selection (IJECEIAES)
The Branch-and-Bound algorithm is the basis for the majority of solution methods in mixed integer linear programming, and it has proven its efficiency in different fields. It builds a tree of nodes step by step by adopting two strategies: a variable selection strategy and a node selection strategy. In our previous work, we experimented with a methodology for learning branch-and-bound strategies, using regression-based support vector machines twice. That methodology, firstly, exploits information from previous executions of the Branch-and-Bound algorithm on other instances; secondly, it creates an information channel between the node selection strategy and the variable branching strategy; and thirdly, it gives good results in terms of running time compared to the standard Branch-and-Bound algorithm. In this work, we focus on increasing SVM performance by using cross-validation coupled with model selection.
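As a minimal illustration of the k-fold cross-validation used here for model selection, the sketch below scores a toy classifier under contiguous folds. A 1-nearest-neighbour rule stands in for the paper's SVM, and the data and fold count are invented for the demo:

```python
def kfold_indices(n, k):
    """Split range(n) into k contiguous folds of equal size."""
    fold = n // k
    return [list(range(i * fold, (i + 1) * fold)) for i in range(k)]

def cv_score(data, labels, classify, k=5):
    """Mean accuracy of `classify(train, point)` under k-fold CV."""
    folds = kfold_indices(len(data), k)
    correct = total = 0
    for test_idx in folds:
        train = [(data[i], labels[i])
                 for i in range(len(data)) if i not in test_idx]
        for i in test_idx:
            correct += classify(train, data[i]) == labels[i]
            total += 1
    return correct / total

def nearest_neighbour(train, x):
    # predict the label of the closest training point (1-D data)
    return min(train, key=lambda p: abs(p[0] - x))[1]

# toy 1-D data: class 0 clusters near 0, class 1 near 10
data = [0.1, 0.2, 0.3, 0.4, 0.5, 9.5, 9.6, 9.7, 9.8, 9.9]
labels = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
acc = cv_score(data, labels, nearest_neighbour, k=5)
```

Model selection then amounts to running `cv_score` once per candidate hyperparameter setting and keeping the best-scoring one.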
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BEST COMPROMISE SOLUTION (gerogepatton)
A support vector machine (SVM) learns the decision surface from two different classes of input points; in several applications, some of the input points are misclassified. In this paper a bi-objective quadratic programming model is utilized, and different feature quality measures are optimized simultaneously using the weighting method for solving the bi-objective quadratic programming problem. An important contribution of the proposed model is that different efficient support vectors are obtained by changing the weighting values. The numerical examples give evidence of the effectiveness of the weighting parameters in reducing the misclassification between the two classes of input points. An interactive procedure is added to identify the best compromise solution from the generated efficient solutions.
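The weighting method the abstract relies on can be illustrated on a toy bi-objective problem: scalarize the two objectives with a weight w, minimize for a sweep of weights, and collect the efficient solutions. The quadratic objectives and grid below are illustrative, not the paper's QP model:

```python
def weighted_min(w, grid):
    """Minimize w*f1 + (1-w)*f2 over a grid; f1, f2 are toy objectives
    standing in for the paper's feature-quality measures."""
    f1 = lambda x: x * x            # e.g. a margin-related term
    f2 = lambda x: (x - 2.0) ** 2   # e.g. a misclassification-related term
    return min(grid, key=lambda x: w * f1(x) + (1 - w) * f2(x))

grid = [i / 100.0 for i in range(-100, 301)]   # search grid on [-1, 3]
# each weight yields one efficient solution; sweeping w traces the front
front = sorted({round(weighted_min(w / 10.0, grid), 2) for w in range(11)})
```

Here the efficient set is the segment [0, 2] between the two individual minimizers; an interactive procedure as in the paper would then pick one compromise point from `front`.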
A Comparison between FPPSO and B&B Algorithm for Solving Integer Programming Problems (Editor IJCATR)
The Branch and Bound technique (B&B) is commonly used for intelligent search when finding a set of integer solutions within a space of interest. The corresponding binary tree structure provides natural parallelism, allowing concurrent evaluation of sub-problems using parallel computing technology. The Flower Pollination Algorithm is a recently developed method in the field of computational intelligence. This paper presents an improved version of the flower pollination meta-heuristic algorithm, FPPSO, for solving integer programming problems. The proposed algorithm combines the standard flower pollination algorithm (FP) with the particle swarm optimization (PSO) algorithm to improve searching accuracy. Numerical results show that FPPSO is able to obtain the optimal results in comparison to traditional methods (branch and bound) and other harmony search algorithms. Moreover, the benefit of the proposed algorithm is its ability to obtain the optimal solution with less computation, which saves time in comparison with the branch and bound algorithm.
Keywords: branch and bound; flower pollination algorithm; meta-heuristics; optimization; particle swarm optimization; integer programming.
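For contrast with FPPSO, the branch-and-bound baseline mentioned above can be sketched on a classic 0/1 knapsack instance, bounding each node with the fractional (LP) relaxation of the remaining items. The instance is a textbook example, not taken from the paper:

```python
def knapsack_bb(values, weights, capacity):
    """Depth-first branch-and-bound for 0/1 knapsack, pruning nodes whose
    fractional-relaxation bound cannot beat the incumbent."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    best = 0

    def bound(i, cap, val):
        # LP relaxation: take remaining items fractionally by value density
        for j in range(i, len(v)):
            if w[j] <= cap:
                cap -= w[j]
                val += v[j]
            else:
                return val + v[j] * cap / w[j]
        return val

    def dfs(i, cap, val):
        nonlocal best
        best = max(best, val)
        if i == len(v) or bound(i, cap, val) <= best:
            return                                 # leaf or pruned node
        if w[i] <= cap:
            dfs(i + 1, cap - w[i], val + v[i])     # branch: take item i
        dfs(i + 1, cap, val)                       # branch: skip item i

    dfs(0, capacity, 0)
    return best

opt = knapsack_bb([60, 100, 120], [10, 20, 30], 50)
```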
Multi-objective predictive control: a solution using metaheuristics (ijcsit)
The application of multi-objective model predictive control approaches is significantly limited by the computation time associated with optimization algorithms. Metaheuristics are general-purpose heuristics that have been successfully used to solve difficult optimization problems in reasonable computation time. In this work, we use and compare two multi-objective metaheuristics, Multi-Objective Particle Swarm Optimization (MOPSO) and the Multi-Objective Gravitational Search Algorithm (MOGSA), to generate a set of approximately Pareto-optimal solutions in a single run. Two examples are studied: a nonlinear system consisting of two mobile robots tracking trajectories and avoiding obstacles, and a linear multivariable system. The computation times and the quality of the solutions, in terms of the smoothness of the control signals and the precision of tracking, show that MOPSO can be an alternative for real-time applications.
MULTIPROCESSOR SCHEDULING AND PERFORMANCE EVALUATION USING ELITIST NON-DOMINATED SORTING GENETIC ALGORITHM (ijcsa)
Task scheduling plays an important part in the improvement of parallel and distributed systems. The task scheduling problem has been shown to be NP-hard, and solving it with deterministic techniques is time-consuming. Algorithms have been developed to schedule tasks in distributed environments, but they focus on a single objective; the problem becomes more complex when a bi-objective formulation is considered. This paper presents a bi-objective independent task scheduling algorithm using the elitist non-dominated sorting genetic algorithm (NSGA-II) to minimize makespan and flowtime. The algorithm generates Pareto globally optimal solutions for this bi-objective task scheduling problem. NSGA-II is evaluated on a set of benchmark instances, and the experimental results show that it generates efficient optimal schedules.
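The non-dominated sorting at the heart of NSGA-II can be sketched as follows. This naive version (O(n²) per front, simpler than NSGA-II's bookkeeping and not the paper's implementation) peels off successive Pareto fronts of minimization objectives such as (makespan, flowtime):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated_sort(points):
    """Return fronts: front 0 is the Pareto-optimal set, and so on."""
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# toy (makespan, flowtime) pairs for five candidate schedules
pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
fronts = nondominated_sort(pts)
```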
MARGINAL PERCEPTRON FOR NON-LINEAR AND MULTI-CLASS CLASSIFICATION (ijscai)
The generalization error of a classifier can be reduced by a larger margin of the separating hyperplane. The proposed classification algorithm introduces a margin into the classical perceptron algorithm, reducing generalization error by maximizing the margin of the separating hyperplane. The algorithm uses the same update rule as the perceptron and converges in a finite number of updates to solutions possessing any desirable fraction of the margin. This solution is then further optimized to obtain the maximum possible margin. The algorithm can handle linear, non-linear and multi-class problems. Experimental results place the proposed classifier on par with the support vector machine, and even better in some cases. Some preliminary experimental results are briefly discussed.
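The margin-driven update described above can be sketched for the linearly separable 2-D case: keep applying the perceptron rule while any point's functional margin is at most γ, so training continues past mere correctness until a margin is secured. The data and γ below are illustrative:

```python
def margin_perceptron(points, labels, gamma=0.5, lr=1.0, epochs=100):
    """Perceptron that updates while any functional margin is <= gamma
    (a sketch of margin-driven training, not the paper's full algorithm)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        changed = False
        for (x1, x2), y in zip(points, labels):
            if y * (w[0] * x1 + w[1] * x2 + b) <= gamma:
                w[0] += lr * y * x1      # classic perceptron update rule
                w[1] += lr * y * x2
                b += lr * y
                changed = True
        if not changed:                  # every point clears the margin
            break
    return w, b

pts = [(2.0, 2.0), (3.0, 3.0), (-2.0, -2.0), (-3.0, -1.0)]
ys = [1, 1, -1, -1]
w, b = margin_perceptron(pts, ys)
correct = all(y * (w[0] * x1 + w[1] * x2 + b) > 0
              for (x1, x2), y in zip(pts, ys))
```

With γ = 0 this reduces to the classical perceptron; growing γ forces larger-margin solutions, which is the mechanism the abstract credits for lower generalization error.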
MULTI-OBJECTIVE ENERGY EFFICIENT OPTIMIZATION ALGORITHM FOR COVERAGE CONTROL ... (ijcseit)
Many studies have been done in the area of Wireless Sensor Networks (WSNs) in recent years. In this kind of network, some of the key objectives that need to be satisfied are area coverage, the number of active sensors, and the energy consumed by nodes. In this paper, we propose an NSGA-II based multi-objective algorithm for optimizing all of these objectives simultaneously. The efficiency of our algorithm is demonstrated in the simulation results: it finds the optimal balance point among the maximum coverage rate, the least energy consumption, and the minimum number of active nodes while maintaining the connectivity of the network.
Hybrid method for achieving Pareto front on economic emission dispatch (IJECEIAES)
In this paper a hybrid method combining the Modified Non-dominated Sorting Genetic Algorithm (MNSGA-II) and Modified Population Variant Differential Evolution (MPVDE) is applied to find the best optimal solution of the multi-objective economic emission load dispatch optimization problem. In this technique, the latter evolves an assigned percentage of the population and the former handles the remainder. To overcome premature convergence, a diversity-preserving operator is employed, and the best compromise solution is selected from the trade-off curve using fuzzy set theory. The methodology is validated on the IEEE 30-bus test system with six generators, the IEEE 118-bus test system with fourteen generators, and a forty-generator test system. The solutions are compared with existing metaheuristic methods such as the Strength Pareto Evolutionary Algorithm-II, multi-objective differential evolution, multi-objective particle swarm optimization, fuzzy clustering particle swarm optimization, and the non-dominated sorting genetic algorithm-II.
Similar to Hybrid Simulated Annealing and Nelder-Mead Algorithm for Solving Large-Scale Global Optimization Problems (20)
Voice Recognition System using Template Matching (IJORCS)
It is easy for humans to recognize a familiar voice, but using computer programs to identify a voice by comparing it with others is a herculean task, owing to the problems encountered when developing algorithms to recognize the human voice. It is impossible to say a word the same way on two different occasions, and human speech analyzed by computer gives different interpretations depending on the varying speed of speech delivery. This paper gives a detailed description of the process behind the implementation of an effective voice recognition algorithm. The algorithm utilizes the discrete Fourier transform to compare the frequency spectra of two voice samples, because the spectrum remains largely unchanged when the speech is slightly varied. Chebyshev's inequality is then used to determine whether the two voices came from the same person. The algorithm is implemented and tested using MATLAB.
Channel Aware MAC Protocol for Maximizing Throughput and Fairness (IJORCS)
Proper channel utilization and a queue-length-aware routing protocol are challenging tasks in a MANET. To overcome this drawback, we extend previous work by improving the MAC protocol to maximize throughput and fairness. In this work we estimate the channel condition and contention for channel-aware packet scheduling, and the queue length is also calculated for a routing protocol that takes it into account. The channel is scheduled based on the channel condition, and routing is carried out by considering the queue length, which provides a measurement of the traffic load at the mobile node itself. Depending on this load, the node with the lesser load is selected for routing; this effectively balances the load and improves the throughput of the ad hoc network.
A Review and Analysis on Mobile Application Development Processes using Agile Methodologies (IJORCS)
Over the last decade, the mobile telecommunication industry has seen rapid growth and proved to be a highly competitive, uncertain and dynamic environment. Besides its advancement, it has raised a number of questions and gained attention both in industry and in research. The development process of mobile applications differs from that of traditional software, as users expect the same features as in their desktop applications plus additional mobile-specific functionality. Advanced mobile applications require integration with existing enterprise computing systems such as databases, legacy applications and Web services. In addition, the lifecycle of a mobile application moves much faster than that of a traditional Web application, and the associated lifecycle management must be adjusted accordingly. Security and application testing are more challenging in mobile applications than in Web applications, since the technology in mobile devices progresses rapidly and developers must stay in touch with the latest developments, news and trends in their area of work. With the rising competitiveness of the software market, researchers are seeking more flexible methods that can adjust to dynamic situations where software requirements change over time, producing valuable software in a short duration and within a low budget. The intrinsic uncertainty and complexity of any software project therefore require an iterative development plan to cope with uncertainty and a large number of unknown variables. Agile methodologies were thus introduced to meet the new requirements of software development companies; they aim to facilitate software development processes in which changes are acceptable at any stage and to provide a structure for highly collaborative software development.
Therefore, the present paper reviews and analyses different prevalent methodologies utilizing agile techniques that are currently in use for the development of mobile applications. It provides a detailed review and analysis of the use of agile methodologies in the proposed processes associated with mobile application development and highlights their benefits and constraints. In addition, based on this analysis, future research needs are identified and discussed.
Congestion Prediction and Adaptive Rate Adjustment Technique for Wireless Sensor Networks (IJORCS)
In general, nodes in Wireless Sensor Networks (WSNs) are equipped with limited battery and computation capabilities, but the occurrence of congestion consumes more energy and computation power through retransmission of data packets. Thus, congestion should be regulated to improve network performance. In this paper, we propose a congestion prediction and adaptive rate adjustment technique for Wireless Sensor Networks. The technique predicts the congestion level using a fuzzy logic system: node degree, data arrival rate and queue length are taken as inputs to the fuzzy system, and the congestion level is obtained as the outcome. When the congestion level is between the moderate and maximum ranges, the adaptive rate adjustment technique is triggered. Our technique prevents congestion by controlling the data sending rate and also avoids unnecessary packet losses. By simulation, we demonstrate the proficiency of our technique: it increases system throughput and network performance significantly.
A Study of Routing Techniques in Intermittently Connected MANETs (IJORCS)
A Mobile Ad hoc Network (MANET) is a self-configuring, infrastructure-less network of mobile devices connected wirelessly. MANETs are a kind of wireless ad hoc network that usually has a routable networking environment on top of a link-layer ad hoc network. Routing approaches in MANETs fall into three main categories: reactive protocols, proactive protocols and hybrid protocols. These traditional routing schemes are not applicable to the so-called Intermittently Connected Mobile Ad hoc Network (ICMANET). An ICMANET is a form of Delay Tolerant Network in which there never exists a complete end-to-end path between two nodes wishing to communicate. Intermittent connectivity arises when the network is sparse or highly mobile, and routing in such a spasmodic environment is arduous. In this paper, we survey the prevailing routing approaches for ICMANETs along with their benefits and detriments.
Improving the Efficiency of Spectral Subtraction Method by Combining it with Wavelet Thresholding Technique (IJORCS)
In the field of speech signal processing, the spectral subtraction method (SSM) has been successfully applied to suppress noise that is added acoustically. SSM reduces noise to a satisfactory level, but musical noise is a major drawback of the method. Implementing spectral subtraction requires transforming the speech signal from the time domain to the frequency domain; the wavelet transform, on the other hand, displays another aspect of the speech signal. In this paper we apply a new approach in which SSM is cascaded with a wavelet thresholding technique (WTT) to improve the quality of the speech signal by removing the problem of musical noise to a great extent. The results of this proposed system have been simulated in MATLAB.
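The spectral subtraction step (SSM) described above can be sketched with a naive DFT: subtract an estimated noise magnitude spectrum, floor negative magnitudes at zero (the isolated residual peaks left by this floor are what produce "musical noise"), and resynthesize with the noisy phase. The DC "noise" with an exactly known spectrum is an illustrative toy, not a speech signal:

```python
import cmath
import math

def dft(x):
    """Naive O(n^2) discrete Fourier transform, fine for a demo."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def spectral_subtract(noisy, noise_mag):
    """Magnitude spectral subtraction: subtract the noise magnitude
    estimate, floor at zero, keep the noisy phase."""
    out = []
    for Xk, Nk in zip(dft(noisy), noise_mag):
        mag = max(abs(Xk) - Nk, 0.0)          # the zero floor
        out.append(mag * cmath.exp(1j * cmath.phase(Xk)))
    return idft(out)

n = 32
clean = [math.cos(2 * math.pi * 4 * t / n) for t in range(n)]
noisy = [c + 0.5 for c in clean]              # additive DC offset as toy noise
noise_mag = [0.5 * n if k == 0 else 0.0 for k in range(n)]  # its known spectrum
recovered = spectral_subtract(noisy, noise_mag)
err = max(abs(r - c) for r, c in zip(recovered, clean))
```

In the paper's pipeline the wavelet thresholding stage would then post-process `recovered` to suppress the musical-noise residue that real (estimated, not exact) noise spectra leave behind.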
An Adaptive Load Sharing Algorithm for Heterogeneous Distributed System (IJORCS)
Given the limits on designing ever-faster computers, one has to find ways to maximize the performance of the available hardware. A distributed system consists of several autonomous nodes, where some nodes are busy processing while others are idle. To make better use of the hardware, tasks from an overloaded node are sent to an under-loaded node with less processing weight, minimizing the response time of the tasks. Load balancing is a tool used effectively to balance load among systems: dynamic load balancing takes the current system state into account when migrating tasks from heavily loaded nodes to lightly loaded ones. In this paper, we devise an adaptive load-sharing algorithm that balances load by taking into consideration the connectivity among nodes, the processing capacity of each node, and the link capacity.
The Design of Cognitive Social Simulation Framework using Statistical Methodologies (IJORCS)
Modeling the behavior of a cognitive architecture in the context of social simulation using statistical methodologies is currently a growing research area. A cognitive architecture for an intelligent agent normally involves an artificial computational process which embodies theories of cognition in computer algorithms over a state space. Cognitive systems with large state spaces, however, face problems such as large tables and data sparsity. Hence, in this paper we propose a method using a value-iterative approach based on the Q-learning algorithm with function approximation to handle cognitive systems with large state spaces. Experimental results in the application domain of academic science verify that the proposed approach performs better than existing approaches.
An Enhanced Framework for Improving Spatio-Temporal Queries for Global Positioning Systems (IJORCS)
To efficiently process continuous spatio-temporal queries, we need to handle large numbers of moving objects and continuous updates on these queries efficiently and effectively. In this paper, we propose a framework that employs a new indexing algorithm built on top of SQL Server 2008, avoiding the overhead of R-Tree indexing. To answer range queries, we utilize the dynamic materialized view concept to handle update queries efficiently. We propose an adaptive safe region to reduce communication costs between client and server and to minimize the position update load, and we use caching of results to enhance the overall performance of the framework. To handle concurrent spatio-temporal queries, we utilize the publish/subscribe paradigm to group similar queries and process these requests efficiently. Experiments show that the proposed framework outperforms the R-Tree index and produces promising, satisfactory results.
A PSO-Based Subtractive Data Clustering Algorithm (IJORCS)
There is a tremendous proliferation in the amount of information available on the largest shared information source, the World Wide Web. Fast, high-quality clustering algorithms play an important role in helping users effectively navigate, summarize and organize this information. Recent studies have shown that partitional clustering algorithms such as k-means are the most popular algorithms for clustering large datasets. The major problem with partitional clustering algorithms is that they are sensitive to the selection of the initial partitions and prone to premature convergence to local optima. Subtractive clustering is a fast, one-pass algorithm for estimating the number of clusters and the cluster centers for any given set of data; the cluster estimates can be used to initialize iterative optimization-based clustering methods and model identification methods. In this paper, we present a hybrid Subtractive + PSO (Particle Swarm Optimization) clustering algorithm that performs fast clustering. For comparison, we applied the Subtractive + PSO clustering algorithm, PSO, and subtractive clustering on three different datasets. The results illustrate that the Subtractive + PSO clustering algorithm generates the most compact clustering results compared to the other algorithms.
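The subtractive clustering pass this hybrid builds on can be sketched in 1-D using Chiu-style potentials: each point's potential is a sum of Gaussian kernels over the data, the highest-potential point becomes a centre, and its influence is then subtracted before repeating. The radii, stopping threshold, and data below are illustrative assumptions:

```python
import math

def subtractive_clustering(points, ra=1.0, rb=1.5, eps=0.15):
    """One-pass subtractive clustering on 1-D data: potential peaks
    become centres; stop when the peak falls below eps * first peak."""
    alpha = 4.0 / ra ** 2
    beta = 4.0 / rb ** 2
    pot = [sum(math.exp(-alpha * (x - y) ** 2) for y in points)
           for x in points]
    centres = []
    p_first = max(pot)
    while True:
        p_star = max(pot)
        if p_star < eps * p_first:
            break
        c = points[pot.index(p_star)]
        centres.append(c)
        # revise potentials: subtract the chosen centre's influence
        pot = [p - p_star * math.exp(-beta * (points[i] - c) ** 2)
               for i, p in enumerate(pot)]
    return centres

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]     # two well-separated groups
centres = subtractive_clustering(data)
```

In the hybrid of the abstract, centres like these would seed the PSO-refined clustering instead of random initial partitions.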
Dynamic Map and Diffserv Based AR Selection for Handoff in HMIPv6 Networks (IJORCS)
In HMIPv6 networks, most existing handoff decision mechanisms deal mainly with the selection of the Mobility Anchor Point (MAP), ignoring the selection of the access router (AR) under each MAP. In this paper, we propose a new mechanism called "Dynamic MAP and Diffserv based AR selection for Handoff in HMIPv6 networks" that deals with selecting both the MAP and the ARs. The MAP is selected dynamically by checking load, session mobility ratio (SMR), binding update cost and location rate. After selecting the best MAP, the Diffserv approach is used to select the AR under that MAP based on its resource availability; the AR is implemented at the edge router of the Diffserv domain. DiffServ can provide low latency to critical network traffic such as voice or streaming media while providing simple best-effort service to non-critical services such as web traffic or file transfers. With this mechanism, better resource utilization and throughput can be attained during handoff in HMIPv6 networks.
From Physical to Virtual Wireless Sensor Networks using Cloud Computing (IJORCS)
In the modern world, billions of physical sensors are used for various purposes: environment monitoring, healthcare, education, defense, manufacturing, smart homes, precision agriculture and others. Nonetheless, they are frequently used only by their own applications, overlooking the significant possibility of sharing resources in order to ensure the availability and performance of physical sensors. This paper assumes that the immense power of the Cloud can only be fully exploited if it is seamlessly integrated into our physical lives. The principal merit of this work is a novel architecture in which users can easily share several types of physical sensors, so that many new services can be provided via a virtualized structure that allows allocation of sensor resources to different users and applications under flexible usage scenarios, within which users can easily collect, access, process, visualize, archive, share and search large amounts of sensor data from different applications. Moreover, an implementation has been achieved using the Arduino ATmega328 as the hardware platform and Eucalyptus/OpenStack with Orchestra/Juju for the private sensor Cloud. This private Cloud was then connected to well-known public clouds such as Amazon EC2, ThingSpeak, SensorCloud and Pachube. Testing was 80% successful. Future work would improve the effectiveness of virtual sensors by applying optimization techniques and other methods.
Prediction of Atmospheric Pressure at Ground Level using Artificial Neural Networks (IJORCS)
Prediction of atmospheric pressure is an important and challenging task that needs much attention and study for analyzing atmospheric conditions. The advent of digital computers and the development of data-driven artificial intelligence approaches such as Artificial Neural Networks (ANN) have helped in the numerical prediction of pressure; however, very little work has been done in this area so far. The present study develops an ANN model based on past observations of several meteorological parameters, such as temperature, humidity, air pressure and vapour pressure, as inputs for training. The novel architecture of the proposed model contains several multilayer perceptron networks (MLP) to realize better performance, and the model is enriched by analysis of an alternative hybrid model of k-means clustering and MLP. The improvement in prediction accuracy is demonstrated by the automatic selection of the appropriate cluster.
Ant Colony with Colored Pheromones Routing for Multi-Objective Quality of Service in Wireless Sensor Networks (IJORCS)
In this article, we present a new ant-routing algorithm with colored pheromones and clustering techniques for satisfying users' Quality of Service (QoS) requirements in Wireless Sensor Networks (WSNs). An important problem is to detect the best route from a source node to the destination node. Moreover, traffic load is non-uniformly distributed and different traffic may require different performance; therefore, different classes of traffic with their own QoS requirements are assumed. The novel protocol applies an ant colony optimization meta-heuristic driven by energy saving and multiple objectives, so that the resulting QoS routing protocol for WSNs is highly adaptive, preserves residual power, and chiefly decreases end-to-end delay. These metrics are handled through colored pheromones adapted to the traffic classes. Moreover, we reinforce the proposed method against scalability issues with clustering techniques: a proactive route discovery algorithm is used within clusters and a reactive discovery mechanism between clusters. Compared to existing QoS routing protocols, the novel algorithm, designed for various service categories such as real-time (RT) and best-effort (BE) traffic, results in a lower packet deadline miss ratio, higher energy efficiency, better QoS and a longer lifetime.
Design of a New Image Encryption using Fuzzy Integral Permutation with Coupled Chaotic Maps (IJORCS)
This article introduces a novel image encryption algorithm based on DNA addition combined with a coupled two-dimensional piecewise nonlinear chaotic map. The algorithm consists of two parts. In the first part, a DNA sequence matrix is obtained by encoding each color component and is divided into equal blocks; a sequence generated by the Sugeno fuzzy integral and the DNA sequence addition operation are then used to add these blocks. Next, the DNA sequence matrix from the previous step is decoded, and the complement operation is applied to the result of the added matrix using the Sugeno fuzzy integral. In the second part, the three modified color components are encrypted in a coupled fashion so as to strengthen the security of the cryptosystem. The histogram, correlation and avalanche criterion are observed to satisfy the security and performance requirements (avalanche criterion > 0.49916283). The experimental results obtained on the CVG-UGR image databases reveal that the proposed algorithm is suitable for practical use in protecting digital image information over the Internet.
Can “Feature” be used to Model the Changing Access Control Policies? (IJORCS)
Access control policies (ACPs) regulate access to data and resources in information systems. ACPs are framed from the functional requirements and the organizational security and privacy policies. Including ACPs in the early phases of software development has been found beneficial, leading to secure development of information systems. Many approaches are available for including ACPs in the requirements and design phases; they rely on UML artifacts, aspects, and also “Feature” for this purpose. But earlier modeling approaches are limited in expressing evolving ACPs arising from organizational policy changes and business process modifications. In this paper, we analyze whether “Feature”, defined as an increment in program functionality, can be used as a modeling entity to represent evolving access control requirements. We discuss the two prominent approaches that use Feature in modeling ACPs and present a comparative analysis of the suitability of Features in the context of changing ACPs. We conclude with our findings and provide directions for further research.
Ontology Based Information Extraction for Disease Intelligence IJORCS
Disease Intelligence (DI) is based on the acquisition and aggregation of fragmented knowledge of diseases from multiple sources all over the world to provide valuable information to doctors, researchers, and the information-seeking community. Some diseases change their characteristics rapidly at different places in the world and are reported in documents as unrelated and heterogeneous information, which may go unnoticed and may not be quickly available. This research presents an ontology-based theoretical framework in the context of medical intelligence and country/region. The ontology is designed for storing information about rapidly spreading and changing diseases, incorporating existing disease taxonomies together with genetic information of both humans and infectious organisms. It further maps disease symptoms to diseases and drug effects to disease symptoms. The machine-understandable disease ontology, presented as a website, allows drug effects to be evaluated against disease symptoms and exposes genetic involvement in human diseases. Infectious agents that have no known place in an existing classification but have genetic data would still be identified as organisms through the intelligence of this system. It will further help researchers on the subject to try out different solutions for curing diseases.
Multi-source connectivity as the driver of solar wind variability in the heliosphere
The ambient solar wind that fills the heliosphere originates from multiple sources in the solar corona and is highly structured. It is often described as high-speed, relatively homogeneous plasma streams from coronal holes and slow-speed, highly variable streams whose source regions are under debate. A key goal of the ESA/NASA Solar Orbiter mission is to identify solar wind sources and understand what drives the complexity seen in the heliosphere. By combining magnetic field modelling and spectroscopic techniques with high-resolution observations and measurements, we show that the solar wind variability detected in situ by Solar Orbiter in March 2022 is driven by spatio-temporal changes in the magnetic connectivity to multiple sources in the solar atmosphere. The magnetic field footpoints connected to the spacecraft moved from the boundaries of a coronal hole to one active region (12961) and then across to another region (12957). This is reflected in the in situ measurements, which show the transition from fast to highly Alfvénic and then to slow solar wind, disrupted by the arrival of a coronal mass ejection. Our results describe solar wind variability at 0.5 au but are applicable to near-Earth observatories.
This presentation gives a brief overview of the structural and functional attributes of nucleotides, the structure and function of genetic materials, and the impact of UV rays and pH upon them.
Introduction:
RNA interference (RNAi) or Post-Transcriptional Gene Silencing (PTGS) is an important biological process for modulating eukaryotic gene expression.
It is a highly conserved process of post-transcriptional gene silencing in which double-stranded RNA (dsRNA) causes sequence-specific degradation of mRNA sequences.
dsRNA-induced gene silencing (RNAi) has been reported in a wide range of eukaryotes, including worms, insects, mammals, and plants.
This process mediates resistance to both endogenous parasitic and exogenous pathogenic nucleic acids, and regulates the expression of protein-coding genes.
What are small ncRNAs?
micro RNA (miRNA)
short interfering RNA (siRNA)
Properties of small non-coding RNA:
Involved in silencing mRNA transcripts.
Called “small” because they are usually only about 21-24 nucleotides long.
Synthesized by first cutting up longer precursor sequences (like the 61nt one that Lee discovered).
Silence an mRNA by base pairing with some sequence on the mRNA.
Discovery of siRNA?
The first small RNA:
In 1993 Rosalind Lee (Victor Ambros lab) was studying a non- coding gene in C. elegans, lin-4, that was involved in silencing of another gene, lin-14, at the appropriate time in the
development of the worm C. elegans.
Two small transcripts of lin-4 (22nt and 61nt) were found to be complementary to a sequence in the 3' UTR of lin-14.
Because lin-4 encoded no protein, she deduced that it must be these transcripts that are causing the silencing by RNA-RNA interactions.
Types of RNAi (non-coding RNA):
miRNA: 23-25 nt long; trans-acting; binds the target mRNA with mismatches; inhibits translation.
siRNA: 21 nt long; cis-acting; binds the target mRNA with a perfectly complementary sequence.
piRNA: 25-36 nt long; expressed in germ cells; regulates transposon activity.
MECHANISM OF RNAI:
First the double-stranded RNA teams up with a protein complex named Dicer, which cuts the long RNA into short pieces.
Then another protein complex called RISC (RNA-induced silencing complex) discards one of the two RNA strands.
The RISC-docked, single-stranded RNA then pairs with the homologous mRNA and destroys it.
THE RISC COMPLEX:
RISC is a large (>500 kDa) multi-protein RNA-binding complex that triggers mRNA degradation.
The double-stranded siRNA is unwound by an ATP-independent helicase.
The active component of RISC is the Argonaute (Ago) protein, an endonuclease that cleaves the target mRNA.
DICER: endonuclease (RNase Family III)
Argonaute: Central Component of the RNA-Induced Silencing Complex (RISC)
One strand of the dsRNA produced by Dicer is retained in the RISC complex in association with Argonaute
ARGONAUTE PROTEIN:
1. PAZ (PIWI/Argonaute/Zwille): recognition of the target mRNA.
2. PIWI (P-element induced wimpy testis): breaks the phosphodiester bond of the mRNA (RNase H activity).
MiRNA:
Double-stranded RNAs are naturally produced in eukaryotic cells during development, and they have a key role in regulating gene expression.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4-0.9 µm) and novel JWST images with 14 filters spanning 0.8-5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at >2.3 µm to construct an ultradeep image, reaching as deep as ≈31.4 AB mag in the stack and 30.3-31.0 AB mag (5σ, r = 0.1" circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5-15. These objects show compact half-light radii of R1/2 ~ 50-200 pc, stellar masses of M* ~ 10^7-10^8 M⊙, and star-formation rates of SFR ~ 0.1-1 M⊙/yr. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward-modeling approach to infer the properties of the evolving luminosity function, without binning in redshift or luminosity, that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ~2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for the evolution of the dark matter halo mass function.
Hybrid Simulated Annealing and Nelder-Mead Algorithm for Solving Large-Scale Global Optimization Problems
International Journal of Research in Computer Science
eISSN 2249-8265 Volume 4 Issue 3 (2014) pp. 1-11
www.ijorcs.org, A Unit of White Globe Publications
doi: 10.7815/ijorcs.43.2014.084
HYBRID SIMULATED ANNEALING AND NELDER-MEAD ALGORITHM FOR SOLVING LARGE-SCALE GLOBAL OPTIMIZATION PROBLEMS
Ahmed Fouad Ali
Suez Canal University, Dept. of Computer Science, Faculty of Computers and Information, Ismailia, EGYPT
ahmed_fouad@ci.suez.edu.eg
Abstract: This paper presents a new algorithm for solving large-scale global optimization problems based on a hybridization of simulated annealing and the Nelder-Mead algorithm. The new algorithm is called the simulated Nelder-Mead algorithm with random variables updating (SNMRVU). SNMRVU starts with a randomly generated initial solution, which is then divided into partitions. A neighborhood zone is generated, a random number of partitions is selected, and the variables updating process starts in order to generate trial neighbor solutions. This process helps the SNMRVU algorithm explore the region around the current iterate solution. The Nelder-Mead algorithm is used in the final stage in order to improve the best solution found so far and to accelerate the convergence. The performance of the SNMRVU algorithm is evaluated using 27 scalable benchmark functions and compared with four algorithms. The results show that the SNMRVU algorithm is promising and produces high-quality solutions with low computational costs.
Keywords: Global optimization, Large-scale optimization, Nelder-Mead algorithm, Simulated annealing.
I. INTRODUCTION
Simulated annealing (SA) applied to optimization problems emerged from the work of S. Kirkpatrick et al. [18] and V. Cerny [3]. In these pioneering works, SA was applied to graph partitioning and VLSI design. In the 1980s, SA had a major impact on the field of heuristic search thanks to its simplicity and efficiency in solving combinatorial optimization problems. It was later extended to deal with continuous optimization problems [25] and has been successfully applied to a variety of applications, such as scheduling problems, including project scheduling [1, 4] and parallel machines [5, 17, 21, 34]. However, applying SA to large-scale optimization problems is still very limited in comparison with other meta-heuristics such as genetic algorithms, differential evolution, and particle swarm optimization. The main powerful feature of SA is its ability to escape being trapped in local minima by accepting uphill moves through a probabilistic procedure, especially in the earlier stages of the search.
In this paper, we propose a new hybrid simulated annealing and Nelder-Mead algorithm for solving large-scale global optimization problems. The proposed algorithm is called the simulated Nelder-Mead algorithm with random variables updating (SNMRVU). The goal of the SNMRVU algorithm is to construct an efficient hybrid algorithm that obtains optimal or near-optimal solutions of objective functions with different properties and large numbers of variables. In order to search the neighborhood of a current solution with many variables, we need to reduce the dimensionality. The proposed SNMRVU algorithm searches neighborhood zones of a smaller number of variables at each iteration instead of searching neighborhood zones of all n variables. Many promising methods have been proposed to solve the problem stated in Equation 1, for example genetic algorithms [14, 24], particle swarm optimization [23, 28], ant colony optimization [31], tabu search [9, 10], differential evolution [2, 6], scatter search [15, 20], and variable neighborhood search [11, 27]. Despite the efficiency of these methods when applied to low- and middle-dimensional problems, e.g., D < 100, many of them suffer from the curse of dimensionality when applied to high-dimensional problems.
The quality of any meta-heuristic method lies in its capability of performing wide exploration and deep exploitation; these two processes have been invoked in many meta-heuristic works through different strategies, see for instance [8, 12, 135].
The SNMRVU algorithm invokes these two processes and combines them in order to improve its performance through three strategies. The first strategy is a variable partitioning strategy, which allows the SNMRVU algorithm to intensify the search process at each iteration. The second strategy is an effective neighborhood structure, which uses the neighborhood area to generate trial solutions. The last strategy is final intensification, which uses the Nelder-Mead algorithm as a local search in order to improve the best solutions found so far. The performance of the SNMRVU algorithm is tested using 27 functions: 10 of them are classical functions [16], reported in Table 3, and the remaining 17 are hard functions [22], reported in Table 10. SNMRVU is compared with four algorithms, as shown in Sections IV and V. The numerical results show that SNMRVU is a promising algorithm, faster than the other algorithms, and that it gives high-quality solutions. The paper is organized as follows. In Section II, we define the global optimization problem and give an overview of the Nelder-Mead algorithm. Section III describes the proposed SNMRVU algorithm. Sections IV and V discuss the performance of the proposed algorithm and report the comparative experimental results on the benchmark functions. Section VI summarizes the conclusions.
II. PROBLEM DEFINITION AND OVERVIEW OF THE NELDER-MEAD ALGORITHM
In the following subsections, we define the global optimization problem and present an overview of the Nelder-Mead algorithm and the mechanisms necessary to understand the proposed algorithm.
A. Global optimization problems definition
Meta-heuristics have received more and more popularity in the past two decades. Their efficiency and effectiveness in solving large and complex problems has attracted many researchers to apply them in many applications. One of these applications is solving the global optimization problem, which can be expressed as follows:

min f(x), subject to l ≤ x ≤ u,    (1)

where f(x) is a nonlinear function and x = (x1, ..., xn) is a vector of continuous and bounded variables, with x, l, u ∈ R^n.
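As a concrete instance of the bound-constrained problem in Equation 1, the short sketch below (an illustration, not code from the paper) evaluates the classical Sphere benchmark of Table 3 and samples a random feasible point inside the box l ≤ x ≤ u, the way a stochastic search such as SNMRVU is initialized:

```python
import numpy as np

def sphere(x):
    """Classical Sphere benchmark: f(x) = sum(x_i^2); global minimum 0 at x = 0."""
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def random_solution(lower, upper, n, rng):
    """Sample a point uniformly inside the box l <= x <= u (the feasible region)."""
    return rng.uniform(lower, upper, size=n)

rng = np.random.default_rng(42)
x0 = random_solution(-100.0, 100.0, 10, rng)   # bounds of F8 (Sphere) in Table 3
assert np.all(x0 >= -100.0) and np.all(x0 <= 100.0)
```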
B. The Nelder-Mead algorithm
The Nelder-Mead algorithm [29] is one of the most popular derivative-free nonlinear optimization algorithms. It starts with n + 1 points (vertices) x1, x2, ..., xn+1. The algorithm evaluates, orders, and re-labels the vertices. At each iteration, new points are computed, along with their function values, to form a new simplex. Four scalar parameters must be specified to define a complete Nelder-Mead algorithm: the coefficients of reflection ρ, expansion χ, contraction γ, and shrinkage σ. These parameters are chosen to satisfy ρ > 0, χ > 1, 0 < γ < 1, and 0 < σ < 1.
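To make the roles of ρ, χ, γ, and σ concrete, here is a minimal self-contained sketch of one common Nelder-Mead variant (simplified to inside contraction only; this is an illustration, not the paper's implementation, and the initial-simplex step size is an assumption):

```python
import numpy as np

def nelder_mead(f, x0, rho=1.0, chi=2.0, gamma=0.5, sigma=0.5,
                step=0.1, max_iter=500, tol=1e-12):
    """Minimal Nelder-Mead with reflection (rho > 0), expansion (chi > 1),
    contraction (0 < gamma < 1) and shrinkage (0 < sigma < 1)."""
    n = len(x0)
    # Initial simplex: x0 plus n vertices perturbed along each coordinate axis.
    simplex = [np.asarray(x0, dtype=float)]
    for i in range(n):
        v = simplex[0].copy()
        v[i] += step
        simplex.append(v)
    fvals = [f(v) for v in simplex]
    for _ in range(max_iter):
        order = np.argsort(fvals)                 # best (lowest f) first
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if abs(fvals[-1] - fvals[0]) < tol:       # simplex has collapsed
            break
        centroid = np.mean(simplex[:-1], axis=0)  # centroid of all but the worst
        xr = centroid + rho * (centroid - simplex[-1])    # reflection
        fr = f(xr)
        if fvals[0] <= fr < fvals[-2]:
            simplex[-1], fvals[-1] = xr, fr
        elif fr < fvals[0]:
            xe = centroid + chi * (xr - centroid)         # expansion
            fe = f(xe)
            simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
        else:
            xc = centroid + gamma * (simplex[-1] - centroid)  # contraction
            fc = f(xc)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:                                 # shrink all vertices toward best
                simplex = [simplex[0] + sigma * (v - simplex[0]) for v in simplex]
                fvals = [f(v) for v in simplex]
    best = int(np.argmin(fvals))
    return simplex[best], fvals[best]
```

On a smooth low-dimensional function such as the 2-D Sphere, this sketch converges quickly, which is why the paper reserves Nelder-Mead for the final refinement stage rather than the global search.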
III. THE PROPOSED SNMRVU ALGORITHM
The proposed SNMRVU algorithm starts with a randomly generated initial solution, which consists of n variables. The solution is divided into η partitions, each containing υ variables (a limited number of dummy variables may be added to the last partition if the number of variables n is not a multiple of υ). At a fixed temperature, random partition(s) are selected, and trial solutions are generated by updating random numbers of variables in the selected partitions. The neighbor solution with the best objective function value is always accepted. Otherwise, a neighbor is accepted with a probability that depends on the current temperature and the amount of degradation ∆E of the objective value, where ∆E is the difference in objective value between the current solution and the generated neighboring solution. At a given temperature level, many trial solutions are explored until the equilibrium state is reached, which in our SNMRVU algorithm is a given number of iterations executed at each temperature T. The temperature is then gradually decreased according to a cooling schedule. This scenario is repeated until T reaches Tmin. SNMRVU then uses the Nelder-Mead algorithm as a local search in order to refine the best solution found so far.
A. Variable partitioning and trial solution generation
An iterate solution in SNMRVU is divided into η partitions, each containing υ variables, i.e., η = n/υ. The partitioning mechanism with υ = 5 is shown in Figure 1; the dotted rectangle in Figure 1 marks the selected partition with its variables. Trial solutions are generated by updating all the variables in the selected partition(s). The number of generated trial solutions is μ, and the number of selected partition(s) at each iteration of the algorithm's inner loop (Markov chain) is w, where w is a random number, w ∈ [1, 3]. Once the equilibrium state is reached (a given number of iterations equal to μ), the temperature is gradually decreased according to a cooling schedule, and the operation of generating new trial solutions is repeated until the stopping criterion is satisfied, e.g., T ≤ Tmin.
Figure 1: Variable Partitioning
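The partitioning and trial-generation steps can be sketched as follows (an illustrative reading of the description above; the uniform perturbation of radius z and the helper names are assumptions, not code taken from the paper):

```python
import numpy as np

def partition_indices(n, upsilon):
    """Split variable indices 0..n-1 into eta = ceil(n / upsilon) partitions."""
    idx = np.arange(n)
    return [idx[i:i + upsilon] for i in range(0, n, upsilon)]

def generate_trial(x, partitions, z, rng, w_max=3):
    """Build one trial solution: pick w in [1, w_max] partitions at random and
    perturb only their variables inside a neighborhood of radius z."""
    y = x.copy()
    w = int(rng.integers(1, w_max + 1))
    chosen = rng.choice(len(partitions), size=min(w, len(partitions)), replace=False)
    for p in chosen:
        ids = partitions[p]
        y[ids] = x[ids] + rng.uniform(-z, z, size=len(ids))
    return y

rng = np.random.default_rng(1)
x = rng.uniform(-5.0, 10.0, size=20)   # n = 20 variables
parts = partition_indices(20, 5)       # eta = 20/5 = 4 partitions of upsilon = 5
y = generate_trial(x, parts, z=0.5, rng=rng)
```

Because only the variables of the chosen partitions move, each trial differs from the current iterate in at most w·υ coordinates, which is what keeps the per-iteration neighborhood small in high dimensions.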
B. The SNMRVU algorithm
The SNMRVU algorithm starts with a randomly generated initial solution x0. At each iteration the solution is divided into η partitions, and a neighborhood zone is generated in order to produce trial neighbor solutions. A generated neighbor solution that improves the objective function is always selected. Otherwise, the solution is accepted with probability e^(-∆E/T), where T is the current temperature and ∆E is the amount of degradation of the objective function. This scenario is repeated until the equilibrium state is reached. In the SNMRVU algorithm, the equilibrium state is a given number of iterations executed at each temperature; this number equals μ, a user-predefined number. Once the equilibrium state is reached, the temperature is decreased gradually according to a cooling schedule. This process is repeated until the stopping criterion is satisfied, which in our algorithm is T ≤ Tmin. In order to refine the best solution, SNMRVU uses the Nelder-Mead algorithm as a local search in the final stage. The structure of SNMRVU with a formal detailed description is given in Algorithm 1; all variables in Algorithm 1 and their values are reported in Tables 1 and 2.
Algorithm 1 SNMRVU Algorithm
1. Choose an initial solution x0.
2. Set initial values for Tmax, Tmin, β, μ, υ, z0.
3. Set z = z0, T = Tmax, x = x0.
4. Repeat
5.   k := 0.
6.   Repeat
7.     Partition the solution x into η partitions, where η = n/υ.
8.     Generate neighborhood trial solutions y1, ..., yn around x.
9.     Set x' equal to the best trial solution from y1, ..., yn.
10.    ∆E = f(x') - f(x).
11.    If ∆E ≤ 0 Then
12.      x = x'.
13.      z = τ·zmax, τ > 1.
14.    Else
15.      z = α·zmin, α < 1.
16.      If rand() < e^(-∆E/T) Then
17.        x = x'.
18.      EndIf
19.    EndIf
20.    k = k + 1.
21.  Until k ≥ μ
22.  T = T - β.
23. Until T ≤ Tmin
24. Apply the Nelder-Mead algorithm to the Nelite best solutions.
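Algorithm 1 can be rendered as the following Python sketch. This is a paraphrase, not the author's code: the Nelder-Mead refinement of step 24 is omitted, the neighborhood-radius update uses illustrative multiplicative factors in place of the unspecified τ·zmax and α·zmin rules, and trial solutions are clipped to the bounds.

```python
import math
import numpy as np

def snmrvu(f, lower, upper, n, t_max=None, t_min=1e-3, beta=1.0,
           mu=3, upsilon=5, seed=0):
    """Sketch of Algorithm 1: SA outer loop over temperature T, inner loop of mu
    trial solutions built by updating w randomly chosen partitions."""
    rng = np.random.default_rng(seed)
    t = t_max if t_max is not None else float(n)    # Table 2 sets Tmax = n
    x = rng.uniform(lower, upper, size=n)
    fx = f(x)
    z_max, z_min = (upper - lower) / 2.0, (upper - lower) / 50.0
    z = (z_max + z_min) / 2.0                       # Table 2: z0 = (zmax + zmin)/2
    parts = [np.arange(n)[i:i + upsilon] for i in range(0, n, upsilon)]
    while t > t_min:
        for _ in range(mu):                         # equilibrium: mu trials per T
            y = x.copy()
            w = int(rng.integers(1, 4))             # update w in [1, 3] partitions
            for p in rng.choice(len(parts), size=min(w, len(parts)), replace=False):
                ids = parts[p]
                y[ids] = np.clip(x[ids] + rng.uniform(-z, z, size=len(ids)),
                                 lower, upper)
            fy = f(y)
            delta = fy - fx
            if delta <= 0:                          # improvement: always accept
                x, fx = y, fy
                z = min(1.1 * z, z_max)             # widen the neighborhood
            else:
                z = max(0.9 * z, z_min)             # narrow the neighborhood
                if rng.random() < math.exp(-delta / t):
                    x, fx = y, fy                   # probabilistic uphill move
        t -= beta                                   # linear cooling: T = T - beta
    return x, fx
```

In a complete implementation, the best solutions found here would then be handed to a Nelder-Mead local search, as step 24 of Algorithm 1 prescribes.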
Table 1: Parameters used in the SNMRVU Algorithm
n: number of variables
υ: partition size
η: total number of partitions
μ: maximum number of trial solutions
ζ: index of the selected partition
w: number of selected partition(s) to update
z: neighborhood radius
z0: initial neighborhood radius
zmax: maximum setting of z
zmin: minimum setting of z
Tmax: initial temperature
Tmin: final temperature
∆E: difference in objective value between the current solution and the generated solution
β: temperature reduction value
Nmax: maximum number of iterations in the Nelder-Mead algorithm
Nelite: number of best solutions for intensification
Table 2: Parameter Settings
υ = 5
η = n/υ
μ = 3
zmax = (u - l)/2
zmin = (u - l)/50
z0 = (zmax + zmin)/2
Tmax = n
β = 1
Nmax = 5n
Nelite = 1
IV. NUMERICAL EXPERIMENTS
This section presents the performance of our proposed SNMRVU algorithm and comparison results against four benchmark algorithms. SNMRVU uses two selected sets of benchmark functions with different properties. The first set contains 10 classical functions, listed in Table 3; the second set contains 17 hard functions from the special session on real-parameter optimization of the IEEE Congress on Evolutionary Computation, CEC2005 [22]. The results of the benchmark algorithms and of our SNMRVU algorithm are averaged over 30 runs. The numerical results for all tested functions are reported in Tables 4, 5, 6, 7, 9, and 10.
A. Performance analysis
In the following subsections, we demonstrate the efficiency of the strategies applied in our proposed SNMRVU algorithm.
Table 3: Classical Functions
f: Function name; Bounds; Global minimum
F1: Branin; [-5,10]; f(x*) = 0.397887
F2: Booth; [-10,10]; f(x*) = 0
F3: Matyas; [-5,10]; f(x*) = 0
F4: Rosenbrock; [-5,10]; f(x*) = 0
F5: Zakharov; [-5,10]; f(x*) = 0
F6: Trid; [-n^2, n^2]; f(x*) = -50 at n = 6
F7: SumSquares; [-5,10]; f(x*) = 0
F8: Sphere; [-100,100]; f(x*) = 0
F9: Staircased Rosenbrock; [-5,10]; f(x*) = 0
F10: Staircased LogAbs; [-5,10]; f(x*) = 0
1. The efficiency of variable partitioning:
In order to check the efficiency of variable partitioning, we compare our SNMRVU algorithm with the basic SA algorithm using the same termination criteria and the same SA parameters; the results are shown in Figure 2. The dotted line represents the results of the basic SA (without the partitioning mechanism), while the solid line represents the results of the SNMRVU algorithm with the variable partitioning process. Function f2 is selected with dimensions 100, 500, and 1000, plotting the number of iterations versus the function values. Figure 2 shows that the function values of SNMRVU decrease more rapidly with the number of iterations than those of the basic SA. We can conclude from Figure 2 that exploring the neighborhood of all variables at the same time, as the basic SA does, can badly affect the progress of the search, whereas exploring the neighborhood gradually through partitions with a small number of variables, as in the SNMRVU algorithm, gives better results.
2. The performance of the Nelder-Mead algorithm
SNMRVU uses the Nelder-Mead algorithm as an intensification process in order to accelerate the convergence in the final stage, instead of letting the algorithm run for several iterations without significant improvement of the function values. Figure 3 shows the performance of the Nelder-Mead algorithm on two functions, f3 and h2, plotting the number of iterations versus the function values. The dotted line in Figure 3 represents the behavior of the final intensification process.
B. SNMRVU and other algorithms.
We compared our SNMRVU algorithm with four benchmark algorithms. The results of all algorithms are taken from their original papers.
1. SSR (scatter search with randomized subset generation).
In [16], two scatter search methods, called SSR and SSC, were proposed; computational tests were performed with classical direct search methods [7, 19, 30, 32, 33, 36, 37] joined with the scatter search method. In the SSR method, a subset is created in two steps: first, the subset size is decided, and second, solutions are selected randomly from the reference set until the required subset size is obtained. SSR uses a biased randomization method to select the subset size; the probabilities of the different sizes are adjusted dynamically based on the search history.
Figure 2: Basic SA vs. SNMRVU.
Figure 3: The Performance of the Nelder-Mead Algorithm.
2. SSC (scatter search with clustering subset generation).
The SSC method uses a k-clustering algorithm [26] to create subsets by dividing the reference set points into k clusters. The clusters are then ranked by the quality of the points they contain, and the clusters are selected as subsets sequentially.
C. Comparison between SNMRVU, SSR and SSC on functions with 16-512 dimensions.
The proposed SNMRVU algorithm was tested and compared with the SSR and SSC algorithms on the 10 benchmark functions reported in Table 3. The dimensions of the functions are 16, 32, 64, 128, 256, and 512, and the results of the three algorithms are reported in Tables 4, 5, and 6. SNMRVU was run 30 times with different initial solutions, and the average number of function evaluations for each test function is reported. In cases where at least one run failed to produce a point converged to the global solution, the ratio of successful runs is recorded in parentheses. The experimental results for f5 in Table 6 at dimensions 256 and 512, and for f6 in Table 5 at dimension 128, were not reported in the original paper [16]. The best mean values are marked in boldface. The SNMRVU algorithm can obtain better function values under a single termination criterion, namely that the number of function evaluations is ≤ 50,000.
Table 4: Function Evaluations of SSR, SSC and SNMRVU with 16, 32 Dimensions.
16 dimensions 32 dimensions
F SSR SSC SNMRVU SSR SSC SNMRVU
F1 406.6 374.2 456.4 779.7 783.2 1086.2
F2 806.4 737.2 218.3 1732.4 1436.9 393.6
F3 1109.8 902.1 217.3 2422.8 1834.6 327.1
F4 (0.9) 12500.4 5134 (0.9) 21397.7 13213.4
F5 6827.8 4030.5 439.4 32626.4 16167.1 888.3
F6 4357.8 3447.5 643.1 (0.8) (0.3) 2076
F7 298.3 344.7 312.6 622.5 647.5 734.5
F8 227.2 323.8 133.4 462.5 463.2 261.5
F9 31978.3 13575.7 2645.3 26315.3 22706.1 5112.3
F10 441.8 445.8 395.4 893.7 896.3 698.7
Table 5: Function Evaluations of SSR, SSC and SNMRVU with 64, 128 Dimensions.
64 dimensions 128 dimensions
F SSR SSC SNMRVU SSR SSC SNMRVU
F1 1636.6 1626.4 1996.1 3448.8 3443.7 4130.2
F2 3154.8 3148.5 842.8 6737.8 7516.3 1545.4
F3 3854.2 3827.1 777.4 8575 9687.9 1674
F4 39193.2 (0.2) 39212.3 (0.0) (0.0) (0.0)
F5 (0.0) (0.1) 1654.2 (0.0) (0.0) 4125.3
F6 (0.0) (0.0) 7407.3 - - 34440.3
F7 1369 1348.4 1996.4 2840.7 2808.8 4135.2
F8 958 983.9 517.3 1979.4 2111.7 1029.1
F9 37090.5 (0.0) (0.2) (0.0) (0.0) (0.0)
F10 1858.8 1861 1559.7 3880.2 3899.9 2128.9
Table 6: Function Evaluations of SSR, SSC and SNMRVU with 256, 512 Dimensions.
256 dimensions 512 dimensions
F SSR SSC SNMRVU SSR SSC SNMRVU
F1 7173.4 7184.6 8273.7 15206.9 15221.8 1734.8
F2 16293.8 16915.3 3081.3 36840 36635.4 7179.2
F3 24207 24903.1 3338.1 (0.0) (0.0) 6656.8
F4 (0.0) (0.0) (0.0) (0.0) (0.0) (0.0)
F5 - - 9113.1 - - 18312.5
F6 (0.0) (0.0) (0.0) (0.0) (0.0) (0.0)
F7 6057.1 6105.4 7934.3 12866.2 13021.4 23215.7
F8 4236.8 4258.6 2310.3 8890.4 8890.2 4614.1
F9 (0.0) (0.0) (0.0) (0.0) (0.0) (0.0)
F10 8070.3 8212.7 6811.2 17180.3 16947.2 12593.3
Table 7 shows the mean function values. For function f1, we obtained the exact global minimum value for all given dimensions within 50,000 function evaluations. We also obtained the exact global minimum values for functions f6 and h2 up to n = 128; the global minimum of these functions for n > 128 can be reached by increasing the number of function evaluations. For example, for function f6 our SNMRVU algorithm obtains the value -2828800, its global minimum, within a cost of 97,400 evaluations at dimension n = 256. For the other functions, we obtained results very close to their global minima. We can conclude that the SNMRVU algorithm is robust and obtains better function values at lower computational cost than the other two algorithms.
Table 7: Mean Function Values with 16-512 Dimensions.
F 16 32 64
F1 0 0 0
F2 2.15e-11 5.33e-12 2.7e-11
F3 1.73e-13 2.6e-08 1.31e-12
F4 2.34e-09 6.86e-10 3.17e-08
F5 2.1e-10 0 1.34e-08
F6 0 1.77e-08 0
F7 1.44e-09 0 7.88e-08
F8 0 1.75e-03 0
F9 5.3e-04 1.44e-09 (0.0)
F10 4.12e-09 2.45e-07
F 128 256 512
F1 0 0 0
F2 7.14e-11 9.06e-10 3.34e-09
F3 5.57e-11 6.56e-11 2.12e-09
F4 (0.0) (0.0) (0.0)
F5 6.01e-07 1.54e-05 4.32e-05
F6 0 (0.0) (0.0)
F7 3.85e-07 2.11e-06 9.85e-06
F8 4.9e-07 1.42e-14 2.81e-14
F9 (0.0) (0.0) (0.0)
F10 4.3e-07 3.12e-04 1.12e-04
V. HARD BENCHMARK TEST FUNCTIONS
In this section, we evaluate the SNMRVU algorithm on another type of function with various properties, provided by the CEC2005 special session [23]. These functions are based on classical benchmark functions with some modifications applied; these modifications make the functions harder to solve and resistant to simple search strategies. We applied our SNMRVU algorithm to the 17 functions reported in Table 8. The functions in Table 8 are shifted, rotated, expanded, and combined variants of classical functions such as the Sphere, Rastrigin, Rosenbrock, and Ackley functions. In the next subsection, we study the performance of SNMRVU on some hard functions before comparing the results of the SNMRVU algorithm with the results of two other algorithms.
A. The performance of the SNMRVU algorithm on the hard functions.
Figure 4 shows the general performance of the SNMRVU algorithm on three functions with different properties, h1, h2, and h6, plotting the number of iterations versus the function values. Figure 4 shows that the function values decrease rapidly as the number of iterations increases, and that the performance of the SNMRVU algorithm is promising: it can obtain good solutions on some of the hard functions.
We compared our SNMRVU algorithm with the following methods:
Table 8: Hard Benchmark Functions
H: Function name; Bounds; Global minimum
H1: Shifted Sphere; [-100,100]; -450
H2: Shifted Schwefel's 1.2; [-100,100]; -450
H3: Shifted rotated high conditioned elliptic; [-100,100]; -450
H4: Shifted Schwefel's 1.2 with noise in fitness; [-100,100]; -450
H5: Schwefel's 2.6 with global optimum on bounds; [-100,100]; -310
H6: Shifted Rosenbrock's; [-100,100]; 390
H7: Shifted rotated Griewank's without bounds; [0,600]; -180
H8: Shifted rotated Ackley's with global optimum on bounds; [-32,32]; -140
H9: Shifted Rastrigin's; [-5,5]; -330
H10: Shifted rotated Rastrigin's; [-5,5]; -330
H11: Shifted rotated Weierstrass; [-0.5,0.5]; 90
H12: Schwefel's 2.13; [-100,100]; -460
H13: Expanded extended Griewank's + Rosenbrock's; [-3,1]; -130
H14: Expanded rotated extended Scaffer's; [-100,100]; -300
H15: Hybrid composition 1; [-5,5]; 120
H16: Rotated hybrid composition; [-5,5]; 120
H17: Rotated hybrid composition Fn 1 with noise in fitness; [-5,5]; 120
1. DECC-G (differential evolution with cooperative co-evolution with a group-based problem decomposition) [37].
DECC-G is designed to increase the chance of optimizing interacting variables together by changing the grouping structure dynamically. It splits the objective vector into smaller subcomponents and evolves each component with an evolutionary algorithm (EA). After each cycle, DECC-G applies a weight to each subcomponent and evolves the weight vector with a certain optimizer. DECC-G uses one termination criterion: the number of function evaluations, which is 2.5e6 and 5e6 for n = 500 and 1000, respectively.
2. AMALGAM-SO (a multi-algorithm genetically adaptive method for single-objective optimization) [35].
AMALGAM-SO is a hybrid evolutionary search algorithm. It considers covariance matrix adaptation (CMA), a genetic algorithm (GA), evolutionary strategies, differential evolution (DE), the parent-centric recombination operator (PCX), and particle swarm optimization (PSO) for population evolution. It applies a self-adaptive learning strategy to favor the individual algorithms that demonstrate the highest reproductive success during the search. The termination criterion of the AMALGAM-SO method is to reach the global minimum within a tolerance limit of 10^-6 for some functions and 10^-2 for the others (the Tol column in Table 9).
Figure 4 : The Performance of SNMRVU Algorithm With the Hard Functions.
B. Comparison between AMALGAM-SO and SNMRVU.
We compare the SNMRVU algorithm with the AMALGAM-SO algorithm. The two algorithms are tested on the 17 benchmark functions listed in Table 8. The average means and standard deviations are listed in Table 9; these results are for 50 dimensions. The results between parentheses represent the average of the final objective function values when none of the optimization runs reached the tolerance limit recorded in the Tol column of Table 9. The reported results in Table 9 show that the SNMRVU algorithm outperforms the other algorithm on functions h1, h2, h3, h6, h7, h9, h10, h12, and h13. The AMALGAM-SO results for functions h10 and h12 are not reported in its original paper. Both algorithms are unable to locate solutions within the tolerance limit for functions h8, h13, h14, h16, and h17.
Table 9: Functions 1-17 in Dimension n = 50: Mean
Number of Function Evaluations.
AMALGAM-SO SNMRVU
H Tol Mean Std Mean Std
H1 1e-6 7907 209 4653 9.12
H2 1e-6 3.41e4 681 14496 792.13
H3 1e-6 1.29e5 18.3 29732 195.14
H4 1e-6 3.48e5 1.67e5 (3.116e3) (495.19)
H5 1e-6 4.92e5 2.68e4 (1.127e4) (1532.4)
H6 1e-2 2.11e5 1.2e5 39546.24 3245.1
H7 1e-2 9596 463 7214 1.15
H8 1e-2 (21.13) (0.03) (19.99) (8.4e-06)
H9 1e-2 4.76e5 4.09e4 10342.11 95.2
H10 1e-2 - - (198.371) (114.5)
H11 1e-2 2.65e5 1.16e5 (14.23) (3.425)
H12 1e-2 - - 24132.21 1854.4
H13 1e-2 (3.62) (1.08) (1.945) (2.241)
H14 1e-2 (19.69) (0.92) (19.5) (0.45)
H15 1e-2 4.93e5 2.81e4 (9.24) (8.14)
H16 1e-2 (12.9) (3.73) (22.63) (66.4)
H17 1e-2 (65.63) (59.58) (74.5) (61.44)
C. Comparison between DECC-G and SNMRVU.
In order to test the performance of the SNMRVU algorithm with dimensions ≥ 500, SNMRVU is compared with the DECC-G algorithm for 500 and 1000 dimensions. The average means of the objective function values and function evaluations are recorded in Table 10. The results in Table 10 show that SNMRVU performed significantly better on functions h1, h5, h6, h8, and h9, and at a lower computational cost than the other algorithm.
Table 10: Mean Objective Function Values and Function Evaluations with 500 and 1000 Dimensions.
Function Values
H n DECC-G SNMRVU
H1 500 3.71e-13 0
1000 6.84e-13 0
H3 500 3.06e8 9.84e8
1000 8.11e8 3.45e9
H5 500 1.15e5 1.09e5
1000 2.20e5 2.14e5
H6 500 1.56e3 4.15e5
1000 2.22e3 8.15e2
H8 500 2.16e1 2.0e1
1000 2.16e1 2.0e1
H9 500 4.5e2 1.20e2
1000 6.32e2 2.11e2
H10 500 5.33e3 1.12e4
1000 9.73e3 2.21e4
H13 500 2.09e2 1.15e3
1000 3.56e2 3.86e3
Function Evaluations
H n DECC-G SNMRVU
H1 500 2.5e6 29455
1000 5.00e6 56324
H3 500 2.5e6 69345
1000 5.00e6 169521
H5 500 2.5e6 65301
1000 5.00e6 132547
H6 500 2.5e6 134520
1000 5.00e6 233548
H8 500 2.5e6 63124
1000 5.00e6 139587
H9 500 2.5e6 63548
1000 5.00e6 91247
H10 500 2.5e6 74501
1000 5.00e6 166487
H13 500 2.5e6 121458
1000 5.00e6 211485
VI. CONCLUSIONS AND FUTURE WORK
In this paper, we proposed the SNMRVU algorithm for solving large-scale global optimization problems. SNMRVU uses three strategies. The first strategy is the variable-partitioning process, which helps our algorithm to achieve high performance on high-dimensional problems. The second strategy is the effective neighborhood structure, which helps the SNMRVU algorithm to explore the region around a current iterate solution by updating the variables in randomly selected partitions. The last strategy is final intensification, which applies the Nelder-Mead algorithm in the final stage to improve the best solution found so far and accelerate convergence. Invoking these three strategies together in the SNMRVU algorithm is the main difference between it and the other related methods in the literature. The SNMRVU algorithm is evaluated on a set of 27 benchmark functions with various properties: 10 classical functions and 17 hard functions. The obtained results give evidence that the SNMRVU algorithm is promising for solving large-scale optimization problems; in most cases it presents significantly better results at a lower cost than the other algorithms. In the future, we intend to develop new operators to apply the SNMRVU algorithm to higher-dimensional problems of up to 10,000 dimensions.
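The first two strategies summarized above, variable partitioning and neighbor generation by updating only randomly selected partitions, can be sketched as follows. This is a minimal illustration; the partition sizes, neighborhood radius, and function names are our assumptions, not the paper's exact procedure:

```python
import random

def partition_variables(n, nu):
    """Split variable indices 0..n-1 into nu consecutive partitions
    of roughly equal size (sizes here are illustrative)."""
    size = max(1, n // nu)
    idx = list(range(n))
    return [idx[i:i + size] for i in range(0, n, size)]

def trial_neighbor(x, partitions, radius, k):
    """Generate a trial neighbor solution: variables in k randomly
    selected partitions are perturbed inside the neighborhood zone,
    while all other variables are left unchanged."""
    y = list(x)
    for part in random.sample(partitions, k):
        for j in part:
            y[j] += random.uniform(-radius, radius)
    return y
```

In the full algorithm, the best solution found by this neighborhood search would then be polished with the Nelder-Mead simplex method in the final stage; `scipy.optimize.minimize(method='Nelder-Mead')` is one readily available implementation of that simplex step.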
VII. REFERENCES
[1] K. Bouleimen, H. Lecocq. "A new efficient simulated
annealing algorithm for the resource-constrained project
scheduling problem and its multiple mode version".
European Journal of Operational Research, 149:268-
281, 2003. doi: 10.1016/s0377-2217(02)00761-0.
[2] J. Brest, M. Maucec. "Population size reduction for the
differential evolution algorithm". Appl Intell 29:228-
247, 2008. doi: 10.1007/s10489-007-0091-x.
[3] V. Cerny. "A thermodynamical approach to the traveling salesman problem: An efficient simulation algorithm". Journal of Optimization Theory and Applications, 45:41-51, 1985. doi: 10.1007/bf00940812.
[4] H. Cho, Y. Kim. "A simulated annealing algorithm for
resource constrained project scheduling
problems".Journal of the Operational Research Society,
48: 736-744, 1997. doi: 10.2307/3010062.
[5] P. Damodaran, M. Vlez-Gallego. "A simulated
annealing algorithm to minimize makespan of parallel
batch processing machines with unequal job ready
times" .Expert Systems with Applications, 39:1451-
1458, 2012. doi: 10.1016/j.eswa.2011.08.029.
[6] S. Das, A. Abraham, U. Chakraborty, A. Konar.
"Differential evolution using a neighborhood-based
mutation operator" . IEEE Trans Evol Comput,
13(3):526-553, 2009. 10.1109/tevc.2008.2009457.
[7] E. Dolan. " Pattern Search Behaviour in Nonlinear
Optimization". Thesis, 1999.
[8] S. Garcia, M. Lozano, F. Herrera, D. Molina, A.
Snchez. "Global and local real-coded genetic algorithms
based on parentcentric crossover operators". Eur J Oper
Res 185:1088-1113, 2008. doi:
10.1016/j.ejor.2006.06.043.
[9] F. Glover, E. Taillard, D. Werra. "A user's guide to tabu
search". Annals of operations research. 41 (1), pp 1-28.
1993. doi: 10.1007/BF02078647.
[10] M. Hadi, M. Orguna, W. Pedryczb. "Hybrid
optimization with improved tabu search". Appl Soft
Comput, 11(2):1993-2006, 2011. doi:
10.1016/j.asoc.2010.06.015.
[11] P. Hansen, N. Mladenovic, M. Prezreno. "Variable
neighborhood search : methods and applications". Ann
Oper Res, 175(1):367-407, 2010. doi: 10.1007/s10479-
009-0657-6.
[12] A. Hedar, A. F. Ali. "Tabu search with multi-level neighborhood structures for high dimensional problems". Appl Intell, 37:189-206, 2012. doi: 10.1007/s10489-011-0321-0.
[13] A. Hedar, A. F. Ali. "Genetic algorithm with population
partitioning and space reduction for high dimensional
problems", in: Proceeding of the 2009, International
Conference on Computer Engineering and Systems,
(ICCES09), PP.151-156 Cairo, Egypt 2009. doi:
10.1109/icces.2009.5383293.
[14] F. Herrera, M. Lozano, J. Verdegay, "Tackling real-
coded genetic algorithms: Operators and tools for
behavioral analysis". Artif Intell Rev, 12:265-319,
1998. doi: 10.1023/a:1006504901164.
[15] F. Herrera, M. Lozano, D. Molina. "Continuous scatter
search: An analysis of the integration of some
combination methods and improvement strategies". Eur
J Oper Res, 169(2):450-476, 2006. doi:
10.1016/j.ejor.2004.08.009.
[16] L. Hvattum, F. Glover, "Finding local optima of high-
dimensional functions using direct search methods",
European Journal of Operational Research, 195:31-45,
2009. doi: 10.1016/j.ejor.2008.01.039.
[17] D. Kim, K. Kim, F. Chen, " Unrelated parallel machine
scheduling with setup times using simulated annealing",
Robotics and Computer-Integrated Manufacturing,
18:223-231, 2002. doi: 10.1016/s0736-5845(02)00013-
3.
[18] S. Kirkpatrick, C. Gelatt, M. Vecchi. "Optimization by simulated annealing". Science, 220(4598):671-680, 1983. doi: 10.1126/science.220.4598.671.
[19] T. Kolda, R. Lewies, V. Torczon, "Optimization by
direct search: New perspectives on some classical and
modern methods", SIAM Review,45:385-482, 2003.
doi: 10.1137/s003614450242889.
[20] M. Laguna, R. Mart, "Experimental testing of advanced
scatter search designs for global optimization of
multimodal functions" , J Glob Optim;33(2):235-255,
2005. doi: 10.1007/s10898-004-1936-z.
[21] W. Lee, W. Ch, P. Chen. "A simulated annealing
approach to make span minimization on identical
parallel machines". International Journal of Advanced
Manufacturing Technology, 31:328-334, 2006. doi:
10.1007/s00170-005-0188-5.
[22] J. Liang, P. Suganthan, K. Deb, "Novel composition test functions for numerical global optimization", in: Proceedings of 2005 IEEE Swarm Intelligence Symposium, 8-10 June, pp. 68-75, 2005. doi: 10.1109/sis.2005.1501604.
[23] J. Liang, A. Qin, P. Suganthan, S. Baskar,
Comprehensive learning particle swarm optimizer for
global optimization of multimodal functions. IEEE
Trans Evol Comput, 10(3):281-295, 2006. doi:
10.1109/tevc.2005.857610.
[24] Y. Li, X. Zeng, "Multi-population co-genetic algorithm
with double chain-like agents structure for parallel
global numerical optimization", Appl Intell, 32: 0, 292-
310, 2010. doi: 10.1007/s10489-008-0146-7.
[25] M. Locatelli, " Simulated annealing algorithms for
continuous global optimization: Convergence
conditions", Journal of Optimization Theory and
Applications, 29(1):87-102, 2000. doi:
10.1023/A:1004680806815
[26] J. MacQueen, "Some methods for classification and analysis of multivariate observations", In: L.M. LeCam, J. Neyman (Eds.), Proceedings of 5th Berkeley
Symposium on Mathematical Statistics and Probability, University of California Press, 281-297, 1967.
[27] N. Mladenovic, M. Drazic, V. Kovac, M. Angalovic, "
General variable neighborhood search for the
continuous optimization", Eur J Oper Res, 191:753-770,
2008. doi: 10.1016/j.ejor.2006.12.064.
[28] M. Montes, T. Stutzle, M. Birattari, M. Dorigo, "Frankenstein's PSO: a composite particle swarm optimization algorithm", IEEE Trans Evol Comput, 13(5):1120-1132, 2009. doi: 10.1109/TEVC.2009.2021465.
[29] J. Nelder, R. Mead, " A simplex method for function
minimization", Computer journal, 7:308-313, 1965. doi:
10.1093/comjnl/7.4.308.
[30] C. Rosin, R. Scott, W. Hart, R. Belew, "A comparison
of global and local search methods in drug docking". In:
thomas Bck(Ed.), Proceeding of the Seventh
international Conference on Genetic Algorithms,
(ICGA97), Morgan Kaufmann, San Francisco, PP.221-
228, 1997.
[31] K. Socha, M. Dorigo, "Ant colony optimization for
continuous domains", Eur J Oper Res, 185(3):1155-
1173 2008.doi: 10.1016/j.ejor.2006.06.046.
[32] F. Solis, R. Wets, "Minimization by random search
techniques", Mathematical Operations Research 6:19-
30, 1981. doi: 10.1287/moor.6.1.19.
[33] J. Spall, "Multivariate stochastic approximation using a
simulation perturbation gradient approximation", IEEE
Transactions on Automatic Control, 37:332-341, 1992.
doi: 10.1109/9.119632.
[34] V. Torczon, "Multi-Directional Search: A Direct Search Algorithm for Parallel Machines", PhD thesis, Rice University, 1989.
[35] J. Vrugt, A. Bruce, J. Hyman, "Self-adaptive multimethod search for global optimization in real parameter spaces". IEEE Transactions on Evolutionary Computation, 13, 2009. doi: 10.1109/tevc.2008.924428.
[36] M. Wright, "Direct search methods", in: D. Griffiths, G. Watson (Eds.), Numerical Analysis 1995, Addison Wesley Longman, Harlow, United Kingdom, 191-208, 1996.
[37] Z. Yang, K. Tang, X. Yao, " Large scale evolutionary
optimization using cooperative coevolution. Information
Sciences, 178:2986-2999, 2008. doi:
10.1016/j.ins.2008.02.017
How to cite
Ahmed Fouad Ali, “Hybrid Simulated Annealing and Nelder-Mead Algorithm for Solving Large-Scale Global
Optimization Problems”. International Journal of Research in Computer Science, 4 (3): pp. 1-11, May 2014. doi:
10.7815/ijorcs.43.2014.084