Since 1991, attempts have been made to enhance stochastic local search (SLS) techniques. Some researchers turned their focus to studying the structure of propositional satisfiability (SAT) problems to better understand their complexity and to come up with better algorithms. Other researchers focused on investigating new ways to develop heuristics that alter the search space based on information gathered prior to or during the search process. Thus, many heuristics, enhancements and developments have been introduced to improve the performance of SLS techniques over the last three decades. As a result, a group of heuristics known as Dynamic Local Search (DLS) was introduced that could outperform systematic search techniques. Interestingly, a common characteristic of DLS heuristics is that they all depend on the use of weights while searching for satisfiable formulas.
In our study we experimentally investigate the behavior and movement of weights during the search for satisfiability using DLS techniques; for simplicity, the DDFW DLS heuristic is chosen. As a result of our study we discovered that, while solving hard SAT problems such as blocks world and graph coloring problems, weight stagnation occurs in many areas within the search space. We conclude that the occurrence of weight stagnation is highly related to the density, complexity and connectivity of the problem.
Some researchers focused on studying the structure of SAT problems to better understand their complexity and to come up with algorithms that could solve the problems in an optimal way. Other researchers focused on investigating new heuristics that alter the search space based on information gathered during the search for a solution (as in 1.2). Thus, many heuristics, enhancements and developments have been introduced to improve the performance of SLS techniques over the last three decades. As a result, a group of heuristics was introduced that could outperform the systematic search techniques, namely Dynamic Local Search (DLS). Recent DLS algorithms depend on the use of weights to alter the search space. In other words, weights are used when there are no moves that could decrease the search cost, to make it possible for the technique to take unattractive moves that increase the search cost temporarily.
1.1 Propositional satisfiability (SAT) complexity and hardness
Hard combinatorial SAT problems are the benchmarks used to test the efficiency, accuracy and optimality of a given algorithm [12], since easy problems can be solved by any algorithm in a reasonable time [21], which does not reflect the real performance of a solving technique. Therefore, for almost three decades studies have focused on the hardness, complexity, and density of a countless number of SAT problems [6, 8, 15, 9]. Thus, a large distribution of hard problems was produced. These hard problems are categorized into two main divisions: 1) satisfiable and 2) unsatisfiable instances. Furthermore, in the International SAT solver competition (http://www.satcompetition.org/) there are three subdivisions of the two main divisions: a) Industrial, b) Crafted, and c) Random instances. In another line of study, researchers not only investigated whether a problem is satisfiable or not, they also studied how hard a problem is, as in [17, 21, 1].
1.2 Propositional satisfiability (SAT) dynamic solving techniques
Since the development of the Breakout heuristic [16], clause weighting dynamic local search (DLS) algorithms for SAT have been intensively investigated and continually improved [5, 7]. However, the performance of these algorithms remained inferior to that of their non-weighting counterparts (e.g. [13]) until the more recent development of weight smoothing heuristics [24, 19, 11, 23]. Such algorithms now represent the state of the art for stochastic local search (SLS) methods on SAT problems. Interestingly, the most successful DLS algorithms (i.e. DLM [24], SAPS [11], PAWS [23], EWS [4], COVER [18] and, recently, CScore-SAT [3]) have converged on the same underlying weighting strategy: increasing weights on false clauses in a local minimum, then periodically reducing weights according to a problem specific parameter setting. The exception is COVER, which updates the edge weights in every step of the search.
However, a key issue with DLS algorithms is that their performance depends mainly on the efficiency of modifying the weights during the search, regardless of other factors which may play a crucial role in their performance when applied to solving large and hard SAT problems such as Blocks World and Graph Coloring problems; for instance, the size of backbones [22]¹, the occurrence of the phase transition, and the density of a given problem.
Our study focuses on another factor and investigates its impact on the performance of DLS solving techniques. The question addressed in the current paper is: what happens to the weights when a clause and its neighboring clauses are satisfied? For instance, if clause ci is connected to n clauses (the neighboring area of clause ci, as discussed in sub-section 2.2), and assuming that clause ci becomes satisfied by flipping one of its literals lim while all of its neighboring clauses are satisfied too, should clause ci and its neighbors keep their weights?
In the remainder of the paper we first discuss the most widely known clause weighting algorithms, such as SAPS, PAWS, DLM and DDFW. We then elaborate on the DDFW technique, as it is the one used for the purpose of the current study. Next, we show empirically the behavior and movement of weights during the search via intensive experimentation on a broad range of benchmark SAT problems, analyze the results and present the general outcome of the experiments. Finally, we conclude our work and give some guidelines for future work.
2. CLAUSE WEIGHTING FOR SAT
Clause weighting local search algorithms for SAT follow the basic procedure of repeatedly
flipping single literals that produce the greatest reduction in the sum of false clause weights.
Typically, all literals are randomly initialized, and all clauses are given a fixed initial weight. The
search then continues until no further cost reduction is possible, at which point the weight on all
unsatisfied clauses is increased, and the search is resumed, punctuated with periodic weight
reductions.
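To make this generic procedure concrete, the following Python sketch shows one possible instantiation; it is not the authors' implementation, and the clause representation (lists of signed DIMACS-style integers), the naive scan over all variables, and the purely additive weight increase are assumptions made for illustration only.

import random

def weighted_false_cost(clauses, weights, assignment):
    # Sum of weights over clauses that are currently false.
    return sum(w for c, w in zip(clauses, weights)
               if not any((lit > 0) == assignment[abs(lit)] for lit in c))

def clause_weighting_sls(clauses, num_vars, max_flips=100_000):
    # All clauses start with the same fixed weight; the assignment is random.
    weights = [1] * len(clauses)
    assignment = {v: random.choice([True, False]) for v in range(1, num_vars + 1)}
    for _ in range(max_flips):
        false_idx = [i for i, c in enumerate(clauses)
                     if not any((lit > 0) == assignment[abs(lit)] for lit in c)]
        if not false_idx:
            return assignment                      # all clauses satisfied
        # Flip the single literal giving the greatest reduction in weighted cost.
        best_var, best_cost = None, weighted_false_cost(clauses, weights, assignment)
        for v in range(1, num_vars + 1):
            assignment[v] = not assignment[v]
            cost = weighted_false_cost(clauses, weights, assignment)
            assignment[v] = not assignment[v]
            if cost < best_cost:
                best_var, best_cost = v, cost
        if best_var is not None:
            assignment[best_var] = not assignment[best_var]
        else:
            # No improving flip: increase the weight of every false clause
            # (additive scheme; the periodic weight reduction is omitted here).
            for i in false_idx:
                weights[i] += 1
    return None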
Existing clause weighting algorithms differ primarily in the schemes used to control the clause
weights, and in the definition of the points where weight should be adjusted. Multiplicative
methods, such as SAPS, generally adjust weights when no further improving moves are available
in the local neighborhood. This can be when all possible flips lead to a worse cost, or when no
flip will improve cost, but some flips will lead to equal cost solutions. As multiplicative real-
valued weights have much finer granularity, the presence of equal cost flips is much more
unlikely than for an additive approach (such as DLM or PAWS), where weight is adjusted in
integer units. This means that additive approaches frequently have the choice between adjusting
weight when no improving move is available, or taking an equal cost (flat) move.
Despite these differences, the three most well-known clause weighting algorithms (DLM [24], SAPS [11] and PAWS [23]) share a similar structure in the way that weights are updated:² Firstly, a point is reached where no further improvement in cost appears likely. The precise definition of this point depends on the algorithm, with DLM expending the greatest effort in searching plateau areas of equal cost moves, and SAPS expending the least by only accepting cost improving moves. Then all three methods converge on increasing weights on the currently false clauses (DLM and PAWS by adding one to each clause and SAPS by multiplying the clause weight by a problem specific parameter α > 1). Each method continues this cycle of searching and increasing weight until, after a certain number of weight increases, clause weights are reduced (DLM and PAWS by subtracting one from all clauses with weight > 1 and SAPS by multiplying all clause weights by a problem specific parameter ρ < 1). SAPS is further distinguished by reducing weights probabilistically (according to a parameter Psmooth), whereas DLM and PAWS reduce weights after a fixed number of increases (again controlled by a parameter). PAWS is mainly distinguished from DLM in being less likely to take equal cost or flat moves. DLM will take up to θ1 consecutive flat moves, unless all available flat moves have already been used in the last θ2 moves. PAWS does away with these parameters, taking flat moves with a fixed probability of 15%; otherwise it will increase weight.
______________
¹ Back in 2001, Slaney et al. [22] studied the impact of backbones in optimization and approximation problems. They concluded that in some optimization problems, backbones are correlated with the problem hardness, and suggested that heuristic methods used to identify backbones may reduce problem difficulty.
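The contrast between the additive (DLM/PAWS) and multiplicative (SAPS) update schemes described above can be summarized in the following sketch; the parameter values (reduce_period, alpha, rho, p_smooth) are illustrative assumptions rather than the tuned settings of the original solvers.

import random

def dlm_paws_update(weights, false_clauses, increases, reduce_period=40):
    # Additive scheme: add 1 to each false clause in a local minimum; after a
    # fixed number of increases, subtract 1 from every clause with weight > 1.
    for i in false_clauses:
        weights[i] += 1
    increases += 1
    if increases % reduce_period == 0:
        for i, w in enumerate(weights):
            if w > 1:
                weights[i] = w - 1
    return increases

def saps_update(weights, false_clauses, alpha=1.3, rho=0.8, p_smooth=0.05):
    # Multiplicative scheme: scale weights on false clauses by alpha > 1, then
    # with probability p_smooth multiply all clause weights by rho < 1.
    for i in false_clauses:
        weights[i] *= alpha
    if random.random() < p_smooth:
        for i, w in enumerate(weights):
            weights[i] = w * rho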
2.1 Divide and Distribute Fixed Weights
DDFW introduces two ideas into the area of clause weighting algorithms for SAT. Firstly, it
evenly distributes a fixed quantity of weight across all clauses at the start of the search, and then
escapes local minima by transferring weight from satisfied to unsatisfied clauses. The other
existing state-of-the-art clause weighting algorithms have all divided the weighting process into
two distinct steps: i) increasing weights on false clauses in local minima and ii) decreasing or
normalizing weights on all clauses after a series of increases, so that weight growth does not
spiral out of control. DDFW combines this process into a single step of weight transfer, thereby
dispensing with the need to decide when to reduce or normalize weight. In this respect, DDFW is
similar to the predecessors of SAPS (SDF [19] and ESG [20]), which both adjust and normalize
the weight distribution in each local minimum. Because these methods adjust weight across all
clauses, they are considerably less efficient than SAPS, which normalizes weight after visiting a
series of local minima.³ DDFW escapes the inefficiencies of SDF and ESG by only transferring
weights between pairs of clauses, rather than normalizing weight on all clauses. This transfer
involves selecting a single satisfied clause for each currently unsatisfied clause in a local
minimum, reducing the weight on the satisfied clause by an integer amount and adding that
amount to the weight on the unsatisfied clause. Hence DDFW retains the additive (integer)
weighting approach of DLM and PAWS, and combines this with an efficient method of weight
redistribution, i.e. one that keeps all weight reasonably normalized without repeatedly adjusting
weights on all clauses.
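The single-step weight transfer just described can be sketched as follows; this is an illustrative outline rather than the original DDFW code, and the initial weight and transfer amounts are assumptions of this example.

def ddfw_weight_transfer(false_clauses, weights, select_donor,
                         init_weight=8, transfer=2):
    # In a local minimum, every false clause receives weight from one selected
    # satisfied clause, so increase and reduction happen in a single step.
    for ci in false_clauses:
        cj = select_donor(ci)            # a satisfied clause, chosen as in section 2.2
        if cj is None:                   # no suitable donor found for this clause
            continue
        give = transfer if weights[cj] > init_weight else 1
        weights[cj] -= give              # weight leaves the satisfied donor
        weights[ci] += give              # and moves to the unsatisfied clause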
________
² Additionally, a fourth clause weighting algorithm, GLSSAT [14], uses a similar weight update scheme, additively increasing weights on the least weighted unsatisfied clauses and multiplicatively reducing weights whenever the weight on any one clause exceeds a predefined threshold.
³ Increasing weight on false clauses in a local minimum is efficient because only a small proportion of the total clauses will be false at any one time.
DDFW's weight transfer approach also bears similarities to the operations research sub-gradient
optimization techniques discussed in [20]. In these approaches, Lagrangian multipliers, analogous
to the clause weights used in SAT, are associated with problem constraints, and are adjusted in
local minima so that multipliers on unsatisfied constraints are increased and multipliers on
satisfied constraints are reduced. This symmetrical treatment of satisfied and unsatisfied
constraints is mirrored in DDFW, but not in the other SAT clause weighting approaches (which
increase weights and then adjust). However, DDFW differs from sub-gradient optimization in
that weight is only transferred between pairs of clauses and not across the board, meaning less
computation is required.
2.2 Exploiting Neighborhood Structure
The second and more original idea developed in DDFW is the exploitation of neighborhood relationships between clauses when deciding which pairs of clauses will exchange weight.
We term clause ci a neighbor of clause cj if there exists at least one literal lim ∈ ci and a second literal ljn ∈ cj such that lim = ljn, as in Fig. 1. Furthermore, we term ci a same sign neighbor of cj if the sign of any lim ∈ ci is equal to the sign of any ljn ∈ cj where lim = ljn. From this it follows that each literal lim ∈ ci will have a set of same sign neighboring clauses. Now, if ci is false, this implies that all literals lim ∈ ci evaluate to false. Hence flipping any lim will cause it to become true in ci, and also to become true in all the same sign neighboring clauses of lim, i.e. it will increase the number of true literals, thereby increasing the overall level of satisfaction of those clauses. Conversely, lim has a corresponding set of opposite sign clauses that would be damaged when lim is flipped.
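This neighborhood relation can be computed directly from the clause list; the sketch below assumes clauses are represented as lists of signed DIMACS-style integers, so that two clauses are same sign neighbors exactly when they share an identical signed literal.

from collections import defaultdict

def same_sign_neighbors(clauses):
    # Index clauses by each signed literal they contain.
    by_literal = defaultdict(set)
    for idx, clause in enumerate(clauses):
        for lit in clause:
            by_literal[lit].add(idx)
    # Two clauses sharing the same signed literal are same sign neighbors.
    neighbors = defaultdict(set)
    for idx, clause in enumerate(clauses):
        for lit in clause:
            neighbors[idx] |= by_literal[lit] - {idx}
    return neighbors

For example, same_sign_neighbors([[1, -2], [1, 3], [-1, 2]]) links clauses 0 and 1 through the shared literal 1, while clause 2 is not a same sign neighbor of either.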
The reasoning behind the DDFW neighborhood weighting heuristic proceeds as follows: if a
clause ci is false in a local minimum, it needs extra weight in order to encourage the search to
satisfy it. If we are to pick a neighboring clause cj that will donate weight to ci, we should pick
the clause that is most able to pay. Hence, the clause should firstly already be satisfied. Secondly,
it should be a same sign neighbor of ci, as when ci is eventually satisfied by flipping lim, this will
also raise the level of satisfaction of lim's same sign neighbors. Note, however, that choosing cj
in this way only increases the chance that cj will be helped when ci is satisfied: not all literals in ci are
necessarily shared as same sign literals in cj, and a non-shared literal may subsequently be
flipped to satisfy ci. The third criterion is that the donating clause should also have the largest store
of weight within the set of satisfied same sign neighbors of ci.
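The three criteria above can be combined into a simple selection routine. The following sketch reuses the is_satisfied and same_sign_neighbors helpers sketched earlier; the fallback to a randomly chosen satisfied clause and the initial-weight threshold are illustrative assumptions rather than the exact rule used in DDFW.

```python
import random

def pick_donor(false_idx, clauses, weights, assignment, neighbors, init_weight=8):
    """Prefer a satisfied same sign neighbor of the false clause that still holds
    at least its initial weight, choosing the one with the largest weight;
    otherwise fall back to a random satisfied clause (illustrative fallback)."""
    candidates = [j for j in neighbors[false_idx]
                  if is_satisfied(clauses[j], assignment) and weights[j] >= init_weight]
    if candidates:
        return max(candidates, key=lambda j: weights[j])
    satisfied = [j for j, c in enumerate(clauses) if is_satisfied(c, assignment)]
    return random.choice(satisfied)
```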
The intuition behind the DDFW heuristic is that clauses that share same sign literals should form
alliances, because a flip that benefits one of these clauses will always benefit some other
member(s) of the group. Hence, clauses that are connected in this way will form groups that tend
towards keeping each other satisfied. However, these groups are not closed, as each clause will
have clauses within its own group that are connected by other literals to other groups. Weight is
therefore able to move between groups as necessary, rather than being uniformly smoothed (as in
existing methods).
3. EXPERIMENTAL EVALUATION AND ANALYSIS
Table 1. Structural information about some of the original DIMACS 2005 problem sets. The problems
were carefully selected taking into consideration their size, density, and connectivity.
As we stressed in Sections 2.1 and 2.2, DDFW can exploit the neighboring structure of a
given clause ci and identify the weight alliances of ci. These weight alliances act as the source of
weight donors when a clause within the alliance needs more weight. They also stabilize the
process of weight transfer, since they tend to keep weights within each alliance as long as no
weight transfer is needed, i.e. as long as all the clauses within the neighborhood are satisfied. However,
this has further implications, as it could lead the search into one of the following scenarios
that we discovered while investigating the weight transfer process: i) weights within a
neighborhood (alliance) may never be transferred to another sub-area of the search space; ii) weights
of a specific neighborhood may keep circulating within that neighborhood; iii) if the level of
connectivity between two neighboring alliances is low, weight transfer may suffer from
stagnation, which can make the search process longer than it should be.
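One way to make scenarios i)–iii) measurable is to classify each logged weight transfer as staying inside an alliance or crossing to another one. The sketch below is purely illustrative bookkeeping under assumed names (transfers, alliance_of); it is not the exact measurement procedure used in our experiments.

```python
def classify_weight_flow(transfers, alliance_of):
    """Split logged transfers (donor_idx, receiver_idx, amount) into weight that
    circulates inside one alliance and weight that crosses between alliances.
    `alliance_of` maps a clause index to an alliance label, e.g. a connected
    component of the same sign neighbor graph. Illustrative only."""
    intra, inter = 0, 0
    for donor, receiver, amount in transfers:
        if alliance_of[donor] == alliance_of[receiver]:
            intra += amount     # weight circulating within one alliance (scenario ii)
        else:
            inter += amount     # weight reaching another sub-area of the search space
    return intra, inter         # persistently low `inter` suggests stagnation (scenarios i and iii)
```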
In order to show the above three scenarios and their impact on the search process of any given
DLS technique, we first studied the general structure of some benchmark problems. Table 1
illustrates the structure of the benchmark problems, which were carefully selected based on their
size, complexity and hardness. We attempted to reproduce a problem set similar to that used in
the random category of the SAT competition (as this is the domain where local search techniques
have dominated). To do this we selected the 50 satisfiable k3 problems from the SAT2004
competition benchmark. Secondly, we obtained the 10 SATLIB quasi-group existence problems
used in [2]. These problems are relevant because they exhibit a balance between randomness and
structure. Finally, we obtained the structured problem set used to originally evaluate SAPS [11].
These problems have been widely used to evaluate clause weighting algorithms (e.g. in [23]) and
contain a representative cross-section taken from the DIMACS and SATLIB libraries. In this set
we also included 4 of the well-known DIMACS 16-bit parity learning problems.
For each selected problem we firstly show the number of atoms (variables) of the problem, the
number of clauses of the problem and the number of literals. Secondly, we show the
minimum and the maximum number of literals that form a clause within the problem structure.
Finally, we show the number of minimum sized clauses as well as the number of maximum sized
clauses. We designed our experiment as follows:
– For each problem we ran DDFW 1000 times; the timeout for each run was set to 10,000,000 flips.
– In each run, we recorded the change of weights every 10,000 flips (see the instrumentation sketch after this list).
– In every local minimum, we recorded whether the DDFW heuristic selected a neighboring satisfied clause from the weight alliance of the false clause, or instead picked a satisfied clause as weight donor randomly from outside the neighboring area of the false clause.
– Plots were made for each problem to illustrate the change of weights of the false clauses from the start of the search until either a solution is found or the timeout is reached. This applies to all the figures included in the paper.
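As a rough illustration of the logging described in the list above, the following sketch wraps a run of a solver object: it snapshots weights every 10,000 flips and counts how weight donors were chosen in local minima. The solver interface (step, flips, solved, false_clause_weights) and the event labels are hypothetical names invented for this sketch, not part of our implementation.

```python
SNAPSHOT_EVERY = 10_000      # flips between weight snapshots
MAX_FLIPS = 10_000_000       # per-run timeout, in flips

def instrumented_run(solver):
    """Run a (hypothetical) DDFW solver object, recording weight snapshots and
    how weight donors were chosen at each local minimum."""
    snapshots = []                                   # (flip count, weights of false clauses)
    donor_counts = {"alliance": 0, "random": 0}
    next_snapshot = SNAPSHOT_EVERY
    while solver.flips < MAX_FLIPS and not solver.solved():
        event = solver.step()                        # one flip or one weight redistribution
        if event == "donor_from_alliance":
            donor_counts["alliance"] += 1
        elif event == "donor_random":
            donor_counts["random"] += 1
        if solver.flips >= next_snapshot:
            snapshots.append((solver.flips, solver.false_clause_weights()))
            next_snapshot += SNAPSHOT_EVERY
    return snapshots, donor_counts
```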
Table 2 shows the detailed results of the runs. For each problem we firstly show the success rate
(which reflects the percentage of runs in which a solution was found). Then we show the total
number of local minima that the DDFW heuristic encountered before reaching a global optimum. Then we
show the number of times the DDFW heuristic randomly picked a clause as a weight donor. Next
we show the number of times the DDFW heuristic picked a neighboring satisfied clause from the
alliance of the false clause as a weight donor. Finally we show the average number of times the
DDFW heuristic deterministically picked a clause as a weight donor.
In order to show the weight transfer and movement during the search, we plotted the false
clauses, their weight changes and the number of neighboring clauses of each false clause.
This is to show the relationship between the change of weights and the number of neighboring
clauses of a false clause. Out of all the problem sets we discuss four problems, as they are of great
importance to this work, namely the Uniform Random 3SAT, the Parity, the Blocks
World and the Graph Coloring problems, because they explicitly show the previously mentioned
three scenarios. These four problem sets are of different sizes and levels of hardness. The Uniform
Random 3SAT set (uf100 and uf250) is the smallest: uf100 has 100 variables and 430 clauses,
while uf250 has 250 variables and 1065 clauses. Fig 2 shows the results for both problems. We can
see that both problems were easy to solve. Also, the weight movement was smooth, as there were
enough neighboring clauses to donate weights to a false clause and, more importantly, when a
sub-area became satisfied the connectivity between clauses allowed the weights to be transferred
easily (no weight stagnation occurred). This also holds for the second problem, the Parity problem,
even though the hardness of Parity 16 and Parity 32 is higher than that of the Uniform Random 3SAT
problems, as shown in Fig 3. What was experimentally interesting is the Graph Coloring
problem (both the g125.17 and the g125.18 problems, as in Fig 4, where g125.17 has 2125
variables and 66272 clauses and g125.18 has 2250 variables and 70163 clauses). Firstly,
DDFW could not reach a 100% success rate on g125.17. Secondly, the figure shows a clear gap
in the movement of the weights, which indicates the occurrence of weight stagnation. Thus, the
connectivity between the clauses is either very low, which limits weight transfer to the clauses
that are directly connected, or very high, which means a larger number of neighbors that may keep
the weights for a longer time and prevent other false clauses elsewhere in the search space from
using them. Finally, the Blocks World problem, Fig 5, which has 6325 variables and 131973 clauses,
shows weight transfer behavior similar to the Parity and Uniform Random 3SAT problems, with the
exception of the level of hardness: the Blocks World problem was harder to solve and the weights
were moving more often during the search.
Table 2. The table shows the number of successful tries made by DDFW, the number of local minima
encountered during the search, the number of random weight distributions during the search, the number
of deterministic weight distributions, and the average number of deterministic weight distributions, i.e.
weight behaviors and movements during the search.
Fig. 2. False clauses and their weights during the search: the uf100 problem (top) and the uf250 problem (bottom)
Fig. 3. False clauses and their weights during the search: the par16 problem (top) and the par32 problem (bottom)
Fig. 4. False clauses and their weights during the search: the g125-17 problem (top) and the g125-18 problem (bottom)
Fig. 5. False clauses and their weights during the search: the bw-d.large problem
4. CONCLUSION
In conclusion, the performance of DLS weighting techniques can suffer from weight stagnation,
which slows down the chosen technique. Our experiments show that these weight stagnations
are not common to all problems but are rather a problem-specific characteristic. We have
looked into the characteristics of each problem, such as its hardness, complexity, and density, and found
that each of these characteristics may contribute to the occurrence of weight stagnation at some
stages during the search. The DDFW algorithm is a relatively simple application of neighborhood
weighting, and further experiments (not reported here) indicate that more complex heuristics can be
more effective on individual problems. In particular, we have looked at adjusting the amount of
weight that is redistributed and at allowing DDFW to randomly pick donor clauses according to a
noise parameter. However, we have yet to discover a general neighborhood heuristic as effective
as DDFW over the range of problems considered. In future work we consider it promising
to extend a DDFW-like approach to handle weight stagnation by adjusting weights in stagnated
alliances regardless of whether they are satisfied or not. This could be done by improving the
exploitation of neighboring areas.
ACKNOWLEDGEMENT
The authors would like to acknowledge the financial support of the Scientific Research
Committee at Petra University. We would also like to thank all members of the Faculty of
Information Technology who contributed directly or indirectly to this work.
REFERENCES
[1] D. Achlioptas, C. Gomes, H. A. Kautz, and B. Selman, "Generating satisfiable instances," in Proceedings of 17th AAAI, 2000, pp. 256–261.
[2] Anbulagan, D. Pham, J. Slaney, and A. Sattar, "Old resolution meets modern SLS," in Proceedings of 20th AAAI, 2005, pp. 354–359.
[3] S. Cai, C. Luo, and K. Su, "Scoring functions based on second level score for k-SAT with long clauses," J. Artif. Intell. Res. (JAIR), vol. 51, pp. 413–441, 2014.
[4] S. Cai, K. Su, and Q. Chen, "EWLS: A new local search for minimum vertex cover," in Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2010, Atlanta, Georgia, USA, July 11-15, 2010, 2010.
[5] B. Cha and K. Iwama, "Adding new clauses for faster local search," in Proceedings of 13th AAAI, 1996, pp. 332–337.
[6] S. A. Cook, "The complexity of theorem-proving procedures," in Proceedings of the Third Annual ACM Symposium on Theory of Computing, ser. STOC'71. New York, NY, USA: ACM, 1971, pp. 151–158. [Online]. Available: http://doi.acm.org/10.1145/800157.805047
[7] J. Frank, "Learning short-term clause weights for GSAT," in Proceedings of 15th IJCAI, 1997, pp. 384–389.
[8] M. R. Garey and D. S. Johnson, Computers and Intractability; A Guide to the Theory of NP-Completeness. New York, NY, USA: W. H. Freeman & Co., 1990.
[9] I. P. Gent and T. Walsh, "The hardest random SAT problems," in KI-94: Advances in Artificial Intelligence, 18th Annual German Conference on Artificial Intelligence, Saarbrucken, Germany, September 18-23, 1994, Proceedings, 1994, pp. 355–366.
[10] H. Hoos and T. Stützle, Stochastic Local Search: Foundations and Applications. Cambridge, Massachusetts: Morgan Kaufmann, 2005.
[11] F. Hutter, D. Tompkins, and H. Hoos, "Scaling and Probabilistic Smoothing: Efficient dynamic local search for SAT," in Proceedings of 8th CP, 2002, pp. 233–248.
[12] M. Jarvisalo, D. Le Berre, O. Roussel, and L. Simon, "The international SAT solver competitions," AI Magazine, vol. 33, no. 1, pp. 89–92, 2012.
[13] D. McAllester, B. Selman, and H. Kautz, "Evidence for invariants in local search," in Proceedings of 14th AAAI, 1997, pp. 321–326.
[14] P. Mills and E. Tsang, "Guided local search applied to the satisfiability (SAT) problem," in Proceedings of 15th ASOR, 1999, pp. 872–883.
[15] D. Mitchell, B. Selman, and H. Levesque, "Hard and easy distributions of SAT problems," in Proceedings of 10th AAAI, 1992, pp. 459–465.
[16] P. Morris, "The Breakout method for escaping from local minima," in Proceedings of 11th AAAI, 1993, pp. 40–45.
[17] P. Prosser, "Binary constraint satisfaction problems: Some are harder than others," in ECAI. PITMAN, 1994, pp. 95–95.
[18] S. Richter, M. Helmert, and C. Gretton, "A stochastic local search approach to vertex cover," in KI 2007: Advances in Artificial Intelligence, 30th Annual German Conference on AI, KI 2007, Osnabruck, Germany, September 10-13, 2007, Proceedings, 2007, pp. 412–426.
[19] D. Schuurmans and F. Southey, "Local search characteristics of incomplete SAT procedures," in Proceedings of 17th AAAI, 2000, pp. 297–302.
[20] D. Schuurmans, F. Southey, and R. Holte, "The exponentiated subgradient algorithm for heuristic boolean programming," in Proceedings of 17th IJCAI, 2001, pp. 334–341.
[21] B. Selman, D. Mitchell, and H. Levesque, "Generating hard satisfiability problems," Artificial Intelligence, vol. 81, pp. 17–29, 1996.
[22] J. Slaney and T. Walsh, "Backbones in optimization and approximation," in Proceedings of the 17th International Joint Conference on Artificial Intelligence - Volume 1, ser. IJCAI'01, 2001, pp. 254–259.
[23] J. Thornton, D. N. Pham, S. Bain, and V. Ferreira Jr., "Additive versus multiplicative clause weighting for SAT," in Proceedings of 19th AAAI, 2004, pp. 191–196.
[24] Z. Wu and B. Wah, "An efficient global-search strategy in discrete Lagrangian methods for solving hard satisfiability problems," in Proceedings of 17th AAAI, 2000, pp. 310–315.