The document summarizes a study that applies genetic algorithms to optimize De Jong's function, a commonly used benchmark for testing continuous optimization algorithms. The genetic algorithm is run with different selection schemes: roulette wheel selection, random selection, and best-fit selection. The performance of these selection methods is analyzed by running the genetic algorithm with each method for various numbers of iterations and recording the resulting fitness values. Best-fit selection consistently produced the best fitness values of the three methods.
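A minimal sketch of the comparison this summary describes: a toy real-coded GA on De Jong's first (sphere) function, with roulette-wheel and best-fit (truncation) selection as swappable strategies. All parameter values and function names here are illustrative assumptions, not taken from the paper.

```python
import random

rng = random.Random(1)

def dejong_f1(x):
    """De Jong's first (sphere) function: sum of squares; global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def roulette_select(pop, fits):
    # Smaller objective value -> larger selection weight (minimization).
    weights = [1.0 / (1.0 + f) for f in fits]
    return rng.choices(pop, weights=weights, k=len(pop))

def best_fit_select(pop, fits):
    # Truncation: keep the best half of the population and duplicate it.
    ranked = [p for _, p in sorted(zip(fits, pop), key=lambda t: t[0])]
    return ranked[: len(pop) // 2] * 2

def run_ga(select, gens=150, pop_size=20, dim=3):
    pop = [[rng.uniform(-5.12, 5.12) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        parents = select(pop, [dejong_f1(p) for p in pop])
        pop = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            cut = rng.randrange(1, dim)  # one-point crossover
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                pop.append([xi + rng.gauss(0.0, 0.05) for xi in child])  # Gaussian mutation
    return min(dejong_f1(p) for p in pop)
```

Running `run_ga(best_fit_select)` against `run_ga(roulette_select)` gives a rough feel for why stronger selection pressure tends to reach lower fitness values on this unimodal benchmark.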
The document proposes a methodology to improve evolutionary multi-objective algorithms (EMOAs) by incorporating achievement scalarizing functions (ASFs) to provide convergence to the Pareto optimal front while maintaining diversity. The methodology executes in serial stages: running an EMOA to get a non-dominated set, clustering this set to extract a representative set, calculating pseudo-weights for the representative set, and perturbing the extreme points to generate reference points to drive the ASF towards the Pareto front over iterations until no improvements are found. Initial studies on test problems ZDT1, ZDT2 and ZDT3 show promising results, with the proposed approach finding a representative set of clustered Pareto points in fewer generations compared to NSGA
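The achievement scalarizing function at the heart of the methodology can be sketched using Wierzbicki's augmented form; the weights, reference point, and augmentation coefficient `rho` below are illustrative assumptions, not the paper's exact settings.

```python
def asf(f, z, w, rho=1e-4):
    """Augmented achievement scalarizing function (Wierzbicki form):
    minimizing it over the feasible set projects the reference point z
    onto the Pareto front in the direction given by the weights w."""
    terms = [wi * (fi - zi) for fi, zi, wi in zip(f, z, w)]
    return max(terms) + rho * sum(terms)

# Toy bi-objective check: among candidates on the front f1 + f2 = 1,
# equal weights and reference point (0, 0) pick the middle trade-off.
candidates = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]
best = min(candidates, key=lambda f: asf(f, z=(0.0, 0.0), w=(1.0, 1.0)))
```

Moving the reference point `z`, which is what the perturbation stage of the methodology does, steers the minimizer toward a different region of the front.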
This document summarizes a research paper that analyzes the performance and convergence of a novel genetic algorithm model towards finding global minima. The paper introduces genetic algorithms, which are probabilistic search algorithms inspired by natural evolution. It describes the components of genetic algorithms, including chromosomes, fitness functions, reproduction, crossover, and mutation operators. It also discusses encoding solutions as chromosomes and two common genetic algorithm models: Holland's original model and the common model. The paper aims to present an analysis of applying genetic algorithms to optimize test functions and finding their global minima.
Optimization of Mechanical Design Problems Using Improved Differential Evolut... – IDES Editor
Differential Evolution (DE) is a novel evolutionary approach capable of handling non-differentiable, non-linear, and multi-modal objective functions. DE has consistently ranked among the best search algorithms for solving global optimization problems in several case studies. This paper presents an Improved Constraint Differential Evolution (ICDE) algorithm for solving constrained optimization problems. The proposed ICDE algorithm differs from the unconstrained DE algorithm only in its initialization, in the selection of individuals for the next generation, and in the sorting of the final results. The new idea is also applied to five versions of the DE algorithm. The performance of ICDE is validated on four mechanical engineering problems, and the experimental results demonstrate its performance in terms of final objective function value, number of function evaluations, and convergence time.
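The abstract does not spell out ICDE's selection rule; a feasibility-based comparison commonly used in constrained DE variants (Deb's rules) can be sketched like this, purely as an illustration of how constrained selection differs from the unconstrained case.

```python
def total_violation(g):
    """Sum of constraint violations for inequality constraints g_j(x) <= 0."""
    return sum(max(0.0, gj) for gj in g)

def better(f_a, g_a, f_b, g_b):
    """Feasibility-based comparison: a feasible solution beats an infeasible
    one; between feasible solutions the lower objective wins; between
    infeasible solutions the lower total violation wins."""
    va, vb = total_violation(g_a), total_violation(g_b)
    if va == 0.0 and vb == 0.0:
        return f_a <= f_b
    if va == 0.0 or vb == 0.0:
        return va == 0.0
    return va <= vb
```

In a DE loop this comparison simply replaces the plain "lower objective wins" test when deciding whether a trial vector enters the next generation.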
OWA Based MAGDM Technique in Evaluating Diagnostic Laboratory Under Fuzzy Env... – ijfls
The aim of this paper is to present an evaluation process using the OWA operator in a fuzzy multi-attribute group decision making (MAGDM) technique, to help the health-care department choose a suitable diagnostic laboratory from among several alternatives. In the decision-making process, experts provide linguistic terms to evaluate each alternative; these terms are parameterized by generalized triangular fuzzy numbers (GTFNs). Subsequently, the fuzzy MAGDM method is applied to determine an overall performance value for each alternative (laboratory) and reach a final decision. Finally, a diagnostic laboratory evaluation problem is presented involving seven evaluation attributes, five laboratories, and five experts.
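The core aggregation step can be sketched as follows: linguistic ratings are defuzzified to crisp scores (here by the centroid of a plain triangular fuzzy number, a simplification of the paper's GTFNs), then combined with an OWA operator, whose weights attach to rank positions rather than to particular attributes. The numbers are made up for illustration.

```python
def defuzzify_tfn(a, b, c):
    """Centroid of a triangular fuzzy number (a, b, c)."""
    return (a + b + c) / 3.0

def owa(values, weights):
    """Ordered Weighted Averaging: sort the arguments in descending order,
    then take the weighted sum; weights belong to positions, not attributes."""
    assert abs(sum(weights) - 1.0) < 1e-9 and len(values) == len(weights)
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))
```

With weights concentrated on the first positions the operator behaves more like a max (optimistic aggregation); spread evenly, it reduces to the arithmetic mean.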
In real-world applications, most optimization problems involve more than one objective to be optimized. The objectives in most engineering problems are often conflicting, e.g., maximize performance, minimize cost, and maximize reliability. In such cases, a single extreme solution would not satisfy every objective function, and the optimal solution for one objective will not necessarily be the best solution for the other objective(s). Different solutions therefore produce trade-offs between objectives, and a set of solutions is required to represent the optimal solutions of all objectives. Multi-objective formulations are realistic models for many complex engineering optimization problems, and customized genetic algorithms have proven particularly effective at finding excellent solutions to them. A reasonable approach to a multi-objective problem is to investigate a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution. This paper presents an overview of various multi-objective genetic algorithms developed to handle problems with multiple objectives.
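The notion of non-dominated solutions used throughout this overview reduces to a short check; a minimal sketch, assuming minimization of every objective:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the solutions not dominated by any other: the trade-off set."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

For example, among the objective vectors (1, 4), (2, 2), (4, 1), and (3, 3), only (3, 3) is dominated.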
Genetic Approach to Parallel Scheduling – IOSR Journals
Genetic algorithms were used to solve the parallel task scheduling problem of minimizing overall completion time. The genetic algorithm represents each schedule as a chromosome. It initializes a population of random schedules and evaluates their fitness based on completion time. Selection, crossover, and mutation operators evolve the population over generations. The best schedule found assigns tasks to processors so as to minimize completion time. Testing on task graphs of varying sizes showed that the genetic algorithm finds improved schedules over generations and that tournament selection works better than roulette wheel selection.
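The chromosome-and-fitness encoding described above can be sketched roughly as follows; precedence constraints from the task graphs are omitted for brevity, and all names and values are illustrative.

```python
import random

def makespan(assignment, durations, n_procs):
    """Completion time of a schedule chromosome: assignment[i] is the
    processor that runs task i (precedence constraints omitted here)."""
    load = [0.0] * n_procs
    for task, proc in enumerate(assignment):
        load[proc] += durations[task]
    return max(load)

def tournament_select(pop, durations, n_procs, k=2, rng=random):
    """Tournament selection: the better of k randomly drawn chromosomes wins."""
    picks = [rng.choice(pop) for _ in range(k)]
    return min(picks, key=lambda c: makespan(c, durations, n_procs))
```

Because tournaments compare fitness values directly rather than through normalized selection probabilities, this operator keeps its selection pressure even when fitness values are close, which is one plausible reading of why it outperformed roulette wheel selection here.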
This document summarizes a research paper that presents two algorithms for solving Raven's Progressive Matrices tests visually without propositional representations. The paper introduces the Raven's test and existing computational accounts that use propositions. It then describes two new algorithms called "Affine" and "Fractal" that use visual representations and similarity-preserving transformations to solve the problems. The paper analyzes the performance of the algorithms on all 60 problems from the Standard Progressive Matrices test and finds they perform best on problems requiring visual/spatial skills and less on verbal problems.
Nakayama Estimation of Viewers Response for Contextual Understanding of Tasks... – Kalle
To estimate viewers' contextual understanding, features of their eye movements while viewing question statements in response to definition statements, together with features of correct and incorrect responses, were extracted and compared. Twelve directional features of eye movements across a two-dimensional space were created, and these features were compared between correct and incorrect responses. A procedure for estimating the response was developed with Support Vector Machines using these features, and estimation performance and accuracy were assessed across combinations of features. The number of definition statements that needed to be memorized to answer the question statements during the experiment affected the estimation accuracy. These results provide evidence that features of eye movements during the reading of statements can be used as an index of contextual understanding.
The document summarizes a meta-analysis exploring the effects of shared cognitive measures on team performance. It outlines 6 shared cognitive constructs examined: shared mental models, team mental models, information sharing, transactive memory systems, cognitive consensus, and group learning. It then lists the research questions and data collection methods used to analyze articles measuring these constructs and their relationship to team performance.
This document summarizes theories of divided attention from psychological literature. It describes dual task experiments and factors like task similarity, difficulty, and practice that influence performance. Early theories proposed either a single, limited central processor (Kahneman) or multiple specialized modules (Allport). Later theories like multiple resource theory (Navon & Gopher) and Baddeley's model of working memory provided a synthesis, combining a central executive with modality-specific subsystems to better explain dual task findings. However, all theories have limitations in fully specifying the cognitive architecture underlying divided attention.
Blind Image Quality Assessment with Local Contrast Features – ijcisjournal
The aim of this research is to create a tool to evaluate distortion in images without information about the original image. The work extracts statistical information about the edges and boundaries in the image and studies the correlation between the extracted features. Changes in structural information, such as the shape and number of edges in the image, drive the quality prediction. Local contrast features are effectively detected from the responses of Gradient Magnitude (G) and Laplacian of Gaussian (L) operations. Using joint adaptive normalisation, G and L are normalised, and the normalised values are quantized into M and N levels respectively. For these quantized M levels of G and N levels of L, probability (P) and conditional probability (C) values are calculated, forming four sets: the marginal distribution of gradient magnitude Pg, the marginal distribution of Laplacian of Gaussian Pl, the conditional probability of gradient magnitude Cg, and the conditional probability of Laplacian of Gaussian Cl. The assumption is that the dependencies between gradient magnitude and Laplacian of Gaussian features can formulate the level of distortion in the image. To quantify them, Spearman and Pearson correlations between Pg, Pl and Cg, Cl are calculated; the four correlation values for each image are the area of interest. Results are also compared with the classical Structural Similarity Index Measure (SSIM).
An Information-Theoretic Approach for Clonal Selection Algorithms – Mario Pavone
This document presents an information-theoretic approach for clonal selection algorithms (CSA) used to solve global optimization problems. It introduces CSA as biologically-inspired algorithms, describes the key operators of cloning, hypermutation, and aging. It also discusses using Kullback-Leibler, Rényi, and Von Neumann divergences to analyze the learning process of CSA. The document outlines applying CSA to benchmark optimization functions and comparing results.
This document proposes an improved genetic algorithm called DGA that combines genetic algorithm and differential evolution. DGA uses adaptive differential evolution as its mutation operator instead of simple genetic algorithm's crossover and mutation. It also adds strategies of optimal reservation and worst elimination. Simulation results show DGA has stronger global optimization ability, faster convergence speed and better stability compared to simple genetic algorithm.
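The differential mutation that DGA borrows from DE can be sketched as the classic DE/rand/1 operator; the scale factor `F` and the adaptive scheme the paper actually uses are not given in this summary, so a fixed `F` is assumed here.

```python
import random

def de_rand_1(pop, i, f=0.5, rng=random):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3
    distinct indices different from i.  This is the differential mutation
    the DGA summary describes using in place of plain GA mutation."""
    candidates = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.sample(candidates, 3)
    return [a + f * (b - c) for a, b, c in zip(pop[r1], pop[r2], pop[r3])]
```

Because the perturbation is built from differences between current population members, its step size shrinks automatically as the population converges, which is one reason DE-style mutation can speed up convergence over fixed-rate GA mutation.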
Collocation Extraction Performance Ratings Using Fuzzy Logic – Waqas Tariq
The performance of collocation extraction cannot be quantified or properly expressed along a single dimension, and it is very imprecise to interpret collocation extraction metrics without knowing which applications (users) are involved. Most of the existing collocation extraction techniques are those of Berry-Roughe, Church and Hanks, Kita, Shimohata, Blaheta and Johnson, and Pearce. These extraction techniques need to be updated frequently based on feedback from the implementation of previous policies. Such feedback is usually stated in the form of ordinal ratings, e.g. "high speed", "average performance", "good condition". Different people can ascribe different values to these ordinal ratings without a clear-cut reason or scientific basis, so there is a need for a way to transform vague ordinal ratings into more appreciable and precise numerical estimates. This paper transforms the ordinal performance ratings of some collocation extraction techniques into numerical ratings using fuzzy logic.
Keywords: fuzzy set theory, collocation extraction, transformation, performance techniques, criteria.
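A minimal sketch of the ordinal-to-numeric transformation, using plain triangular fuzzy numbers and centroid defuzzification; the rating scale below is invented for illustration and is not the paper's.

```python
def tri_mu(x, a, b, c):
    """Membership degree of x in the triangular fuzzy set (a, b, c), a < b < c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Illustrative (made-up) scale mapping ordinal labels to fuzzy numbers on [0, 10].
SCALE = {"poor": (0.0, 1.0, 3.0), "average": (2.0, 5.0, 8.0), "good": (7.0, 9.0, 10.0)}

def crisp_score(label):
    """Centroid defuzzification of the label's triangular fuzzy number."""
    a, b, c = SCALE[label]
    return (a + b + c) / 3.0
```

The overlap between adjacent sets is what lets the scheme express that "average" shades into both "poor" and "good", which a hard numeric cut-off cannot.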
This document summarizes several key dimensionality reduction algorithms:
1) FOCUS and RELIEF are two of the earliest and most widely cited dimensionality reduction algorithms. FOCUS implements feature selection by preferring consistent hypotheses with few features, while RELIEF ranks features based on distance between instances of the same and different classes.
2) RELIEF was later expanded to handle multi-class problems (RELIEF-F) and incomplete data (RELIEF-D). Liu and Dash also reviewed dimensionality reduction methods and categorized them based on their generation and evaluation procedures.
3) Liu et al. proposed the LVF (Las Vegas Filter) algorithm for feature selection, which uses random search guided by in
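The RELIEF weighting idea in item 1 can be sketched for two classes and numeric features; normalization and sampling details vary between the published variants, so treat this as a schematic rather than any one paper's algorithm.

```python
def relief(X, y, passes=1):
    """Plain RELIEF for two classes and numeric features: a feature gains
    weight when it separates an instance from its nearest miss (different
    class) and loses weight when it differs from its nearest hit (same class)."""
    n, d = len(X), len(X[0])
    spans = [max(r[j] for r in X) - min(r[j] for r in X) or 1.0 for j in range(d)]
    def dist(a, b):
        return sum(abs(a[j] - b[j]) / spans[j] for j in range(d))
    w = [0.0] * d
    for _ in range(passes):
        for i, x in enumerate(X):
            hits = [X[k] for k in range(n) if k != i and y[k] == y[i]]
            misses = [X[k] for k in range(n) if y[k] != y[i]]
            hit = min(hits, key=lambda h: dist(x, h))
            miss = min(misses, key=lambda m: dist(x, m))
            for j in range(d):
                w[j] += (abs(x[j] - miss[j]) - abs(x[j] - hit[j])) / (spans[j] * n)
    return w
```

Features whose values track the class label end up with large positive weights; irrelevant or constant features stay near zero, which is the ranking behaviour the survey attributes to RELIEF.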
Diversity-Aware Mutation Adequacy Criterion for Improving Fault Detection Cap... – Donghwan Shin
This document summarizes a presentation on improving mutation testing by considering diversity among mutants. It proposes a "distinguishing mutation adequacy criterion" (d-criterion) that requires test suites to distinguish all pairs of mutants, in addition to killing them as in the traditional "kill-only criterion" (k-criterion). An empirical study found the d-criterion improved fault detection rates without significantly increasing test suite size compared to the k-criterion. The d-criterion formally extends the k-criterion by also requiring tests to produce different outputs for different mutants.
A Theoretical Framework for Understanding Mutation-Based Testing Methods – Donghwan Shin
The document presents a theoretical framework for understanding mutation-based testing methods. It introduces a new testing factor called a "test differentiator" to formally describe the behavioral differences between programs and mutants. It extends the differentiator to a "d-vector" to represent differences over a set of tests. Positions in a multi-dimensional space are defined based on d-vectors to quantitatively analyze differences. A position deviance relation and position deviance lattice are defined as a graphical model to represent positions and their relationships. The framework can guide understanding of mutation-based testing and applications like identifying a minimal set of mutants.
This document discusses machine intelligence and machine learning. It covers topics such as behavior-based AI vs knowledge-based AI, supervised vs unsupervised learning, classification vs prediction, and decision tree induction for classification. Decision trees are built using an algorithm that selects the attribute that best splits the data at each step to create partitions. Pruning techniques are used to avoid overfitting.
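The attribute-selection step of decision tree induction mentioned above is commonly scored with information gain; a compact sketch (attribute and label values invented for the example):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    """Information gain of splitting on attribute index `attr`: parent
    entropy minus the size-weighted entropy of each resulting partition."""
    parts = {}
    for row, lab in zip(rows, labels):
        parts.setdefault(row[attr], []).append(lab)
    n = len(labels)
    return entropy(labels) - sum(len(p) / n * entropy(p) for p in parts.values())
```

The induction algorithm simply picks, at each node, the attribute with the highest gain, then recurses on each partition; pruning later removes subtrees that only fit noise.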
Test for HIV-associated cognitive impairment in India – Kimberly Schafer
This document describes the development of a brief screening battery to detect HIV-associated neurocognitive impairment (HAND) in India. Researchers administered a comprehensive neuropsychological (NP) battery to 206 HIV-positive Indian patients. Statistical analysis identified that combinations of two tests - the Brief Visuospatial Memory Test-Revised for learning and either the Color Trails 1 test for processing speed, Grooved Pegboard test for motor skills, or Digit Symbol test for processing speed - achieved high sensitivity and specificity for detecting HAND. The study aims to develop a quick iPad-based screening tool to assess cognitive functioning in resource-limited settings like India.
MultiModal Identification System in Monozygotic Twins – CSCJournals
This document presents a multimodal biometric system for identifying identical twins using face, fingerprint, and iris recognition. It utilizes Fisher's linear discriminant analysis to extract features from faces, principal component analysis for fingerprints, and local binary pattern features for iris matching. These features are then fused for identification. The system is tested on a database of 50 pairs of identical twins and shows promising results compared to other techniques. Receiver operating characteristics also indicate the proposed method performs better than other studied techniques in distinguishing identical twins.
Paper Gloria Cea - Goal-Oriented Design Methodology Applied to User Interface... – WTHS
This document describes a user interface designed for a mobile application called the Functional Assessment System (FAS). The FAS allows users to assess their aerobic fitness on their own without specialized equipment. The design of the mobile application interface was guided by the Goal-Oriented Design methodology. This methodology focuses on representing users as characters with specific goals and designing scenarios to help users achieve those goals. The document also discusses evaluating the usability of the interface using the AttrakDiff questionnaire to assess pragmatic and hedonic qualities. The results showed satisfactory user interaction with the FAS mobile application interface.
2008 JSM - Meta Study Data vs Patient Data – Terry Liao
Hsini (Terry) Liao, Ph.D., Yun Lu, Hong Wang, "Comparison of Individual Patient-Level and Study-Level Meta-Analyses Using Time-to-Event Analysis in Drug-Eluting Stent Data", Abstract No. 301037, Joint Statistical Meetings, Session No. 90, Denver, CO, August 2008.
This document summarizes a study on the impact of classes playing roles in design patterns. The study analyzed classes playing zero, one, or two roles across six Java programs. It found that on average 8.24% of classes played one role and 17.81% played two roles. Classes playing roles, especially two roles, had significantly higher values for internal metrics like coupling and cohesion. Classes playing two roles also changed significantly more than other classes. The results confirm previous findings and justify further study into ranking design pattern occurrences based on class roles and metrics.
This document summarizes frequent itemset mining algorithms. It introduces data mining and the Apriori algorithm. Apriori generates candidate itemsets and prunes those that are not frequent by scanning the database multiple times. The document proposes two new algorithms to improve efficiency: Impression reduces scans by pruning candidates using an impression table, while Transaction Database Spin reduces the database size between iterations by removing transactions not containing large itemsets. Both aim to reduce database access compared to Apriori.
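The Apriori pruning step the summary refers to, generating (k+1)-item candidates from frequent k-itemsets and discarding any candidate with an infrequent k-subset, can be sketched as:

```python
from itertools import combinations

def apriori_gen(freq_k):
    """Generate (k+1)-item candidates from a set of frequent k-itemsets
    (sorted tuples) and prune any candidate that has an infrequent
    k-subset: the Apriori anti-monotonicity property."""
    freq = set(freq_k)
    k = len(next(iter(freq)))
    items = sorted({i for s in freq for i in s})
    return [c for c in combinations(items, k + 1)
            if all(s in freq for s in combinations(c, k))]
```

Only the surviving candidates need their support counted against the database, which is where the scan reduction the proposed algorithms pursue comes from.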
This document summarizes a research paper that proposes a technique for classifying brain CT scan images using principal component analysis (PCA), wavelet transform, and K-nearest neighbors (K-NN) classification. The methodology involves extracting features from CT scan images using PCA and wavelet transform, then training a K-NN classifier on the extracted features to classify images as normal or abnormal. PCA achieved 100% accuracy on brain CT scans, while wavelet transform achieved 100% accuracy on Brodatz texture images. The technique provides an automated way to analyze CT scans and could help radiologists in diagnosis.
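The K-NN classification stage can be sketched as a majority vote over Euclidean distance in the extracted feature space; the feature vectors and labels below are invented, standing in for the PCA or wavelet coefficients the paper uses.

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """K-nearest-neighbours: rank training samples by squared Euclidean
    distance to x and return the majority label among the k closest."""
    order = sorted(range(len(train_X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    votes = Counter(train_y[i] for i in order[:k])
    return votes.most_common(1)[0][0]
```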
1) The document discusses channel estimation techniques for 4G wireless networks using OFDM modulation.
2) Channel estimation is important for coherent detection and diversity techniques in wireless systems, which have time-varying channels. Accurate channel estimation allows techniques like maximal ratio combining.
3) OFDM divides the channel into multiple sub-carriers to combat multipath fading and make channel equalization easier compared to single carrier systems. Channel estimation is needed to characterize the time-varying frequency response of the wireless channel.
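The per-sub-carrier structure point 3 describes is what makes OFDM estimation and equalization simple; a least-squares pilot-based sketch, with one complex tap per sub-carrier and noise ignored for clarity (function names are illustrative):

```python
def ls_channel_estimate(rx_pilots, tx_pilots):
    """Least-squares channel estimate at pilot sub-carriers: H_k = Y_k / X_k."""
    return [y / x for y, x in zip(rx_pilots, tx_pilots)]

def equalize(rx, h):
    """Zero-forcing equalization: divide each sub-carrier by its channel gain,
    a single complex division instead of a time-domain equalizer."""
    return [r / hk for r, hk in zip(rx, h)]
```

In a time-varying channel the estimates at pilot positions are interpolated across the remaining sub-carriers and tracked over successive OFDM symbols.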
This document summarizes a research paper on handwritten script recognition using soft computing techniques. The paper aims to recognize Hindi, English, and Urdu scripts using a combined approach of discrete cosine transform (DCT) and discrete wavelet transform (DWT) for feature extraction, and a neural network classifier. A database containing 961 handwritten samples across the three scripts was created, with 320 samples per script varying in font size. The system achieved a recognition accuracy of 82.70% on the test dataset containing 480 samples. The paper provides background on challenges in multi-script recognition and discusses preprocessing, segmentation, feature extraction and representation steps prior to classification.
This document discusses two designs of microstrip patch antennas. Design 1 is a rectangular microstrip patch antenna that achieves a 13.1% bandwidth. Design 2 is a gap-coupled reduced size rectangular microstrip patch antenna that achieves an enhanced 20.5% bandwidth through the use of parasitic patches placed along the edges of the fed rectangular patch. Simulation results show that Design 2 provides both an improvement in bandwidth and directivity over Design 1.
This paper proposes using particle swarm optimization (PSO) with the shortest position value (SPV) rule to solve task scheduling problems in grid computing.
The key points are:
1. Task scheduling in grid computing aims to minimize completion time and optimally utilize distributed resources like computers and data. PSO is applied to find the optimal task-resource assignment.
2. Individuals in the PSO population represent possible task schedules. The SPV rule converts continuous position values to discrete task sequences.
3. A fitness function that minimizes total task completion time is used to evaluate schedules. The PSO algorithm is run to iteratively update schedules until an optimal one is found.
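The SPV mapping in point 2 can be sketched in one line: sort the dimensions of a particle's continuous position vector and read the resulting index order as the discrete task sequence.

```python
def spv(position):
    """Shortest Position Value rule: the dimension holding the smallest
    continuous value becomes the first task in the discrete sequence."""
    return sorted(range(len(position)), key=lambda i: position[i])
```

For example, the position vector [0.7, -1.2, 0.3] maps to the task sequence [1, 2, 0], so standard continuous PSO updates can still be applied to the positions while the fitness function scores the decoded sequence.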
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...Kalle
To estimate viewer’s contextual understanding, features of their
eye-movements while viewing question statements in response to definition statements, and features of correct and incorrect responses were extracted and compared. Twelve directional features
of eye-movements across a two-dimensional space were created, and these features were compared between correct and incorrect responses. The procedure of estimating the response was developed with Support Vector Machines, using these features. The estimation performance and accuracy were assessed across combinations of features. The number of definition statements, which needed to be memorized to answer the question statements during the experiment, affected the estimation accuracy. These results provide evidence that features of eye-movements during reading statements
can be used as an index of contextual understanding.
The document summarizes a meta-analysis exploring the effects of shared cognitive measures on team performance. It outlines 6 shared cognitive constructs examined: shared mental models, team mental models, information sharing, transactive memory systems, cognitive consensus, and group learning. It then lists the research questions and data collection methods used to analyze articles measuring these constructs and their relationship to team performance.
This document summarizes theories of divided attention from psychological literature. It describes dual task experiments and factors like task similarity, difficulty, and practice that influence performance. Early theories proposed either a single, limited central processor (Kahneman) or multiple specialized modules (Allport). Later theories like multiple resource theory (Navon & Gopher) and Baddeley's model of working memory provided a synthesis, combining a central executive with modality-specific subsystems to better explain dual task findings. However, all theories have limitations in fully specifying the cognitive architecture underlying divided attention.
Blind Image Quality Assessment with Local Contrast Features ijcisjournal
The aim of this research is to create a tool to evaluate distortion in images without the information about
original image. Work is to extract the statistical information of the edges and boundaries in the image and
to study the correlation between the extracted features. Change in the structural information like shape and
amount of edges of the image derives quality prediction of the image. Local contrast features are effectively
detected from the responses of Gradient Magnitude (G) and Laplacian of Gaussian (L) operations. Using
the joint adaptive normalisation, G and L are normalised. Normalised values are quantized into M and N
levels respectively. For these quantised M levels of G and N levels of L, Probability (P) and conditional
probability(C) are calculated. Four sets of values namely marginal distributions of gradient magnitude Pg,
marginal distributions of Laplacian of Gaussian Pl, conditional probability of gradient magnitude Cg and
probability of Laplacian of Gaussian Cl are formed. These four segments or models are Pg, Pl, Cg and Cl.
The assumption is that the dependencies between features of gradient magnitude and Laplacian of
Gaussian can formulate the level of distortion in the image. To find out them, Spearman and Pearson
correlations between Pg, Pl and Cg, Cl are calculated. Four different correlation values of each image are
the area of interest. Results are also compared with classical tool Structural Similarity Index Measure
An Information-Theoretic Approach for Clonal Selection AlgorithmsMario Pavone
This document presents an information-theoretic approach for clonal selection algorithms (CSA) used to solve global optimization problems. It introduces CSA as biologically-inspired algorithms, describes the key operators of cloning, hypermutation, and aging. It also discusses using Kullback-Leibler, Rényi, and Von Neumann divergences to analyze the learning process of CSA. The document outlines applying CSA to benchmark optimization functions and comparing results.
This document proposes an improved genetic algorithm called DGA that combines genetic algorithm and differential evolution. DGA uses adaptive differential evolution as its mutation operator instead of simple genetic algorithm's crossover and mutation. It also adds strategies of optimal reservation and worst elimination. Simulation results show DGA has stronger global optimization ability, faster convergence speed and better stability compared to simple genetic algorithm.
Collocation Extraction Performance Ratings Using Fuzzy Logic - Waqas Tariq
The performance of collocation extraction cannot be quantified or properly expressed in a single dimension, and it is imprecise to interpret collocation extraction metrics without knowing which applications (users) are involved. Most of the existing collocation extraction techniques are those of Berry-Roughe, Church and Hanks, Kita, Shimohata, Blaheta and Johnson, and Pearce. The extraction techniques need to be updated frequently based on feedback from the implementation of previous policies. This feedback is usually stated in the form of ordinal ratings, e.g. "high speed", "average performance", "good condition". Different people can assign different values to these ordinal ratings without a clear-cut reason or scientific basis, so there is a need for a way to transform vague ordinal ratings into more precise numerical estimates. The paper transforms the ordinal performance ratings of some collocation extraction techniques into numerical ratings using fuzzy logic. Keywords: Fuzzy Set Theory, collocation extraction, transformation, performance techniques, criteria.
This document summarizes several key dimensionality reduction algorithms:
1) FOCUS and RELIEF are two of the earliest and most widely cited dimensionality reduction algorithms. FOCUS implements feature selection by preferring consistent hypotheses with few features, while RELIEF ranks features based on distance between instances of the same and different classes.
2) RELIEF was later expanded to handle multi-class problems (RELIEF-F) and incomplete data (RELIEF-D). Liu and Dash also reviewed dimensionality reduction methods and categorized them based on their generation and evaluation procedures.
3) Liu et al. proposed the LVF (Las Vegas Filter) algorithm for feature selection, which uses random search guided by in
Diversity-Aware Mutation Adequacy Criterion for Improving Fault Detection Cap... - Donghwan Shin
This document summarizes a presentation on improving mutation testing by considering diversity among mutants. It proposes a "distinguishing mutation adequacy criterion" (d-criterion) that requires test suites to distinguish all pairs of mutants, in addition to killing them as in the traditional "kill-only criterion" (k-criterion). An empirical study found the d-criterion improved fault detection rates without significantly increasing test suite size compared to the k-criterion. The d-criterion formally extends the k-criterion by also requiring tests to produce different outputs for different mutants.
A Theoretical Framework for Understanding Mutation-Based Testing Methods - Donghwan Shin
The document presents a theoretical framework for understanding mutation-based testing methods. It introduces a new testing factor called a "test differentiator" to formally describe the behavioral differences between programs and mutants. It extends the differentiator to a "d-vector" to represent differences over a set of tests. Positions in a multi-dimensional space are defined based on d-vectors to quantitatively analyze differences. A position deviance relation and position deviance lattice are defined as a graphical model to represent positions and their relationships. The framework can guide understanding of mutation-based testing and applications like identifying a minimal set of mutants.
This document discusses machine intelligence and machine learning. It covers topics such as behavior-based AI vs knowledge-based AI, supervised vs unsupervised learning, classification vs prediction, and decision tree induction for classification. Decision trees are built using an algorithm that selects the attribute that best splits the data at each step to create partitions. Pruning techniques are used to avoid overfitting.
Test for HIV-associated cognitive impairment in India - Kimberly Schafer
This document describes the development of a brief screening battery to detect HIV-associated neurocognitive impairment (HAND) in India. Researchers administered a comprehensive neuropsychological (NP) battery to 206 HIV-positive Indian patients. Statistical analysis identified that combinations of two tests - the Brief Visuospatial Memory Test-Revised for learning and either the Color Trails 1 test for processing speed, Grooved Pegboard test for motor skills, or Digit Symbol test for processing speed - achieved high sensitivity and specificity for detecting HAND. The study aims to develop a quick iPad-based screening tool to assess cognitive functioning in resource-limited settings like India.
MultiModal Identification System in Monozygotic Twins - CSCJournals
This document presents a multimodal biometric system for identifying identical twins using face, fingerprint, and iris recognition. It utilizes Fisher's linear discriminant analysis to extract features from faces, principal component analysis for fingerprints, and local binary pattern features for iris matching. These features are then fused for identification. The system is tested on a database of 50 pairs of identical twins and shows promising results compared to other techniques. Receiver operating characteristics also indicate the proposed method performs better than other studied techniques in distinguishing identical twins.
Paper Gloria Cea - Goal-Oriented Design Methodology Applied to User Interface... - WTHS
This document describes a user interface designed for a mobile application called the Functional Assessment System (FAS). The FAS allows users to assess their aerobic fitness on their own without specialized equipment. The design of the mobile application interface was guided by the Goal-Oriented Design methodology. This methodology focuses on representing users as characters with specific goals and designing scenarios to help users achieve those goals. The document also discusses evaluating the usability of the interface using the AttrakDiff questionnaire to assess pragmatic and hedonic qualities. The results showed satisfactory user interaction with the FAS mobile application interface.
2008 JSM - Meta Study Data vs Patient Data - Terry Liao
Hsini (Terry) Liao, Ph.D., Yun Lu, Hong Wang, “Comparison of Individual Patient-Level and Study-Level Meta-Analyses Using time to Event Analysis in Drug-Eluting Stent Data”, Abstract No 301037, Joint Statistical Meetings, Session No 90, Denver, CO, August 2008
This document summarizes a study on the impact of classes playing roles in design patterns. The study analyzed classes playing zero, one, or two roles across six Java programs. It found that on average 8.24% of classes played one role and 17.81% played two roles. Classes playing roles, especially two roles, had significantly higher values for internal metrics like coupling and cohesion. Classes playing two roles also changed significantly more than other classes. The results confirm previous findings and justify further study into ranking design pattern occurrences based on class roles and metrics.
This document summarizes frequent itemset mining algorithms. It introduces data mining and the Apriori algorithm. Apriori generates candidate itemsets and prunes those that are not frequent by scanning the database multiple times. The document proposes two new algorithms to improve efficiency: Impression reduces scans by pruning candidates using an impression table, while Transaction Database Spin reduces the database size between iterations by removing transactions not containing large itemsets. Both aim to reduce database access compared to Apriori.
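For reference, the level-wise candidate generation and support-based pruning that Apriori performs (and whose repeated database scans both proposed algorithms try to reduce) can be sketched as follows; the transactions and support threshold are illustrative:

```python
from itertools import chain

def apriori(transactions, min_support):
    """Minimal Apriori sketch: build candidate itemsets level by level and
    prune those below min_support by re-scanning the transactions."""
    transactions = [frozenset(t) for t in transactions]
    items = set(chain.from_iterable(transactions))
    # level 1: frequent single items
    freq = {frozenset([i]) for i in items
            if sum(i in t for t in transactions) >= min_support}
    result, k = set(freq), 2
    while freq:
        # join step: size-k candidates from frequent (k-1)-itemsets
        candidates = {a | b for a in freq for b in freq if len(a | b) == k}
        # prune step: one full database scan per level
        freq = {c for c in candidates
                if sum(c <= t for t in transactions) >= min_support}
        result |= freq
        k += 1
    return result

frequent = apriori([{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'c'}, {'b', 'c'}], 2)
```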
This document summarizes a research paper that proposes a technique for classifying brain CT scan images using principal component analysis (PCA), wavelet transform, and K-nearest neighbors (K-NN) classification. The methodology involves extracting features from CT scan images using PCA and wavelet transform, then training a K-NN classifier on the extracted features to classify images as normal or abnormal. PCA achieved 100% accuracy on brain CT scans, while wavelet transform achieved 100% accuracy on Brodatz texture images. The technique provides an automated way to analyze CT scans and could help radiologists in diagnosis.
1) The document discusses channel estimation techniques for 4G wireless networks using OFDM modulation.
2) Channel estimation is important for coherent detection and diversity techniques in wireless systems, which have time-varying channels. Accurate channel estimation allows techniques like maximal ratio combining.
3) OFDM divides the channel into multiple sub-carriers to combat multipath fading and make channel equalization easier compared to single carrier systems. Channel estimation is needed to characterize the time-varying frequency response of the wireless channel.
This document summarizes a research paper on handwritten script recognition using soft computing techniques. The paper aims to recognize Hindi, English, and Urdu scripts using a combined approach of discrete cosine transform (DCT) and discrete wavelet transform (DWT) for feature extraction, and a neural network classifier. A database containing 961 handwritten samples across the three scripts was created, with 320 samples per script varying in font size. The system achieved a recognition accuracy of 82.70% on the test dataset containing 480 samples. The paper provides background on challenges in multi-script recognition and discusses preprocessing, segmentation, feature extraction and representation steps prior to classification.
This document discusses two designs of microstrip patch antennas. Design 1 is a rectangular microstrip patch antenna that achieves a 13.1% bandwidth. Design 2 is a gap-coupled reduced size rectangular microstrip patch antenna that achieves an enhanced 20.5% bandwidth through the use of parasitic patches placed along the edges of the fed rectangular patch. Simulation results show that Design 2 provides both an improvement in bandwidth and directivity over Design 1.
This paper proposes using particle swarm optimization (PSO) with the shortest position value (SPV) rule to solve task scheduling problems in grid computing.
The key points are:
1. Task scheduling in grid computing aims to minimize completion time and optimally utilize distributed resources like computers and data. PSO is applied to find the optimal task-resource assignment.
2. Individuals in the PSO population represent possible task schedules. The SPV rule converts continuous position values to discrete task sequences.
3. A fitness function that minimizes total task completion time is used to evaluate schedules. The PSO algorithm is run to iteratively update schedules until an optimal one is found.
4.
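The SPV decoding in point 2 amounts to sorting tasks by their continuous position values; a minimal sketch (the position values are illustrative):

```python
import numpy as np

def spv(position):
    # Shortest Position Value rule: the task with the smallest continuous
    # position value comes first in the discrete schedule.
    return np.argsort(position)

# one particle's continuous position over 5 tasks (illustrative values)
perm = spv(np.array([0.7, -1.2, 0.1, 2.3, -0.4]))
# task order: [1, 4, 2, 0, 3]
```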
This document summarizes a research paper that proposes a method for generating concise and non-redundant association rules from multi-level datasets. The method defines hierarchical redundancy in rules extracted from hierarchical data and introduces an approach called ReliableExactRule to derive a lossless representation of non-redundant rules. It first discusses related work on mining frequent itemsets and association rules from single-level and multi-level data. It then presents the ReliableExactRule approach, which uses closed itemsets and generators to represent rules without redundancy, but notes this still allows for hierarchical redundancy. The paper aims to address hierarchical redundancy and present a definition and technique to eliminate it without information loss.
The document analyzes microstrip transmission lines using a quasi-static approach. It presents numerically efficient and accurate formulas to analyze microstrip line structures. The analysis derives formulas for characteristic impedance of microstrip lines based on variables like the normalized strip width, effective permittivity, height of the substrate, and thickness of the microstrip line. It also defines the structure of a microstrip line and formulates the quasi-static analysis by introducing the concept of an effective relative dielectric constant to account for the microstrip being surrounded by different dielectrics like air and the substrate material.
This document summarizes conventional and soft computing techniques for color image segmentation. It begins with an introduction to image segmentation and discusses how color images contain more information than grayscale images. The document then provides an overview of conventional segmentation algorithms, categorizing them as edge-based, region-based, or clustering-based methods. It also introduces soft computing techniques like fuzzy logic, neural networks, and genetic algorithms as promising approaches for color image segmentation, noting that these methods are complementary rather than competitive.
This paper proposes a parallel evolutionary algorithm to solve single variable optimization problems. Specifically:
- It presents a genetic algorithm approach that runs in parallel using a master-slave model, where the master performs genetic operations and distributes individuals to slaves for evaluation.
- The algorithm is tested on single variable optimization problems to find minimum/maximum values.
- Experimental results show the parallel genetic algorithm is effective at finding optimal solutions to these problems and represents an efficient parallel approach for optimization.
COMPARISON BETWEEN THE GENETIC ALGORITHMS OPTIMIZATION AND PARTICLE SWARM OPT... - IAEME Publication
Close range photogrammetry network design refers to the process of placing a set of cameras in order to
achieve photogrammetric tasks. The main objective of this paper is to find the best locations for
two or three camera stations. Genetic algorithm optimization and Particle Swarm Optimization are
developed to determine the optimal camera stations for computing the three-dimensional coordinates.
In this research, a mathematical model representing genetic algorithm optimization and Particle Swarm
Optimization for the close range photogrammetry network is developed. The paper also gives the sequence
of field operations and computational steps for this task. A test field is included to reinforce the
theoretical aspects.
Comparison between the genetic algorithms optimization and particle swarm opt... - IAEME Publication
The document compares the genetic algorithms optimization and particle swarm optimization methods for designing close range photogrammetry networks. It presents the genetic algorithm and particle swarm optimization as two popular meta-heuristic algorithms inspired by natural evolution and collective animal behavior, respectively. The document develops mathematical models representing the genetic algorithm and particle swarm optimization for close range photogrammetry network design and evaluates them in a test field to reinforce the theoretical aspects.
Biology-Derived Algorithms in Engineering Optimization - Xin-She Yang
The document discusses biology-derived algorithms and their applications in engineering optimization. It describes several biology-inspired algorithms including genetic algorithms, photosynthetic algorithms, neural networks, and cellular automata. Genetic algorithms and photosynthetic algorithms are discussed in more detail. The document also provides examples of how these algorithms can be applied to problems in engineering optimization, such as parameter estimation and inverse analysis.
A Non-Revisiting Genetic Algorithm for Optimizing Numeric Multi-Dimensional F... - ijcsa
The document describes a non-revisiting genetic algorithm with adaptive mutation for optimizing multi-dimensional numeric functions. The proposed algorithm avoids revisiting previously evaluated solutions by replacing any duplicates with a mutated version of either the best or random individual. The mutation rate adapts over generations, starting with more exploration and ending with more exploitation. Experimental results on 9 benchmark functions in 2 and 4 dimensions show the proposed algorithm achieves better best and average fitness than a standard genetic algorithm, with improvements ranging from 10-98% depending on the function.
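A sketch of the non-revisiting step described above, under assumed details (a rounded-tuple archive of visited solutions, a linearly shrinking mutation range, and a probability of mutating the best individual that grows with the generation count):

```python
import random

def replace_revisits(population, archive, best, gen, max_gen, bounds):
    """Replace any individual already seen (recorded in `archive`) with a
    mutated copy of either the best or a random individual. Early generations
    favour random parents (exploration); late ones favour the best
    (exploitation), and the mutation step shrinks over generations."""
    lo, hi = bounds
    step = (1 - gen / max_gen) * (hi - lo) * 0.1
    fresh = []
    for ind in population:
        key = tuple(round(x, 6) for x in ind)
        if key in archive:  # revisit detected: mutate instead of re-evaluating
            base = best if random.random() < gen / max_gen else random.choice(population)
            ind = [min(hi, max(lo, x + random.uniform(-step, step))) for x in base]
        archive.add(tuple(round(x, 6) for x in ind))
        fresh.append(ind)
    return fresh

archive = set()
out = replace_revisits([[0.5, 0.5], [0.5, 0.5]], archive, [0.0, 0.0], 1, 10, (-1.0, 1.0))
```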
For three decades, many mathematical programming methods have been developed to solve optimization problems. However, until now, there has not been a single totally efficient and robust method that covers all optimization problems arising in the different engineering fields. Most engineering design problems involve the choice of design variable values that best describe the behaviour of a system. At the same time, those results should meet the requirements and specifications imposed by the norms for that system. This last condition leads to predicting what the input parameter values should be so that the design results comply with the norms and also present good performance, which describes the inverse problem. Generally, in design problems the variables are discrete from the mathematical point of view. However, most mathematical optimization applications are focused on and developed for continuous variables. Presently, there are many research articles about optimization methods; the typical ones are based on calculus, numerical methods, and random methods.
The calculus-based methods have been intensely studied and are subdivided into two main classes: 1) direct search methods find a local maximum by moving over the function along the relative local gradient directions, and 2) indirect methods usually find the local extrema by solving the set of non-linear equations that results from equating the gradient of the objective function to zero, i.e., by the multidimensional generalization of the notion of extreme points from elementary calculus: given a smooth, unrestricted function, the possible maxima are restricted to those points whose slope is zero in all directions. The real world has many discontinuities and noisy spaces, so it is not surprising that methods depending on the restrictive requirements of continuity and existence of a derivative are unsuitable for all but a very limited problem domain. Enumerative schemes have been applied in many forms and sizes. The idea is quite direct: inside a finite search space, or a discretized infinite search space, the algorithm evaluates the objective function at each point of the space, one at a time. The simplicity of this kind of algorithm is very attractive when the number of possibilities is small. Nevertheless, these schemes are often inefficient, since they do not meet the requirement of robustness in big or highly-dimensional spaces, where finding the optimal values becomes quite a hard task. Given the shortcomings of the calculus-based and enumerative techniques, random methods have increased in popularity.
This paper presents a set of methods that use a genetic algorithm for automatic test-data generation in
software testing. Over the years, researchers have proposed several methods for generating test data,
each with different drawbacks. In this paper, we present various Genetic Algorithm (GA) based test
methods with different parameters to automate structural-oriented test data generation on the basis of
internal program structure. The factors discovered are used in evaluating the fitness function of the
genetic algorithm for selecting the best possible test method. These methods take test populations as
input and then evaluate the test cases for that program. This integration helps improve the overall
performance of the genetic algorithm in search-space exploration and exploitation, with a better
convergence rate.
A Genetic Algorithm on Optimization Test Functions - IJMERJOURNAL
ABSTRACT: Genetic Algorithms (GAs) have become increasingly useful over the years for solving combinatorial problems. Though they are generally accepted to be good performers among metaheuristic algorithms, most works have concentrated on the application of GAs rather than their theoretical justification. In this paper, we examine and justify the suitability of Genetic Algorithms for solving complex, multi-variable and multi-modal optimization problems. To achieve this, a simple Genetic Algorithm was used to solve four standard complicated optimization test functions, namely the Rosenbrock, Schwefel, Rastrigin and Shubert functions. These functions are benchmarks for testing the quality of an optimization procedure towards a global optimum. We show that the method converges quickly to the global optima and that the optimal values for the Rosenbrock, Rastrigin, Schwefel and Shubert functions are zero (0), zero (0), -418.9829 and -14.5080 respectively.
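As an illustration of the kind of procedure such studies evaluate, a minimal real-coded GA minimizing the Rastrigin function might look like this (the operators and parameters are illustrative, not the paper's):

```python
import math
import random

def rastrigin(x):
    # standard Rastrigin test function; global minimum 0 at x = (0, ..., 0)
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def simple_ga(f, dim=2, pop_size=40, gens=200, bounds=(-5.12, 5.12)):
    """Minimal real-coded GA sketch: tournament selection, midpoint (blend)
    crossover, Gaussian mutation, elitist survivor selection."""
    random.seed(1)
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            # two size-3 tournaments pick the parents
            a = min(random.sample(pop, 3), key=f)
            b = min(random.sample(pop, 3), key=f)
            # midpoint crossover plus Gaussian mutation, clamped to bounds
            child = [min(hi, max(lo, (x + y) / 2 + random.gauss(0, 0.3)))
                     for x, y in zip(a, b)]
            nxt.append(child)
        pop = sorted(pop + nxt, key=f)[:pop_size]  # elitist truncation
    return min(pop, key=f)

best = simple_ga(rastrigin)
```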
Artificial Intelligence in Robot Path Planning - iosrjce
This document discusses various artificial intelligence techniques for robot path planning, including ant colony optimization. It provides background on particle swarm optimization, genetic algorithms, tabu search, simulated annealing, reactive search optimization, and ant colony algorithms. It then proposes a solution for robotic path planning that uses ant colony optimization. The proposed solution involves defining a source and destination point for the robot, moving it forward one step at a time while checking for obstacles, having it take three steps back if an obstacle is encountered, and applying ant colony optimization algorithms to help the robot find an optimal path to bypass obstacles and reach the destination point.
Performance Analysis of Genetic Algorithm as a Stochastic Optimization Tool i... - paperpublications3
Abstract: Engineering design problems are complex by nature because their critical objective functions involve many variables and constraints. Engineers have to ensure compatibility with the imposed specifications while keeping manufacturing costs low. Moreover, the methodology may vary according to the design problem.
The main issue is to choose the proper tool for optimization. In the earlier days, a design problem was optimized by conventional optimization techniques like gradient search, evolutionary optimization, random search, etc. These are known as classical methods.
The method has to be chosen properly depending on the nature of the problem; an incorrect choice may fail to give the optimal solution. So these methods are less robust.
Nowadays soft-computing techniques are widely used for optimizing a function, and they are more robust. Genetic algorithm is one such method: an effective tool in the realm of stochastic (non-classical) optimization. The algorithm produces many strings over the generations to reach the optimal point.
The main objective of the paper is to optimize engineering design problems using the genetic algorithm and to analyze how the algorithm reaches the optimum effectively and closely. We choose a mathematical expression for the objective function in terms of the design variables and optimize it under given constraints using the GA.
List the problems that can be efficiently solved by Evolutionary P.pdf - infomalad
List the problems that can be efficiently solved by Evolutionary Programming?
Solution
Evolutionary Programming is a Global Optimization algorithm and is an instance of an
Evolutionary Algorithm from the field of Evolutionary Computation. The approach is a sibling
of other Evolutionary Algorithms such as the Genetic Algorithm, and Learning Classifier
Systems. It is sometimes confused with Genetic Programming given the similarity in name, and
more recently it shows a strong functional similarity to Evolution Strategies.
Inspiration
Evolutionary Programming is inspired by the theory of evolution by means of natural selection.
Specifically, the technique is inspired by macro-level or the species-level process of evolution
(phenotype, hereditary, variation) and is not concerned with the genetic mechanisms of evolution
(genome, chromosomes, genes, alleles).
Metaphor
A population of a species reproduce, creating progeny with small phenotypical variation. The
progeny and the parents compete based on their suitability to the environment, where the
generally more fit members constitute the subsequent generation and are provided with the
opportunity to reproduce themselves. This process repeats, improving the adaptive fit between
the species and the environment.
Strategy
The objective of the Evolutionary Programming algorithm is to maximize the suitability of a
collection of candidate solutions in the context of an objective function from the domain. This
objective is pursued by using an adaptive model with surrogates for the processes of evolution,
specifically hereditary (reproduction with variation) under competition. The representation used
for candidate solutions is directly assessable by a cost or objective function from the domain.
Problems for evolutionary algorithms
Evolutionary programming is useful for problems for which no mechanistic method is available,
or for which unreasonable assumptions would need to be made: problems for which you cannot
calculate or derive a solution, but have to search for one.
Aspects of problems that lend them to Evolutionary Programming solution include:
Assignment of individuals into groups, wherever simple ranking and truncation will not work.
This usually involves interactions, whereby whether an individual should be in a group depends
on what other individuals will be in that group. Example: developing multiplex groupings for
genotyping.
Problems where thresholds are involved, for example when the value of a solution depends on
whether certain thresholds have been passed, or the fate of an individual depends on passing one
or more thresholds. Example: Supply chain optimizing to target multiple product end-points
and/or turnoff dates.
Combinatorially tedious problems, where many combinations of components can exist. Example:
The setting up of animal matings. Below are a few applications:
Learning classifier systems, or .
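To make the species-level metaphor concrete, a minimal EP loop (no crossover; each parent yields one offspring by Gaussian mutation, then parents and offspring compete for survival) on a simple sphere objective might look like this; all parameters are illustrative:

```python
import random

def evolutionary_programming(f, dim=2, mu=20, gens=100, bounds=(-5.0, 5.0)):
    """Minimal EP sketch: phenotypic variation only (Gaussian mutation),
    followed by (mu + mu) survivor competition between parents and progeny."""
    random.seed(0)
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(mu)]
    for g in range(gens):
        sigma = 0.5 * (1 - g / gens) + 0.05  # decaying mutation step
        # each parent produces exactly one mutated offspring (no crossover)
        offspring = [[min(hi, max(lo, x + random.gauss(0, sigma))) for x in p]
                     for p in pop]
        # parents and offspring compete; the fitter half survives
        pop = sorted(pop + offspring, key=f)[:mu]
    return pop[0]

best = evolutionary_programming(lambda x: sum(xi * xi for xi in x))
```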
Survey on evolutionary computation techniques and its application in dif... - ijitjournal
In computer science, 'evolutionary computation' is an algorithmic tool based on evolution. It implements
random variation, reproduction and selection by altering and moving data within a computer, and it helps
in building, applying and studying algorithms based on the Darwinian principles of natural selection. In
this paper, a brief study of the different evolutionary computation techniques used in some applications,
specifically image processing, cloud computing and grid computing, is carried out. This work is an effort
to help researchers from different fields gain knowledge of the evolutionary computation techniques
applicable in the above mentioned areas.
Optimization of Mechanical Design Problems Using Improved Differential Evolut... - IDES Editor
Differential Evolution (DE) is a novel evolutionary
approach capable of handling non-differentiable, non-linear
and multi-modal objective functions. DE has consistently been
ranked as one of the best search algorithms for solving global
optimization problems in several case studies. This paper
presents an Improved Constraint Differential Evolution
(ICDE) algorithm for solving constrained optimization
problems. The proposed ICDE algorithm differs from the
unconstrained DE algorithm only in the initialization, the
selection of particles for the next generation, and the sorting
of the final results. We also applied the new idea to five
versions of the DE algorithm. The performance of the ICDE
algorithm is validated on four mechanical engineering problems.
The experimental results show the performance of the ICDE
algorithm in terms of final objective function value, number
of function evaluations and convergence time.
Multi-Population Methods with Adaptive Mutation for Multi-Modal Optimization ... - ijscai
This paper presents an efficient scheme to locate multiple peaks in multi-modal optimization problems
using genetic algorithms (GAs). Premature convergence arises from the loss of diversity; the
multi-population technique can be applied to maintain diversity in the population and the convergence
capacity of GAs. The proposed scheme combines multi-population with an adaptive mutation operator,
which determines two different mutation probabilities for different sites of the solutions. The
probabilities are updated according to the fitness and distribution of solutions in the search space
during the evolution process. The experimental results demonstrate the performance of the proposed
algorithm on a set of benchmark problems in comparison with relevant algorithms.
Dynamic Radius Species Conserving Genetic Algorithm for Test Generation for S... - ijseajournal
This document summarizes a research paper that proposes a new approach called Dynamic-radius Species-conserving Genetic Algorithm (DSGA) for generating structural test cases using a genetic algorithm. DSGA aims to generate a complete test suite with a single run by finding test cases that cover different areas of the program structure. It begins by finding test cases that cover some areas, then excludes those areas to search for test cases covering other uncovered areas, similar to how humans generate structural test cases. The paper evaluates DSGA on the Triangle Classification algorithm and finds it able to generate a complete test suite without limitations of other genetic algorithm approaches for structural test case generation.
This document proposes a novel technique to detect multiple faults in an automobile engine using sound signals collected from a single microphone sensor. It describes experiments conducted using a Maruti Alto 800cc 4-cylinder engine. Three types of faults are considered: 1) knocking fault, 2) insufficient lubricant fault, and 3) excessive lubricant fault. Sound features are extracted from the engine and analyzed using artificial neural networks to classify the engine condition as normal or faulty. The technique aims to provide simple fault detection using a single sensor compared to existing methods that use separate sensors for each fault.
This document presents a new algorithm for automatically detecting driver drowsiness based on electroencephalography (EEG) using Mahalanobis distance. EEG signals are measured by placing electrodes on the driver's head. Two main approaches for detecting drowsiness are analyzing physical changes like head position and measuring physiological changes like brain activity. This algorithm focuses on the second approach using EEG signals, which can accurately track alertness levels second-to-second. It first establishes a model of alert brain activity using multivariate normal distribution of EEG theta and alpha rhythms. Mahalanobis distance is then used to detect drowsiness by measuring deviation from the alert model.
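As background for the detection step, here is a minimal pure-Python sketch of the Mahalanobis distance for a two-dimensional feature vector (e.g. theta and alpha band power; the baseline numbers below are made up for illustration): the alert state is modeled by the mean and covariance of baseline samples, and a large distance from that model flags drowsiness.

```python
import math

def mahalanobis_2d(x, samples):
    """Mahalanobis distance of a 2-D feature vector x from the distribution of
    baseline 'alert' samples, modeled as in the paper by a multivariate normal
    (here estimated by the sample mean and 2x2 covariance)."""
    n = len(samples)
    mu = [sum(s[i] for s in samples) / n for i in (0, 1)]
    # 2x2 sample covariance matrix
    c = [[sum((s[i] - mu[i]) * (s[j] - mu[j]) for s in samples) / (n - 1)
          for j in (0, 1)] for i in (0, 1)]
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    inv = [[c[1][1] / det, -c[0][1] / det],
           [-c[1][0] / det, c[0][0] / det]]
    d = [x[0] - mu[0], x[1] - mu[1]]
    # d^T * Sigma^{-1} * d
    q = sum(d[i] * inv[i][j] * d[j] for i in (0, 1) for j in (0, 1))
    return math.sqrt(q)

alert = [(4.0, 9.0), (4.2, 9.5), (3.8, 8.8), (4.1, 9.2), (3.9, 9.1)]
d_alert = mahalanobis_2d((4.0, 9.1), alert)    # near the alert cluster
d_drowsy = mahalanobis_2d((7.5, 5.0), alert)   # far from it: flag drowsiness
```

Unlike Euclidean distance, this accounts for the correlation between the theta and alpha features, which is why the multivariate normal model is used rather than per-channel thresholds.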
This document provides an overview of grid computing. It discusses that grid computing enables sharing, selection, and aggregation of distributed resources like supercomputers, storage, and data sources. Grid computing allows for these resources to be used as a unified virtual machine. The document then discusses the services offered by grids including computational, data, application, information, and knowledge services. It also discusses the types of grids like computational grids, data grids, and scavenging grids. Finally, it discusses some of the key advantages of grid computing like making better use of available hardware and idle computing resources.
1) The document discusses image segmentation in satellite images using optimal texture measures. It evaluates four texture measures from the gray-level co-occurrence matrix (GLCM) with six different window sizes.
2) Principal Component Analysis (PCA) is applied to reduce the texture measures to a manageable size while retaining discrimination information.
3) The methodology consists of selecting an optimal window size and optimal texture measure. A 7x7 window size provided superior performance for classification. PCA is used to analyze correlations between texture measures and window sizes.
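The dimensionality-reduction step can be sketched with an eigendecomposition-based PCA in NumPy (the data below is synthetic and the function is a generic sketch, not the paper's pipeline): correlated texture measures are projected onto the top principal components while retaining most of the discriminating variance.

```python
import numpy as np

def pca_reduce(features, k):
    """Project correlated feature vectors (e.g. GLCM texture measures computed
    at several window sizes) onto the top-k principal components."""
    x = features - features.mean(axis=0)          # center each feature column
    cov = np.cov(x, rowvar=False)                 # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)              # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]     # top-k eigenvectors
    return x @ top

# 6 pixels described by 4 correlated texture measures, reduced to 2 components.
rng = np.random.default_rng(0)
base = rng.normal(size=(6, 2))
feats = np.hstack([base, base + 0.01 * rng.normal(size=(6, 2))])  # redundant columns
reduced = pca_reduce(feats, 2)
```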
This document discusses using a relevance vector machine (RVM) for classifying remotely sensed images. It proposes a methodology that involves extracting features from remote sensing images using wavelet transforms, then classifying the features using an RVM. The RVM classification results in fewer "relevance vectors" than other methods, allowing for faster classification, which is important for applications requiring low complexity or real-time classification. The document provides background on RVMs and describes the key steps of the proposed classification methodology.
This document discusses the development of an embedded web server using an ARM processor to monitor and control systems remotely. It provides background on the growing use of embedded web servers and Internet of Things applications. The paper then describes implementing TCP/IP networking on an ARM processor to enable Ethernet connectivity and allow the device to function as a web server. This allows various devices to connect and be controlled over the Internet through a standardized web interface using only a browser. The embedded web server provides a uniform interface for accessing traditional devices remotely. The rest of the paper details the hardware, web server implementation, and software concepts to realize this embedded web server functionality.
1) The document discusses security threats related to data mining tools used in programs like the Terrorism Information Awareness (TIA) program. It outlines threats such as predicting classified information, detecting hidden information, and mining open source data to predict events.
2) The document proposes some methods to improve security, such as restricting access, using data mining for crime detection/prevention, and employing multilevel security models.
3) The authors acknowledge they are in the early stages of research on using technology-based analysis tools rather than statistical approaches for identifying potential terrorists in large pools of data. They outline future work such as person identification without relying only on statistical comparisons.
This document summarizes a research paper that proposes a secure routing protocol called CA-AOMDV for mobile ad hoc networks (MANETs). CA-AOMDV extends the AOMDV routing protocol to be aware of channel conditions and selects multiple disjoint paths based on predicted link lifetimes. It uses the Secure Hash Algorithm 1 (SHA-1) to guarantee integrity in the network. The paper reviews AOMDV and introduces how CA-AOMDV incorporates channel properties into route discovery and maintenance to choose more reliable paths based on predicted link lifetimes calculated from node speeds and a channel model.
This document provides a comparative analysis of various cloud service providers. It begins with an introduction to cloud computing and techniques for optimal service selection. Then it presents a table comparing prominent cloud service providers like Amazon AWS, Google App Engine, Windows Azure, Force.com, Rackspace and GoGrid. The table compares their cloud tools, platforms supported, programming languages, premium support pricing policies and data backup strategies to help users understand and reasonably choose a suitable provider. The aim is to focus on decision making for optimal service selection through this brief comparative analysis.
This document summarizes research on improving search engine efficiency by maximizing the retrieval of information related to person names and aliases. It discusses how search engines work, including web crawling to index pages and information retrieval techniques to match queries. The authors propose using anchor text mining to create a graph of co-occurrence relationships between names and aliases in order to automatically discover association orders between them. This would allow search engines to better tag aliases according to their order of association, improving recall and mean reciprocal rank when searching for information on person names.
This document summarizes a research paper on modeling DC-DC converters with high frequencies using state space analysis. The paper presents an approach to modeling that avoids assuming constant current ripples, allowing for a better representation at high frequencies. State space averaging is commonly used to model PWM DC-DC converters but has limitations. The presented approach generalizes state space averaging to account for harmonics' effects, transforming time-varying models into time-invariant linear models. Equations for the state space model of a buck converter are provided both when operating and when turned off, and the average state model is derived. The goal is to improve performance for load and input variations through implicit feedforward compensation.
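For reference, the classical state-space description of an ideal buck converter (inductor $L$, capacitor $C$, load $R$, input $V_{in}$, duty cycle $d$) that the paper generalizes looks like this; the harmonic-aware model in the paper extends the last (averaged) line, which by itself assumes small ripple:

```latex
% On-state (switch conducting):
L\frac{di_L}{dt} = V_{in} - v_C, \qquad C\frac{dv_C}{dt} = i_L - \frac{v_C}{R}
% Off-state (diode conducting):
L\frac{di_L}{dt} = -v_C, \qquad C\frac{dv_C}{dt} = i_L - \frac{v_C}{R}
% Classical average over one switching period with duty cycle d:
L\frac{di_L}{dt} = d\,V_{in} - v_C, \qquad C\frac{dv_C}{dt} = i_L - \frac{v_C}{R}
```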
This document summarizes a research paper that proposes a Cooperative Multi-Hop Clustering Protocol to reduce the energy consumption of mobile devices using WLAN. The protocol uses Bluetooth to form clusters with one cluster head and multiple regular nodes. The cluster head remains connected to the WLAN to allow regular nodes to access the WLAN through Bluetooth at a lower power. The protocol selects cluster heads based on factors like energy, number of neighbors, and distance to the access point. It dynamically reforms clusters based on node energy usage and bandwidth needs. Simulation results show the approach effectively reduces WLAN power consumption for networks of over 200 nodes.
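The cluster-head election can be sketched as a weighted score over the three factors the abstract names; the weights and the linear form below are illustrative assumptions, since the paper's exact formula is not given here.

```python
def select_cluster_head(nodes):
    """Pick the cluster head as the node maximizing a weighted combination of
    residual energy, neighbor count, and closeness to the WLAN access point.
    Weights and the scoring formula are illustrative, not from the paper."""
    def score(n):
        return 0.5 * n["energy"] + 0.3 * n["neighbors"] - 0.2 * n["ap_distance"]
    return max(nodes, key=score)

nodes = [
    {"id": "a", "energy": 80, "neighbors": 5, "ap_distance": 10},
    {"id": "b", "energy": 95, "neighbors": 7, "ap_distance": 12},
    {"id": "c", "energy": 40, "neighbors": 9, "ap_distance": 4},
]
head = select_cluster_head(nodes)
```

Re-running the election as node energies drain gives the dynamic cluster reformation the protocol describes.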
This document proposes an adaptive mobility-aware medium access control (MAC) protocol called MMAC-SW for wireless sensor networks. MMAC-SW uses a hybrid TDMA/CSMA approach and incorporates sleep-wake cycling to improve energy efficiency. It dynamically adjusts the frame length based on a mobility prediction model to adapt to changing network conditions. Simulation results show that MMAC-SW outperforms the baseline MMAC protocol in terms of energy consumption, packet delivery ratio, and average packet delay.
This document discusses an efficient deconvolution algorithm using dual-tree complex wavelet transform. It begins with an introduction to deconvolution and its challenges. Specifically, it notes that deconvolution is an ill-posed inverse problem and traditional methods can amplify noise. The document then reviews previous work on Fourier-domain and wavelet-based deconvolution techniques. It proposes a new two-step algorithm using a Wiener filter for global blur compensation followed by local denoising with dual-tree complex wavelet transform. This approach aims to convert the deconvolution problem into an easier non-white noise removal problem while exploiting properties of the dual-tree complex wavelet like shift-invariance and directionality to remove noise without assumptions on the
This document proposes a solution called CloudVision to help cloud providers troubleshoot problems reported by users. CloudVision would automatically track configuration changes to virtual machine instances and store this information in a database. When users report problems, CloudVision analyzes the configuration history to identify potential causes. It then takes predefined actions to check and solve problems by interacting with the configuration of VM instances. The goal is to help providers address user problems more quickly through automated problem reasoning and interactive troubleshooting based on visibility into VM configuration events and lifecycles.
This document discusses network traffic monitoring using the Winpcap packet capturing tool. It begins with an introduction to enterprise network monitoring and requirements. It then provides an overview of Winpcap, including its architecture and how it works. Key aspects covered include the packet capture driver, Packet.dll, and WinPcap.dll libraries. The document also discusses related tools like Jpcap for Java packet capturing. It concludes with an overview of a sample network traffic monitoring application that implements packet capturing using Winpcap.
This document proposes a solution to detect and remove black hole attacks in mobile ad-hoc networks. It begins by describing the black hole attack problem, where a malicious node pretends to have routes to destinations and absorbs network traffic. It then presents a detection technique that involves: (1) making the requesting node wait for multiple route replies instead of immediately sending data, (2) storing the replies in tables to compare sequence numbers and times, and (3) repeating the route discovery with a different destination to obtain multiple reply tables to identify inconsistencies that reveal black hole nodes. This proposed solution aims to identify black holes and find safe routes that avoid them.
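The comparison over collected reply tables can be sketched as follows (field names and the threshold are illustrative; the paper's tables also record reply times): a black hole typically advertises an abnormally fresh, i.e. high, destination sequence number to win route selection, so an extreme outlier among the gathered replies is suspicious.

```python
def suspect_black_holes(replies, threshold=100):
    """Given route replies as (replier, dest_seq_num) pairs collected while the
    source waits instead of sending data immediately, flag repliers whose
    advertised destination sequence number exceeds the median by a large
    margin. A sketch of the comparison idea, not the full protocol."""
    seqs = sorted(s for _, s in replies)
    median = seqs[len(seqs) // 2]
    return {node for node, s in replies if s - median > threshold}

replies = [("B", 12), ("C", 14), ("D", 13), ("M", 9000)]  # M claims a huge seq num
bad = suspect_black_holes(replies)
```

Repeating route discovery toward a different destination, as the paper proposes, yields a second table against which the same consistency check can be run.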
This document describes the design of a high-speed Gray to binary code converter using a novel two transistor XOR gate. It introduces a low power and area efficient Gray to binary converter implemented using a two transistor XOR gate designed with two PMOS transistors. The converter and XOR gate are designed and simulated using Mentor Graphics tools. Simulation results show the converter has very low power dissipation and area requirements compared to other code converter designs.
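The logical function realized by the converter is the standard Gray-to-binary recurrence, where each binary bit is the XOR of all Gray bits at or above its position; a software reference version (for checking the hardware truth table, not part of the paper) is:

```python
def gray_to_binary(g):
    """Convert a Gray-coded integer to its binary value by folding the XOR of
    successively shifted copies: b = g ^ (g >> 1) ^ (g >> 2) ^ ...  This is the
    function the paper's two-transistor XOR gates implement in hardware."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b
```

For example, Gray code `110` decodes to binary `100` (decimal 4), since the Gray encoding of 4 is `4 ^ (4 >> 1) = 6`.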
This document summarizes a research paper that proposes modeling direct torque control of an inverter-fed induction motor as a hybrid system using discrete event modeling.
The paper first describes direct torque control and its issues with high torque ripple and variable switching frequency when hysteresis comparators are used. It then presents a three-step method: 1) modeling direct torque control as a hybrid system with discrete voltage-vector events and continuous flux and torque dynamics; 2) abstracting the continuous dynamics as discrete events, such as the flux entering or exiting regions; 3) using supervisory control theory to control the induction motor.
The document focuses on modeling the inverter and induction motor as a discrete event system to address these issues with direct torque control.
This document discusses data security and authentication using steganography and the STS protocol. It proposes a new approach that uses steganography to hide encrypted messages within images by generating a stego-key through the STS key exchange protocol. The STS protocol provides authentication by requiring signatures, while steganography further protects the data by concealing the encrypted messages within cover files like images. The document analyzes how combining steganography with cryptography and key exchange protocols like STS can enhance data security.
"$10 thousand per minute of downtime: architecture, queues, streaming and fin..." (Fwdays)
Direct losses from one minute of downtime run $5-$10 thousand; reputation is priceless.
As part of the talk, we will consider the architectural strategies necessary for the development of highly loaded fintech solutions. We will focus on using queues and streaming to efficiently work and manage large amounts of data in real-time and to minimize latency.
We will focus special attention on the architectural patterns used in the design of the fintech system, microservices and event-driven architecture, which ensure scalability, fault tolerance, and consistency of the entire system.
Session 1 - Intro to Robotic Process Automation.pdf (UiPathCommunity)
👉 Check out our full 'Africa Series - Automation Student Developers (EN)' page to register for the full program:
https://bit.ly/Automation_Student_Kickstart
In this session, we shall introduce you to the world of automation, the UiPath Platform, and guide you on how to install and setup UiPath Studio on your Windows PC.
📕 Detailed agenda:
What is RPA? Benefits of RPA?
RPA Applications
The UiPath End-to-End Automation Platform
UiPath Studio CE Installation and Setup
💻 Extra training through UiPath Academy:
Introduction to Automation
UiPath Business Automation Platform
Explore automation development with UiPath Studio
👉 Register here for our upcoming Session 2 on June 20: Introduction to UiPath Studio Fundamentals: https://community.uipath.com/events/details/uipath-lagos-presents-session-2-introduction-to-uipath-studio-fundamentals/
In our second session, we shall learn all about the main features and fundamentals of UiPath Studio that enable us to use the building blocks for any automation project.
📕 Detailed agenda:
Variables and Datatypes
Workflow Layouts
Arguments
Control Flows and Loops
Conditional Statements
💻 Extra training through UiPath Academy:
Variables, Constants, and Arguments in Studio
Control Flow in Studio
"NATO Hackathon Winner: AI-Powered Drug Search", Taras Kloba (Fwdays)
This is a session that details how PostgreSQL's features and Azure AI Services can be effectively used to significantly enhance the search functionality in any application.
In this session, we'll share insights on how we used PostgreSQL to facilitate precise searches across multiple fields in our mobile application. The techniques include using LIKE and ILIKE operators and integrating a trigram-based search to handle potential misspellings, thereby increasing the search accuracy.
We'll also discuss how the azure_ai extension on PostgreSQL databases in Azure and Azure AI Services were utilized to create vectors from user input, a feature beneficial when users wish to find specific items based on text prompts. While our application's case study involves a drug search, the techniques and principles shared in this session can be adapted to improve search functionality in a wide range of applications. Join us to learn how PostgreSQL and Azure AI can be harnessed to enhance your application's search capability.
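The misspelling-tolerant search mentioned above relies on trigram similarity; a plain-Python sketch of the idea behind PostgreSQL's `pg_trgm` (the padding convention mirrors the extension, but this is an illustration of the concept, not its exact implementation) shows why a typo in a drug name still matches:

```python
def trigrams(s):
    """Trigram set in the style of pg_trgm: lowercase, pad with two leading
    and one trailing space, then take all 3-grams."""
    padded = "  " + s.lower() + " "
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def similarity(a, b):
    """Shared trigrams over total distinct trigrams (Jaccard-style), mirroring
    how trigram search tolerates misspellings."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

s_typo = similarity("ibuprofen", "ibuprofan")   # misspelling: still similar
s_diff = similarity("ibuprofen", "aspirin")     # unrelated: low similarity
```

In PostgreSQL the same effect is obtained by installing the `pg_trgm` extension and filtering or ranking with its `similarity()` function, which is what makes the search robust where exact `LIKE`/`ILIKE` matching fails.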
"What does it really mean for your system to be available, or how to define w..." (Fwdays)
We will talk about system monitoring from a few different angles. We will start by covering the basics, then discuss SLOs, how to define them, and why understanding the business well is crucial for success in this exercise.
The Department of Veteran Affairs (VA) invited Taylor Paschal, Knowledge & Information Management Consultant at Enterprise Knowledge, to speak at a Knowledge Management Lunch and Learn hosted on June 12, 2024. All Office of Administration staff were invited to attend and received professional development credit for participating in the voluntary event.
The objectives of the Lunch and Learn presentation were to:
- Review what KM ‘is’ and ‘isn’t’
- Understand the value of KM and the benefits of engaging
- Define and reflect on your “what’s in it for me?”
- Share actionable ways you can participate in Knowledge Capture & Transfer
From Natural Language to Structured Solr Queries using LLMs (Sease)
This talk draws on experimentation to enable AI applications with Solr. One important use case is to use AI for better accessibility and discoverability of the data: while User eXperience techniques, lexical search improvements, and data harmonization can take organizations to a good level of accessibility, a structural (or “cognitive” gap) remains between the data user needs and the data producer constraints.
That is where AI – and most importantly, Natural Language Processing and Large Language Model techniques – could make a difference. This natural language, conversational engine could facilitate access and usage of the data leveraging the semantics of any data source.
The objective of the presentation is to propose a technical approach and a way forward to achieve this goal.
The key concept is to enable users to express their search queries in natural language, which the LLM then enriches, interprets, and translates into structured queries based on the Solr index’s metadata.
This approach leverages the LLM’s ability to understand the nuances of natural language and the structure of documents within Apache Solr.
The LLM acts as an intermediary agent, offering a transparent experience to users automatically and potentially uncovering relevant documents that conventional search methods might overlook. The presentation will include the results of this experimental work, lessons learned, best practices, and the scope of future work that should improve the approach and make it production-ready.
QR Secure: A Hybrid Approach Using Machine Learning and Security Validation F... (AlexanderRichford)
QR Secure: A Hybrid Approach Using Machine Learning and Security Validation Functions to Prevent Interaction with Malicious QR Codes.
Aim of the Study: The goal of this research was to develop a robust hybrid approach for identifying malicious and insecure URLs derived from QR codes, ensuring safe interactions.
This is achieved through:
Machine Learning Model: Predicts the likelihood of a URL being malicious.
Security Validation Functions: Ensures the derived URL has a valid certificate and proper URL format.
This innovative blend of technology aims to enhance cybersecurity measures and protect users from potential threats hidden within QR codes 🖥 🔒
This study was my first introduction to using ML which has shown me the immense potential of ML in creating more secure digital environments!
"Choosing proper type of scaling", Olena Syrota (Fwdays)
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
What is an RPA CoE? Session 1 – CoE Vision (DianaGray10)
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
Lee Barnes - Path to Becoming an Effective Test Automation Engineer.pdf (leebarnesutopia)
So… you want to become a Test Automation Engineer (or hire and develop one)? While there's quite a bit of information available about important technical and tool skills to master, there's not enough discussion around the path to becoming an effective Test Automation Engineer who knows how to add VALUE. In my experience this has led to a proliferation of engineers who are proficient with tools and building frameworks but have skill and knowledge gaps, especially in software testing, that reduce the value they deliver with test automation.
In this talk, Lee will share his lessons learned from over 30 years of working with, and mentoring, hundreds of Test Automation Engineers. Whether you’re looking to get started in test automation or just want to improve your trade, this talk will give you a solid foundation and roadmap for ensuring your test automation efforts continuously add value. This talk is equally valuable for both aspiring Test Automation Engineers and those managing them! All attendees will take away a set of key foundational knowledge and a high-level learning path for leveling up test automation skills and ensuring they add value to their organizations.
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor Ivaniuk (Fwdays)
In this talk we will discuss DDoS protection tools and best practices, network architectures, and what AWS has to offer. We will also look into one of the largest DDoS attacks on Ukrainian infrastructure, which happened in February 2022, and see what techniques helped keep web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on the Ukrainian experience.
As AI technology pushes into IT, I found myself wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we will discuss what cloud/on-premise strategy we may need to apply them to our own infrastructure from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies that could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into which approaches I already have working in practice.
Keywords: AI, Containers, Kubernetes, Cloud Native
Event Link: https://meine.doag.org/events/cloudland/2024/agenda/#agendaId.4211