This document summarizes an analysis of using Hamming distance to classify one-dimensional cellular automata rules and improve the statistical properties of certain rules for use in pseudo-random number generation. The analysis showed that Hamming distance can effectively distinguish between Wolfram's categories of rules and identify chaotic rules suitable for cryptographic applications. Applying von Neumann density correction and combining the output of two rules was found to significantly improve statistical test results, with one combination passing all Diehard tests.
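Both operations mentioned above, measuring Hamming distance between bit strings and applying von Neumann density correction, are easy to state concretely. The sketch below is our own minimal Python illustration, not the paper's code; the function names and example inputs are ours.

```python
# Minimal sketch: Hamming distance between two equal-length bit strings, and a
# von Neumann corrector that debiases a bit stream by mapping the pairs
# 01 -> 0 and 10 -> 1 while discarding 00 and 11.

def hamming_distance(a, b):
    """Number of positions at which two equal-length bit strings differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def von_neumann_correct(bits):
    """Debias a sequence of 0/1 values; the output is shorter but unbiased."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        pair = (bits[i], bits[i + 1])
        if pair == (0, 1):
            out.append(0)
        elif pair == (1, 0):
            out.append(1)
    return out

print(hamming_distance("0110100", "0011010"))         # 4
print(von_neumann_correct([0, 1, 1, 1, 1, 0, 0, 0]))  # [0, 1]
```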
The document discusses an algorithm for discovering patterns (motifs) in time series data. It proposes a modification to an existing motif discovery algorithm to improve performance by eliminating the influence of adjacent subsequences. The modified algorithm is compared to the original algorithm based on metrics like number of motifs discovered, execution time, and mean distance of motifs from the main pattern (1-motif), with the modification showing same or better performance in all cases tested.
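The adjacency issue above can be made concrete with a small sketch. The following Python fragment is our own illustration, not the paper's algorithm: it performs a brute-force 1-motif search and excludes overlapping (trivially matching) subsequences from the match count, which is the kind of adjacent-subsequence influence the proposed modification targets. Window length w and radius R are arbitrary.

```python
import numpy as np

def find_1_motif(ts, w, R):
    """Brute-force 1-motif search: the subsequence of length w with the most
    non-trivial matches within Euclidean distance R. Subsequences that overlap
    the candidate are excluded so they cannot inflate the match count."""
    n = len(ts) - w + 1
    subs = np.array([ts[i:i + w] for i in range(n)])
    best_idx, best_count = -1, -1
    for i in range(n):
        d = np.linalg.norm(subs - subs[i], axis=1)
        # exclude self-matches and subsequences adjacent to / overlapping i
        nontrivial = [j for j in range(n) if abs(j - i) >= w and d[j] <= R]
        if len(nontrivial) > best_count:
            best_idx, best_count = i, len(nontrivial)
    return best_idx, best_count

ts = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.05 * np.random.randn(200)
print(find_1_motif(ts, w=25, R=1.0))
```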
This document discusses improving two algorithms in the Spot model checking library that create automata with more transitions than necessary. It proposes using a feedback arc set (FAS) to minimize the number of transitions.
It describes an existing heuristic called GR that approximates a minimal FAS in linear time by ordering states. It adapts this heuristic to work on automata rather than graphs by generalizing concepts like in/out degree.
The document evaluates applying the improved FAS computation to the complementation of deterministic Büchi automata and conversion of Rabin automata to Büchi automata. It finds this reduces the size of resulting automata by up to 31% in experiments.
This document discusses using an observability index to decompose the Kalman filter into two filters applied sequentially: 1) A filter estimating the transitional process caused by uncertainty in initial conditions, which treats the system as deterministic. 2) A filter estimating the steady state that treats the system as stochastic. The observability index measures observability as a signal-to-noise ratio to evaluate how long it takes to estimate states in the presence of noise. This decomposition simplifies filter implementation and reduces computational requirements by restricting estimated states and dividing the observation period into transitional and steady state estimation.
Probability-Based Analysis to Determine the Performance of Multilevel Feedbac...Eswar Publications
An operating system may use different CPU scheduling algorithms, each with its own mechanisms and concepts. Multilevel Feedback Queue (MLFQ) scheduling manages a variety of processes across several queues in an efficient manner, with the CPU scheduler acting as the transition mechanism between queues. This paper presents various scheduling schemes under a probability-based model in which the scheduler moves randomly over the queues with a given time quantum. It designs a general transition model for MLFQ operation and compares different scheduling schemes through a simulation study applied to different data sets in particular cases.
This document summarizes algorithms for solving the Range 1 Query (R1Q) problem. The R1Q problem involves preprocessing a d-dimensional binary matrix to answer queries about whether a given range contains a 1. The document presents:
1) A deterministic linear-space data structure that solves orthogonal R1Qs in constant time for any dimension d (a simple two-dimensional prefix-sum baseline is sketched after this list).
2) Randomized sublinear-space data structures that solve orthogonal R1Qs and provide a tradeoff between query time and error probability.
3) Deterministic near-linear space data structures that solve non-orthogonal R1Qs (e.g. for axis-parallel triangles and spheres) in constant time.
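As a point of reference for item 1 above, the sketch below is a simple textbook baseline for the two-dimensional orthogonal case, not one of the data structures from the paper; the function names and example matrix are ours. It preprocesses prefix counts of ones so that any rectangular query is answered in O(1) time.

```python
import numpy as np

def preprocess(M):
    """2-D prefix counts of ones; P[i, j] = number of ones in M[:i, :j]."""
    return np.pad(np.cumsum(np.cumsum(M, axis=0), axis=1), ((1, 0), (1, 0)))

def r1q(P, r1, c1, r2, c2):
    """Does the inclusive rectangle (r1,c1)-(r2,c2) contain a 1?  O(1) time."""
    ones = P[r2 + 1, c2 + 1] - P[r1, c2 + 1] - P[r2 + 1, c1] + P[r1, c1]
    return ones > 0

M = np.array([[0, 0, 0, 1],
              [0, 0, 0, 0],
              [0, 1, 0, 0]])
P = preprocess(M)
print(r1q(P, 0, 0, 1, 2))   # False: no 1 in the top-left 2x3 block
print(r1q(P, 1, 0, 2, 3))   # True: the 1 at (2, 1) lies inside
```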
A Modular approach on Statistical Randomness Study of bit sequencesEswar Publications
Randomness studies of bit sequences, created either by a ciphering algorithm or by a pseudorandom bit generator, have long been a subject of research interest. In recent years the 15 statistical tests of NIST have turned out to be the most important and dependable tool for this purpose. When searching for the right pseudorandom bit generator among many candidate algorithms, running the complete NIST statistical test suite takes a long time. In this paper three test modules are applied in succession to reduce the search time. Module 1 has one program and executes almost instantly. Module 2 has four programs and takes about half an hour. Module 3 has fifteen programs and takes about four to five hours depending on the machine configuration. To choose the right pseudorandom bit generator, algorithms rejected by the first module are not considered by the second module, and the third module considers only those passed by the second module.
Chronological Decomposition Heuristic: A Temporal Divide-and-Conquer Strateg...Alkis Vazacopoulos
This document summarizes the chronological decomposition heuristic (CDH), a temporal divide-and-conquer strategy for solving production scheduling problems. The CDH decomposes the scheduling time horizon into smaller time chunks that are solved sequentially using MILP. It uses a depth-first search with backtracking to find feasible solutions. The document provides an example application of the CDH to a small crude oil blending scheduling problem, showing it finds optimal or near-optimal solutions faster than solving the full problem at once.
COMPUTATIONAL PERFORMANCE OF QUANTUM PHASE ESTIMATION ALGORITHMcsitconf
A quantum computation problem is discussed in this paper. Many of the features that make quantum computation superior to classical computation can be attributed to the quantum coherence effect, which depends on the phase of the quantum coherent state. The quantum Fourier transform algorithm, the most commonly used quantum algorithm, is introduced, and one of its most important applications, phase estimation of a quantum state based on the quantum Fourier transform, is presented in detail. The flow of the phase estimation algorithm and the quantum circuit model are shown, and the error of the output phase value, as well as the probability of measurement, is analysed. The probability distribution of the measured phase value is presented and the computational efficiency is discussed.
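The outcome distribution mentioned above has a standard closed form. The sketch below is our own numerical illustration (the function name and the example phase of 0.3 are arbitrary): it evaluates the textbook expression for the probability of each t-bit measurement result when estimating a phase phi.

```python
import numpy as np

def qpe_outcome_probs(phi, t):
    """Probability of each t-bit outcome m in standard quantum phase
    estimation: P(m) = |(1/2^t) * sum_k exp(2*pi*i*k*(phi - m/2^t))|^2."""
    N = 2 ** t
    k = np.arange(N)
    m = np.arange(N)
    amps = np.exp(2j * np.pi * np.outer(phi - m / N, k)).sum(axis=1) / N
    return np.abs(amps) ** 2

p = qpe_outcome_probs(phi=0.3, t=4)
best = int(np.argmax(p))
print(best, best / 16, p[best])   # most likely 4-bit estimate of phi = 0.3
```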
COMPUTATIONAL PERFORMANCE OF QUANTUM PHASE ESTIMATION ALGORITHMcscpconf
A quantum computation problem is discussed in this paper. Many of the features that make quantum computation superior to classical computation can be attributed to the quantum coherence effect, which depends on the phase of the quantum coherent state. The quantum Fourier transform algorithm, the most commonly used quantum algorithm, is introduced, and one of its most important applications, phase estimation of a quantum state based on the quantum Fourier transform, is presented in detail. The flow of the phase estimation algorithm and the quantum circuit model are shown, and the error of the output phase value, as well as the probability of measurement, is analysed. The probability distribution of the measured phase value is presented and the computational efficiency is discussed.
Analysis of single server fixed batch service queueing system under multiple ...Alexander Decker
This document analyzes a single server queueing system with fixed batch service, multiple vacations, and the possibility of catastrophes. The system uses a Poisson arrival process and exponential service times. The server provides service in batches of size k. If fewer than k customers remain after service, the server takes an exponential vacation. If a catastrophe occurs, all customers are lost and the server vacations. The document derives the generating functions and steady state probabilities for the number of customers when the server is busy or vacationing. It also provides closed form solutions for performance measures like mean number of customers and variance. Numerical studies examine these measures for varying system parameters.
ICDE-2015 Shortest Path Traversal Optimization and Analysis for Large Graph C...Waqas Nawaz
Waqas Nawaz Khokhar presented research on optimizing shortest path traversal and analysis for large graph clustering. The presentation outlined challenges with traditional graph clustering approaches for big real-world graphs. It proposed four optimizations: 1) a collaborative similarity measure to reduce complexity from O(n³) to O(n² log n); 2) identifying overlapping shortest path regions to avoid redundant traversals; 3) confining traversals within clusters to limit unnecessary graph regions; and 4) allowing parallel shortest path queries to reduce latency. Experimental results on real and synthetic graphs showed the approaches improved efficiency by 40% in time and an order of magnitude in space while maintaining clustering quality. Future work aims to address intermediate data explosion.
This document contains a lesson plan for an Advanced Data Structures and Algorithms course. The course is divided into 5 units covering iterative and recursive algorithms, optimization algorithms, dynamic programming algorithms, shared objects and concurrent objects, and concurrent data structures. Each unit lists topics, reference materials, and the number of periods required. There are a total of 45 periods for instruction and 5 tutorial hours. Staff, HOD, and principal signatures confirm approval of the lesson plan.
On selection of periodic kernels parameters in time series predictioncsandit
This document discusses parameter selection for periodic kernels used in time series prediction. Periodic kernels are a type of kernel function used in kernel regression to perform nonparametric time series prediction. The document examines how the parameters of two periodic kernels - the first periodic kernel (FPK) and second periodic kernel (SPK) - influence prediction error. It presents an easy methodology for finding parameter values based on grid search. This methodology was tested on benchmark and real datasets and showed satisfactory results.
This document presents an optimization of algorithms for real-time ECG beat classification. It compares algorithms using voltage values in the time domain versus those using Daubechies wavelet analysis. It extracts features around reference peaks within the QRS complex and uses clustering methods to classify beats in real-time as normal, premature ventricular contraction, or unclassified. Evaluating algorithms on 32 MIT-BIH records, the method using Daubechies wavelets and correlation measure achieved 93.25% sensitivity and 91.43% positive predictivity for premature ventricular contraction detection, making it suitable for real-time systems due to low computational cost.
This document discusses strategies for parallelizing spectral methods. Spectral methods are global in nature due to their use of global basis functions, making them challenging to parallelize on fine-grained architectures. However, the document finds that spectral methods can be effectively parallelized. The main computational steps in spectral methods are the calculation of differential operators on functions and solving linear systems, both of which can exploit parallelism. Domain decomposition techniques may also help parallelize computations over non-Cartesian domains.
The box-fitting least squares (BLS) algorithm is used to detect periodic transits in light curves. It assumes the light curve only has two discrete values, fits box-shaped functions to folded light curves at different trial periods, and calculates the signal residue to identify the period with the maximum signal. The algorithm iterates over trial periods and transit durations to find the best-fitting five parameters that describe the period, depths, duration, and epoch of any transits present.
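The search loop described above can be sketched compactly. The fragment below is our own simplified illustration following the signal-residue form of Kovács et al., not an implementation from the document; the function name, binning parameters, and toy light curve are ours.

```python
import numpy as np

def bls_power(t, flux, periods, n_bins=50, min_dur=1, max_dur=5):
    """Simplified box-fitting least squares: fold and bin the mean-subtracted
    flux at each trial period, slide boxes of min_dur..max_dur bins over the
    folded curve, and keep the largest signal residue
    SR = s^2 / (r * (1 - r)), the quantity maximised in Kovacs et al. (2002)."""
    flux = flux - flux.mean()
    w = np.full(len(flux), 1.0 / len(flux))           # equal weights
    power = np.zeros(len(periods))
    for pi, P in enumerate(periods):
        b = ((t % P) / P * n_bins).astype(int) % n_bins
        s_bin = np.bincount(b, weights=w * flux, minlength=n_bins)
        r_bin = np.bincount(b, weights=w, minlength=n_bins)
        best = 0.0
        for dur in range(min_dur, max_dur + 1):       # box width in bins
            for start in range(n_bins):
                idx = np.arange(start, start + dur) % n_bins
                s, r = s_bin[idx].sum(), r_bin[idx].sum()
                if 0.0 < r < 1.0:
                    best = max(best, s * s / (r * (1.0 - r)))
        power[pi] = best
    return power

t = np.linspace(0.0, 15.0, 2000)
flux = 1.0 - 0.01 * (((t % 2.5) / 2.5) < 0.04)        # toy transit, period 2.5
periods = np.linspace(1.0, 5.0, 300)
print(periods[np.argmax(bls_power(t, flux, periods))])  # close to 2.5
```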
A review of two alignment-free methods for sequence comparison. In this presentation two alignment-free methods are studied:
- "Similarity analysis of DNA sequences based on LZ complexity and dynamic programming algorithm" by Guo et al.
- "Alignment-free comparison of genome sequences by a new numerical characterization" by Huang et al.
On Projected Newton Barrier Methods for Linear Programming and an Equivalence...SSA KPI
This document describes projected Newton barrier methods for solving linear programming problems. It begins by reviewing classical barrier function methods for nonlinear programming which apply a logarithmic transformation to inequality constraints. For linear programs, the transformed problem can be solved using a "projected Newton barrier" method. This method is shown to be equivalent to Karmarkar's projective method for a particular choice of the barrier parameter. Details are then given of a specific barrier algorithm and its implementation, along with numerical results on test problems. Implications for future developments in linear programming are discussed.
Sensor Fusion Study - Ch15. The Particle Filter [Seoyeon Stella Yang]AI Robotics KR
The particle filter is a statistical approach to estimation that works well for problems where the Kalman filter fails due to nonlinearities. It approximates the conditional probability distribution of the state using weighted particles. The weights are updated using Bayes' rule based on new measurements. However, particle filters can suffer from sample impoverishment over time, where most particles have negligible weight. Various techniques like roughening, prior editing, and Markov chain Monte Carlo resampling are used to address this issue.
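The propagate / weight / resample cycle described above fits in a few lines. The sketch below is our own simplified bootstrap particle filter on a classic scalar nonlinear benchmark, not code from the chapter; it uses plain multinomial resampling rather than the impoverishment counter-measures listed above, and all names and noise levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(ys, n_particles=1000, q=1.0, r=1.0):
    """Bootstrap particle filter for x_k = 0.5*x + 25*x/(1+x^2) + w,
    y_k = x^2/20 + v (a classic nonlinear benchmark, simplified).
    Weights come from the Gaussian measurement likelihood; particles are
    resampled every step."""
    x = rng.normal(0.0, 2.0, n_particles)            # initial particle cloud
    estimates = []
    for y in ys:
        # propagate through the nonlinear process model
        x = 0.5 * x + 25 * x / (1 + x ** 2) + rng.normal(0, np.sqrt(q), n_particles)
        # weight by the measurement likelihood p(y | x)
        w = np.exp(-0.5 * (y - x ** 2 / 20) ** 2 / r) + 1e-300  # underflow guard
        w /= w.sum()
        estimates.append(np.sum(w * x))
        # multinomial resampling
        x = x[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)

# simulate the benchmark and run the filter
xs, ys = [0.0], []
for k in range(60):
    xs.append(0.5 * xs[-1] + 25 * xs[-1] / (1 + xs[-1] ** 2) + rng.normal())
    ys.append(xs[-1] ** 2 / 20 + rng.normal())
print(np.round(particle_filter(ys)[:5], 2))
```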
AN EFFICIENT PARALLEL ALGORITHM FOR COMPUTING DETERMINANT OF NON-SQUARE MATRI...ijdpsjournal
One of the most significant challenges in computing the determinant of rectangular matrices is the high time complexity of its algorithm. Among all definitions of the determinant of rectangular matrices, Radic's definition has special features which make it more notable. But in this definition, C(N, M) sub-matrices of order M×M need to be generated, which puts the problem in the NP-hard class. On the other hand, row or column reduction operations hardly diminish the volume of calculation. Therefore, in this paper we present a parallel algorithm which can decrease the time complexity of computing the determinant of non-square matrices to O(N).
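For orientation, the sketch below is our own brute-force illustration of the signed sum over C(N, M) square submatrices; the sign convention shown is the commonly quoted form of Radic's definition (verify against the paper), and the paper's parallel O(N) algorithm is not reproduced here.

```python
import numpy as np
from itertools import combinations

def radic_det(A):
    """Brute-force determinant of an m-by-n matrix (m <= n), Radic style:
    a signed sum of the determinants of all C(n, m) square m-by-m column
    submatrices.  Exponential cost, for illustration only."""
    m, n = A.shape
    r = sum(range(1, m + 1))                       # 1 + 2 + ... + m
    total = 0.0
    for cols in combinations(range(n), m):
        s = sum(c + 1 for c in cols)               # 1-based column indices
        total += (-1) ** (r + s) * np.linalg.det(A[:, cols])
    return total

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 7.0]])
print(radic_det(A))   # about 1.0 under this sign convention
```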
Sensor Fusion Study - Ch3. Least Square Estimation [강소라, Stella, Hayden]AI Robotics KR
This document discusses Wiener filtering and recursive least squares estimation. It begins with an introduction to Wiener filtering, providing an overview of its history and development. It then discusses how the power spectrum of a stochastic process changes when passed through a linear time-invariant system. Next, it formulates the problem of using a linear filter to extract a signal from additive noise. It derives expressions for the power spectrum of the error and its variance. Finally, it considers optimizing a parametric filter by assuming the optimal filter is a first-order low-pass filter and that the signal and noise spectra are known forms. It derives an expression for the optimal parameter T based on minimizing the error variance.
In the classical model, the fundamental building block is the bit, which exists in one of two states, 0 or 1. Computations are done by logic gates acting on bits to produce other bits. As the number of bits increases, the complexity of the problem and the computation time increase. A quantum algorithm is a sequence of operations on a register that transforms it into a state which, when measured, yields the desired result. This paper provides an introduction to quantum computation by developing the qubit, quantum gates, and quantum circuits.
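The qubit / gate / circuit progression described above can be shown directly with state vectors and unitary matrices. The sketch below is our own minimal numpy illustration; no particular quantum framework is assumed.

```python
import numpy as np

# A qubit is a unit vector in C^2; gates are unitary matrices; a circuit is a
# product of gates.  Measurement probabilities are squared amplitudes.
ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                       # equal superposition (|0> + |1>)/sqrt(2)
print(np.abs(state) ** 2)              # [0.5, 0.5]

# Two-qubit circuit: H on the first qubit, then CNOT -> Bell state
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(H @ ket0, ket0)
print(np.round(np.abs(bell) ** 2, 3))  # [0.5, 0, 0, 0.5]
```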
New Data Association Technique for Target Tracking in Dense Clutter Environme...CSCJournals
This manuscript discusses improving the data association process for target tracking by increasing the probability of detecting valid data points (measurements obtained from a radar/sonar system) in the presence of noise. We develop a novel gate-filtering algorithm for target tracking in dense clutter environments. The algorithm is less sensitive to false alarms (clutter) within the gate than conventional approaches such as the probabilistic data association filter (PDAF), whose data association begins to fail as the false alarm rate increases or the probability of target detection drops. The new filtered-gate selection method combines a conventional threshold-based algorithm with a geometric distance measure drawn from adaptive clutter suppression methods. An adaptive search based on the distance threshold is then used to detect valid filtered data points for target tracking. Simulation results demonstrate its effectiveness and better performance compared to the conventional algorithm.
Recurrence Quantification Analysis :Tutorial & application to eye-movement dataDeb Aks
This document provides an overview of recurrence quantification analysis (RQA) and its application to analyzing eye movement data. RQA uses time series analysis techniques like phase space reconstruction to detect recurring patterns in complex systems. It was applied to study whether the recurring dynamics of eye movements can serve as a memory to sustain object tracking over time and during interruptions. The document reviews key concepts in RQA like delay coordinates, embedding dimension estimation, recurrence plots, and measures like determinism, laminarity, and trapping time. It includes examples of RQA applied to simulated sine waves and analyses the steps involved in conducting RQA on human eye tracking data.
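The basic RQA building blocks mentioned above, delay-coordinate embedding, the recurrence plot, and a simple recurrence measure, fit in a short sketch. The code below is our own illustration; the embedding parameters and threshold are arbitrary, and the fuller RQA measures (determinism, laminarity, trapping time) are not computed.

```python
import numpy as np

def recurrence_plot(x, dim=3, delay=2, eps=0.1):
    """Recurrence plot from a scalar series: time-delay embedding into
    dim-dimensional vectors, then R[i, j] = 1 if embedded points i and j are
    closer than eps."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d < eps).astype(int)

x = np.sin(np.linspace(0, 6 * np.pi, 300))
R = recurrence_plot(x)
print(R.mean())      # recurrence rate; a periodic signal shows diagonal lines
```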
This document summarizes research on a mathematical model called long range first passage percolation (LRFPP), which can be used to model processes like disease spread. The researchers used computer simulations to study how the growth of infection clusters varies with the parameter alpha under this model in 1, 2, and 3 dimensions. They observed that for certain alpha ranges, growth is stretched exponential, polynomial, or linear. Comparing results across dimensions, they found boundaries get smoother as alpha increases. The model provides insights into classifying and predicting real disease spread based on initial growth patterns. Further questions are proposed about optimal paths, fractal dimensions, and time distributions under this model.
Quantum algorithm for solving linear systems of equationsXequeMateShannon
Solving linear systems of equations is a common problem that arises both on its own and as a subroutine in more complex problems: given a matrix A and a vector b, find a vector x such that Ax=b. We consider the case where one doesn't need to know the solution x itself, but rather an approximation of the expectation value of some operator associated with x, e.g., x'Mx for some matrix M. In this case, when A is sparse, N by N and has condition number kappa, classical algorithms can find x and estimate x'Mx in O(N sqrt(kappa)) time. Here, we exhibit a quantum algorithm for this task that runs in poly(log N, kappa) time, an exponential improvement over the best classical algorithm.
NNPDF3.0: parton distributions for the LHC Run IIjuanrojochacon
NNPDF3.0 is a new PDF determination that includes updated data and theory improvements compared to NNPDF2.3. It includes all HERA-II data and new LHC measurements. The fitting code was rewritten in C++ and validated using closure tests. NNPDF3.0 shows reasonable agreement with NNPDF2.3 while improving descriptions of data and reducing uncertainties in some regions. It provides PDFs for use at the LHC Run II.
Networks must be able to transfer data from one device to another with acceptable
accuracy. For most applications, a system must guarantee that the data received are
identical to the data transmitted. Any time data are transmitted from one node to the
next, they can become corrupted in passage. Many factors can alter one or more bits of
a message. Some applications require a mechanism for detecting and correcting errors.
Some applications can tolerate a small level of error. For example, random errors
in audio or video transmissions may be tolerable, but when we transfer text, we expect
a very high level of accuracy.
At the data-link layer, if a frame is corrupted between the two nodes, it needs to be
corrected before it continues its journey to other nodes. However, most link-layer protocols
simply discard the frame and let the upper-layer protocols handle the retransmission
of the frame. Some multimedia applications, however, try to correct the corrupted frame.
This chapter is divided into five sections.
❑ The first section introduces types of errors, the concept of redundancy, and distinguishes
between error detection and correction.
❑ The second section discusses block coding. It shows how errors can be detected
using block coding and also introduces the concept of Hamming distance.
❑ The third section discusses cyclic codes. It discusses a subset of cyclic code, CRC,
that is very common in the data-link layer. The section shows how CRC can be
easily implemented in hardware and represented by polynomials.
❑ The fourth section discusses checksums. It shows how a checksum is calculated for
a set of data words. It also gives some other approaches to traditional checksum.
❑ The fifth section discusses forward error correction. It shows how Hamming distance
can also be used for this purpose. The section also describes cheaper methods
to achieve the same goal, such as XORing of packets, interleaving chunks, or
compounding high- and low-resolution packets.
This document discusses error detection and correction techniques for digital data transmission. It introduces different types of errors that can occur, such as single-bit and burst errors. It describes how redundancy is used to detect and correct errors using block coding techniques. Specific examples are provided to illustrate how block codes are constructed and used to detect and correct errors. Key concepts discussed include linear block codes, Hamming distance, minimum Hamming distance, and how these relate to the error detection and correction capabilities of different coding schemes.
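The relationship between minimum Hamming distance and error capability mentioned above is easy to check directly: a code with minimum distance d_min detects up to d_min − 1 bit errors and corrects up to ⌊(d_min − 1)/2⌋. The sketch below is our own illustration with an arbitrary small linear block code.

```python
from itertools import combinations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def min_distance(code):
    """Minimum Hamming distance of a block code (a set of codewords)."""
    return min(hamming(a, b) for a, b in combinations(code, 2))

# a small linear block code with four codewords
code = ["00000", "01011", "10101", "11110"]
d = min_distance(code)
print(d, "-> detects", d - 1, "errors, corrects", (d - 1) // 2)   # 3 -> 2, 1
```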
The document discusses error detection and correction techniques. It introduces Hamming codes, which can detect up to two bit errors and correct one bit errors. It explains how Hamming codes work by adding redundant parity bits to messages based on the values of data bits. A 4x7 generator matrix is used to encode 4-bit data words into 7-bit codewords for the Hamming(7,4) code. An example shows how a data word of 1010 encodes to the codeword 1011010. The summary then tests a C++ implementation of Hamming code encoding.
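With the usual convention of even-parity bits at positions 1, 2 and 4, the document's example data word 1010 indeed encodes to 1011010; the equivalent generator-matrix formulation multiplies the data word by a 4×7 generator matrix over GF(2). The sketch below is our own Python illustration of the parity-bit form (the cited document uses C++).

```python
def hamming_7_4_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into the 7-bit Hamming(7,4)
    codeword [p1, p2, d1, p3, d2, d3, d4] with even parity bits at
    positions 1, 2 and 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4      # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4      # covers positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4      # covers positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

print(hamming_7_4_encode([1, 0, 1, 0]))   # [1, 0, 1, 1, 0, 1, 0] -> 1011010
```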
A comparison of various data compression techniques that clearly differentiates between them. It is precise and focused on the techniques rather than the topic itself.
Error coding uses mathematical formulas to encode data bits into longer code words for transmission. This allows errors caused by environmental interference to be detected and sometimes corrected at the destination. There are two main types of error coding: error-detecting codes and error-correcting codes. Error-detecting codes add enough redundancy to allow errors to be detected but not corrected, while error-correcting codes add more redundancy to allow errors to be corrected. Common error-detecting coding techniques include parity checks, checksums, and cyclic redundancy checks (CRCs). These techniques use additional redundant bits appended to the data to facilitate error detection. CRC is particularly powerful as it can detect all single-bit errors and many burst errors.
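CRC computation is just modulo-2 (XOR) long division by a generator polynomial. The sketch below is our own textbook-style illustration, not code from the cited document; the 4-bit data word and the generator x³ + x + 1 are arbitrary example choices.

```python
def crc_remainder(data_bits, generator):
    """CRC remainder of a bit string by modulo-2 (XOR) long division with the
    given generator polynomial bit string.  The sender appends this remainder;
    the receiver repeats the division and expects an all-zero remainder."""
    k = len(generator) - 1
    padded = list(map(int, data_bits)) + [0] * k
    gen = list(map(int, generator))
    for i in range(len(data_bits)):
        if padded[i] == 1:
            for j in range(len(gen)):
                padded[i + j] ^= gen[j]
    return "".join(map(str, padded[-k:]))

data = "1001"
crc = crc_remainder(data, "1011")           # generator x^3 + x + 1 -> "110"
print(crc)
print(crc_remainder(data + crc, "1011"))    # "000": accepted at the receiver
```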
Data compression Huffman coding algorithmRahul Khanwani
The document discusses Huffman coding, a lossless data compression algorithm that uses variable-length codes to encode symbols based on their frequency of occurrence. It explains that Huffman coding assigns shorter codes to more frequent symbols for efficient data compression. The document provides details on how the Huffman coding algorithm works by constructing a binary tree from the frequency of symbols and assigning codes based on paths in the tree. It also discusses different types of Huffman coding like static, dynamic and adaptive probability distributions and provides examples to illustrate the adaptive Huffman coding process.
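The static construction described above (repeatedly merge the two least frequent nodes, then read codes off the tree) fits in a short sketch. The code below is our own compact illustration; adaptive and dynamic variants are not shown, and the sample text is arbitrary.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code: keep a min-heap of partial trees keyed by
    frequency, merge the two lightest, and prepend 0/1 to the codes of the
    merged subtrees.  More frequent symbols end up with shorter codes."""
    heap = [[f, i, {sym: ""}] for i, (sym, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, [f1 + f2, tiebreak, merged])
        tiebreak += 1
    return heap[0][2]

text = "this is an example of a huffman tree"
codes = huffman_codes(text)
encoded = "".join(codes[ch] for ch in text)
print(codes[" "], codes["e"], codes["x"])      # frequent symbols: shorter codes
print(len(encoded), "bits vs", 8 * len(text), "bits uncompressed")
```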
Error correction techniques include retransmission of data when errors are detected and forward error correction, where error-correcting codes are used to correct errors automatically. Hamming codes can detect and correct errors by adding redundancy bits to the data and calculating parity checks to determine the position of errors. The document provides an example of how Hamming codes work by adding redundancy bits to input data and using the parity checks to detect errors in the encoded data.
Error detection uses the concept of redundancy, which means adding extra bits for detecting error at the destination.
Parity Check is one of the Error Correcting Codes.
The document provides an overview of Huffman coding, a lossless data compression algorithm. It begins with a simple example to illustrate the basic idea of assigning shorter codes to more frequent symbols. It then defines key terms like entropy and describes the Huffman coding algorithm, which constructs an optimal prefix code from the frequency of symbols in the data. The document discusses how Huffman coding can be applied to image compression by first predicting pixel values and then encoding the residuals. It notes some disadvantages of Huffman coding and describes variations like adaptive Huffman coding.
This document discusses error detection and correction in digital communication. It covers the types of errors like single-bit and burst errors. It then explains various error detection techniques like parity checks, checksums, and cyclic redundancy checks (CRC) that work by adding redundant bits. Finally, it discusses Hamming codes that can not only detect errors but also correct single-bit errors through the strategic placement of redundant bits.
This document discusses error detection and correction in digital communication. It describes how coding schemes add redundancy to messages through techniques like block coding and convolution coding to detect or correct errors. The encoder adds redundant bits to original messages to create relationships between bits that the decoder can use to check for errors. It also explains the use of modular arithmetic, specifically modulo-2 arithmetic which uses only 1s and 0s, for error detection and correction operations.
The document discusses various methods for error detection in digital communication, including:
1. Parity checking, which adds an extra parity bit to ensure an even or odd number of 1s. Two-dimensional parity divides data into a grid and adds a redundant row.
2. Checksums, which divide data into sections, add the sections using one's complement arithmetic, complement the sum, and send it along with the data for verification (a small checksum sketch follows this list).
3. Vertical redundancy checking adds a parity bit to each character to detect single-bit errors and some multiple-bit errors. It is often used with ASCII encoding.
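The checksum step in item 2 above can be shown in a few lines. The sketch below is our own illustration of one's-complement checksumming over 16-bit words; the example words are arbitrary values.

```python
def internet_checksum(data_words):
    """Checksum over 16-bit words using one's-complement addition: add the
    words, wrap any carry back into the low bits, then complement the sum.
    The receiver adds all words plus the checksum and expects the complement
    to be zero."""
    total = 0
    for w in data_words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap the carry around
    return ~total & 0xFFFF

words = [0x4500, 0x0073, 0x0000, 0x4000, 0x4011]
chk = internet_checksum(words)
print(hex(chk), hex(internet_checksum(words + [chk])))   # second value: 0x0
```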
FEC-Forward Error Correction for Optics Professionals..www.mapyourtech.comMapYourTech
Forward error correction (FEC) adds redundancy to transmitted data to allow the detection and correction of errors without retransmission. FEC works by encoding data at the transmitter and decoding it at the receiver. It allows reliable data transmission over noisy communication channels and improves performance metrics like bit error rate. Common FEC codes include Reed-Solomon codes, which offer good error correction ability and are widely used in optical communication systems to improve transmission distance and efficiency.
The document discusses error detection and correction techniques used in data communication. It describes different types of errors like single bit errors and burst errors. It then explains various error detection techniques like vertical redundancy check (VRC), longitudinal redundancy check (LRC), and cyclic redundancy check (CRC). VRC adds a parity bit, LRC calculates parity bits for each column, and CRC uses a generator polynomial to calculate redundant bits. The document also discusses Hamming code, an error correcting code that uses redundant bits to detect and correct single bit errors.
This document provides information about error detection and correction techniques used in computer networks. It discusses different types of errors that can occur like single-bit and burst errors. It explains that redundancy is needed to detect or correct errors by adding extra bits. Detection techniques discussed include parity checks, checksumming, and cyclic redundancy checks. Parity checks can only detect odd number of errors. Cyclic redundancy checks use polynomial arithmetic to generate a checksum. Forward error correction allows detection and correction of errors by adding redundant bits to distinguish different error possibilities. Hamming code is an example of an error correcting code that can detect and correct single bit errors.
DCE: A NOVEL DELAY CORRELATION MEASUREMENT FOR TOMOGRAPHY WITH PASSIVE REAL...ijdpsjournal
Tomography is important for network design and routing optimization. Prior approaches require either precise time synchronization or complex cooperation. Furthermore, active tomography consumes explicit probing, which limits scalability. To address the first issue we propose a novel Delay Correlation Estimation methodology named DCE, which needs no synchronization or special cooperation. For the second issue we develop a passive realization mechanism that merely uses regular data flow without explicit bandwidth consumption. Extensive simulations in OMNeT++ are carried out to evaluate its accuracy, showing that the DCE measurement closely matches the true value. The test results also show that the passive realization mechanism achieves both regular data transmission and the purpose of tomography, with excellent robustness across different background traffic levels and packet sizes.
This document provides an overview of using quantum computers for quantum simulation. It discusses how quantum computers can efficiently store quantum states using superpositions, while classical computers require exponential resources. The document reviews Lloyd's method for implementing time evolution under a Hamiltonian via Trotterization. It also discusses other techniques like pseudo-spectral methods and quantum lattice gases. The goal of quantum simulation is to study properties of quantum systems that cannot be efficiently simulated classically.
ANALYSIS OF ELEMENTARY CELLULAR AUTOMATA BOUNDARY CONDITIONSijcsit
We present the findings of an analysis of elementary cellular automata (ECA) boundary conditions. Both fixed and variable boundaries are attempted. The outputs of linear feedback shift registers (LFSRs), acting as continuous inputs to the two boundaries of a one-dimensional (1-D) elementary cellular automaton, are analyzed and compared. The results show superior randomness features, and the output string has passed the Diehard statistical battery of tests. The design has strong correlation immunity and is inherently amenable to VLSI implementation. It can therefore be considered a good and viable candidate for parallel pseudo-random number generation.
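To illustrate the idea of feeding a 1-D ECA's boundary cells from LFSR outputs, the sketch below is our own minimal example, not the design analyzed in the paper: rule 30 is used as the ECA, and the LFSR seeds and tap positions are arbitrary (not tuned for maximal period).

```python
def lfsr_stream(seed, taps, nbits):
    """Fibonacci-style LFSR: yields one output bit per clock; the feedback is
    the XOR of the tapped bit positions."""
    state = seed
    while True:
        out = state & 1
        fb = 0
        for tp in taps:
            fb ^= (state >> tp) & 1
        state = (state >> 1) | (fb << (nbits - 1))
        yield out

def eca_step(cells, rule, left, right):
    """One update of a 1-D elementary CA; the two boundary neighbours are
    supplied externally (here: bits coming from two LFSRs)."""
    padded = [left] + cells + [right]
    return [(rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

left_src = lfsr_stream(seed=0b1011, taps=(0, 1), nbits=4)   # illustrative taps
right_src = lfsr_stream(seed=0b0110, taps=(0, 3), nbits=4)
cells = [0] * 16
cells[8] = 1
for _ in range(8):
    cells = eca_step(cells, rule=30, left=next(left_src), right=next(right_src))
print(cells)
```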
Choice of Numerical Integration Method for Wind Time History Analysis of Tall...inventy
Wind tunnel tests are being performed routinely around the world for designing tall buildings, but the advent of powerful computational tools will make time-history analysis for wind more common in the near future. As the duration of wind storms ranges from tens of minutes to hours while earthquake durations are typically less than three to four minutes, the time step size (Δt) for wind studies needs to be much larger, both to reduce the computational time and to save disk space. As the error in any numerical solution of the equation of motion depends on the step size (Δt), careful investigation of the choice of numerical integration methods for wind analyses is necessary. From the wide variety of integration methods available, three methods that seem appropriate for 3-D time-history analysis of tall buildings for wind were investigated: modal time-history analysis, the Hilber-Hughes-Taylor (HHT) or α-method with α = -0.1, and the Newmark method with β = 0.25 and γ = 0.5 (i.e., the trapezoidal rule). SAP2000, a common structural analysis software tool, and a 64-story structure are used to conduct all the analyses in this paper. A boundary layer wind tunnel (BLWT) pressure time history measured at 120 locations around the building envelope of a similar structure is used for the analyses. Analyses performed with both the HHT and Newmark methods considering P-delta effects show that second-order effects have a considerable impact on both displacement and acceleration response, so it is necessary to account for the P-delta effect in wind analysis of tall buildings. As direct integration time-history analysis required very large computation times and very large physical memory for a wind duration of hours, a modal analysis with reduced stiffness is considered a good alternative. For that purpose, a nonlinear static analysis of the structure with the load combination 1.0D + 1.0L is performed in SAP2000, and the reduced stiffness after this analysis is used in an eigenvalue analysis to extract the mode shapes and frequencies of the structure. The first 20 modes are then used to perform a modal time-history analysis for wind load. The results show that the responses from the modal analysis with "20 modes (reduced stiffness)" are comparable with those from the P-Δ analyses using the Newmark method.
This summarizes a document about a filter-and-refine approach for reducing computational cost when performing correlation analysis on pairs of spatial time series datasets. It groups similar time series within each dataset into "cones" based on spatial autocorrelation. Cone-level correlation computation can then filter out many element pairs whose correlation is clearly below a threshold. The remaining pairs require individual correlation computation in the refinement phase. Experiments on Earth science datasets showed significant computational savings, especially with high correlation thresholds.
Ill-posedness formulation of the emission source localization in the radio- d...Ahmed Ammar Rebai PhD
To contact the authors : tarek.salhi@gmail.com and ahmed.rebai2@gmail.com
In the field of radio detection in astroparticle physics, many studies have shown that the solution of the radio-transient source localization problem (reconstruction from the radio-shower time of arrival on antennas) depends strongly on its formulation, and that such solutions can be purely numerical artifacts. Based on a detailed analysis of already published results of radio-detection experiments such as CODALEMA 3 in France, AERA in Argentina and TREND in China, we demonstrate the ill-posed character of this problem in the sense of Hadamard. Two approaches are used: the degeneration of the set of solutions, and the bad conditioning of the mathematical formulation of the problem. A comparison between experimental results and simulations is made to illustrate the mathematical studies. Several properties of the non-linear least-squares objective function are discussed, such as the configuration of the set of solutions and the bias.
Time alignment techniques for experimental sensor dataIJCSES Journal
Experimental data is subject to data loss, which presents a challenge for representing the data with a
proper time scale. Additionally, data from separate measurement systems need to be aligned in order to
use the data cooperatively. Due to the need for accurate time alignment, various practical techniques are
presented along with an illustrative example detailing each step of the time alignment procedure for actual
experimental data from an Unmanned Aerial Vehicle (UAV). Some example MATLAB code is also
provided.
Joint Timing and Frequency Synchronization in OFDMidescitation
1) The document discusses synchronization challenges in OFDM systems, including packet detection, timing synchronization, and frequency offset calculation. It analyzes various synchronization techniques like auto-correlation difference, auto-correlation sum, and cross-correlation methods for packet detection and timing synchronization.
2) For frequency offset calculation, it describes data-aided and non-data-aided algorithms, including the van de Beek algorithm, which relies on cyclic prefix redundancy rather than pilot symbols.
3) The key synchronization challenges are maintaining orthogonality between subcarriers in the presence of frequency offsets introduced by the wireless channel, which can significantly degrade performance if not corrected. Accurate synchronization algorithms are important for OFDM receivers to function properly.
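The cyclic-prefix-based (non-data-aided) estimate attributed to van de Beek can be sketched in a few lines of numpy; this is our illustration of the correlation idea under assumed parameters (FFT size `n_fft`, CP length `n_cp`), not the authors' exact implementation:

```python
import numpy as np

def cfo_estimate_cp(rx, n_fft, n_cp):
    """Estimate the fractional carrier frequency offset, in units of the
    subcarrier spacing, from one received OFDM symbol `rx` (complex samples).

    The cyclic prefix rx[0:n_cp] is a copy of rx[n_fft:n_fft+n_cp]; a frequency
    offset eps rotates the phase by 2*pi*eps over the n_fft samples between them.
    """
    corr = np.sum(rx[:n_cp] * np.conj(rx[n_fft:n_fft + n_cp]))
    return -np.angle(corr) / (2 * np.pi)
```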
A QUANTITATIVE ANALYSIS OF HANDOVER TIME AT MAC LAYER FOR WIRELESS MOBILE NET...ijwmn
Extensive studies have been carried out on reducing the handover time of wireless mobile networks at the medium access control (MAC) layer. However, none of them show the impact of the reduced handover time on the overall performance of wireless mobile networks. This paper presents a quantitative analysis of that impact. The proposed quantitative model incorporates many critical performance parameters involved in reducing the handover time for wireless mobile networks. In addition, we analyze the use of an active scanning technique with a comparatively shorter beacon interval time in the handoff process. Our experiments verify that active scanning can reduce the overall handover time at the MAC layer if comparatively shorter beacon intervals are utilized for packet transmission. The performance measure adopted in this paper for experimental verification is network throughput under different network loads.
A Hybrid Deep Neural Network Model For Time Series ForecastingMartha Brown
The document presents a hybrid deep neural network model consisting of a convolutional neural network (CNN) and long short-term memory (LSTM) architecture for time series forecasting. The model combines the CNN's ability to extract features with the LSTM's ability to learn long-term sequential dependencies. The hybrid CNN-LSTM model is evaluated on two datasets and compared to RNN, LSTM, GRU, and bidirectional LSTM models. The experiments show that the proposed hybrid CNN-LSTM model outperforms the other models on both datasets, demonstrating robustness against error for time series forecasting.
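A minimal Keras sketch of such a hybrid stack is shown below; the window length and layer sizes are placeholders of ours, not the paper's configuration:

```python
import tensorflow as tf

# Sliding windows of `window` past values are used to predict the next value.
window, n_features = 32, 1
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu",
                           input_shape=(window, n_features)),  # local feature extraction
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.LSTM(50),       # long-term sequential dependencies
    tf.keras.layers.Dense(1),       # one-step-ahead forecast
])
model.compile(optimizer="adam", loss="mse")
```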
Abstract—This document gives a general idea of Access Control Lists: how they are implemented, a brief history, the main issues they present, and some studies carried out to evaluate their performance.
- The document details a state space solver approach for analog mixed-signal simulations using SystemC. It models analog circuits as sets of linear differential equations and solves them using the Runge-Kutta method of numerical integration.
- Two examples are provided: a digital voltage regulator simulation and a digital phase locked loop simulation. Both analog circuits are modeled in state space and simulated alongside a digital design to verify mixed-signal behavior.
- The state space approach allows modeling analog circuits without transistor-level details, improving simulation speed over traditional mixed-mode simulations while still capturing system-level behavior.
This document summarizes a research paper on chaos in small-world networks. It presents a nonlinear small-world network model to investigate the effects of nonlinear interaction, time delay, and recovery on small-world network dynamics. Both numerical simulations and analytical analysis show that the network response can become chaotic when the network scale exceeds a critical length scale. However, the network may behave regularly at smaller scales. The addition of negative feedback through site recovery is also shown to control chaos in the network response.
This document provides a tutorial on the RESTART rare event simulation technique. RESTART introduces nested sets of states (C1 to CM) defined by importance thresholds. When the process enters a set Ci, Ri simulation retrials are performed within that set until exiting. This oversamples high importance regions to estimate rare event probabilities more efficiently than crude simulation. The document analyzes RESTART's efficiency, deriving formulas for the variance of its estimator and gain over crude simulation. Guidelines are also provided for choosing an optimal importance function to maximize efficiency. Examples applying the guidelines to queueing networks and reliable systems are presented.
Oscar Nieves (11710858) Computational Physics Project - Inverted PendulumOscar Nieves
This document describes a numerical simulation of an inverted pendulum system created in MATLAB using a 4th order Runge-Kutta algorithm. The simulation models an inverted pendulum attached to a horizontally moving cart. Forces like air drag and friction are included, and parameters like mass, pendulum length, and initial conditions can be varied. Small changes to initial conditions can lead to large differences in motion, demonstrating the system's chaotic behavior. The document also outlines the methodology for adapting the Runge-Kutta algorithm to solve systems of coupled differential equations.
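The adaptation of Runge-Kutta to coupled differential equations mentioned here amounts to applying the classical four stages to a state vector; a generic sketch (our own illustration, with the cart-pendulum dynamics left abstract as `f`):

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for y' = f(t, y),
    where y is a state vector, e.g. [theta, theta_dot, x, x_dot]
    for a pendulum angle and cart position."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```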
Design and analysis of a model predictive controller for active queue managementISA Interchange
Model predictive (MP) control as a novel active queue management (AQM) algorithm in dynamic computer networks is proposed. According to the predicted future queue length in the data buffer, early packets at the router are dropped reasonably by the MPAQM controller so that the queue length reaches the desired value with minimal tracking error. The drop probability is obtained by optimizing the network performance. Further, randomized algorithms are applied to analyze the robustness of MPAQM successfully, and also to provide the stability domain of systems with uncertain network parameters. The performances of MPAQM are evaluated through a series of simulations in NS2. The simulation results show that the MPAQM algorithm outperforms RED, PI, and REM algorithms in terms of stability, disturbance rejection, and robustness.
This document discusses ARIMA (autoregressive integrated moving average) models for time series forecasting. It covers the basic steps for identifying and fitting ARIMA models, including plotting the data, identifying possible AR or MA components using the autocorrelation function (ACF) and partial autocorrelation function (PACF), estimating model parameters, checking the residuals to validate the model fit, and choosing the best model. An example analyzes quarterly US GNP data to demonstrate these steps.
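The identify-fit-check loop described above maps directly onto statsmodels; a sketch with placeholder data and a placeholder order (the GNP example in the document would proceed the same way):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

y = np.cumsum(np.random.default_rng(0).normal(size=200))  # placeholder series

plot_acf(np.diff(y))    # ACF of the differenced series suggests the MA order q
plot_pacf(np.diff(y))   # PACF suggests the AR order p

fit = ARIMA(y, order=(1, 1, 0)).fit()  # candidate ARIMA(p=1, d=1, q=0)
print(fit.summary())                   # parameter estimates and AIC for model choice
residuals = fit.resid                  # should resemble white noise if the fit is valid
```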
This document discusses smoothed particle hydrodynamics (SPH), a Lagrangian numerical method for simulating fluid flows. SPH uses discrete particles that move with the fluid and sample hydrodynamic properties by averaging values over neighboring particles. The document outlines the basic concepts of SPH, including how it approximates continuous fields with discrete particles and calculates spatial derivatives. It also discusses how SPH can be used to model astrophysical phenomena through numerical simulations since laboratory experiments are often impossible at astrophysical scales.
PERFORMANCE AND COMPLEXITY ANALYSIS OF A REDUCED ITERATIONS LLL ALGORITHMIJCNCJournal
Multiple-input multiple-output (MIMO) systems are playing an increasing and interesting role in recent wireless communication. The complexity and the performance of these systems are driving numerous studies, and lattice reduction techniques bring more resources for investigating both.
In this paper, we modify a fixed-complexity variant of the LLL algorithm to reduce the computational operations by reducing the number of iterations, without significant performance degradation. Our proposal shows that good performance can be achieved while avoiding extra iterations that do not bring much performance gain.
MRI IMAGES THRESHOLDING FOR ALZHEIMER DETECTIONcsitconf
This document summarizes a research paper that proposes a new method for thresholding MRI images to detect Alzheimer's disease. The method improves on Otsu's thresholding method by using a mixture of gamma distributions to model the histogram of MRI images, which allows it to handle asymmetric distributions. It introduces a "valley-emphasis" approach that selects threshold values located in valleys of the histogram to better detect small objects. Experimental results on MRI images demonstrate the method can effectively segment images and may help with early Alzheimer's detection.
EDGE DETECTION IN RADAR IMAGES USING WEIBULL DISTRIBUTIONcsitconf
Radar images can reveal information about the shape of the surface terrain as well as its
physical and biophysical properties. Radar images have long been used in geological studies to
map structural features that are revealed by the shape of the landscape. Radar imagery also has
applications in vegetation and crop type mapping, landscape ecology, hydrology, and
volcanology. Image processing is used for detecting objects in radar images. Edge detection, which is a method of determining the discontinuities in gray-level images, is a very important initial step in image processing. Many classical edge detectors have been developed over time. Some of the well-known edge detection operators based on the first derivative of the image are Roberts, Prewitt and Sobel, which are traditionally implemented by convolving the image with masks. The Gaussian distribution has also been used to build masks for the first and second derivatives; however, that distribution is limited to a symmetric shape. This paper constructs the masks using the Weibull distribution, which is more general than the Gaussian because it admits both symmetric and asymmetric shapes. The constructed masks are applied to images and good results are obtained.
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGEScsitconf
Segmentation of Synthetic Aperture Radar (SAR) images is of great use in observing the global environment and in target detection and recognition. However, segmentation of SAR images is known to be a very complex task due to the presence of speckle noise. In this paper we therefore present a fast SAR image segmentation method based on between-class variance (BCV). The BCV method was chosen because it is one of the most effective thresholding techniques for most real-world images with regard to uniformity and shape measures. Our experiments serve as a test to determine which technique is effective in thresholding (extracting) the oil spill in numerous SAR images; in the future these thresholding techniques can be very useful for detecting objects in other SAR images.
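The between-class variance criterion is Otsu's: choose the gray level that maximizes the variance between the two classes it induces. A compact numpy sketch of the criterion (our illustration, not the paper's code):

```python
import numpy as np

def bcv_threshold(image):
    """Return the gray level (0..255) maximizing the between-class variance."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                            # gray-level probabilities
    best_t, best_bcv = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()            # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0      # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        bcv = w0 * w1 * (mu0 - mu1) ** 2             # between-class variance
        if bcv > best_bcv:
            best_t, best_bcv = t, bcv
    return best_t
```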
IRIS BIOMETRIC RECOGNITION SYSTEM EMPLOYING CANNY OPERATORcsitconf
Biometrics has become important in security applications. In comparison with many other biometric features, iris recognition has very high recognition accuracy because it depends on the iris, which is located in a place that remains stable throughout human life, and the probability of finding two identical irises is close to zero. The identification system consists of several stages, including the segmentation stage, which is the most critical one. Current segmentation methods are still limited in localizing the iris because they assume a circular shape for the pupil. In this research, the Daugman method is used to investigate the segmentation techniques. Eyelid detection is another step included in this study as part of the segmentation stage, to localize the iris accurately and remove unwanted areas that might otherwise be included. The obtained iris region is encoded using Haar wavelets to construct the iris code, which contains the most discriminating features of the iris pattern. The Hamming distance is used for comparison of iris templates in the recognition stage. The dataset used for the study is the UBIRIS database. A comparative study of different edge detection operators is performed; it is observed that the Canny operator is best suited to extract most of the edges needed to generate the iris code for comparison. A recognition rate of 89% and a rejection rate of 95% are achieved.
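The Hamming-distance matching stage compares two binary iris codes bit by bit; a minimal sketch (mask handling and rotation compensation omitted, and the decision threshold is an assumed placeholder):

```python
import numpy as np

def iris_match(code_a, code_b, threshold=0.32):
    """Normalized Hamming distance between two equally sized binary iris codes.
    Distances below the (assumed) threshold are declared a match."""
    hd = np.count_nonzero(code_a != code_b) / code_a.size
    return hd, hd < threshold
```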
PLANNING BY CASE-BASED REASONING BASED ON FUZZY LOGICcsitconf
The treatment of complex systems often requires the manipulation of vague, imprecise and uncertain information. Indeed, human beings are competent at handling such systems in a natural way: instead of thinking in mathematical terms, humans describe the behavior of the system with linguistic propositions. In order to represent this type of information, Zadeh proposed to model the mechanism of human thought by approximate reasoning based on linguistic variables. He introduced the theory of fuzzy sets in 1965, which provides an interface between the linguistic and numerical worlds. In this paper, we propose a Boolean modeling of fuzzy reasoning, which we have named Fuzzy-BML, that uses the characteristics of induction-graph classification. Fuzzy-BML is the process by which the retrieval phase of a CBR is modeled not in the conventional form of mathematical equations, but in the form of a database with membership functions of fuzzy rules.
SUPERVISED FEATURE SELECTION FOR DIAGNOSIS OF CORONARY ARTERY DISEASE BASED O...csitconf
Feature Selection (FS) has become the focus of much research in decision support system areas where datasets with a tremendous number of variables are analyzed. In this paper we present a new method for the diagnosis of Coronary Artery Disease (CAD) founded on FS with a Genetic Algorithm (GA)-wrapped Naïve Bayes (BN) classifier.
Basically, the CAD dataset contains two classes defined by 13 features. In the GA-BN algorithm, the GA generates in each iteration a subset of attributes that is evaluated using the BN in the second step of the selection procedure. The final set of attributes contains the most relevant feature model that increases the accuracy. The algorithm in this case produces 85.50% classification accuracy in the diagnosis of CAD. Its performance is then compared with the use of Support Vector Machine (SVM), Multi-Layer Perceptron (MLP) and the C4.5 decision tree algorithm, whose classification accuracies are, respectively, 83.5%, 83.16% and 80.85%. The GA-wrapped BN algorithm is likewise compared with other FS algorithms. The obtained results show very promising outcomes for the diagnosis of CAD.
NEURAL NETWORKS WITH DECISION TREES FOR DIAGNOSIS ISSUEScsitconf
This paper presents a new fault detection and isolation (FDI) technique applied to industrial systems. The technique is based on Neural Network fault-free and faulty behaviour models (NNFMs). The NNFMs are used for residual generation, while a decision tree architecture is used for residual evaluation. The decision tree is built from data collected at the NNFMs' outputs and is used to isolate detectable faults based on a computed threshold. Each part of the tree corresponds to a specific residual. With the decision tree, it becomes possible to take the appropriate decision regarding the actual process behaviour by evaluating a small number of residuals. In comparison to the usual systematic evaluation of all residuals, the proposed technique requires less computational effort and can be used for on-line diagnosis. An application example is presented to illustrate and confirm the effectiveness and accuracy of the proposed approach.
FEEDBACK SHIFT REGISTERS AS CELLULAR AUTOMATA BOUNDARY CONDITIONScsitconf
This summarizes a document describing a new method for random number generation using linear feedback shift registers (LFSRs) as boundary conditions for a one-dimensional cellular automaton (CA). The outputs of two uncoupled LFSRs are used as inputs to the left and right boundary cells of the CA. Testing the output string of the central CA cell using the Diehard statistical tests showed it passed all tests, performing better than previous methods using fixed or periodic boundary conditions. The design exhibits good randomness, parallelism, and is suitable for VLSI implementation.
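The boundary scheme can be pictured as two independent LFSRs clocked alongside the CA; a sketch under assumed parameters (the tap positions and seeds below are placeholders and not necessarily maximal-length choices, and the CA update itself is elided):

```python
def lfsr_bits(state, taps, nbits):
    """Fibonacci-style LFSR: yield one output bit per clock tick.
    `state` is a list of 0/1 register bits; `taps` are feedback positions."""
    for _ in range(nbits):
        fb = 0
        for t in taps:
            fb ^= state[t]             # XOR of the tapped bits
        yield state[-1]                # output bit
        state = [fb] + state[:-1]      # shift; feedback enters on the left

# Two uncoupled LFSRs drive the CA's left and right boundary cells:
left = lfsr_bits([1, 0, 0, 1, 1], taps=[0, 2], nbits=10)
right = lfsr_bits([0, 1, 1, 0, 1], taps=[0, 3], nbits=10)
for lb, rb in zip(left, right):
    pass  # each step: feed lb/rb into the CA's end cells, then update the lattice
```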
reproduce by the adversary, a necessary condition for cryptographic applications. The problem is to find the suitable rule or rules out of a very large rule space. Researchers have long sought to classify CA rules [1-3]. A seminal and widely referenced attempt is that due to [4]. Wolfram's classification scheme was influential and thorough. The extensive computer simulation carried out by Wolfram relied heavily on inferences drawn from a phenomenological study of the space-time diagrams of the evolution of all the 2^(2^(2r+1)) rules, where r is the radius of the neighborhood of the center cell that is being updated in discrete time steps running under the Galois field GF(2) [2,5,6]. Although r = 1 was mostly adopted in order to make the rule space practically realizable with the computational power of existing computers, larger values of r have nevertheless also been attempted, mostly with genetic algorithms [7]. Some prominent researchers have introduced ad hoc parameters in their attempts to classify the rule space [8,9].
Unfortunately, none of these methods has culminated in a well-defined classification of the CA rule space. For a binary one-dimensional (1-D) CA with a neighborhood of radius r the rule space is 2^(2^(2r+1)). Even for an elementary 1-D CA, where r = 1, the rule space is reduced to 2^8 = 256, still making an exhaustive search a difficult and time-consuming process. A neighborhood radius just one cell larger, r = 2, produces a humongous rule space of 2^(2^5) = 2^32, rendering any linear search scheme computationally unfeasible. One useful and statistically dependable approach is cross-correlation between two delayed versions of the evolution runs of the CA. This research presents a new approach that can partially resolve the search problem by using the Hamming distance (HD) between consecutive configurations in the time evolution of the CA and observing the cyclic behavior of this metric. This approach can show in a straightforward manner that rules resulting in a cyclic HD are actually cyclic and can therefore be ruled unsuitable for PRN generation. Since this operation does not require a large amount of data, the search process can be finished in a relatively short time. It has been observed that the HD approach can discover Wolfram's category IV (the so-called complex rules) much faster than expected. In fact, the difference between category II and category IV almost vanishes. Both of these categories, as well as category I, are unsuitable for PRN generation.
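To make the growth of the rule space quoted above concrete, a two-line computation (a Python sketch of ours, not from the paper) reproduces the figures:

```python
# Number of rules for a binary 1-D CA with neighborhood radius r: 2^(2^(2r+1)).
for r in (1, 2, 3):
    print(r, 2 ** (2 ** (2 * r + 1)))  # r=1 -> 256, r=2 -> 2^32, r=3 -> 2^128
```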
2. PRELIMINARIES
This paper deals with a homogeneous lattice of one-dimensional cellular automata (1-D CA). The present state of any cell at time t is denoted by σ(t, l), where l ∈ {1, ..., L} is the spatial index in a lattice of length L bits.
The CA can evolve using a single rule, or it can use multiple rules in either or both of the space and time dimensions. When more than one rule is used, the CA is usually referred to as a hybrid CA. In this paper a single rule will be used, and the CA will be referred to as a uniform 1-D CA. In order to limit the size of the lattice, cyclic boundary conditions are applied: the end cells wrap around the lattice. If the rule deals with the center cell and its two nearest neighbors, so that the radius from the center cell to the neighboring left and right cells is r = 1, the CA is usually referred to as an elementary CA (ECA). Therefore the rule acting on cell σ(t, 1) considers the left neighboring cell σ(t, L) and the right neighboring cell σ(t, 2), as depicted in Figure 1. Similarly, the rule acts on the rightmost cell σ(t, L) with σ(t, L−1) as the left neighboring cell and σ(t, 1) as the right neighboring cell. The center cell at an arbitrary location l and time t is denoted by σ(t, l), the left neighboring cell by
σ(t, l−1), and the right neighboring cell by σ(t, l+1). The initial configuration will thus be denoted by

Γ_0 = { σ(0, 1), σ(0, 2), ..., σ(0, l−1), σ(0, l), σ(0, l+1), ..., σ(0, L−1), σ(0, L) }    (1)
while an arbitrary configuration will be

Γ_t = { σ(t, 1), σ(t, 2), ..., σ(t, l−1), σ(t, l), σ(t, l+1), ..., σ(t, L−1), σ(t, L) }    (2)
where t ∈ {0, ..., T} and T is the total evolution time. The rule R_n, where n is the rule number according to the numbering scheme adopted by [4], is a mapping R_n : {0,1}^3 → {0,1}, and the next state of the cell under this rule can be represented by

σ(t+1, l) := f( σ(t, l−1), σ(t, l), σ(t, l+1) )    (3)
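As an illustration of Eq. (3), the sketch below (Python; the function name is ours, not the paper's implementation) applies an arbitrary Wolfram rule number to a configuration under the cyclic boundary conditions described above:

```python
def eca_step(config, rule_number):
    """One synchronous update of a uniform 1-D ECA (r = 1, cyclic boundary).
    config: list of 0/1 cell states; rule_number: Wolfram rule 0..255."""
    L = len(config)
    nxt = []
    for l in range(L):
        left = config[(l - 1) % L]      # wraps: cell 1 sees cell L on its left
        center = config[l]
        right = config[(l + 1) % L]     # wraps: cell L sees cell 1 on its right
        idx = (left << 2) | (center << 1) | right  # neighborhood as a 3-bit number
        nxt.append((rule_number >> idx) & 1)       # bit idx of the rule number
    return nxt
```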
The Hamming distance measures the distance between two binary strings by counting the number of differing bits, and can be defined by

HD(t) ≜ Γ_t ⊕ Γ_{t+1} = ∑_{l=1}^{L} ( σ(t, l) ⊕ σ(t+1, l) )    (4)
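A direct transcription of Eq. (4), again as a Python sketch with our own naming:

```python
def hamming_distance(config_t, config_t1):
    """HD(t): number of cell positions that differ between Gamma_t and Gamma_{t+1}."""
    return sum(a ^ b for a, b in zip(config_t, config_t1))

# e.g. hamming_distance([1, 0, 1, 1], [1, 1, 1, 0]) == 2
```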
Figure 1. Local Rule Representation (a cell σ(t, l) and its neighborhood ..., σ(t, l−2), σ(t, l−1), σ(t, l), σ(t, l+1), σ(t, l+2), ... along the lattice)
3. RULE SPACE CLASSIFICATION
When applying the HD to an arbitrary time-space data set, two results can be extracted. One is the transient from the initial configuration until the start of a cycle, if a cycle exists within the time evolution of the data set. The second is the length of the cycle, if the cycle is captured during the time evolution. For example, the variation in the HD for a rule belonging to category I of Wolfram's classification [4] is a very short transient that terminates very sharply at 0. The small transient length seems to be a typical feature of category I rules, as shown in Figure 2 for Rule 255 for both a random initial seed and a single active center cell with the rest of the cells inactive; the difference between the two cases is of course due to the initial seed. Category II rules, represented by Rule 1 in Figure 3, exhibit a relatively longer transient but again stabilize at a constant HD whose value depends on the initial seed. Category IV rules, represented by Rule 35 in Figure 4, seem to exhibit similar behavior: the transient length again differs depending on the initial seed, while the HD asymptotically stabilizes at a constant value. This behavior also recurs with category III rules (Figure 5), albeit on a larger scale, but in this case it is not clear whether the HD is indicative of the cycle length or whether it is a symptom of some hidden but repetitive behavior that cannot be captured from the space-time diagram; this is a worthwhile topic for further investigation and research. The process is simple and fast, since it requires a relatively short evolution time to produce results that may prove significant in the testing of PRNs. It can be conjectured that the HD may serve as a fast and efficient tool for testing PRNs for suitability in cryptographic applications. Based on the data it can also be concluded that category III rules are the best suited for PRNs.
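The transient length and the eventual plateau can be read off the HD trace automatically. The following sketch (our own illustration, not code from the paper; the settling window of 50 steps is an assumption) flags a rule as non-chaotic when its HD trace settles to a constant value:

```python
def hd_transient_and_plateau(hd_trace, window=50):
    """Inspect an HD trace (one HD value per time step).

    Returns (transient_length, plateau_value) if the last `window` values are
    all identical (the trace has settled, as observed for categories I/II/IV),
    or (None, None) if the trace keeps oscillating (category III candidate).
    """
    tail = hd_trace[-window:]
    if len(set(tail)) != 1:
        return None, None              # still oscillating: candidate chaotic rule
    plateau = tail[0]
    # Transient = first step after which the trace equals the plateau forever.
    t = len(hd_trace)
    while t > 0 and hd_trace[t - 1] == plateau:
        t -= 1
    return t, plateau
```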
Figure 2. Time-Space and HD plots for Rule 255
Figure 3. Time-Space and HD plots for Rule 1
Figure 4. Time-Space and HD plots for Rule 35
Figure 5. Time-Space and HD plots for Rule 255
4. CA DENSITY CORRECTION AND DATA COMPRESSION
The rule space classification usually does not touch upon the density of the CA evolution, yet this metric is an essential criterion for suitability to generate cryptographically strong PRNs. While the rules that may be suitable for PRN generation are restricted to category III, some of the rules in this category do not produce a uniform density. The density must be uniform in the sense that the number of ones should be equal to, or differ by at most one bit from, the number of zeros in the data, according to Golomb's first randomness postulate [10]. This requirement narrows down the set of rules in category III that can be considered as candidates for PRNs. For example, Rule 22 and Rule 126 cannot produce the uniform density of 0.5, even though they are chaotic and belong to category III, and the performance of such rules when tested using Diehard is consequently very poor. If the density of these rules can be corrected, then they can be reconsidered for PRN generation and the repertoire of rules available for PRN generation can be widened. Fortunately, there exists a very effective and yet very simple approach, originally attributed to von Neumann, which effectively compresses the data according to the steps depicted in Table 1.
Table 1. Von Neumann correction scheme

Original data    Resultant data
01               0
10               1
11               delete
00               delete
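A sketch of this corrector in Python (our own illustration of Table 1; note that the classical unbiasedness guarantee assumes independent input bits, so for CA output the corrected density is an empirical result):

```python
def von_neumann_correct(bits):
    """Von Neumann density correction: consume non-overlapping bit pairs,
    emit 0 for '01' and 1 for '10', and discard '00' and '11'."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)  # '01' -> 0, '10' -> 1 (the first bit of the pair)
        # equal pairs are deleted
    return out

# Density check: sum(out) / len(out) should approach 0.5 after correction.
```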
As an example, a 1-D CA of lattice length L = 31 bits was run for an evolution time of T = 2,645,000 time steps under Rule 126 and produced a density of ones equal to 0.527746. When the von Neumann reduction scheme described in Table 1 was applied to the same data, the density was corrected to 0.5. In addition, this density correction is usually accompanied by two important features as far as PRN generation is concerned. The first is that the resultant data becomes extremely hard to reproduce, a fundamental and necessary requirement for cryptographically strong PRNs; this is clearly due to the loss of information in the correction process, which can therefore be considered irreversible. The second is a byproduct, namely an improvement in the statistical properties of the rule. For this particular example the data was tested for statistical strength with the Diehard battery of tests and passed two tests before correction and one additional test after the density was corrected. A more significant example is Rule 30, which under the same parameters passed 51 tests, a number that jumped to 129 after the application of the von Neumann correction scheme. It is very clear from the time-space diagrams depicted in Figure 6 that Rule 126 and Rule 30 have undergone significant randomization, which is reflected in the time-space diagrams as well as in the number of test passes for both rules, though it is more pronounced with Rule 30 as mentioned above.
Figure 6. Space-Time diagrams for Rules 30 and 126
When the two rules, in their uncompressed and compressed forms, were linearly mixed with an exclusive-OR (XOR) operation as depicted in Figure 7, some remarkable results were produced, as shown in Table 2. Three of the combinations (R30 uncompressed with R126 uncompressed, R30 uncompressed with R126 compressed, and R30 compressed with R126 compressed) produced identical results when tested with the Diehard test suite, and the density was also maintained at the favorable 0.5 level. The combination of R30 compressed with R126 uncompressed (Figure 8) produced the best results: it passed all 229 Diehard tests and of course maintained the same ideal density of 0.5. It is generally accepted that passing all the Diehard tests is a strong indication that a PRN generator is suitable for cryptographic applications, in addition to the hardness of reproducing the generated sequence stated above. Further research and more detail are deemed necessary to validate the initial findings in this paper. It can also be conjectured that other chaotic rules can produce the same results.
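The mixing step of Figure 7 is, in essence, a bitwise XOR of the two output streams; a minimal sketch (our naming; truncating to the shorter stream is our assumption, since the von Neumann-compressed stream is shorter than the raw one):

```python
def mix_streams(bits_a, bits_b):
    """XOR two bit streams element-wise, truncated to the shorter stream,
    e.g. the compressed R30 output against the uncompressed R126 output."""
    return [a ^ b for a, b in zip(bits_a, bits_b)]
```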
Table 3 shows the variation in the Diehard test results for all the runs of Rules 30 and 126 as well as their mixtures. It can be seen that the Overlapping Sums test (number 15) and the GCD test (number 2) were the most difficult to pass, except in the PRN8 case (the combination of R30 compressed with R126 uncompressed).
Table 2. p-values and Density of Rules 30 and 126 mixtures
Figure 7. Rules 30 and 126 mixing scheme
Table 3. Diehard Results for Rules 30 and 126
Figure 8. Space-Time diagrams for Rules 30 and 126
5. CONCLUSIONS
In this paper the Hamming distance was revisited and applied to the 1-D CA. The original motivation was the classification of the rule space of the CA, and this has been achieved in a very simple yet effective way. The results show a well-defined behavior of the chaotic rules of category III as compared to the behavior of the rules of the other three categories. Sustained oscillation of the Hamming distance is indicative of the chaotic nature of a rule, whereas a Hamming distance that is high only during the transient stage is indicative of category I or II rules; the Hamming distance values during the oscillation period do not vary as much as they do during the transient stage. It can be concluded that category III rules are the rules best suited for PRN generation. The behavior of category I rules is very clear: their time evolution reaches a Hamming distance of 0 after only one or two time steps, depending on the initial seed. Category II and IV rules seem to behave in a similar manner: both reach a constant Hamming distance after a very short transient, with a slight difference in the Hamming distance values during the transient, but with the same asymptotic behavior. Therefore the new categorization of the rule space is into three distinct types: category I, categories II and IV combined, and category III. This seems to agree with the findings of some past researchers who argued strongly against the separate categorization of category IV. The findings in this paper can reduce the rule search significantly. The correlation technique usually used in the analysis of pseudo-random number generation can indicate the amount of correlation between two delayed versions of the data, as well as the distance between cycles if cycles exist. In this paper the Hamming distance is used as an alternative. The advantage of the Hamming distance approach over the correlation approach is that the Hamming distance can show the transient stage (the number of time steps to finish the transient orbit, or system time constant) as well as showing the cycles with clear repetition, a feature the correlation technique is unable to produce. In addition, the Hamming distance can arrive at the results in a very short time, while the cross-correlation technique requires the full length of the data and much more computational effort.
It is also clear from the results in this paper and the findings of previous research that not all rules of category III are suitable for PRN generation. One persistent problem with the rules deemed unsuitable is the non-uniform density output of some of these rules, such as Rule 126. The application of the von Neumann reduction scheme proved to be beneficial: the density was corrected to the desirable value of 0.5. A byproduct was the improvement in randomization, visible in the images produced and validated by the increase in test passes. A more significant improvement was in the number of tests passed by Rule 30, which jumped from 51 before the application of the reduction scheme to 129 after it. Another remarkable result was achieved when the two rules, R126 and R30, were linearly mixed together: when the reduced output of R30 was exclusive-ORed with the unreduced output of R126, the output data passed all 229 Diehard tests, a result that is extremely difficult to achieve with other PRN sources. Further effort may be required to validate the findings in this paper, as well as to show that the approach is equally applicable to the other chaotic rules.
REFERENCES
[1] G. Eason, B. Noble, and I. N. Sneddon, "On certain integrals of Lipschitz-Hankel type involving products of Bessel functions," Phil. Trans. Roy. Soc. London, vol. A247, pp. 529-551, April 1955.
[2] J. Clerk Maxwell, A Treatise on Electricity and Magnetism, 3rd ed., vol. 2. Oxford: Clarendon, 1892,
pp.68–73.
[3] I. S. Jacobs and C. P. Bean, “Fine particles, thin films and exchange anisotropy,” in Magnetism, vol.
III, G. T. Rado and H. Suhl, Eds. New York: Academic, 1963, pp. 271–350.
[4] K. Elissa, “Title of paper if known,” unpublished.
[5] R. Nicole, “Title of paper with only first word capitalized,” J. Name Stand. Abbrev., in press.
[6] Y. Yorozu, M. Hirano, K. Oka, and Y. Tagawa, “Electron spectroscopy studies on magneto-optical
media and plastic substrate interface,” IEEE Transl. J. Magn. Japan, vol. 2, pp. 740–741, August 1987
[Digests 9th Annual Conf. Magnetics Japan, p. 301, 1982].
[7] M. Young, The Technical Writer's Handbook. Mill Valley, CA: University Science, 1989.