The document describes a parallel communicating extended finite automata system. It introduces the concept of multiple extended finite automata working in parallel and communicating with each other by requesting state information. The system consists of several extended finite automata that operate independently but can communicate their current states. This model increases computational power through cooperation and communication compared to existing models. The system is formally defined and its deterministic and centralized variants are described.
Compressing the dependent elements of multiset (IRJET Journal)
This document discusses compressing the dependent elements of multisets through lossless data compression algorithms. It begins with an introduction to multiset theory and lossless data compression. The authors propose a lossless compression algorithm that treats the dependent and independent elements of a multiset differently, compressing each into tree-based arithmetic codes. The algorithm compresses and decompresses the multiset elements based on set operations performed on the multisets like union, intersection, sum, and difference. This allows both dependent and independent elements of a multiset to be compressed and decompressed simultaneously through generating different codes for each type of element. The document reviews related work applying multiset theory in other domains and representations before describing the proposed solution and results.
Improved Parallel Prefix Algorithm on OTIS-Mesh of Trees (IDES Editor)
A parallel algorithm for prefix computation was recently reported on an interconnection network called the OTIS-Mesh of Trees [4]. Using n^4 processors, the algorithm was shown to run in 13 log n + O(1) electronic moves and 2 optical moves for n^4 data points. In this paper we present a new and improved parallel algorithm for prefix computation on the OTIS-Mesh of Trees. The algorithm requires 10 log n + O(1) electronic steps plus 1 optical step on the same number of processors and data points as considered in [4].
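The logarithmic-step structure common to such prefix algorithms can be illustrated with a simple doubling (Hillis-Steele) scan. This is a generic sketch of prefix computation, not the OTIS-Mesh of Trees algorithm from the abstract:

```python
def prefix_scan(data):
    """Inclusive prefix sums via the doubling (Hillis-Steele) scheme.

    Each of the log2(n) rounds could run in parallel on n processors;
    here the rounds are simulated sequentially with a list rebuild,
    so every element reads the previous round's values."""
    x = list(data)
    n = len(x)
    step = 1
    while step < n:
        # Round: every element adds the value 'step' positions to its left.
        x = [x[i] + x[i - step] if i >= step else x[i] for i in range(n)]
        step *= 2
    return x

print(prefix_scan([1, 2, 3, 4, 5, 6, 7, 8]))  # [1, 3, 6, 10, 15, 21, 28, 36]
```

With n processors, each round costs O(1) time, giving the O(log n) step counts that both papers refine.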
Using Petri net with inherent fuzzy in the recognition of ECG signals (IAEME Publication)
This document presents a method for classifying ECG signals using a fuzzy Petri net with inherent fuzziness. The fuzzy Petri net structure is organized into a neural network called a fuzzy Petri network. Features are extracted from ECG signals using wavelet transforms. A best basis technique is used to select optimal features that reduce the dimensionality of input vectors and complexity. The fuzzy Petri network parameters are learned using backpropagation. The system was tested on 8 classes of ECG beats and achieved accurate classification.
This document presents a new generic approach to pattern matching of amino acid sequences using matching policy and pattern policy. It introduces two algorithms - matching policy and pattern policy. Matching policy uses the ASCII values of patterns and text to skip comparisons and reduce complexity when the values are unequal. Pattern policy selects patterns based on descending order of amino acid occurrences in the text, determined by heap sorting. Experimental results on phosphorylated peptide datasets show the proposed approach achieves a higher success ratio than conventional methods, especially when the two policies are used together. Complexity analysis shows it has low time complexity of O(N×LS×log(LS)). The approach provides an efficient and robust way to perform pattern matching of amino acid sequences.
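A minimal sketch of the matching-policy idea: compare character codes and abandon a window at the first unequal value. This is a simplified illustration, not the paper's exact algorithms or data structures:

```python
def ascii_skip_match(text, pattern):
    """Naive window matcher that bails out on the first unequal character
    code (ord), mimicking the 'matching policy' of skipping comparisons
    when ASCII values differ. A simplified sketch only."""
    hits = []
    m = len(pattern)
    for i in range(len(text) - m + 1):
        for j in range(m):
            if ord(text[i + j]) != ord(pattern[j]):
                break  # unequal codes: skip the rest of this window
        else:
            hits.append(i)
    return hits

# Hypothetical amino-acid-style text and motif, for illustration.
print(ascii_skip_match("MSTEYKLVVSTEY", "STEY"))  # [1, 9]
```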
This document discusses using repeated simulations of a crisp neural network to obtain quasi-fuzzy weight sets (QFWS) that can be used to initialize fuzzy neural networks. The key points are:
1) A crisp neural network is repeatedly trained on input-output data to model an unknown function. The connection weights change with each simulation.
2) Recording the weights from multiple simulations produces quasi-fuzzy weight sets, where each weight is a fuzzy set rather than a single value.
3) These QFWS can provide initial solutions for training type-I fuzzy neural networks with reduced computational complexity compared to random initialization.
4) The QFWS follow fuzzy arithmetic and allow both numerical and linguistic data to be processed.
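The procedure in points 1) and 2) can be sketched on a toy one-weight model. The training routine, data, and interval-style weight set below are illustrative stand-ins, not the paper's network or its fuzzy-set construction:

```python
import random

def train_once(xs, ys, seed, epochs=200, lr=0.05):
    """Fit y = w*x by gradient descent from a random start; returns w.
    Stands in for one simulation of the crisp network."""
    random.seed(seed)
    w = random.uniform(-1.0, 1.0)
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

# Repeated simulations: each run ends near the true weight (here 3),
# and the spread of the final weights defines a crude quasi-fuzzy
# interval that could seed a fuzzy weight.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [3 * x for x in xs]
weights = [train_once(xs, ys, seed) for seed in range(10)]
qfws = (min(weights), max(weights))  # support of the quasi-fuzzy weight
print(qfws)
```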
Combining text and pattern preprocessing in an adaptive DNA pattern matcher (IAEME Publication)
This paper presents an adaptive DNA pattern matching algorithm that combines both text and pattern preprocessing to efficiently detect patterns in DNA sequences. It initializes a pattern preprocessor similar to KMP and builds a suffix tree for the text. It then marks regions in the text and finds the maximum overlap between each region and the pattern. The algorithm searches the region with highest overlap first before moving to other regions. Experiments show it performs faster than KMP, with running time of O(m) where m is the pattern length.
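The KMP-style pattern preprocessor the matcher initializes can be sketched as the classic failure-function construction. This is standard KMP, shown for orientation, not the paper's full adaptive scheme with suffix trees and region overlap:

```python
def kmp_prefix_table(pattern):
    """fail[j] = length of the longest proper prefix of pattern[:j+1]
    that is also a suffix; the table KMP-style preprocessing builds."""
    fail = [0] * len(pattern)
    k = 0
    for j in range(1, len(pattern)):
        while k > 0 and pattern[j] != pattern[k]:
            k = fail[k - 1]
        if pattern[j] == pattern[k]:
            k += 1
        fail[j] = k
    return fail

def kmp_search(text, pattern):
    """Scan text once, using the table to avoid re-examining characters."""
    fail, hits, k = kmp_prefix_table(pattern), [], 0
    for i, c in enumerate(text):
        while k > 0 and c != pattern[k]:
            k = fail[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]
    return hits

print(kmp_search("ACGTACGACGTA", "ACGT"))  # [0, 7]
```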
This document discusses applications of differential equations across various disciplines including computer science, engineering, physics, and more. It provides examples of how differential equations are used in software development, constraint logic programming, gaming features, algorithms, modeling natural phenomena, artificial intelligence, networking, and developing theories and explanations. Some specific applications mentioned include modeling character velocity in games, machine learning, fluid dynamics, electromagnetism, quantum mechanics, relativity, and more.
This document discusses using convolutional neural networks and self-organized maps to visualize knowledge graphs. It presents a compositional model for embedding nodes and relationships in a knowledge graph into a vector space. A self-organized map is used to cluster the embeddings and extract semantic fingerprints. The fingerprints are useful for knowledge discovery and classification tasks. The technique is applied to a subset of the CTD knowledge graph containing compound-gene/protein interactions, and the results are comparable to structural models.
International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
LATTICE-CELL : HYBRID APPROACH FOR TEXT CATEGORIZATION (csandit)
The document proposes a new text categorization framework called LATTICE-CELL that is based on concepts lattice and cellular automata. It models concept structures using a Cellular Automaton for Symbolic Induction (CASI) in order to reduce the time complexity of categorization caused by concept lattices. The framework consists of a preprocessing module to create a vector representation of documents and a categorization module that generates the categorization model by representing the concept lattice structure as a cellular lattice. Experiments show the approach improves performance while reducing categorization time compared to other algorithms such as Naive Bayes and k-nearest neighbors.
Solving linear equations from an image using ANN (eSAT Journals)
Abstract
Optical character recognition has a great impact on image processing applications. This paper combines OCR with a feed-forward artificial neural network to solve mathematical linear equations. We implement blob analysis and feature extraction to extract the individual characters from a captured image containing mathematical equations. We construct a 39-character set comprising digits, letters, and operators. The character set is trained using a supervised learning rule. If the image satisfies the linear-equation condition, our proposed algorithm solves the equation and generates the output. This paper aims to increase the recognition rate above 87%. The results achieved from training and testing the network on letter recognition are satisfactory.
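Once OCR yields an equation string, the solving step for the simplest one-variable form can be sketched as follows. The input format a·x + b = c and the regex-based parser are assumptions for illustration; the paper's character set and solver are more general:

```python
import re

def solve_linear(equation):
    """Solve a*x + b = c for x, given a recognized string such as '2x+3=7'.
    A minimal sketch of the post-OCR solving step only."""
    m = re.fullmatch(r"\s*(-?\d*)x\s*([+-]\s*\d+)?\s*=\s*(-?\d+)\s*",
                     equation)
    if not m:
        raise ValueError("not of the form a*x + b = c")
    # Bare 'x' or '-x' mean coefficients 1 and -1.
    a = int(m.group(1)) if m.group(1) not in ("", "-") else int(m.group(1) + "1")
    b = int(m.group(2).replace(" ", "")) if m.group(2) else 0
    c = int(m.group(3))
    return (c - b) / a

print(solve_linear("2x+3=7"))  # 2.0
```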
Keywords: Artificial Neural Network, Linear Equation, Recognized rate, Optical Character Recognition.
In this paper, we use the fractional differential operator method to find the modified Riemann-Liouville (R-L) fractional derivatives of some fractional functions, including the fractional polynomial, fractional exponential, and fractional sine and cosine functions. The Mittag-Leffler function plays an important role in our article, and the fractional differential operator method can be applied to find particular solutions of non-homogeneous linear fractional differential equations (FDEs) with constant coefficients in a unified way; it is a generalization of the method for finding particular solutions of classical ordinary differential equations. Several examples demonstrate the advantage of our approach, and we compare our results with the traditional differential calculus cases.
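The role of the Mittag-Leffler function can be made concrete: under the modified R-L derivative it is an eigenfunction, which is what makes an operator method for constant-coefficient FDEs possible. Stated here in the standard Jumarie form, for orientation only:

```latex
% Mittag-Leffler function and its eigenfunction property under the
% modified Riemann-Liouville derivative (Jumarie's framework):
E_\alpha(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + 1)},
\qquad
D_x^{\alpha}\, E_\alpha(\lambda x^{\alpha}) = \lambda\, E_\alpha(\lambda x^{\alpha}).
```

For α = 1 this reduces to E_1(λx) = e^{λx} and the classical rule (e^{λx})' = λ e^{λx}, which is why the method generalizes the classical particular-solution technique.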
Low Power 32×32 bit Multiplier Architecture based on Vedic Mathematics Using ... (VIT-AP University)
In this paper the most significant aspect of the proposed method is that the developed multiplier architecture is based on the vertical and crosswise structure of ancient Indian Vedic Mathematics. As per this architecture, the two 32-bit operands, the multiplier and the multiplicand, are each split into 16-bit groups so that the design decomposes into 16×16 multiplication modules. It is also illustrated that further hierarchical decomposition into 8×8, then 4×4, and then 2×2 modules yields compact VHDL coding for the 32×32-bit multiplication; a low-power implementation on the Virtex 7 FPGA family was done with the Xilinx Synthesis 16.1 tool. The synthesis results show that the computation time for calculating the product of 32×32 bits is 29.256 ns delay (11.499 ns logic, 11.994 ns route; 48.9% logic, 51.1% route).
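The 16-bit decomposition described above amounts to the identity (aH·2^16 + aL)·(bH·2^16 + bL), with two "vertical" products of the halves and two "crosswise" middle products. A software sketch of that arithmetic, not the VHDL design:

```python
def mul32_decomposed(a, b):
    """Multiply two 32-bit numbers via four 16x16 products, mirroring the
    vertical-and-crosswise decomposition (in software; the paper builds
    it from hardware 16x16 modules)."""
    mask = (1 << 16) - 1
    a_hi, a_lo = a >> 16, a & mask
    b_hi, b_lo = b >> 16, b & mask
    # Vertical products of the halves plus the crosswise middle terms.
    return ((a_hi * b_hi) << 32) + ((a_hi * b_lo + a_lo * b_hi) << 16) \
           + a_lo * b_lo

x, y = 0xDEADBEEF, 0xCAFEBABE
print(mul32_decomposed(x, y) == x * y)  # True
```

The same recurrence applied to each 16×16 product gives the 8×8, 4×4, and 2×2 module hierarchy.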
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Quantum inspired evolutionary algorithm for solving multiple travelling sales... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
International Journal of Engineering Research and Development (IJERD Editor)
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
Novel Methods of Generating Self-Invertible Matrix for Hill Cipher Algorithm (CSCJournals)
The document proposes novel methods for generating self-invertible matrices for use in the Hill cipher encryption algorithm. It discusses how the Hill cipher works and the issue that the encryption matrix may not be invertible, preventing decryption. It then presents three methods for generating self-invertible matrices of sizes 2x2, 3x3, and 4x4 that can be used as the encryption key matrix in Hill cipher to allow for decryption without needing to calculate the inverse matrix.
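The benefit of a self-invertible key can be sketched for the 2×2 case. The key below is an illustrative example satisfying K·K ≡ I (mod 26), not necessarily one produced by the paper's generation methods:

```python
def hill_apply(key, text):
    """Encrypt/decrypt 2-letter blocks of A..Z with a 2x2 key (mod 26).
    With a self-invertible key (K @ K == I mod 26) the same routine both
    encrypts and decrypts, which is the point of self-invertible keys."""
    out = []
    for i in range(0, len(text), 2):
        p = [ord(c) - 65 for c in text[i:i + 2]]
        out.append(chr((key[0][0] * p[0] + key[0][1] * p[1]) % 26 + 65))
        out.append(chr((key[1][0] * p[0] + key[1][1] * p[1]) % 26 + 65))
    return "".join(out)

# Example self-invertible key: [[3, 2], [9, 23]] satisfies K*K = I (mod 26),
# since 23 = -3 (mod 26) and 3*3 + 2*9 = 27 = 1 (mod 26).
K = [[3, 2], [9, 23]]
ct = hill_apply(K, "HELP")
print(ct, hill_apply(K, ct))  # ciphertext, then 'HELP' again
```

Because decryption reuses the same matrix, no inverse has to be computed, and the non-invertibility problem disappears.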
Algorithmic Information Theory and Computational Biology (Hector Zenil)
I present cutting-edge concepts and tools drawn from algorithmic information theory (AIT) for new generation genetic sequencing, network biology and bioinformatics in general. AIT is the most advanced mathematical theory of information, formally characterising the concepts of, and differences between, simplicity, randomness and structure. Measures from AIT will empower computational medicine and systems biology to deal with big data, sophisticated analytics and a powerful new understanding framework.
This document discusses affine array accesses in compiler design. It defines an affine access as one where the array index is expressed as an affine expression of loop indexes and constants. Affine accesses can be represented as a matrix-vector calculation that maps the iteration space to data space. Examples are given of affine accesses and how they can be written as tuples representing the mapping between indexes and array elements. Non-affine accesses are also discussed, such as those involving sparse matrices. Exercises are provided to represent given array accesses as affine tuples.
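The matrix-vector form can be sketched directly. The access A[2i+j][i-1] below is a hypothetical example chosen for illustration, not one of the document's own exercises:

```python
def affine_access(F, f, iv):
    """Map a loop-index vector iv to array indices via F @ iv + f,
    computed row by row with plain lists (no numpy needed)."""
    return [sum(Fr[k] * iv[k] for k in range(len(iv))) + fr
            for Fr, fr in zip(F, f)]

# The access A[2*i + j][i - 1] in a doubly nested loop over (i, j):
# coefficient matrix F and constant vector f encode the affine map.
F = [[2, 1],
     [1, 0]]
f = [0, -1]
print(affine_access(F, f, [3, 1]))  # iteration (i=3, j=1) touches A[7][2]
```

Because the map is linear plus a constant, compilers can reason about which iterations touch which elements, which is the basis of the dependence analysis the document describes.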
A Combined Model between Artificial Neural Networks and ARIMA Models (paperpublications3)
Abstract: The main objective of this study is to reach an appropriate model to predict the stock market index EGX 30. The study examined the application of the following models to predict EGX30:
•Artificial neural networks.
•ARIMA models.
•Combination of neural networks and time series analysis, using observations, previous residuals, and estimated values of the ARIMA model.
The study showed that the most appropriate model for predicting the EGX30 stock market index is the combination of neural networks and ARIMA models, as it gives more accurate results than ARIMA and ANN separately. The combination unites the flexibility of time series analysis with the power of artificial neural networks, with each model compensating for the shortcomings of the other. Furthermore, artificial neural networks give better predictions than ARIMA models according to the prediction-accuracy measures MAPE and MSRE.
The document discusses modifications to the PC algorithm for constraint-based causal structure learning that remove its order-dependence, which can lead to highly variable results in high-dimensional settings; the modified algorithms are order-independent while maintaining consistency under the same conditions, and simulations and analysis of yeast gene expression data show they improve performance over the original PC algorithm in high-dimensional settings.
International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
This document summarizes a research paper that proposes a new method for diffeomorphic MRI brain registration using mean-shift algorithm. The method uses multiple brain structure segmentations derived from Freesurfer, in addition to MRI intensities, to drive a diffeomorphic registration algorithm. This provides anatomically guided registration that can more accurately align individual brain structures across images. The paper also describes extending the registration to a groupwise framework to build an average atlas without choosing a single template image. Evaluation of the registration for tasks like tensor-based morphometry is also discussed.
The document discusses connected components labeling algorithms for graphs. It evaluates parallel algorithms on CPU and GPU architectures for different types of graphs. The key algorithms discussed are disjoint set union and depth first search. It proposes a simple auto-tuned approach to select the best technique for a given graph and evaluates the algorithms on real and synthetic datasets ranging from 1-7 million nodes.
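The disjoint set union technique mentioned above can be sketched compactly. A sequential sketch with path halving and union by size, not the tuned parallel CPU/GPU variants the document evaluates:

```python
def connected_components(n, edges):
    """Label the components of an n-node graph with disjoint-set union
    (path halving + union by size); returns one root label per node."""
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            if size[ru] < size[rv]:
                ru, rv = rv, ru       # attach the smaller tree
            parent[rv] = ru
            size[ru] += size[rv]

    return [find(x) for x in range(n)]

labels = connected_components(6, [(0, 1), (1, 2), (4, 5)])
print(labels)  # nodes 0-2 share one label, 4-5 another, 3 stands alone
```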
This document presents two new Boolean combination methods called "orthogonalizing difference-building" and "orthogonal OR-ing". These methods calculate the difference and complement of functions, as well as the EXOR and EXNOR of minterms or functions, resulting in orthogonal forms. Orthogonal forms have advantages for further calculations. The methods are based on set theory and treat minterms and functions as ternary-vector lists. Equations are provided to define the difference and disjunction of minterms or functions in terms of their ternary-vector list representations.
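The disjointness idea behind orthogonal OR-ing can be sketched on explicit minterm sets: f OR g is rebuilt from two blocks that share no minterm. This is only a set-of-minterms illustration; the paper operates on ternary-vector lists, not enumerated minterms:

```python
def orthogonalizing_difference(F, G):
    """Difference of two functions given as sets of minterms."""
    return F - G

def orthogonal_or(F, G):
    """Represent f OR g as two disjoint (orthogonal) blocks: F itself and
    G \\ F. The blocks share no minterm, yet their union is the ON-set
    of f OR g, which simplifies further calculations on the result."""
    return F, orthogonalizing_difference(G, F)

F = {0b00, 0b01}   # ON-set (minterms) of f
G = {0b01, 0b11}   # ON-set of g
block_a, block_b = orthogonal_or(F, G)
print(sorted(block_a), sorted(block_b))  # disjoint parts covering f OR g
```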
A Time-Area-Power Efficient High Speed Vedic Mathematics Multiplier using Com... (Kumar Goud)
Abstract: With the advent of new technology in the fields of VLSI and communication, there is also an ever growing demand for high speed processing and low area design. It is also a well known fact that the multiplier unit forms an integral part of processor design. Due to this regard, high speed multiplier architectures become the need of the day. In this paper, we introduce a novel architecture to perform high speed multiplication using ancient Vedic maths techniques. A new high speed approach utilizing 4:2 compressors and novel 7:2 compressors for addition has also been incorporated in the same and has been explored. Upon comparison, the compressor based multiplier introduced in this paper, is almost two times faster than the popular methods of multiplication. With regards to area, a 1% reduction is seen. The design and experiments were carried out on a Xilinx Spartan 3e series of FPGA and the timing and area of the design, on the same have been calculated.
Keywords—4:2 Compressor, 7:2 Compressor, Booth’s multiplier, high speed multiplier, modified Booth’s multiplier, Urdhwa Tiryakbhyam Sutra, Vedic Mathematics.
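The Urdhwa Tiryakbhyam (vertical and crosswise) pattern the multiplier builds on can be sketched in software. This illustrates only the digit-column products; the compressor-based carry circuitry is the paper's hardware contribution:

```python
def urdhva_digits(a_digits, b_digits):
    """Vertical-and-crosswise multiplication of two equal-length digit
    lists (least significant digit first). Column k collects every
    crosswise product a_i * b_j with i + j = k; carries are then
    propagated to recover plain decimal digits."""
    n = len(a_digits)
    cols = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            cols[i + j] += a_digits[i] * b_digits[j]
    out, carry = [], 0
    for c in cols:
        carry, d = divmod(c + carry, 10)
        out.append(d)
    while carry:
        carry, d = divmod(carry, 10)
        out.append(d)
    return out

# 23 * 41 = 943; digits are least-significant-first.
print(urdhva_digits([3, 2], [1, 4]))  # [3, 4, 9]
```

In the hardware version, each column's partial products are summed by the 4:2 and 7:2 compressors instead of sequential carry propagation, which is where the speedup comes from.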
A minimum process synchronous checkpointing algorithm (iaemedu)
This document summarizes a proposed minimum process checkpointing algorithm for mobile distributed systems. The algorithm aims to minimize the number of processes that take checkpoints to conserve battery life and bandwidth on mobile hosts. It captures dependencies between processes by piggybacking dependency vectors onto messages. The algorithm proceeds in two phases, with processes first taking mutable checkpoints locally before converting them to tentative checkpoints in coordination. It aims to minimize wasted effort if a process fails to checkpoint by only aborting mutable checkpoints in the first phase. The algorithm buffers messages during an "uncertainty period" between mutable and tentative checkpoints to prevent inconsistencies.
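The dependency-vector piggybacking can be sketched with boolean vectors: delivering a message ORs the sender's vector into the receiver's, so transitive dependencies accumulate. A minimal illustration of the bookkeeping only, not the two-phase mutable/tentative protocol:

```python
def deliver(dep, receiver, sender_vector):
    """Merge the dependency vector piggybacked on a message into the
    receiver's vector, so a later checkpoint initiator can identify
    every process it transitively depends on."""
    dep[receiver] = [a or b for a, b in zip(dep[receiver], sender_vector)]

# Three processes; dep[i][j] == True means P_i depends on P_j.
n = 3
dep = [[i == j for j in range(n)] for i in range(n)]
# P0 sends to P1, then P1 sends to P2: dependencies accumulate.
deliver(dep, 1, dep[0])
deliver(dep, 2, dep[1])
print(dep[2])  # [True, True, True]: P2 now depends on P0, P1, and itself
```

Only the processes flagged in the initiator's vector need to take checkpoints, which is how the algorithm keeps the participating set minimal.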
1. The document discusses redesigning the inlet valve of an HCCI engine to improve performance and lifetime.
2. It proposes using a titanium alloy for the valve material instead of nickel chromium, and adding a hollow path inside filled with sodium liquid metal to help conduct heat away from the valve.
3. Analysis of the redesigned valve using CAD software shows improvements in withstanding pressure and reducing temperature fluctuations compared to the original design, indicating better efficiency and lifespan.
Compaction sintering and mechanical properties (iaemedu)
The document summarizes research on developing aluminum-silicon carbide (Al-SiCp) composites using powder metallurgy techniques. Specifically, it discusses fabricating unreinforced aluminum and aluminum composites containing 5 wt% silicon carbide particles. The composites were sintered and then heat treated. Testing showed that microhardness and compressive strength increased with the addition of silicon carbide particles. Microstructural analysis revealed mostly uniform dispersion of silicon carbide particles in the aluminum matrix, with some clustering. The research demonstrated that powder metallurgy can be used to effectively fabricate Al-SiCp composites with improved mechanical properties over unreinforced aluminum.
Experimental analysis of heat transfer enhancement in circular (iaemedu)
This document summarizes an experimental study on enhancing heat transfer in a circular double tube heat exchanger using rectangular inserts. Air was passed through the inner tube while hot water flowed through the outer tube. Heat transfer coefficients and friction factors were determined for the plain tube and with inserts. The results showed that heat transfer was enhanced by 0.9 to 1.9 times with the inserts due to flow disruption, while friction factors increased by 1 to 1.7 times. Heat transfer coefficients increased with Reynolds number whereas friction factors decreased. Validation experiments on a plain tube agreed well with theoretical predictions within 10% uncertainty.
International Journal of Engineering Research and Development (IJERD)IJERD Editor
journal publishing, how to publish research paper, Call For research paper, international journal, publishing a paper, IJERD, journal of science and technology, how to get a research paper published, publishing a paper, publishing of journal, publishing of research paper, reserach and review articles, IJERD Journal, How to publish your research paper, publish research paper, open access engineering journal, Engineering journal, Mathemetics journal, Physics journal, Chemistry journal, Computer Engineering, Computer Science journal, how to submit your paper, peer reviw journal, indexed journal, reserach and review articles, engineering journal, www.ijerd.com, research journals,
yahoo journals, bing journals, International Journal of Engineering Research and Development, google journals, hard copy of journal
LATTICE-CELL : HYBRID APPROACH FOR TEXT CATEGORIZATIONcsandit
The document proposes a new text categorization framework called LATTICE-CELL that is based on concepts lattice and cellular automata. It models concept structures using a Cellular Automaton for Symbolic Induction (CASI) in order to reduce the time complexity of categorization caused by concept lattices. The framework consists of a preprocessing module to create a vector representation of documents and a categorization module that generates the categorization model by representing the concept lattice structure as a cellular lattice. Experiments show the approach improves performance while reducing categorization time compared to other algorithms such as Naive Bayes and k-nearest neighbors.
Solving linear equations from an image using anneSAT Journals
Abstract
Optical character recognition has a great impact in image processing application. This paper combines the concept of OCR and feed-forward artificial neural network to solve mathematical linear equations. We implement blob analysis and feature extraction to extract the individual characters to a captured image which having some mathematical equations. We are constructing 39 character set which having some numbers, alphabet and operators. Training of these character set is done by using supervised learning rule. If that image satisfying linear equation condition then our proposed algorithm solve this equation and generate the output. This paper tries to increase the recognition rate more than 87%. The result achieved from the training and testing on the network of the letter recognition is satisfactory.
Keywords: Artificial Neural Network, Linear Equation, Recognized rate, Optical Character Recognition.
In this paper, we make use of the fractional differential operator method to find the modified Riemann-Liouville (R-L) fractional derivatives of some fractional functions include fractional polynomial function, fractional exponential function, fractional sine and cosine functions. The Mittag-Leffler function plays an important role in our article, and the fractional differential operator method can be applied to find the particular solutions of non-homogeneous linear fractional differential equations (FDE) with constant coefficients in a unified way and it is a generalization of the method of finding particular solutions of classical ordinary differential equations. On the other hand, several examples are illustrative for demonstrating the advantage of our approach and we compare our results with the traditional differential calculus cases.
Low Power 32×32 bit Multiplier Architecture based on Vedic Mathematics Using ...VIT-AP University
In this paper the most significant aspect of the proposed method is that, the developed multiplier architecture is based on vertical and crosswise structure of Ancient Indian Vedic Mathematics. As per this proposed architecture, for two 32-bit numbers; the multiplier and multiplicand, each are grouped as 16-bit numbers so that it decomposes into 16×16 multiplication modules. It is also illustrated that the further hierarchical decomposition of 8×8 modules into 4×4 modules and then 2×2 modules will have a significant VHDL coding of for 32x32 bits multiplication and their used FPGA family Virtex 7 low power implementation by Xilinx Synthesis 16.1 tool done. The synthesis results show that the computation time for calculating the product of 32x32 bits is delay 29.256 ns. (11.499ns logic, 11.994ns route) (48.9% logic, 51.1% route).
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Quantum inspired evolutionary algorithm for solving multiple travelling sales...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
International Journal of Engineering Research and DevelopmentIJERD Editor
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
Novel Methods of Generating Self-Invertible Matrix for Hill Cipher Algorithm.CSCJournals
The document proposes novel methods for generating self-invertible matrices for use in the Hill cipher encryption algorithm. It discusses how the Hill cipher works and the issue that the encryption matrix may not be invertible, preventing decryption. It then presents three methods for generating self-invertible matrices of sizes 2x2, 3x3, and 4x4 that can be used as the encryption key matrix in Hill cipher to allow for decryption without needing to calculate the inverse matrix.
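One construction behind such methods can be sketched for the 2×2 case: a trace-zero matrix [[a, b], [c, -a]] squares to (a² + bc)·I, so choosing c with bc ≡ 1 − a² (mod 26) yields a self-invertible key. This is one known family, not necessarily the paper's exact method:

```python
def self_invertible_2x2(a, b, modulus=26):
    """Construct A = [[a, b], [c, -a]] with A*A = I (mod modulus).
    Since A is trace-zero, A^2 = (a^2 + b*c) * I, so we search for a
    c with b*c = 1 - a^2 (mod modulus)."""
    target = (1 - a * a) % modulus
    for c in range(modulus):
        if (b * c) % modulus == target:
            return [[a % modulus, b % modulus], [c, (-a) % modulus]]
    return None  # no suitable c exists for this (a, b) pair

def matmul_mod(A, B, m=26):
    """2x2 matrix product reduced mod m."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % m
             for j in range(2)] for i in range(2)]

A = self_invertible_2x2(3, 2)   # a=3 -> need b*c = 18 (mod 26); c=9 works
print(A, matmul_mod(A, A))      # A*A is the identity mod 26
```

Such a key lets the same matrix serve for both encryption and decryption, which is exactly why the inverse never needs to be computed.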
Algorithmic Information Theory and Computational BiologyHector Zenil
I present cutting-edge concepts and tools drawn from algorithmic information theory (AIT) for new-generation genetic sequencing, network biology, and bioinformatics in general. AIT is the most advanced mathematical theory of information, formally characterising the concepts of, and the differences between, simplicity, randomness, and structure. Measures from AIT will empower computational medicine and systems biology to deal with big data, sophisticated analytics, and a powerful new understanding framework.
This document discusses affine array accesses in compiler design. It defines an affine access as one where the array index is expressed as an affine expression of loop indexes and constants. Affine accesses can be represented as a matrix-vector calculation that maps the iteration space to data space. Examples are given of affine accesses and how they can be written as tuples representing the mapping between indexes and array elements. Non-affine accesses are also discussed, such as those involving sparse matrices. Exercises are provided to represent given array accesses as affine tuples.
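The matrix-vector view can be sketched in a few lines (generic compiler-textbook notation F·i + f, not code from the document): the access A[2i+1][j] in a doubly nested loop corresponds to the pair (F, f) below.

```python
# An affine access maps the iteration vector to an array index F*i + f.
F = [[2, 0],   # first subscript: 2*i + 0*j
     [0, 1]]   # second subscript: 0*i + 1*j
f = [1, 0]     # constant offsets: +1 and +0

def access(F, f, iteration):
    """Apply the affine map: which array element does this iteration touch?"""
    return [sum(F[r][k] * iteration[k] for k in range(len(iteration))) + f[r]
            for r in range(len(F))]

print(access(F, f, [3, 5]))  # iteration (i=3, j=5) touches A[7][5]
```

A sparse-matrix access like A[col[j]] cannot be written in this form, which is what makes it non-affine.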
A Combined Model between Artificial Neural Networks and ARIMA Modelspaperpublications3
Abstract: The main objective of this study is to reach an appropriate model to predict the stock market index EGX 30. The study examined the application of the following models to predict EGX30:
•Artificial neural networks.
•ARIMA models.
•Combination between neural networks and time series analysis, using observations, previous residuals, and estimated values of the ARIMA model.
The study showed that the most appropriate model for predicting the EGX30 stock market index is the combination of neural networks and ARIMA models, which gives more accurate results than ARIMA or ANN separately. This is because the combination joins the flexibility of time series analysis with the power of artificial neural networks, each model compensating for the shortcomings of the other. Furthermore, artificial neural networks give better predictions than ARIMA models according to the prediction-accuracy measures MAPE and MSRE.
The document discusses modifications to the PC algorithm for constraint-based causal structure learning that remove its order-dependence, which can lead to highly variable results in high-dimensional settings. The modified algorithms are order-independent while maintaining consistency under the same conditions, and simulations and analysis of yeast gene expression data show that they improve on the original PC algorithm in high-dimensional settings.
International Journal of Engineering Research and Development (IJERD)IJERD Editor
This document summarizes a research paper that proposes a new method for diffeomorphic MRI brain registration using mean-shift algorithm. The method uses multiple brain structure segmentations derived from Freesurfer, in addition to MRI intensities, to drive a diffeomorphic registration algorithm. This provides anatomically guided registration that can more accurately align individual brain structures across images. The paper also describes extending the registration to a groupwise framework to build an average atlas without choosing a single template image. Evaluation of the registration for tasks like tensor-based morphometry is also discussed.
The document discusses connected components labeling algorithms for graphs. It evaluates parallel algorithms on CPU and GPU architectures for different types of graphs. The key algorithms discussed are disjoint set union and depth first search. It proposes a simple auto-tuned approach to select the best technique for a given graph and evaluates the algorithms on real and synthetic datasets ranging from 1-7 million nodes.
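The disjoint-set-union approach can be sketched as follows (a generic sequential version; the document evaluates parallel CPU and GPU variants):

```python
def connected_components(n, edges):
    """Label connected components with disjoint-set union (one of the
    two core techniques compared; depth-first search is the other)."""
    parent = list(range(n))

    def find(x):                        # path-halving find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:                  # union every edge's endpoints
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv

    return [find(x) for x in range(n)]  # one label per node

labels = connected_components(6, [(0, 1), (1, 2), (4, 5)])
# nodes 0-2 share a label, 4-5 share another, node 3 is alone
print(labels[0] == labels[2], labels[3] != labels[4])  # → True True
```

An auto-tuner like the one proposed would pick between this and DFS based on properties of the input graph.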
This document presents two new Boolean combination methods called "orthogonalizing difference-building" and "orthogonal OR-ing". These methods calculate the difference and complement of functions, as well as the EXOR and EXNOR of minterms or functions, resulting in orthogonal forms. Orthogonal forms have advantages for further calculations. The methods are based on set theory and treat minterms and functions as ternary-vector lists. Equations are provided to define the difference and disjunction of minterms or functions in terms of their ternary-vector list representations.
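At the set level, these operations can be sketched with plain Python sets of minterms, used here as a simplified stand-in for the paper's ternary-vector lists:

```python
# Treating a Boolean function as its set of minterms, difference and
# EXOR are ordinary set operations -- and the results are automatically
# orthogonal: no minterm is covered twice.
f = {0b00, 0b01, 0b11}        # minterms of f(a, b)
g = {0b01, 0b10}              # minterms of g(a, b)

difference = f - g            # f AND NOT g
exor       = f ^ g            # symmetric difference = EXOR
exnor      = {0b00, 0b01, 0b10, 0b11} - exor  # complement of EXOR

print(sorted(difference), sorted(exor), sorted(exnor))  # → [0, 3] [0, 2, 3] [1]
```

Ternary-vector lists achieve the same effect without enumerating individual minterms, which is what makes them practical for many variables.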
A Time-Area-Power Efficient High Speed Vedic Mathematics Multiplier using Com...Kumar Goud
Abstract: With the advent of new technology in the fields of VLSI and communication, there is an ever-growing demand for high-speed processing and low-area design. It is also a well-known fact that the multiplier unit forms an integral part of processor design. In this regard, high-speed multiplier architectures become the need of the day. In this paper, we introduce a novel architecture to perform high-speed multiplication using ancient Vedic mathematics techniques. A new high-speed approach utilizing 4:2 compressors and novel 7:2 compressors for addition has also been incorporated and explored. Upon comparison, the compressor-based multiplier introduced in this paper is almost two times faster than the popular methods of multiplication. With regard to area, a 1% reduction is seen. The design and experiments were carried out on a Xilinx Spartan-3E FPGA, and the timing and area of the design have been calculated.
Keywords—4:2 Compressor, 7:2 Compressor, Booth’s multiplier, high speed multiplier, modified Booth’s multiplier, Urdhwa Tiryakbhyam Sutra, Vedic Mathematics.
A minimum process synchronous checkpointing algorithmiaemedu
This document summarizes a proposed minimum process checkpointing algorithm for mobile distributed systems. The algorithm aims to minimize the number of processes that take checkpoints to conserve battery life and bandwidth on mobile hosts. It captures dependencies between processes by piggybacking dependency vectors onto messages. The algorithm proceeds in two phases, with processes first taking mutable checkpoints locally before converting them to tentative checkpoints in coordination. It aims to minimize wasted effort if a process fails to checkpoint by only aborting mutable checkpoints in the first phase. The algorithm buffers messages during an "uncertainty period" between mutable and tentative checkpoints to prevent inconsistencies.
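The dependency-capture idea can be sketched as follows, with hypothetical names (the paper's actual algorithm layers the two-phase mutable/tentative checkpoint protocol on top of this):

```python
# Each process carries a boolean dependency vector, piggybacks it on
# every message it sends, and OR-merges the vector of any message it
# receives.  The checkpoint initiator then needs only the processes
# whose bit is set -- the "minimum process" set.
class Process:
    def __init__(self, pid, n):
        self.pid = pid
        self.dep = [False] * n
        self.dep[pid] = True          # every process depends on itself

    def send(self):
        return list(self.dep)         # piggyback a copy of the vector

    def receive(self, piggybacked):
        self.dep = [a or b for a, b in zip(self.dep, piggybacked)]

p0, p1, p2 = (Process(i, 3) for i in range(3))
p1.receive(p0.send())                 # p1 now depends on p0
p2.receive(p1.send())                 # p2 transitively depends on p0 and p1
print(p2.dep)                         # → [True, True, True]
```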
1. The document discusses redesigning the inlet valve of an HCCI engine to improve performance and lifetime.
2. It proposes using a titanium alloy for the valve material instead of nickel chromium, and adding a hollow path inside filled with sodium liquid metal to help conduct heat away from the valve.
3. Analysis of the redesigned valve using CAD software shows improvements in withstanding pressure and reducing temperature fluctuations compared to the original design, indicating better efficiency and lifespan.
Compaction sintering and mechanical propertiesiaemedu
The document summarizes research on developing aluminum-silicon carbide (Al-SiCp) composites using powder metallurgy techniques. Specifically, it discusses fabricating unreinforced aluminum and aluminum composites containing 5 wt% silicon carbide particles. The composites were sintered and then heat treated. Testing showed that microhardness and compressive strength increased with the addition of silicon carbide particles. Microstructural analysis revealed mostly uniform dispersion of silicon carbide particles in the aluminum matrix, with some clustering. The research demonstrated that powder metallurgy can be used to effectively fabricate Al-SiCp composites with improved mechanical properties over unreinforced aluminum.
Experimental analysis of heat transfer enhancementin circulariaemedu
This document summarizes an experimental study on enhancing heat transfer in a circular double tube heat exchanger using rectangular inserts. Air was passed through the inner tube while hot water flowed through the outer tube. Heat transfer coefficients and friction factors were determined for the plain tube and with inserts. The results showed that heat transfer was enhanced by 0.9 to 1.9 times with the inserts due to flow disruption, while friction factors increased by 1 to 1.7 times. Heat transfer coefficients increased with Reynolds number whereas friction factors decreased. Validation experiments on a plain tube agreed well with theoretical predictions within 10% uncertainty.
Job satisfaction and contributing variables among the bank employees in cudda...iaemedu
This document summarizes a study on job satisfaction and contributing variables among bank employees in Cuddalore District, India. The study found that the majority (65.7%) of employees reported high job satisfaction, while 16% reported low satisfaction and 18.3% reported medium satisfaction. A regression analysis showed that job involvement, organizational climate, and organizational commitment significantly contributed to job satisfaction. Job involvement had the highest influence on satisfaction. The study provides suggestions for improving job satisfaction, such as ensuring job security, improving relationships, and fulfilling employee needs.
An investigation on faculty development and retention in technical educationiaemedu
This document summarizes a study on faculty development and retention in technical education institutions in India. Some key points:
1. There has been a large expansion of engineering colleges in India in recent decades, especially private self-financing institutions, which has sometimes led to decreasing standards due to a lack of qualified teachers.
2. Attracting and retaining talented faculty is challenging as other sectors offer higher salaries. Technical institutions must offer competitive salaries and career growth to recruit and keep good teachers.
3. Ongoing faculty development is needed to help teachers take on new roles like curriculum development, research, and administrative work, and to keep their knowledge current, with training programs offered at different stages of a teacher's career.
Surface reconstruction and display from range and color data under realistic ...iaemedu
This document summarizes a research paper on surface reconstruction and display of 3D objects from range and color data. It discusses using a stereo camera system to scan the surface geometry and color of objects. The scans are registered into a single coordinate system and integrated into a surface model using space carving. The mesh is optimized and simplified. Two methods are presented for view-dependent display: projecting color onto a single surface model, or rendering separate textured triangle meshes from the viewpoint. The paper covers topics such as data acquisition, surface reconstruction algorithms, mesh optimization, and displaying realistic images of scanned objects.
Developing creative and innovative culture in organizationiaemedu
This document summarizes research on developing creative and innovative cultures in organizations. It discusses the differences between creativity and innovation, with creativity being the generation of novel ideas and innovation being the implementation of those ideas. Several key dimensions of innovation culture are identified, including risk-taking, resources, knowledge, goals, rewards, tools, and relationships. Factors that support or hinder organizational creativity are also reviewed from the literature. The roles of both creativity and innovation in organizations are discussed.
Seismic response of frp strengthened rc frameiaemedu
This document discusses research on strengthening reinforced concrete (RC) frames with fiber-reinforced plastic (FRP). It summarizes previous studies on using FRP to strengthen beams and columns. However, few studies have analyzed FRP-strengthened RC frames as a whole system. The present study uses finite element analysis to model RC frames strengthened with varying FRP thicknesses and investigates their seismic response. Models of 2-bay, 3-story and 3-bay, 5-story frames are analyzed for different crack locations. The results are intended to help develop design criteria for seismic retrofitting of RC frames with FRP.
Parallel communicating flip pushdown automata systems communicating by stacksIAEME Publication
This document discusses parallel communicating flip pushdown automata systems, which are systems of multiple flip pushdown automata that can communicate by sending stack contents to each other. The key points are:
- Parallel communicating flip pushdown automata systems of degree 2 or more are shown to be able to recognize all recursively enumerable languages.
- Two variants of transition relations are defined for these systems - one where stack contents are preserved after communication ("non-returning") and one where stacks return to their initial state ("returning").
- It is proven that the families of languages recognized by returning and non-returning systems of all types (centralized, parallel, etc.) are equal to the family of recursively enumerable languages
Integrating Fuzzy Mde- AT Framework For Urban Traffic SimulationWaqas Tariq
This document summarizes a research paper that proposes integrating fuzzy modeling concepts with Model Driven Engineering (MDE) and Activity Theory (AT) to develop a framework for simulating urban traffic systems. The framework uses AT concepts such as activity, subject, object, and tools to model the traffic system, and then applies fuzzy set theory to quantify uncertainty in the modeling. MDE is used to successively refine models from analysis to design. The framework was applied to develop a platform-independent model of an urban traffic control system using UML, with fuzzy relationships defined between model elements to represent uncertainty in message passing between system entities. The framework allows modeling both behavioral and structural aspects of the traffic system using fuzzy concepts integrated with MDE and AT.
A REVIEW OF APPLICATIONS OF THEORY OF COMPUTATION AND AUTOMATA TO MUSICDr. Michael Agbaje
Theory of Computation and Automata is a theoretical branch of computer science. It established its roots during the 20th century, when mathematicians began developing, both theoretically and literally, machines that mimic certain features of man, completing calculations more quickly and reliably. The word automaton is closely related to the word "automation", meaning automatic processes carrying out specific tasks. Automata theory deals with the logic of computation with respect to simple machines, referred to as automata. Through automata, computer scientists are able to understand how machines compute functions and solve problems and, more importantly, what it means for a function to be computable or for a question to be decidable (Stanford (2004), Cristopher (2013)).
The document proposes a new model for emulating massively parallel single instruction multiple data (SIMD) machines in a distributed system using a network of virtual processing elements managed by distributed host agents. It describes the architecture of the model, which uses virtual processing elements arranged in topological structures like meshes and GPUs to emulate different parallel machine architectures. An example application of edge detection on an MRI image is provided to demonstrate the performance of the proposed parallel virtual machine model.
An Optimized Parallel Algorithm for Longest Common Subsequence Using Openmp –...IRJET Journal
This document summarizes research on developing parallel algorithms to optimize solving the longest common subsequence (LCS) problem, which is commonly used for sequence comparison in bioinformatics. Traditional sequential dynamic programming algorithms have O(mn) complexity for sequences of lengths m and n. The document reviews parallel algorithms developed using tools such as OpenMP and GPU platforms such as CUDA to reduce computation time, and proposes the authors' own optimized parallel algorithm for multi-core CPUs using OpenMP.
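The sequential O(mn) dynamic program that these parallel versions start from can be sketched as follows; cells on the same anti-diagonal are independent, which is exactly the parallelism OpenMP and CUDA implementations exploit:

```python
def lcs_length(x, y):
    """Textbook dynamic program for longest common subsequence.
    dp[i][j] is the LCS length of x[:i] and y[:j]; each cell depends
    only on its three upper-left neighbours, so cells on the same
    anti-diagonal can be computed in parallel."""
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("AGGTAB", "GXTXAYB"))  # → 4 ("GTAB")
```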
SEMANTIC STUDIES OF A SYNCHRONOUS APPROACH TO ACTIVITY RECOGNITIONcscpconf
Many important and critical applications such as surveillance or healthcare require some form of (human) activity recognition. Activities are usually represented by a series of actions driven and triggered by events. Recognition systems have to be real-time, reactive, correct, complete, and dependable. These stringent requirements justify the use of formal methods to describe, analyze, verify, and generate effective recognition systems. Due to the large number of possible application domains, the researchers aim at building a generic recognition system. They choose the synchronous approach because it has a well-founded semantics and it ensures determinism and safe parallel composition. They propose a new language to represent activities as synchronous automata and they supply it with two complementary formal semantics. First, a behavioral semantics gives a reference definition of program behavior using rewriting rules. Second, an equational semantics describes the behavior in a constructive way and can be directly implemented. This paper focuses on the description of these two semantics and their relation.
Analysis of intelligent system design by neuro adaptive control no restrictioniaemedu
This document discusses using neuro-adaptive control to analyze the design of intelligent systems. It begins by introducing the topic and noting that conventional adaptive control techniques assume explicit system models or dynamic structures based on linear models, which may not be valid for complex nonlinear systems. Neural networks and other intelligent control approaches that do not require explicit mathematical modeling are presented as alternatives. The paper then focuses on using time-delay neural networks for system identification and control of nonlinear dynamic systems. Various neural network architectures and learning algorithms for system modeling and control are described.
Analysis of intelligent system design by neuro adaptive controliaemedu
This document summarizes the analysis of intelligent system design using neuro-adaptive control methods. It discusses using neural networks for system identification through series-parallel and parallel models. It also discusses supervised control using a neural network trained by an expert operator, inverse control using a neural network trained on the inverse system model, and neuro-adaptive control using two neural networks - one for system identification and one for control. Neuro-adaptive control allows handling nonlinear system behavior without linear approximations.
This document discusses techniques for optimizing parallel programs for locality. It covers several key topics:
- Symmetric multiprocessors are a popular parallel architecture where all processors can access shared memory uniformly. Cache coherence protocols allow caches to share data while reading.
- Parallel programs must have good data locality to perform well. Techniques like grouping related operations on the same processor and executing code in succession on processors can improve locality.
- Loop-level parallelism is a common target for parallelization. Dividing loop iterations evenly across processors can achieve good parallelism if iterations are independent.
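The even division of independent loop iterations mentioned above can be sketched as a static block partition (a generic illustration, not code from the document):

```python
def chunks(n_iterations, n_procs):
    """Static block partition of a parallel loop: each processor gets a
    contiguous chunk, which also helps locality when nearby iterations
    touch nearby data."""
    base, extra = divmod(n_iterations, n_procs)
    out, start = [], 0
    for p in range(n_procs):
        size = base + (1 if p < extra else 0)  # spread the remainder
        out.append(range(start, start + size))
        start += size
    return out

print([list(c) for c in chunks(10, 4)])  # → [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9]]
```

Contiguous chunks like these keep each processor's working set dense in memory, which is the locality point the document makes.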
Efficient text compression using special character replacementiaemedu
The document describes a proposed algorithm for efficient text compression using special character replacement and space removal. The algorithm replaces words with non-printable ASCII characters or combinations of characters to compress text files. It uses a dynamic dictionary to map words to their symbols. Spaces are removed from the compressed file in some cases to further reduce file size. Experimental results show the algorithm achieves better compression ratios than LZW, WinZip 10.0 and WinRAR 3.93 for various text file types while allowing lossless decompression.
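A minimal sketch of the dictionary idea, assuming visible single-character tokens instead of the paper's non-printable ASCII codes and omitting the space-removal step:

```python
def build_dictionary(text, tokens="#@$%"):
    """Map the most frequent words to short tokens (illustrative: the
    paper uses non-printable ASCII characters and combinations)."""
    words = sorted(set(text.split()), key=text.split().count, reverse=True)
    return dict(zip(words, tokens))

def compress(text, table):
    return " ".join(table.get(w, w) for w in text.split())

def decompress(data, table):
    inverse = {v: k for k, v in table.items()}
    return " ".join(inverse.get(w, w) for w in data.split())

text = "the cat and the dog and the bird"
table = build_dictionary(text)
packed = compress(text, table)
print(decompress(packed, table) == text)  # lossless round trip → True
```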
A Survey of String Matching AlgorithmsIJERA Editor
String matching algorithms play an important role in finding the places where one or several strings (patterns) occur within a larger body of text (e.g., a data stream, a sentence, a paragraph, a book). Their applications cover a wide range, including intrusion detection systems (IDS) in computer networks, bioinformatics, plagiarism detection, information security, pattern recognition, document matching, and text mining. In this paper we present a short survey of well-known, recently updated, and hybrid string matching algorithms. These algorithms can be divided into two major categories, exact string matching and approximate string matching. The classification criteria were selected to highlight important features of matching strategies, in order to identify challenges and vulnerabilities.
Proposing a New Job Scheduling Algorithm in Grid Environment Using a Combinat...Editor IJCATR
Grid computing is a hardware and software infrastructure that provides affordable, sustainable, and reliable access to computing resources; its aim is to create a supercomputer from free resources. One of the challenges in Grid computing is the scheduling problem, which is regarded as a tough issue. Since scheduling in the Grid is a non-deterministic problem, deterministic algorithms cannot be used to improve scheduling. In this paper, a combination of the imperialist competition algorithm (ICA) and gravitational attraction is used to address the problem of independent task scheduling in a grid environment, with the aim of reducing makespan and energy. Experimental results compare ICA with other algorithms and illustrate that ICA finds a shorter makespan and lower energy than the others. Moreover, it converges quickly, finding its optimum solution in less time than the other algorithms.
Optimal design of symmetric switching CMOS inverter using symbiotic organisms...IJECEIAES
This paper investigates the optimal design of a symmetric switching CMOS inverter using the Symbiotic Organisms Search (SOS) algorithm. SOS has been recently proposed as an effective evolutionary global optimization method that is inspired by the symbiotic interaction strategies between different organisms in an ecosystem. In SOS, the three common types of symbiotic relationships (mutualism, commensalism, and parasitism) are modeled using simple expressions, which are used to find the global minimum of the fitness function. Unlike other optimization methods, SOS has no parameters to be tuned, which makes it an attractive and easy-to-implement optimization method. Here, SOS is used to design a high-speed symmetric switching CMOS inverter, which is considered the most fundamental logic gate. SOS results are compared to those obtained using several optimization methods available in the literature, such as particle swarm optimization (PSO), genetic algorithm (GA), and differential evolution (DE). It is shown that SOS is a robust, straightforward evolutionary algorithm that can compete with other well-known advanced methods.
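The three SOS phases can be sketched as below, assuming a greedy accept-if-better rule and a sphere test function; the benefit factors and update formulas follow the common SOS formulation, not necessarily this paper's exact variant:

```python
import random

def sos_minimize(f, dim, bounds, pop=20, iters=100, seed=1):
    """Minimal Symbiotic Organisms Search sketch: mutualism,
    commensalism and parasitism with greedy acceptance.  The best
    organism is refreshed once per outer iteration (a simplification)."""
    random.seed(seed)
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    F = [f(x) for x in X]
    for _ in range(iters):
        best = X[min(range(pop), key=F.__getitem__)]
        for i in range(pop):
            j = random.choice([k for k in range(pop) if k != i])
            # Mutualism: i and j both move toward the best organism.
            mutual = [(a + b) / 2 for a, b in zip(X[i], X[j])]
            for idx, bf in ((i, random.randint(1, 2)), (j, random.randint(1, 2))):
                cand = [x + random.random() * (b - m * bf)
                        for x, b, m in zip(X[idx], best, mutual)]
                if f(cand) < F[idx]:
                    X[idx], F[idx] = cand, f(cand)
            # Commensalism: i benefits from j, j is unaffected.
            cand = [x + random.uniform(-1, 1) * (b - xj)
                    for x, b, xj in zip(X[i], best, X[j])]
            if f(cand) < F[i]:
                X[i], F[i] = cand, f(cand)
            # Parasitism: a mutated copy of i competes with j.
            parasite = list(X[i])
            parasite[random.randrange(dim)] = random.uniform(lo, hi)
            if f(parasite) < F[j]:
                X[j], F[j] = parasite, f(parasite)
    return min(F)

sphere = lambda x: sum(v * v for v in x)
print(sos_minimize(sphere, dim=3, bounds=(-5, 5)))  # best fitness found
```

Note the absence of algorithm-specific tuning knobs beyond population size and iteration count, which is the "parameter-free" property the abstract highlights.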
From Simulation to Online Gaming: the need for adaptive solutions Gabriele D'Angelo
In many fields such as distributed simulation and online gaming the missing piece is adaptivity. There is a strong need for dynamic and adaptive solutions that can improve performances and react to problems.
Pattern Matching using Computational and Automata TheoryIRJET Journal
This document discusses using finite automata for pattern matching. It begins by defining finite automata and their components. It then discusses how finite automata can be used to represent patterns for pattern matching by having states correspond to prefixes of the pattern. Transition tables are provided as an example for the patterns "memo" and detecting the string "barbara". Common pattern matching algorithms like Knuth-Morris-Pratt and Boyer-Moore are also mentioned. The document concludes that finite automata can effectively be used to match regular expressions and languages.
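The prefix-state construction can be sketched directly (a generic textbook construction; the transition tables in the document are instances of it), here for the pattern "memo" mentioned above:

```python
def build_dfa(pattern, alphabet):
    """Pattern-matching DFA: state q means 'the last q text characters
    match the first q pattern characters'; delta[q][c] is the length of
    the longest pattern prefix that is a suffix of pattern[:q] + c."""
    m = len(pattern)
    delta = [dict() for _ in range(m + 1)]
    for q in range(m + 1):
        for c in alphabet:
            k = min(m, q + 1)
            while k > 0 and not (pattern[:q] + c).endswith(pattern[:k]):
                k -= 1
            delta[q][c] = k
    return delta

def matches(text, pattern):
    """Run the DFA over the text once, reporting match start positions."""
    delta = build_dfa(pattern, set(text) | set(pattern))
    q, hits = 0, []
    for i, c in enumerate(text):
        q = delta[q][c]
        if q == len(pattern):               # accepting state reached
            hits.append(i - len(pattern) + 1)
    return hits

print(matches("a memo, another memo", "memo"))  # → [2, 16]
```

After the one-time table construction, matching is a single linear scan, which is the efficiency argument for automaton-based matching over naive search.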
It is a well-known fact that precise definitions play a significant role in the development of correct and robust software. It has been recognized and emphasized that an appropriately defined formal conceptual framework of the context/problem domain proves quite useful in ensuring precise definitions, including those for software metrics, which are consistent, unambiguous, and language independent. In this paper, a formal conceptual framework for defining metrics for component-based systems is proposed, where the framework formalises the behavioural aspects of the problem domain. The framework in respect of structural aspects has been discussed in another paper.
A SERIAL COMPUTING MODEL OF AGENT ENABLED MINING OF GLOBALLY STRONG ASSOCIATI...ijcsa
The intelligent-agent-based model is a popular approach to constructing Distributed Data Mining (DDM) systems that address scalable mining over large-scale and ever-increasing distributed data. In an agent-based distributed system, a variety of agents coordinate and communicate with each other to perform the various tasks of the Data Mining (DM) process. In this study, a serial computing model of a multi-agent system (MAS) called Agent-enabled Mining of Globally Strong Association Rules (AeMGSAR) is presented, based on the serial itinerary of the mobile agents. A running environment is also designed for the implementation and performance study of the AeMGSAR system.
Compositional testing for fsm based modelsijseajournal
The contribution of this paper is threefold. First, it defines a framework for modelling component-based systems, as well as a formalization of integration rules to combine their behaviour; this is based on finite state machines (FSMs). Second, it studies compositional conformance testing, i.e., checking whether an implementation made of conforming components combined with integration operators conforms to its specification. Third, it shows that the correctness of the global system can be obtained by testing the components involved in it against the projection of the global specification onto the specifications of the components. This result is useful for building adequate test purposes for testing components, taking into account the system where they are plugged in.
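One possible integration rule can be sketched as a synchronous product of two FSMs, whose joint state is the pair of component states; the representation and names below are illustrative, not the paper's formalization:

```python
def product(fsm1, fsm2):
    """fsm = (initial_state, delta) with delta[(state, input)] = next_state.
    The product steps both components on the same input; an input is
    enabled in a joint state only if both components accept it."""
    (s1, d1), (s2, d2) = fsm1, fsm2
    delta = {}
    for (p, a), p2 in d1.items():
        for (q, b), q2 in d2.items():
            if a == b:
                delta[((p, q), a)] = (p2, q2)
    return ((s1, s2), delta)

def run(fsm, word):
    """Drive an FSM through a sequence of inputs, returning the final state."""
    state, delta = fsm
    for a in word:
        state = delta[(state, a)]
    return state

light = ("off", {("off", "press"): "on", ("on", "press"): "off"})
count = (0, {(0, "press"): 1, (1, "press"): 0})
both = product(light, count)
print(run(both, ["press", "press", "press"]))  # → ('on', 1)
```

Projecting the global specification onto one component then amounts to restricting the product back to that component's states and inputs.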
Similar to Parallel communicating extended finite automata systems (20)
Tech transfer making it as a risk free approach in pharmaceutical and biotech iniaemedu
Tech transfer is a common methodology for transferring a new product, or an existing commercial product, to R&D or to another manufacturing site. Transferring product knowledge to the manufacturing floor is crucial and is an ongoing approach in the pharmaceutical and biotech industry. Without adopting this process, no company can manufacture its niche products, let alone market them. Technology transfer is a complicated process because it is highly cross-functional, and due to this cross-functional dependence these projects face numerous risks and failures. If an idea cannot be successfully brought out in the form of a product, there is no customer benefit or satisfaction. Moreover, high emphasis is placed on sustaining manufacturing with the highest quality each and every time, so it is vital that tech transfer projects be executed flawlessly. To accomplish this goal, risk management is crucial, and the project team needs to use the risk-management approach seamlessly.
Integration of feature sets with machine learning techniquesiaemedu
This document summarizes a research paper that proposes a novel approach for spam filtering using selective feature sets combined with machine learning techniques. The paper presents an algorithm and system architecture that extracts feature sets from emails and uses machine learning to classify emails and generate rules to identify spam. Several metrics are identified to evaluate the efficiency of the feature sets, including false positive rate. An experiment is described that uses keyword lists as feature sets to train filters and compares the proposed approach to other spam filtering methods.
Effective broadcasting in mobile ad hoc networks using gridiaemedu
This document summarizes a research paper that proposes a new grid-based broadcasting mechanism for mobile ad hoc networks. The paper argues that flooding approaches to broadcasting are inefficient and cause network congestion. The proposed approach divides the network into a hierarchical grid structure. When a node needs to broadcast a message, it sends the message to the first node in the appropriate grid, which is then responsible for updating and forwarding the message within that grid. Simulation results showed the grid-based approach outperformed other broadcasting protocols and was more reliable, efficient and scalable.
Effect of scenario environment on the performance of mane ts routingiaemedu
The document analyzes the effect of scenario environment on the performance of the AODV routing protocol in mobile ad hoc networks (MANETs). It studies AODV performance under different scenarios varying network size, maximum node speed, and pause time. The performance is evaluated based on packet delivery ratio, throughput, and end-to-end delay. The results show that AODV performs best in some scenarios and worse in others, indicating that scenario parameters significantly impact routing protocol performance in MANETs.
Adaptive job scheduling with load balancing for workflow application (iaemedu)
This document discusses adaptive job scheduling with load balancing for workflow applications in a grid platform. It begins with an abstract that describes grid computing and how scheduling plays a key role in performance for grid workflow applications. Both static and dynamic scheduling strategies are discussed, but they require high scheduling costs and may not produce good schedules. The paper then proposes a novel semi-dynamic algorithm that allows the schedule to adapt to changes in the dynamic grid environment through both static and dynamic scheduling. Load balancing is incorporated to handle situations where jobs are delayed due to resource fluctuations or overloading of processors. The rest of the paper outlines the related works, proposed scheduling algorithm, system model, and evaluation of the approach.
This document summarizes research on transaction reordering techniques. It discusses transaction reordering approaches based on reducing resource conflicts and increasing resource sharing. Specifically, it covers:
1) A "steal-on-abort" technique that reorders an aborted transaction behind the transaction that caused the abort to avoid repeated conflicts.
2) A replication protocol that attempts to reorder transactions during certification to avoid aborts rather than restarting immediately.
3) Transaction reordering and grouping during continuous data loading to prevent deadlocks when loading data for materialized join views.
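The steal-on-abort idea in point 1 can be sketched as a queue operation. This is a toy model under assumed data structures (a simple ready queue of transaction ids), not the surveyed systems' implementations:

```python
# Toy "steal-on-abort": when transaction T aborts because of a conflict
# with transaction C, T is removed from the ready queue and re-enqueued
# right behind C, so the two do not immediately conflict again.
from collections import deque

def steal_on_abort(queue, aborted, conflicter):
    """Reorder `aborted` to run after `conflicter` in the ready queue."""
    q = deque(t for t in queue if t != aborted)
    out = deque()
    while q:
        t = q.popleft()
        out.append(t)
        if t == conflicter:
            out.append(aborted)  # aborted runs right after its conflicter
    return list(out)
```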
The document discusses semantic web services and their challenges. It provides an overview of semantic web technologies like WSDL, SOAP, UDDI, and OIL which are used to build semantic web services. The semantic web architecture adds semantics to web services through ontologies written in OWL and DAML+OIL. Key approaches to semantic web services include annotation, composition, and addressing privacy and security. However, semantic web services still face challenges in achieving their full potential due to issues in representation, reasoning, and a lack of real-world applications and data.
Website-based patent information searching mechanism (iaemedu)
This document summarizes a research paper on developing a website-based patent information searching mechanism. It discusses how patent information can be used for technology development, rights acquisition and utilization, and management information. It describes different types of patent searches including novelty, validity, infringement, and state-of-the-art searches. It also evaluates and compares two major patent websites, Delphion and USPTO, in terms of their search capabilities and features.
Revisiting the experiment on detection of replay and message modification (iaemedu)
This document summarizes a research paper that proposes methods for detecting message modification and replay attacks in ad-hoc wireless networks. It begins with background on security issues in wireless networks and types of attacks. It then reviews existing intrusion detection systems and security techniques. Related work that detects attacks using features from the media access control layer or radio frequency fingerprinting is also discussed. The paper aims to present a simple, economical, and platform-independent system for detecting message modification, replay attacks, and unauthorized users in ad-hoc networks.
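One common building block for the replay-attack detection mentioned above is per-sender sequence-number tracking. The sketch below is a generic illustration of that idea, not the paper's system:

```python
# Illustrative replay detection using per-sender monotonically
# increasing sequence numbers: a message whose sequence number is not
# higher than the last accepted one from that sender is treated as a
# replay. State layout and function names are assumptions.

last_seen = {}  # sender -> highest sequence number accepted so far

def is_replay(sender, seq):
    """Return True if this (sender, seq) pair looks like a replayed message."""
    if seq <= last_seen.get(sender, -1):
        return True
    last_seen[sender] = seq
    return False
```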
1) The document discusses the Cyclic Model Analysis (CMA) technique for sequential pattern mining which aims to predict customer purchasing behavior.
2) CMA calculates the Trend Distribution Function from sequential patterns to model purchasing trends over time. It then uses Generalized Periodicity Detection and Trend Modeling to identify periodic patterns and construct an approximating model.
3) The Cyclic Model Analysis algorithm is applied to further analyze the patterns, dividing the domain into segments where the distribution function is increasing or decreasing and applying the other techniques recursively to fully model the cyclic behavior.
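The segmentation step in point 3 can be sketched concretely: split a sampled distribution function into maximal runs where values are increasing or decreasing, so each run can be modelled separately. The function below is an assumed, simplified stand-in for the paper's procedure:

```python
# Split a sampled series into maximal monotone segments, returning
# (start, end, direction) triples with direction 'inc' or 'dec'.

def monotone_segments(values):
    segments, start = [], 0
    for i in range(1, len(values)):
        rising = values[i] >= values[i - 1]
        if i > 1 and rising != (values[i - 1] >= values[i - 2]):
            # direction changed: close the previous segment here
            segments.append((start, i - 1, 'inc' if not rising else 'dec'))
            start = i - 1
    if len(values) > 1:
        segments.append((start, len(values) - 1,
                         'inc' if values[-1] >= values[start] else 'dec'))
    return segments
```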
Performance analysis of MANET routing protocols in presence of hybrid traffic (iaemedu)
This document analyzes the performance of different routing protocols in a mobile ad hoc network (MANET) under hybrid traffic conditions. It simulates a MANET with 50 nodes moving at speeds up to 20 m/s using the AODV, DSDV, and DSR routing protocols. Traffic included both constant bit rate and variable bit rate sources. Results found that AODV had lower average end-to-end delay and higher packet delivery ratios than DSDV and DSR as the percentage of variable bit rate traffic increased. AODV also performed comparably under both low and high node mobility scenarios with hybrid traffic.
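The two headline metrics used in such studies can be computed directly from send/receive traces. The trace format below (packet id mapped to timestamp) is an assumption for illustration:

```python
# Standard MANET evaluation metrics from simple send/receive traces.

def packet_delivery_ratio(sent, received):
    """sent/received: dicts of packet_id -> timestamp."""
    return len(received) / len(sent) if sent else 0.0

def average_end_to_end_delay(sent, received):
    """Mean (receive time - send time) over delivered packets only."""
    delays = [received[p] - sent[p] for p in received if p in sent]
    return sum(delays) / len(delays) if delays else 0.0
```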
Performance measurement of different requirements engineering process models (iaemedu)
This document summarizes a research paper that compares the performance of different requirements engineering (RE) process models. It describes three RE process models - two existing linear models and the authors' iterative model. It also reviews literature on common RE activities and issues with descriptive models not reflecting real-world practices. The authors conducted interviews at two Indian companies to model their RE processes and compare them to the three models. They found the existing linear models did not fully capture the iterative nature of observed RE processes.
This document proposes a mobile safety system for automobiles that uses the Android operating system. The system has two main components: a safety device and an automobile base unit. The safety device allows users to monitor the vehicle's location on a map, check its status, and control functions remotely. It communicates with the base unit in the vehicle over GPRS. The base unit collects data from sensors, determines the vehicle's GPS location, and can execute control commands such as activating the brakes or switching off the engine. The document details the design and algorithms of both components and includes examples of the Java code implementation. The goal is an intelligent, secure, and easy-to-use mobile safety system for vehicles built on embedded systems and Android.
The document discusses agile programming and proposes a new methodology. It provides an overview of existing agile methodologies like Scrum and Extreme Programming. Scrum uses short sprints to define tasks and deadlines. Extreme Programming focuses on practices like test-first development, pair programming, and continuous integration. The document notes drawbacks like an inability to support large or multi-site projects. It proposes designing a new methodology that combines the advantages of existing methods while overcoming their deficiencies.
Adaptive load balancing techniques in global scale grid environment (iaemedu)
The document discusses various adaptive load balancing techniques for distributed applications in grid environments. It first describes adaptive mesh refinement algorithms that partition computational domains using space-filling curves or by distributing grids independently or at different levels. It also discusses dynamic load balancing using tiling and multi-criteria geometric partitioning. The document then covers repartitioning algorithms based on multilevel diffusion and the adaptive characteristics of structured adaptive mesh refinement applications. Finally, it discusses adaptive workload balancing on heterogeneous resources by benchmarking resource characteristics and estimating application parameters to find optimal load distribution.
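Space-filling-curve partitioning can be sketched with a Z-order (Morton) curve, one common choice; the surveyed papers may use other curves. Grid cells are sorted by their position on the curve, then cut into contiguous chunks so each processor receives spatially clustered cells:

```python
# Z-order partitioning sketch: interleaving coordinate bits gives a
# 1-D ordering that keeps nearby cells together, so contiguous chunks
# of the ordering make spatially compact partitions.

def morton_key(x, y, bits=16):
    """Interleave the bits of (x, y) to get the Z-order curve index."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return key

def partition_cells(cells, num_procs):
    """cells: list of (x, y). Returns num_procs contiguous chunks."""
    ordered = sorted(cells, key=lambda c: morton_key(*c))
    size = -(-len(ordered) // num_procs)  # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]
```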
A survey on the performance of job scheduling in workflow application (iaemedu)
This document summarizes a survey on job scheduling performance in workflow applications on grid platforms. It discusses an adaptive dual objective scheduling (ADOS) algorithm that takes both completion time and resource usage into account for measuring schedule performance. The study shows ADOS delivers good performance in completion time, resource usage, and robustness to changes in resource performance. It also describes the system architecture used, which includes a planner and executor component. The planner focuses on scheduling to minimize completion time while considering resource usage, and can reschedule if needed. The executor enacts the schedule on the grid resources.
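A dual-objective score in the spirit described above might combine normalised completion time and resource usage into one number. The equal weights and the normalisation bounds below are illustrative assumptions, not ADOS's actual formula:

```python
# Hypothetical dual-objective schedule score: lower is better, and both
# objectives are normalised against given bounds before weighting.

def schedule_score(makespan, resource_time,
                   makespan_bound, resource_bound, w=0.5):
    """w trades off completion time (makespan) against resource usage."""
    return w * (makespan / makespan_bound) + \
           (1 - w) * (resource_time / resource_bound)
```

A scheduler can then compare candidate schedules by this score and trigger the planner's rescheduling when the executing schedule's score degrades.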
A survey of mitigating routing misbehavior in mobile ad hoc networks (iaemedu)
This document summarizes existing methods to detect misbehavior in mobile ad hoc networks (MANETs). It discusses how routing protocols assume nodes will cooperate fully, but misbehavior like packet dropping can occur. It describes several techniques to detect misbehavior, including watchdog, ACK/SACK, TWOACK, S-TWOACK, and credit-based/reputation-based schemes. Credit-based schemes use virtual currencies to provide incentives for nodes to forward packets, while reputation-based schemes track nodes' past behaviors. The document aims to survey approaches for mitigating the impact of misbehaving nodes in MANET routing.
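The watchdog technique can be sketched as simple bookkeeping: a node counts packets it hands a neighbour against packets it overhears that neighbour retransmitting. The 20% threshold and class layout are made-up illustration, not any surveyed scheme's parameters:

```python
# Toy watchdog: flag neighbours whose observed drop rate crosses a
# misbehaviour threshold (threshold value is an assumption).

class Watchdog:
    def __init__(self, threshold=0.2):
        self.threshold = threshold
        self.sent = {}       # neighbour -> packets handed over
        self.forwarded = {}  # neighbour -> forwards overheard

    def handed_over(self, neighbour):
        self.sent[neighbour] = self.sent.get(neighbour, 0) + 1

    def overheard_forward(self, neighbour):
        self.forwarded[neighbour] = self.forwarded.get(neighbour, 0) + 1

    def is_misbehaving(self, neighbour):
        sent = self.sent.get(neighbour, 0)
        if sent == 0:
            return False
        dropped = sent - self.forwarded.get(neighbour, 0)
        return dropped / sent > self.threshold
```

Schemes like TWOACK replace the overhearing step with explicit acknowledgements, since overhearing is unreliable under collisions and limited transmission power.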
A novel approach for satellite imagery storage by classification (iaemedu)
This document presents a novel approach for classifying and storing satellite imagery by detecting and storing only non-duplicate regions. It uses kernel principal component analysis to reduce the dimensionality and extract features of satellite images. Fuzzy N-means clustering is then used to segment the images into blocks. A duplication detection algorithm compares blocks to identify duplicate and non-duplicate regions. Only the non-duplicate regions are stored in the database, improving storage efficiency and updating speed compared to completely replacing existing images. Support vector machines are used to categorize the non-duplicate blocks into the appropriate classes in the existing images.
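The duplication-detection step can be sketched by hashing image blocks and keeping only blocks whose content is not already stored. Content hashing here is a crude stand-in for the paper's block-comparison algorithm:

```python
# Store only blocks not already present, tracked by content digest.
import hashlib

def block_digest(block_bytes):
    return hashlib.sha256(block_bytes).hexdigest()

def non_duplicate_blocks(new_blocks, stored_digests):
    """Return blocks whose content is not already stored; updates the set."""
    fresh = []
    for block in new_blocks:
        d = block_digest(block)
        if d not in stored_digests:
            stored_digests.add(d)
            fresh.append(block)
    return fresh
```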
A self-recovery approach using halftone images for medical imagery (iaemedu)
This document summarizes a proposed approach for securely transferring medical images over the internet using visual cryptography and halftone images. The approach uses error diffusion techniques to generate a halftone host image from the grayscale medical image. Shadow images are then created from the halftone host image using visual cryptography algorithms. When stacked together, the shadow images reveal the secret medical image. The halftone host image also contains an embedded logo that can be extracted to verify the integrity of the reconstructed image without a trusted third party.
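The halftoning step rests on error diffusion. A minimal sketch using the classic Floyd-Steinberg weights follows; the paper's exact diffusion kernel may differ, and the image is assumed to be a list of rows of 0-255 grayscale values:

```python
# Floyd-Steinberg error diffusion: threshold each pixel to black/white
# and push the quantisation error onto unprocessed neighbours with
# weights 7/16 (right), 3/16, 5/16, 1/16 (next row).

def error_diffusion_halftone(image):
    h, w = len(image), len(image[0])
    img = [list(row) for row in image]  # working copy; errors accumulate
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255 if old >= 128 else 0
            out[y][x] = 1 if new else 0
            err = old - new
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return out
```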
A comprehensive study of non-blocking joining techniques (iaemedu)
The document discusses and compares various non-blocking joining techniques for databases. It describes 7 different non-blocking joining algorithms: 1) Symmetric hash join, 2) XJoin, 3) Progressive merge join, 4) Hash merge join, 5) Rate based progressive join, 6) Multi-way join, and 7) Early hash join. For each algorithm, it explains the basic approach, memory overflow handling technique, and provides diagrams to illustrate the process. The goal of the paper is to explain and evaluate these non-blocking joining techniques based on factors like execution time, memory usage, I/O complexity, and ability to handle continuous data streams.
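The first of these, symmetric hash join, is simple enough to sketch directly. Each arriving tuple is inserted into its own side's hash table and immediately probed against the other side's table, so join results stream out without blocking on either input (the stream format here is an illustrative assumption):

```python
# Symmetric hash join sketch: build and probe on every arriving tuple.
from collections import defaultdict

def symmetric_hash_join(stream):
    """stream: iterable of (side, key, value), side in {'L', 'R'}.
    Yields joined (key, left_value, right_value) as matches appear."""
    tables = {'L': defaultdict(list), 'R': defaultdict(list)}
    for side, key, value in stream:
        tables[side][key].append(value)       # build this side's table
        other = 'R' if side == 'L' else 'L'
        for match in tables[other][key]:      # probe the other side
            pair = (value, match) if side == 'L' else (match, value)
            yield (key, *pair)
```

Both hash tables must fit in memory here; the memory-overflow handling that the paper compares (e.g. XJoin's partitioning to disk) is exactly what distinguishes the seven algorithms.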