The random forest (RF) classifier is an ensemble classifier built from decision trees. The parallel operation of several classifiers, together with the use of randomness in sample and feature selection, makes the random forest a very strong classifier, with accuracy rates comparable to most currently used classifiers. Although random forests have been applied to handwritten digits before, in this paper RF is applied to the recognition of Persian handwritten characters. To improve the recognition rate, we suggest converting the structure of the decision trees from binary trees to multi-branch trees. The improvement gained this way demonstrates the applicability of the idea.
This document summarizes a neuroscience-inspired approach to segmenting online handwritten Tamil words into constituent symbols. The approach first uses a simple overlap-based method to segment words into stroke groups. It then applies attention and feedback mechanisms, drawing from neuroscience research on visual perception, to detect and correct segmentation errors by splitting or merging stroke groups. The approach is tested on 10,000 handwritten Tamil words and achieves over 99% accuracy at the symbol level, demonstrating efficacy in segmentation and improving word recognition performance.
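The overlap-based first pass described above can be sketched as a greedy grouping of strokes by horizontal bounding-box overlap (a minimal illustration in Python; the exact overlap criterion and the function names are assumptions, not taken from the paper):

```python
def x_overlap(a, b):
    """Horizontal overlap of two bounding boxes given as (x_min, x_max)."""
    return min(a[1], b[1]) - max(a[0], b[0])

def group_strokes(boxes, min_overlap=0.0):
    """Greedily merge consecutive strokes whose horizontal extents overlap.

    boxes: list of (x_min, x_max) per stroke, in writing order.
    Returns a list of stroke groups (lists of stroke indices).
    """
    groups = []
    for i, box in enumerate(boxes):
        if groups:
            last = groups[-1]
            # current extent of the last group
            gx = (min(boxes[j][0] for j in last),
                  max(boxes[j][1] for j in last))
            if x_overlap(gx, box) > min_overlap:
                last.append(i)
                continue
        groups.append([i])
    return groups
```

The paper's attention and feedback stages would then revisit these groups, splitting or merging them where the first pass went wrong.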
This paper presents a novel machine learning approach for morphological analysis of Tamil, an agglutinative language. The approach segments words into morphemes and labels them without relying on rules. It captures Tamil's complex morphological structure more accurately than existing rule-based analyzers. A dataset was created by segmenting and aligning words with their morphological analyses. Two models were trained on this data: one to identify morpheme boundaries and another to assign grammatical categories. This approach achieved 95.65% accuracy, outperforming existing Tamil morphological analyzers.
Scene text recognition in mobile applications by character descriptor and str... - eSAT Journals
This document presents a method for scene text recognition in mobile applications using character descriptors and structure configuration. It proposes using a character descriptor that combines feature detectors and descriptors to extract text features effectively. It also models character structure using stroke configuration maps derived from character boundaries and skeletons. The method was tested on various datasets and achieved accuracy rates above 70%, outperforming existing methods. It can detect text regions and recognize text information for applications like text understanding and retrieval on mobile devices.
This document describes a factored statistical machine translation system from English to Tamil that incorporates Tamil morphology. The system first reorders and factors the English text, then uses morphological analysis and generation tools for Tamil to further factorize the text. This addresses challenges of translating between languages with different morphological structures and word orders. The system was shown to improve over a baseline SMT system for English to Tamil translation by integrating linguistic information like lemmas and morphological features.
This document describes a system for named entity recognition in South and Southeast Asian languages that uses conditional random fields for machine learning followed by rule-based post-processing. The system was tested on Bengali, Hindi, Oriya, Telugu, and Urdu. It uses window-based features and prefixes/suffixes to handle agglutinative properties. Post-processing improves recall by considering secondary tags and handles nested entities. Evaluation shows F-measures from 39-51% depending on the language and entity type. The system achieves decent performance without extensive language-specific resources.
This document discusses statistical feature extraction methods for isolated handwritten Gurumukhi script characters. It introduces Zernike and Pseudo-Zernike moment-based methods for extracting features from preprocessed and normalized Gurumukhi character images. Features are extracted at various moment orders and used to reconstruct the images to check accuracy. The document provides background on Gurumukhi script and discusses shape descriptors and image moments as methods for statistical feature extraction. Experimental results using Zernike and Pseudo-Zernike moments are presented.
The document discusses trees and their representations in graphs. It defines trees as acyclic, connected graphs with one designated root node. Trees can be represented recursively or using adjacency lists and matrices. Binary trees are discussed, with examples of full and complete binary trees. Common tree traversal algorithms are presented: preorder, inorder and postorder. Applications of trees include decision trees, file systems and representing algebraic expressions. Infix, prefix and postfix notations are explained using binary expression trees.
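The three traversal orders, and their link to prefix, infix and postfix notation via a binary expression tree, can be illustrated with a short sketch (a generic textbook example, not taken from the document):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(n):
    # root, then left subtree, then right subtree
    return [n.value] + preorder(n.left) + preorder(n.right) if n else []

def inorder(n):
    # left subtree, then root, then right subtree
    return inorder(n.left) + [n.value] + inorder(n.right) if n else []

def postorder(n):
    # left subtree, then right subtree, then root
    return postorder(n.left) + postorder(n.right) + [n.value] if n else []

# The expression (a + b) * c as a binary expression tree:
expr = Node('*', Node('+', Node('a'), Node('b')), Node('c'))

print(''.join(preorder(expr)))   # *+abc  (prefix notation)
print(''.join(inorder(expr)))    # a+b*c  (infix, without parentheses)
print(''.join(postorder(expr)))  # ab+c*  (postfix notation)
```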
DEVNAGARI DOCUMENT SEGMENTATION USING HISTOGRAM APPROACH - ijcseit
This document summarizes a research paper on Devnagari document segmentation using a histogram approach. It discusses challenges in segmenting the Devnagari script used for several Indian languages. A simple algorithm is proposed using horizontal and vertical histograms to segment documents into lines, words and characters. The algorithm achieves near 100% accuracy for line segmentation but lower accuracy for word and character segmentation due to complexities in the Devnagari script. Future work is needed to improve character segmentation handling connected and modified characters.
An exhaustive font and size invariant classification scheme for ocr of devana... - ijnlc
The document presents a classification scheme for recognizing Devanagari characters that is invariant to font and size. It identifies the basic symbols that commonly appear in the middle zone of Devanagari text across different fonts and sizes. Through an analysis of over 465,000 words from various sources, it finds that 345 symbols account for 99.97% of text and aims to classify these into groups based on structural properties like the presence or absence of vertical bars. The proposed classification scheme is validated on 25 fonts and 3 sizes to demonstrate its font and size invariance.
This document discusses fuzzy logical databases and an efficient algorithm for evaluating fuzzy equi-joins. It begins with an introduction to fuzzy concepts in databases, including representing imprecise data using fuzzy sets and membership functions. It then defines a new measure for fuzzy equality that is used to define a fuzzy equi-join. The document proposes a sort-merge join algorithm that sorts relations based on a partial order of intervals to efficiently evaluate the fuzzy equi-join in two phases: sorting and joining. Experimental results are said to show a significant improvement in efficiency when using this algorithm.
Devnagari document segmentation using histogram approach - Vikas Dongre
Document segmentation is one of the critical phases in machine recognition of any language. Correct segmentation of individual symbols determines the accuracy of the character recognition technique. It is used to decompose the image of a sequence of characters into sub-images of individual symbols by segmenting lines and words. Devnagari is the most popular script in India; it is used for writing the Hindi, Marathi, Sanskrit and Nepali languages, and Hindi is the third most popular language in the world. Devnagari documents consist of vowels, consonants and various modifiers, so proper segmentation of Devnagari words is challenging. This paper proposes a simple histogram-based approach to segmenting Devnagari documents and also discusses the various challenges in segmenting Devnagari script.
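As an illustration of the histogram idea, line segmentation by horizontal projection profile can be sketched as follows (a minimal NumPy sketch under the simplifying assumption of clean, horizontally aligned text; the paper's actual algorithm and thresholds are not reproduced here):

```python
import numpy as np

def segment_rows(binary_img, threshold=0):
    """Split a binary image (1 = ink) into horizontal bands of text.

    Rows whose ink count exceeds `threshold` belong to a text line;
    consecutive runs of such rows become (start, end) line boundaries.
    """
    profile = binary_img.sum(axis=1)          # horizontal projection histogram
    ink = profile > threshold
    lines, start = [], None
    for i, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = i                         # line begins
        elif not has_ink and start is not None:
            lines.append((start, i))          # line ends
            start = None
    if start is not None:
        lines.append((start, len(ink)))
    return lines
```

Word and character segmentation apply the same idea to the vertical projection (`axis=0`) within each detected line.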
The document summarizes key concepts of the relational database model including:
1. The relational model uses tables to represent data and relationships, with each table having columns and rows.
2. Key characteristics are that it is the primary commercial data model, provides a simple way to represent data, and uses a record-based structure with fixed-format records and fields.
3. Relational databases have a schema defining relations (tables) and attributes (columns), with each relation made up of tuples (rows) that contain values from the defined domains.
Recognition of Words in Tamil Script Using Neural Network - IJERA Editor
In this paper, word recognition using a neural network is proposed. The recognition process starts by partitioning the document image into lines, words and characters, and then capturing the local features of the segmented characters. After the characters are classified, the word image is translated into a unique code based on the character codes. This code ideally describes any form of word, including words with mixed styles and different sizes. The sequence of character codes of the word forms the input pattern, and the word code is the target value of the pattern. A neural network is used to train the patterns of the words. The trained network is tested with word patterns, which are recognized or unrecognized based on the network error value. Experiments conducted with a local database to evaluate the performance of the word recognition system obtained good accuracy. This method can be applied to word recognition in any language, as the training is based only on the unique codes of the characters and words belonging to that language.
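The word-coding step can be illustrated as a simple concatenation of per-character codes (a hypothetical two-digit code table for three Tamil letters; the paper's actual code assignment is not specified here):

```python
# Hypothetical character-code table; a real system would cover the full
# symbol set produced by the character classifier.
CHAR_CODES = {'க': '01', 'ம': '02', 'ல': '03'}

def word_code(characters):
    """Concatenate per-character codes into a unique word code."""
    return ''.join(CHAR_CODES[c] for c in characters)
```

The resulting code string is what the neural network is trained to map word patterns onto.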
1. The document presents a methodology for recognizing isolated handwritten Devanagari numerals using structural and statistical features.
2. Key features extracted include whether the numeral has openings on the left, right, above or below, and the number of horizontal and vertical crossings.
3. The methodology achieves an average accuracy of 96.8% on a dataset of 500 numeral images collected from various individuals. Accuracy is highest for numerals 0, 6, 8 and 10 at 100%, while some similar numerals like 3 and 2 see more errors.
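One of the statistical features mentioned, the number of horizontal crossings, can be sketched as counting the ink runs met by a horizontal scan line (a minimal NumPy illustration, not the paper's implementation):

```python
import numpy as np

def horizontal_crossings(binary_img, row):
    """Count ink runs crossed by a horizontal scan line through `row`.

    binary_img: 2-D array with 1 = ink, 0 = background.
    """
    line = binary_img[row]
    # a crossing starts wherever the pixel value rises from 0 to 1,
    # plus one if the scan line begins on ink
    return int(np.sum((line[1:] == 1) & (line[:-1] == 0)) + (line[0] == 1))
```

Vertical crossings would be computed the same way along a column; combined with the opening features, these counts form the numeral's feature vector.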
International Journal of Research in Engineering and Science is an open access peer-reviewed international forum for scientists involved in research to publish quality and refereed papers. Papers reporting original research or experimentally proved review work are welcome. Papers for publication are selected through peer review to ensure originality, relevance, and readability.
Dbms 10: Conversion of ER model to Relational Model - Amiya9439793168
The document discusses the conversion of an entity-relationship (ER) model to a relational model by describing how different ER constructs such as strong/weak entities, relationships, composite/multi-valued attributes, generalization/specialization, and aggregation map to relational schemas and tables. Strong entities become tables with their primary key and attributes, while weak entities include the primary key of their identifying entity. Relationships become tables linking the participating entity primary keys. Descriptive attributes may also be included.
The document discusses database normalization. It defines functional dependency and explains how anomalies like redundancy, insertion anomalies, deletion anomalies, and update anomalies can occur in a database without normalization. It also describes the different normal forms including 1NF, 2NF, 3NF and BCNF. Decomposition is introduced as a process to normalize relations by eliminating anomalies. The goal of normalization is to ensure data is stored efficiently and consistently without redundancy.
Bca3020 – data base management system (dbms) - smumbahelp
This document provides information about getting solved assignments by email or phone. It includes contact details for an assignment help service and then provides sample questions and answers related to a database management systems course. The questions cover topics like entities, attributes, relationships, database manager responsibilities, file organization, the LIKE predicate, relational algebra operations, and object-oriented programming features.
Abstract: The use of regular expressions to search text is a well-known and well-understood technique. Regular expressions are generic representations of a string or a collection of strings, and regular expressions (regexps) are among the most useful tools in computer science. NLP, as an area of computer science, has benefitted greatly from regexps: they are used in phonology, morphology, text analysis, information extraction and speech recognition. This paper gives the reader a general review of the usage of regular expressions, illustrated with examples from natural language processing, and discusses different approaches to regular expressions in NLP. Keywords: Regular Expression, Natural Language Processing, Tokenization, Longest common subsequence alignment, POS tagging
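As a small illustration of regular expressions in NLP, a rule-of-thumb word tokenizer can be written in a few lines (an illustrative pattern, not one proposed by the paper):

```python
import re

# Words (with an optional internal apostrophe), numbers (with an optional
# decimal part), and standalone punctuation marks.
TOKEN_RE = re.compile(r"[A-Za-z]+(?:'[A-Za-z]+)?|\d+(?:\.\d+)?|[^\w\s]")

def tokenize(text):
    """Return all non-overlapping token matches, left to right."""
    return TOKEN_RE.findall(text)

print(tokenize("It doesn't cost 3.50 dollars!"))
# ['It', "doesn't", 'cost', '3.50', 'dollars', '!']
```

Real tokenizers need many more alternations (URLs, hyphenated words, Unicode letter classes), but the alternation-ordered pattern above shows the core technique.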
----------------------------
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Handwritten character recognition using method filters - eSAT Journals
Abstract: Handwritten character recognition is an emerging and very challenging field of research, as handwriting varies from person to person. In this paper we focus on some of the existing methodologies of character recognition and come up with some new methodologies. A system that encompasses different character recognition methods as filters is proposed in this paper. The methods are prioritized based on their result efficiencies and applied to the input. As the input passes through the process, the number of possible results in the solution set decreases steeply. Using a combination of methods as a filter for recognition yields more accurate results than using a single method, and also decreases the space and time complexity of the algorithm. Finally, further scope for development of this model is discussed. Keywords: Glyph, character recognition, handwriting recognition, space-time complexity, filter process.
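The filter idea, prioritized recognition methods successively narrowing the candidate set, can be sketched as follows (a generic sketch with hypothetical filter functions; the paper's actual methods and priorities are not reproduced):

```python
def recognize(glyph, filters, alphabet):
    """Apply prioritized recognition methods as successive filters.

    Each filter maps (glyph, candidates) -> a smaller candidate set;
    filters are ordered by their measured efficiency, best first.
    """
    candidates = set(alphabet)
    for f in filters:
        candidates = f(glyph, candidates)
        if len(candidates) <= 1:
            break          # solution set narrowed enough to stop early
    return candidates

# Hypothetical filters: each keeps only candidates consistent with one cue.
by_shape  = lambda g, c: c & {'a', 'b'}   # e.g. a contour-based method
by_stroke = lambda g, c: c & {'a'}        # e.g. a stroke-based method
```

Because the cheapest, most discriminative filters run first, later (more expensive) methods only ever see a small candidate set, which is where the space and time savings come from.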
Specification-based Verification of Incomplete Programs - IDES Editor
Recently, formal methods such as model checking and theorem proving have been considered efficient tools for software verification. When applied in practice, however, these techniques suffer from high complexity costs. Combining static analysis with dynamic checking to address this problem has become an emerging trend, resulting in the introduction of the concolic testing technique and its variations. However, analysis-based verification techniques always assume that the full source code of the verified program is available, which does not always hold in real-life contexts. In this paper, we propose an approach to tackle this problem, where our contributed ideas are (i) combining function specifications with control flow analysis to deal with source-missing functions; (ii) generating self-complete programs from incomplete programs by means of concrete execution, thus making them fully verifiable by model checking; and (iii) developing a constraint-based test-case generation technique to significantly reduce the complexity. Our solution has proved viable when successfully deployed for checking the programming work of students.
Development of Dual Frequency Alternator Technology Based Power Source For Mi... - IDES Editor
This paper presents a dual frequency alternator, which generates power output at both 50 Hz and 400 Hz. In weapon system applications, 50 Hz and 400 Hz power is required to meet the requirements of different loads that operate at different frequencies simultaneously. To supply these loads, either separate generators must be used, or 50 Hz and 400 Hz power must be obtained by solid-state conversion using power converters. These separate power sources increase the logistics, space requirements, maintenance, repairs and overhauls. To overcome these difficulties, a special kind of machine called the dual frequency alternator has been designed to deliver power outputs at 50 Hz and 400 Hz simultaneously from a single prime mover (engine). This paper discusses the design, development and performance evaluation of the dual frequency output alternator along with its integrated power supply system. Experimental results are also presented that demonstrate the high performance of dual frequency output alternator technology for future power generation systems.
A Generic Describing Method of Memory Latency Hiding in a High-level Synthesi... - IDES Editor
We show a generic method of describing hardware, including a memory access controller, in a C-based high-level synthesis technology (Handel-C). In this method, a prefetching mechanism that improves performance by hiding memory access latency can be described systematically in C. We demonstrate through a case study that the proposed method is simple and easy to apply. The experimental results show that although the proposed method introduces a small hardware overhead, it can improve performance significantly.
A New Soft-Switched Resonant DC-DC Converter - IDES Editor
This paper presents a new soft-switched resonant dc-dc converter using a passive snubber circuit. The proposed converter uses a new zero-voltage and zero-current switching (ZVZCS) strategy to achieve ZVZCS operation. Besides operating at constant frequency, all semiconductor devices operate with soft switching and without additional voltage or current stresses. To validate the proposed converter, computer simulations and experiments were conducted. The paper identifies the effective operating region of the converter's soft-switching action and reports its efficiency improvements on the basis of experimental evaluations using a laboratory prototype.
Identification of Reactive Power Reserve in Transmission Network - IDES Editor
This document summarizes a method for identifying critical voltage control areas (VCAs) in transmission networks. It describes operational difficulties that occurred in the Polish transmission network due to insufficient reactive power reserves. The method clusters contingencies based on bus participation factors from modal analysis. It then identifies the buses and generators that form each VCA. Reactive power reserve requirements are then established for the generators controlling each VCA to ensure voltage stability under all conditions.
KidzFrame: Supporting Awareness in the Daycare - IDES Editor
KidzFrame is an innovative system that connects working parents with their children at daycare centers. This paper reports the findings of three focus groups and a two-week-long field study of the system. As conclusions of our study, we offer six guidelines of direct salience for designers of pervasive computing services who want to address the communication needs of daycare centers, parents and their children.
Application of Solar Powered High Voltage Discharge Plasma for NOX Removal in... - IDES Editor
This document describes a proposed system for using solar-powered high voltage discharge plasma to remove NOx from diesel engine exhaust. It includes:
1) A boost converter that boosts the voltage from a 12V solar-powered battery to 24V, and an automobile ignition coil that generates high voltage pulses using flyback topology.
2) A comparison of discharge plasma and plasma-adsorbent processes for NOx removal at different gas flow rates, using activated alumina as the adsorbent.
3) The design and fabrication of the high voltage pulse source, including the boost converter, ignition coil driver circuit, and generation of high voltage pulses up to 18kV from the 24V input.
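The 12 V-to-24 V boost stage follows the ideal continuous-conduction boost relation V_out = V_in / (1 - D), so the required duty cycle can be computed as below (an idealized, lossless calculation, not the paper's design procedure):

```python
def boost_duty_cycle(v_in, v_out):
    """Ideal CCM boost converter: V_out = V_in / (1 - D)  =>  D = 1 - V_in / V_out.

    Assumes a lossless converter in continuous conduction mode.
    """
    if not 0 < v_in <= v_out:
        raise ValueError("boost requires 0 < v_in <= v_out")
    return 1 - v_in / v_out

print(boost_duty_cycle(12, 24))  # 0.5
```

In practice, conduction and switching losses push the real duty cycle slightly above this ideal value.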
Resource Identification Using Mobile Queries - IDES Editor
Location based mobile services (LBS) are growing significantly along with the spread of GPS-enabled mobile phones, smartphones and PDAs. Mobile users may submit a query to the server to find the nearest resources, such as fuel stations, hospitals or ATM centers, and obtain their services. In this scenario, identifying the locations of resources is highly significant. This paper focuses on query management in mobile environments to locate the most appropriate location of the required services.
An exhaustive font and size invariant classification scheme for ocr of devana...ijnlc
The document presents a classification scheme for recognizing Devanagari characters that is invariant to font and size. It identifies the basic symbols that commonly appear in the middle zone of Devanagari text across different fonts and sizes. Through an analysis of over 465,000 words from various sources, it finds that 345 symbols account for 99.97% of text and aims to classify these into groups based on structural properties like the presence or absence of vertical bars. The proposed classification scheme is validated on 25 fonts and 3 sizes to demonstrate its font and size invariance.
This document discusses fuzzy logical databases and an efficient algorithm for evaluating fuzzy equi-joins. It begins with an introduction to fuzzy concepts in databases, including representing imprecise data using fuzzy sets and membership functions. It then defines a new measure for fuzzy equality that is used to define a fuzzy equi-join. The document proposes a sort-merge join algorithm that sorts relations based on a partial order of intervals to efficiently evaluate the fuzzy equi-join in two phases: sorting and joining. Experimental results are said to show a significant improvement in efficiency when using this algorithm.
Devnagari document segmentation using histogram approachVikas Dongre
Document segmentation is one of the critical phases in machine recognition of any language. Correct
segmentation of individual symbols decides the accuracy of character recognition technique. It is used to
decompose image of a sequence of characters into sub images of individual symbols by segmenting lines and
words. Devnagari is the most popular script in India. It is used for writing Hindi, Marathi, Sanskrit and
Nepali languages. Moreover, Hindi is the third most popular language in the world. Devnagari documents
consist of vowels, consonants and various modifiers. Hence proper segmentation of Devnagari word is
challenging. A simple histogram based approach to segment Devnagari documents is proposed in this paper.
Various challenges in segmentation of Devnagari script are also discussed.
The document summarizes key concepts of the relational database model including:
1. The relational model uses tables to represent data and relationships, with each table having columns and rows.
2. Key characteristics are that it is the primary commercial data model, provides a simple way to represent data, and uses a record-based structure with fixed-format records and fields.
3. Relational databases have a schema defining relations (tables) and attributes (columns), with each relation made up of tuples (rows) that contain values from the defined domains.
Recognition of Words in Tamil Script Using Neural NetworkIJERA Editor
In this paper, word recognition using neural network is proposed. Recognition process is started with the partitioning of document image into lines, words, and characters and then capturing the local features of segmented characters. After classifying the characters, the word image is transferred into unique code based on character code. This code ideally describes any form of word including word with mixed styles and different sizes. Sequence of character codes of the word form input pattern and word code is a target value of the pattern. Neural network is used to train the patterns of the words. Trained network is tested with word patterns and is recognized or unrecognized based on the network error value. Experiments have been conducted with a local database to evaluate the performance of the word recognizing system and obtained good accuracy. This method can be applied for any language word recognition system as the training is based on only unique code of the characters and words belonging to the language.
1. The document presents a methodology for recognizing isolated handwritten Devanagari numerals using structural and statistical features.
2. Key features extracted include whether the numeral has openings on the left, right, above or below, and the number of horizontal and vertical crossings.
3. The methodology achieves an average accuracy of 96.8% on a dataset of 500 numeral images collected from various individuals. Accuracy is highest for numerals 0, 6, 8 and 10 at 100%, while some similar numerals like 3 and 2 see more errors.
International Journal of Research in Engineering and Science is an open access peer-reviewed international forum for scientists involved in research to publish quality and refereed papers. Papers reporting original research or experimentally proved review work are welcome. Papers for publication are selected through peer review to ensure originality, relevance, and readability.
Dbms 10: Conversion of ER model to Relational ModelAmiya9439793168
The document discusses the conversion of an entity-relationship (ER) model to a relational model by describing how different ER constructs such as strong/weak entities, relationships, composite/multi-valued attributes, generalization/specialization, and aggregation map to relational schemas and tables. Strong entities become tables with their primary key and attributes, while weak entities include the primary key of their identifying entity. Relationships become tables linking the participating entity primary keys. Descriptive attributes may also be included.
The document discusses database normalization. It defines functional dependency and explains how anomalies like redundancy, insertion anomalies, deletion anomalies, and update anomalies can occur in a database without normalization. It also describes the different normal forms including 1NF, 2NF, 3NF and BCNF. Decomposition is introduced as a process to normalize relations by eliminating anomalies. The goal of normalization is to ensure data is stored efficiently and consistently without redundancy.
Bca3020– data base management system(dbms)smumbahelp
This document provides information about getting solved assignments by email or phone. It includes contact details for an assignment help service and then provides sample questions and answers related to a database management systems course. The questions cover topics like entities, attributes, relationships, database manager responsibilities, file organization, the LIKE predicate, relational algebra operations, and object-oriented programming features.
Abstract A usage of regular expressions to search text is well known and understood as a useful technique. Regular Expressions are generic representations for a string or a collection of strings. Regular expressions (regexps) are one of the most useful tools in computer science. NLP, as an area of computer science, has greatly benefitted from regexps: they are used in phonology, morphology, text analysis, information extraction, & speech recognition. This paper helps a reader to give a general review on usage of regular expressions illustrated with examples from natural language processing. In addition, there is a discussion on different approaches of regular expression in NLP. Keywords— Regular Expression, Natural Language Processing, Tokenization, Longest common subsequence alignment, POS tagging
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Handwritten character recognition using method filters - eSAT Journals
Abstract Handwritten character recognition is an emerging and a very challenging field of research as the handwritings vary from person to person. In this paper we have focused on some of the existing methodologies of character recognition and come up with some new methodologies. A system which encompasses different character recognition methods as filters is proposed in this paper. The methods are prioritized based on their result efficiencies and applied on the input. As we pass through the process, the number of possible results in the solution set keeps decreasing steeply. Using a combination of methods as a filter for recognition yields more accurate results than using a single method and also decreases the space and time complexity of the algorithm. Finally, further scope of development of this model is discussed. Keywords –Glyph, Character recognition, handwriting recognition, space time complexity, filter process.
Specification-based Verification of Incomplete Programs - IDES Editor
Recently, formal methods such as model checking and theorem proving have been considered efficient tools for software verification. However, when applied in practice, these techniques suffer from high complexity costs. Combining static analysis with dynamic checking to deal with this problem has become an emerging trend, resulting in the introduction of the concolic testing technique and its variations. However, analysis-based verification techniques always assume the availability of the full source code of the verified program, which does not always hold in real-life contexts. In this paper, we propose an approach to tackle this problem, where our contributed ideas are (i) combining function specifications with control flow analysis to deal with source-missing functions; (ii) generating self-complete programs from incomplete programs by means of concrete execution, thus making them fully verifiable by model checking; and (iii) developing a constraint-based test-case generation technique to significantly reduce the complexity. Our solution has proved viable when successfully deployed for checking the programming work of students.
Development of Dual Frequency Alternator Technology Based Power Source For Mi... - IDES Editor
This paper presents a dual frequency alternator, which generates power output at 50 Hz and 400 Hz. In weapon system applications, 50 Hz and 400 Hz power is required to meet the requirements of different loads that operate at different frequencies simultaneously. To supply power to these loads, either separate generators must be used, or 50 Hz and 400 Hz power is obtained by solid-state conversion using power converters. These separate power sources increase the logistics, space requirements, maintenance, repairs and overhauls. To overcome these difficulties, a special kind of machine called the Dual Frequency Alternator has been designed to give power outputs at 50 Hz and 400 Hz simultaneously from a single prime mover (engine). In this paper, the design, development and performance evaluation of the dual frequency output alternator, along with its integrated power supply system, are discussed. Experimental results are also presented that demonstrate the high performance of dual frequency output alternator technology for future power generation systems.
A Generic Describing Method of Memory Latency Hiding in a High-level Synthesi... - IDES Editor
We show a generic method for describing hardware, including a memory access controller, in a C-based high-level synthesis technology (Handel-C). In this method, a prefetching mechanism that improves performance by hiding memory access latency can be systematically described in the C language. We demonstrate through a case study that the proposed method is simple and easy to apply. The experimental results show that although the proposed method introduces a small hardware overhead, it can improve performance significantly.
A New Soft-Switched Resonant DC-DC Converter - IDES Editor
This paper presents a new soft-switched resonant dc-dc converter using a passive snubber circuit. The proposed converter uses new zero-voltage and zero-current switching (ZVZCS) strategies. Besides operating at constant frequency, all semiconductor devices operate with soft switching, without additional voltage or current stresses. To validate the proposed converter, computer simulations and experiments were conducted. The paper indicates the effective operating region of the soft-switching action and its efficiency improvements, based on experimental evaluations using a laboratory prototype.
Identification of Reactive Power Reserve in Transmission Network - IDES Editor
This document summarizes a method for identifying critical voltage control areas (VCAs) in transmission networks. It describes operational difficulties that occurred in the Polish transmission network due to insufficient reactive power reserves. The method clusters contingencies based on bus participation factors from modal analysis. It then identifies the buses and generators that form each VCA. Reactive power reserve requirements are then established for the generators controlling each VCA to ensure voltage stability under all conditions.
KidzFrame: Supporting Awareness in the Daycare - IDES Editor
KidzFrame is an innovative system that connects working parents with their children at daycare centers. This paper reports the findings of three focus groups and a two-week-long field study of the system. As conclusions of our study, we offer six guidelines of direct salience to designers of pervasive computing services who want to address the communication needs of daycare centers, parents and their children.
Application of Solar Powered High Voltage Discharge Plasma for NOX Removal in... - IDES Editor
This document describes a proposed system for using solar-powered high voltage discharge plasma to remove NOx from diesel engine exhaust. It includes:
1) A boost converter that boosts the voltage from a 12V solar-powered battery to 24V, and an automobile ignition coil that generates high voltage pulses using flyback topology.
2) A comparison of discharge plasma and plasma-adsorbent processes for NOx removal at different gas flow rates, using activated alumina as the adsorbent.
3) The design and fabrication of the high voltage pulse source, including the boost converter, ignition coil driver circuit, and generation of high voltage pulses up to 18kV from the 24V input.
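For the 12 V to 24 V step described in item 1, the ideal (lossless) boost converter relation V_out = V_in / (1 - D) gives a duty cycle of 0.5; a tiny illustrative calculation, not from the paper:

```python
def boost_duty_cycle(v_in, v_out):
    """Ideal (lossless) boost converter duty cycle: V_out = V_in / (1 - D)."""
    return 1.0 - v_in / v_out

print(boost_duty_cycle(12.0, 24.0))  # 0.5
```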
Resource Identification Using Mobile Queries - IDES Editor
Location-based mobile services (LBS) are growing significantly along with the development of GPS-enabled mobile phones, smartphones and PDAs. Mobile users may submit queries to a server to find the nearest resources, such as fuel stations, hospitals and ATM centers. In this scenario, identifying the locations of resources is highly significant. This paper focuses on query management in mobile environments to locate the most appropriate location of the required services.
ANN Based PID Controlled Brushless DC drive System - IDES Editor
Brushless DC (BLDC) motors find many industrial applications, such as process control, robotics, automation and aerospace. Wider usage of these systems has demanded optimum position control for high efficiency, accuracy and reliability. Hence, for effective position control, estimation of the dynamic load parameters, i.e. moment of inertia and friction coefficient, is necessary. This paper incorporates the estimation of mechanical parameters such as the moment of inertia and friction coefficient of a BLDC motor and load at various load settings using a simple procedure. To achieve optimum position control, a PID controller is employed and tuned using the PARR method. ANN training is used to obtain the mechanical and PID controller parameters at different load settings. A closed-loop position control system of the BLDC drive is created using SIMULINK, and simulation results are obtained at different load settings. It is evident from the results that the position control system reaches the desired position with minimum rise time, settling time and peak overshoot.
This document describes a system for named entity recognition in South and Southeast Asian languages that uses conditional random fields for machine learning followed by rule-based post-processing. The system was tested on Bengali, Hindi, Oriya, Telugu, and Urdu. It uses windows of words, prefixes, suffixes, and other features as input to the CRF model. Post-processing includes assigning the second best tag if confidence is high and addressing nested entities. Evaluation shows F-measures from 33.94% to 50.06% depending on the language and entity type. The system performs well on closed classes but struggles with long entities and unknown words.
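The window, prefix and suffix features described above could be extracted roughly as follows; the feature names, affix length and window size are illustrative assumptions, not the system's exact feature set:

```python
def word_features(words, i, affix_len=3):
    """Features for word i: the word itself, its prefix and suffix,
    and its immediate neighbours (a window of one word each side)."""
    w = words[i]
    return {
        "word": w.lower(),
        "prefix": w[:affix_len].lower(),
        "suffix": w[-affix_len:].lower(),
        "prev": words[i - 1].lower() if i > 0 else "<s>",
        "next": words[i + 1].lower() if i < len(words) - 1 else "</s>",
    }

print(word_features(["Rabindranath", "Tagore", "wrote"], 1))
```

In a CRF-based tagger, one such feature dictionary per token would be fed to the model alongside the gold entity tags during training.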
This paper presents a new multi-tier holistic approach for recognizing Urdu text written in Nastaliq script. It first identifies special ligatures like dots, tay, hamza and mad from base ligatures. It then associates the special ligatures with neighboring base ligatures. Features are extracted from the ligatures and special ligature-base ligature associations. These features are input to a neural network that recognizes the ligatures in three steps: 1) identifying special ligatures, 2) associating them with base ligatures, and 3) recognizing the base ligatures. The system was tested on 200 ligatures with 100% accuracy for ligatures in its training set and closest match classification for new ligatures.
Optimal Clustering Technique for Handwritten Nandinagari Character Recognition - Editor IJCATR
In this paper, an optimal clustering technique for handwritten Nandinagari character recognition is proposed. We compare two different corner detector mechanisms and contrast various clustering approaches for handwritten Nandinagari characters. In this model, the key interest points on the images, which are invariant to scale, rotation, translation, illumination and occlusion, are identified using the robust Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) techniques. We then generate a dissimilarity matrix, which is in turn fed as input to a set of clustering techniques: K-Means, PAM (Partition Around Medoids) and hierarchical agglomerative clustering. Various cluster validity measures are used to assess the quality of the clustering techniques, with the intent of finding a technique suitable for these rare characters. On a varied data set of over 1040 handwritten Nandinagari characters, careful analysis indicates that this combinatorial approach, used in a collaborative manner, aids in achieving good recognition accuracy. We found that hierarchical clustering is more suitable for SIFT and SURF features than the K-Means and PAM techniques.
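As a toy illustration of clustering from a precomputed dissimilarity matrix (not the paper's SIFT/SURF pipeline), a naive single-linkage agglomerative sketch on a 4-item matrix:

```python
def single_linkage(dist, n_clusters):
    """Naive single-linkage agglomerative clustering on a precomputed
    dissimilarity matrix (list of lists). Repeatedly merges the two
    clusters with the smallest minimum pairwise distance."""
    clusters = [[i] for i in range(len(dist))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dist[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] = sorted(clusters[a] + clusters[b])
        del clusters[b]
    return sorted(clusters)

# Two tight groups: items 0-1 and 2-3 are close, the groups are far apart.
D = [[0, 1, 9, 9],
     [1, 0, 9, 9],
     [9, 9, 0, 1],
     [9, 9, 1, 0]]
print(single_linkage(D, 2))  # [[0, 1], [2, 3]]
```

The quadratic scan over cluster pairs keeps the sketch short; a real implementation would maintain a distance heap.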
Recognition of Persian handwritten characters has been considered a significant field of research within pattern analysis for the last few years. In this paper, a new approach for robust handwritten Persian numeral recognition using a strong feature set and a classifier fusion method is investigated to increase the recognition percentage. For the classifier fusion technique, we have considered k-nearest neighbour (KNN), linear classifier (LC) and support vector machine (SVM) classifiers. The innovation of this approach is to attain better precision with few features using the classifier fusion method. For evaluation of the proposed method, we considered a Persian numerals database with 20,000 handwritten samples. Using 15,000 samples for the training stage, we verified our technique on the other 5,000 samples, and the correct recognition ratio achieved was approximately 99.90%. Additionally, we obtained 99.97% accuracy using a four-fold cross-validation procedure on all 20,000 samples.
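The fusion idea can be illustrated with a simple majority vote over the three classifiers' outputs; this is a sketch of one common fusion rule, and the paper's exact combination method may differ:

```python
from collections import Counter

def fuse(predictions):
    """Majority-vote fusion of per-classifier predictions for one sample.
    Ties are broken by the order in which labels first appear."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical outputs of KNN, LC and SVM for one handwritten digit:
print(fuse([7, 7, 1]))  # 7
```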
Indexing for Large DNA Database sequences - CSCJournals
Bioinformatics data consists of a huge amount of information due to the large number of sequences, the very high sequence lengths and the daily additions. This data needs to be accessed efficiently for many purposes. What makes one DNA data item distinct from another is its sequence, which consists of combinations of the four characters A, C, G and T, with varying lengths. Using a suitable representation of DNA sequences, and a suitable index structure to hold this representation in main memory, leads to efficient processing by accessing the DNA sequences through the index and reduces the number of disk I/O accesses. To avoid false hits, we reduce the number of candidate DNA sequences that need to be checked at the end by pruning, so there is no need to search the whole database. We need a suitable index for searching DNA sequences efficiently, with suitable index size and search time; the selection of the relation fields on which the index is built has a big effect on both. Our experiments use the n-gram wavelet transformation with single-field and multi-field index structures in a relational DBMS environment. The results show that index size and search time must be considered carefully when using indexing. Increasing the window size decreases the number of I/O references, and the effectiveness of single-field versus multi-field indexing is highly affected by the window size: increasing it yields better search times for a special index type with single-field indexing, while search times are almost equally good across most index types with multi-field indexing. The storage space needed for RDBMS index types is almost the same as, or greater than, the actual data.
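A minimal sketch of n-gram-based candidate pruning for DNA sequence search; this is a simple in-memory dictionary index, not the paper's wavelet-based structure:

```python
def build_ngram_index(sequences, n=3):
    """Map each n-gram to the set of sequence ids containing it."""
    index = {}
    for sid, seq in enumerate(sequences):
        for i in range(len(seq) - n + 1):
            index.setdefault(seq[i:i + n], set()).add(sid)
    return index

def candidates(index, query, n=3):
    """Prune: only sequences containing every n-gram of the query
    can possibly contain the query itself; others need no check."""
    result = None
    for i in range(len(query) - n + 1):
        ids = index.get(query[i:i + n], set())
        result = ids if result is None else result & ids
    return result or set()

seqs = ["ACGTACGT", "TTTTACGG", "ACGGTTAC"]
idx = build_ngram_index(seqs)
print(sorted(candidates(idx, "ACGG")))  # [1, 2]
```

Surviving candidates still need a final exact check against the query, since shared n-grams do not guarantee a full match.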
A NOVEL DATA DICTIONARY LEARNING FOR LEAF RECOGNITION - sipij
Automatic leaf recognition via image processing is greatly important for a number of professionals, such as botanical taxonomists, environmental protectors and foresters. Learning an over-complete leaf dictionary is an essential step in leaf image recognition, but large leaf image dimensions and large numbers of training images stand in the way of building a fast and complete leaf data dictionary. In this work, an efficient approach is applied to construct an over-complete leaf data dictionary for sets of large-dimension images based on sparse representation. In the proposed method, a new cropped-contour method is used to crop the training images. The experiments test the correlation between the sparse representation and the data dictionary, with a focus on computing time.
A NOVEL FEATURE SET FOR RECOGNITION OF SIMILAR SHAPED HANDWRITTEN HINDI CHARA... - cscpconf
This document describes a study that uses machine learning algorithms to recognize similar shaped handwritten Hindi characters. It extracts 85 features from each character image based on geometry. Four machine learning algorithms (Bayesian Network, RBFN, MLP, C4.5 Decision Tree) are trained on datasets containing samples of a target character pair and evaluated based on precision, misclassification rate, and model build time. Feature selection techniques are also used to reduce the feature set dimensionality before classification. Experimental results show that different algorithms perform best depending on the feature set and number of samples used for training.
High level speaker specific features modeling in automatic speaker recognitio... - IJECEIAES
Spoken words convey several levels of information. At the primary level, speech conveys words or spoken messages, but at the secondary level, it also reveals information about the speaker. This work builds on high-level speaker-specific features and statistical speaker modeling techniques that express the characteristic sound of the human voice. Hidden Markov models (HMM), Gaussian mixture models (GMM) and Linear Discriminant Analysis (LDA) are used to build computationally inexpensive Automatic Speaker Recognition (ASR) systems that can recognize speakers regardless of what is said. The performance of the ASR system is evaluated from clear speech across a wide range of speech qualities using the standard TIMIT speech corpus. The ASR efficiencies of the HMM-, GMM- and LDA-based modeling techniques are 98.8%, 99.1% and 98.6%, with Equal Error Rates (EER) of 4.5%, 4.4% and 4.55%, respectively. The EER improvement of the GMM-based ASR system compared with HMM and LDA is 4.25% and 8.51%, respectively.
SVM Based Identification of Psychological Personality Using Handwritten Text - IJERA Editor
This document describes a study that uses handwriting analysis to identify psychological personality traits using support vector machines (SVM). Handwriting samples were collected and preprocessed by removing noise and segmenting lines. Features like slope, shape, and edge histograms were extracted. SVM with radial basis function kernel was used for classification. Analysis of single lines achieved 95% accuracy while multiple lines achieved 91% accuracy in identifying traits like cheerfulness and weariness. The methodology was also applied to analyze handwriting of celebrities and compare the results to analyses by graphologists. The study aims to automate handwriting analysis using machine learning techniques.
Review of research on devnagari character recognition - Vikas Dongre
This document summarizes research on Devnagari character recognition. It begins with an abstract discussing the progress of English character recognition and the need for further research on Indian languages like Devnagari. The document then reviews the stages of Devnagari optical character recognition systems, including pre-processing, segmentation, feature extraction, recognition, and post-processing. It discusses challenges in Devnagari recognition due to features of the script like connected characters. The document also reviews common techniques used at each stage of recognition systems and provides directions for future research.
An effective approach to offline arabic handwriting recognition - ijaia
Segmentation is the most challenging part of Arabic handwriting recognition, due to the unique characteristics of Arabic writing that allow the same shape to denote different characters. In this paper, an off-line Arabic handwriting recognition system is proposed. The processing is presented in three main stages. Firstly, the image is skeletonized to one pixel thin. Secondly, each diagonally connected foreground pixel is transferred to the closest horizontal or vertical line. Finally, these orthogonal lines are coded as vectors of unique integers; each vector represents one letter of the word. To evaluate the proposed techniques, the system has been tested on the IFN/ENIT database, and the experimental results show that our method is superior to currently available methods.
FREEMAN CODE BASED ONLINE HANDWRITTEN CHARACTER RECOGNITION FOR MALAYALAM USI... - acijjournal
Handwritten character recognition is the conversion of handwritten text to a machine-readable and editable form; online character recognition deals with live conversion of characters. Malayalam is a language spoken by millions of people in the state of Kerala and the union territories of Lakshadweep and Pondicherry in India. It is written mostly in the clockwise direction and consists of loops and curves. The method aims at training a simple three-layer neural network using the backpropagation algorithm.
Freeman codes are used to represent each character as feature vector. These feature vectors act as inputs to the network during the training and testing phases of the neural network. The output is the character expressed in the Unicode format.
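A minimal sketch of extracting an 8-direction Freeman chain code from a sequence of adjacent pen points; the coordinate convention (y growing upward) is an assumption for illustration:

```python
# 8-direction Freeman codes: 0 = east, numbered counter-clockwise.
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def freeman_code(points):
    """Chain code for a sequence of adjacent (x, y) points: one
    direction code per step between consecutive points."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx = (x1 > x0) - (x1 < x0)
        dy = (y1 > y0) - (y1 < y0)
        codes.append(DIRECTIONS[(dx, dy)])
    return codes

# An "L"-shaped stroke: down two steps, then right two steps.
print(freeman_code([(0, 2), (0, 1), (0, 0), (1, 0), (2, 0)]))  # [6, 6, 0, 0]
```

The resulting code sequence is the kind of feature vector that can be fed to the network's input layer.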
This document compares different machine learning techniques for web page classification, including k-Nearest Neighbors, Naive Bayes, Support Vector Machine, Classification and Regression Trees, Random Forest, and Particle Swarm Optimization. Experiments were performed using two datasets to evaluate the accuracy of each technique. The document discusses the implementation methodology, including representations of web pages, performance metrics, and the classification algorithms.
This document discusses relation extraction from biological text. It describes relation extraction as detecting and classifying relationships between entities in text. Various machine learning approaches are used, including kernel-based algorithms, regression, and neural networks. Features include sequences, parse trees, dependency graphs, and shallow parsing. Two approaches are described in detail: a string kernel using shortest dependency graph paths, and a global alignment kernel comparing semantic similarity. Both approaches improved performance when using syntactic and semantic information from linguistic annotation. Future work focuses on distant supervision to generate more training data without full manual annotation.
IRJET - A Survey on MSER Based Scene Text Detection - IRJET Journal
This document summarizes research on using Maximally Stable Extremal Region (MSER) techniques for scene text detection. It discusses how MSER detects text candidate regions based on stable intensity values that contrast with surrounding areas. The document reviews several papers that enhance MSER performance by combining it with other methods like stroke width transform, character filtering, and neural networks. It also lists advantages of MSER like low computation cost and robustness to lighting, but notes disadvantages like sensitivity to character sizes and performance reductions in blurry, low contrast, or highly illuminated images.
Random forest is an ensemble classifier that consists of many decision trees, where each tree depends on the values of a random vector sampled independently from the input data. It combines Breiman's "bagging" idea and the random selection of features to construct a set of decision trees with controlled variance. The random forest algorithm builds decision trees using randomly selected subsets of the training data and randomly selected subsets of input features. Each tree provides a class prediction and the class with the most votes becomes the random forest's prediction. Random forests have advantages including high accuracy, efficiency on large datasets, ability to handle thousands of variables, and estimates of feature importance.
Dimensionality Reduction and Feature Selection Methods for Script Identificat... - ITIIIndustries
The goal of this research is to explore the effects of dimensionality reduction and feature selection on the problem of script identification from images of printed documents. The k-adjacent segment feature is well suited to this use due to its ability to capture visual patterns. We have used principal component analysis to reduce our feature matrix to a more manageable size that can be trained easily, and experimented with varying combinations of dimensions of the super feature set. A modular neural network approach was used to classify 7 languages: Arabic, Chinese, English, Japanese, Tamil, Thai and Korean.
A New Method for Identification of Partially Similar Indian Scripts - CSCJournals
In this paper, the texture symmetry/non-symmetry factor is exploited to characterize script texture using Bi-Wavelants, which give the symmetry/non-symmetry factor in terms of the third cumulant, while the Bi-spectra give the quadratically coupled frequencies. The envelope of the Bi-spectra (Bi-Wavelant) provides an accurate description of the symmetry/non-symmetry of the script texture. Classification is performed by an SVM trained on the roots of the envelope, found using the Newton-Raphson technique. The method successfully identifies 8 Indian scripts: Devanagari, Urdu, Gujrati, Telugu, Assamese, Gurmukhi, Kannada and Bangla. The method can segment any kind of document with very good results, and the identification results are excellent.
The effect of training set size in authorship attribution: application on sho... - IJECEIAES
Authorship attribution (AA) is a subfield of linguistic analysis, aiming to identify the original author among a set of candidate authors. Several research papers have been published, and several methods and models have been developed for many languages. However, the number of related works for Arabic is limited, and the impact of short text lengths and training set size is not well addressed; to the best of our knowledge, no published research in this direction is available, even for other languages. We therefore propose to investigate this effect, taking into account different stylometric combinations. The Mahalanobis distance (MD), Linear Regression (LR) and Multilayer Perceptron (MP) are selected as AA classifiers. During the experiment, the training data set size is increased and the accuracy of the classifiers is recorded. The results are quite interesting and show different classifier behaviours. Combining word-based stylometric features with n-grams provides the best accuracy, reaching 93% on average.
An Efficient Segmentation Technique for Machine Printed Devanagiri Script: Bo... - iosrjce
Segmentation plays a major role in processing script documents for the extraction of various features, and many researchers are working to make the segmentation process simple as well as efficient. In this paper, a simple technique for both line and word segmentation of a script document is proposed. The main objective of this technique is to recognize the spaces that separate two text lines; for word segmentation, a similar procedure is followed. In this work, three different scanned documents were taken as input images for both the line and word segmentation techniques. The results were outstanding, with 100% average accuracy for both line and word segmentation. Evaluation results show that our method outperforms several competing methods.
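The line-separating-space idea can be sketched as a horizontal projection profile over a binary image: rows containing no ink mark the gaps between text lines. An illustrative sketch, not the paper's exact procedure:

```python
def segment_lines(image):
    """Split a binary image (list of rows; 1 = ink) into text-line bands.
    Rows whose projection (ink count) is zero separate the lines."""
    profile = [sum(row) for row in image]
    lines, start = [], None
    for y, ink in enumerate(profile):
        if ink and start is None:
            start = y                      # a text line begins
        elif not ink and start is not None:
            lines.append((start, y - 1))   # a text line ends
            start = None
    if start is not None:
        lines.append((start, len(profile) - 1))
    return lines

# Two text lines separated by a blank row.
img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [1, 1, 0, 1],
       [0, 0, 0, 0],
       [0, 1, 1, 1]]
print(segment_lines(img))  # [(1, 2), (4, 4)]
```

Word segmentation follows the same idea with the profile taken over columns within each detected line band.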
Similar to Improvement of Random Forest Classifier through Localization of Persian Handwritten OCR (20)
Power System State Estimation - A Review - IDES Editor
This document provides a review of power system state estimation techniques. It discusses both static and dynamic state estimation algorithms. For static state estimation, it covers weighted least squares, decoupled, and robust estimation methods. Weighted least squares is commonly used but can have numerical instability issues. Decoupled state estimation approximates the gain matrix for faster computation. Robust estimation uses M-estimators and other techniques to handle outliers and bad data. Dynamic state estimation applies Kalman filtering, leapfrog algorithms, and other methods to continuously monitor system states over time.
Artificial Intelligence Technique based Reactive Power Planning Incorporating... - IDES Editor
This document summarizes a research paper that proposes using artificial intelligence techniques and FACTS controllers for reactive power planning in real-time power transmission systems. The paper formulates the reactive power planning problem and incorporates flexible AC transmission system (FACTS) devices like static VAR compensators (SVC), thyristor controlled series capacitors (TCSC), and unified power flow controllers (UPFC). Evolutionary algorithms like evolutionary programming (EP) and differential evolution (DE) are applied to find the optimal locations and settings of the FACTS controllers to minimize losses and costs. Simulation results on IEEE 30-bus and 72-bus Indian test systems show that UPFC performs best in reducing losses compared to SVC and TCSC.
Design and Performance Analysis of Genetic based PID-PSS with SVC in a Multi-... - IDES Editor
Damping of power system oscillations with the help of the proposed optimal Proportional Integral Derivative Power System Stabilizer (PID-PSS) and Static Var Compensator (SVC)-based controllers is thoroughly investigated in this paper. This study presents robust tuning of PID-PSS and SVC-based controllers using Genetic Algorithms (GA) in multi-machine power systems, considering a detailed model of the generators (model 1.1). The effectiveness of FACTS-based controllers in general, and the SVC-based controller in particular, depends upon their proper location; modal controllability and observability are used to locate the SVC-based controller. The performance of the proposed controllers is compared with a conventional lead-lag power system stabilizer (CPSS) and demonstrated on the 10-machine, 39-bus New England test system. Simulation studies show that the proposed genetic-based PID-PSS with the SVC-based controller provides better performance.
Optimal Placement of DG for Loss Reduction and Voltage Sag Mitigation in Radi... - IDES Editor
The need to operate the power system economically and with optimum voltage levels has led to increased interest in Distributed Generation. To reduce power losses and improve voltage in the distribution system, distributed generators (DGs) are connected to load buses. To reduce the total power losses in the system, the most important step is to identify the proper locations and sizes of the DGs. This paper presents a new methodology using a population-based metaheuristic, the Artificial Bee Colony (ABC) algorithm, for the placement of DGs in radial distribution systems to reduce real power losses, improve the voltage profile and mitigate voltage sags. Power loss reduction is an important factor for utility companies because it is directly proportional to company benefits in a competitive electricity market, while meeting power quality standards is also important, as it has a vital effect on customer satisfaction. In this paper, an ABC algorithm is developed to attain these goals together. To evaluate the sag mitigation capability of the proposed algorithm, the voltage at voltage-sensitive buses is investigated. An existing 20 kV network has been chosen as the test network, and results are compared using the proposed method in the radial distribution system.
Line Losses in the 14-Bus Power System Network using UPFC - IDES Editor
Controlling power flow in modern power systems
can be made more flexible by the use of recent developments
in power electronic and computing control technology. The
Unified Power Flow Controller (UPFC) is a Flexible AC
transmission system (FACTS) device that can control all the
three system variables namely line reactance, magnitude and
phase angle difference of voltage across the line. The UPFC
provides a promising means to control power flow in modern
power systems. Essentially the performance depends on proper
control setting achievable through a power flow analysis
program. This paper presents a reliable method to meet the
requirements by developing a Newton-Raphson based load
flow calculation through which control settings of UPFC can
be determined for the pre-specified power flow between the
lines. The proposed method keeps the Newton-Raphson Load Flow (NRLF) algorithm intact and needs only a little modification in the Jacobian matrix. A MATLAB program has been developed to
calculate the control settings of UPFC and the power flow
between the lines after the load flow is converged. Case studies
have been performed on IEEE 5-bus system and 14-bus system
to show that the proposed method is effective. These studies
indicate that the method maintains the basic NRLF properties
such as fast computational speed, high degree of accuracy and
good convergence rate.
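The Newton-Raphson iteration underlying such a load flow can be illustrated generically. The sketch below solves a small nonlinear mismatch system with a finite-difference Jacobian; a real NRLF would use the analytic power-balance Jacobian and per-bus P/Q mismatch equations, so this is only a structural analogy, not the paper's formulation.

```python
def solve_linear(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def newton_raphson(f, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Newton-Raphson on a vector mismatch function f, using a
    finite-difference Jacobian (a load flow would use analytic terms)."""
    x = list(x0)
    n = len(x)
    for _ in range(max_iter):
        fx = f(x)
        if max(abs(v) for v in fx) < tol:
            break
        J = [[0.0] * n for _ in range(n)]
        for j in range(n):
            xp = x[:]
            xp[j] += h
            fp = f(xp)
            for i in range(n):
                J[i][j] = (fp[i] - fx[i]) / h
        dx = solve_linear(J, [-v for v in fx])
        x = [x[i] + dx[i] for i in range(n)]
    return x

# Toy mismatch equations standing in for P/Q power balances; root at (1, 2)
mismatch = lambda x: [x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0]
root = newton_raphson(mismatch, [2.0, 1.0])
```

Embedding a UPFC into NRLF amounts to adding its control variables and mismatch terms to this same loop, which is why only a small modification of the Jacobian is needed.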
Study of Structural Behaviour of Gravity Dam with Various Features of Gallery... (IDES Editor)
The size and shape of an opening in a dam cause
stress concentration and also alter the stress distribution in the
rest of the dam cross section. The gravity method of analysis
considers neither the size of the opening nor the elastic
properties of the dam material. The objective of this study is
therefore to apply the Finite Element Method, which accounts
for the size of the opening, the elastic properties of the material,
and the stress distribution caused by the geometric discontinuity
in the dam cross section. Stress concentration inside the dam
increases with the size of the opening, which can result in
failure of the dam; it is therefore necessary to analyze large
openings inside the dam. The analysis is carried out by keeping
the percentage area of the opening constant while varying its
size and shape. For this purpose a section of the Koyna Dam is
considered. Based on its geometry and loading conditions, the
dam is modeled as a plane strain element in FEM, so a 2D
plane strain analysis is carried out. The results obtained are
then compared with one another to determine the most efficient
way of providing a large opening in a gravity dam.
Assessing Uncertainty of Pushover Analysis to Geometric Modeling (IDES Editor)
Pushover analysis is a popular tool for seismic
performance evaluation of existing and new structures. It is a
nonlinear static procedure in which monotonically increasing
loads are applied to the structure until it can no longer resist
further load. The strength of concrete and steel adopted for the
analysis may not match that of the structure as actually built,
and pushover results are very sensitive to the material model,
the geometric model, the location of plastic hinges, and, in
general, the procedure followed by the analyst. In this paper
an attempt has been made to assess the uncertainty in pushover
analysis results by considering user-defined hinges, with the
frame modeled both as a bare frame and as a frame with the
slab modeled as a rigid diaphragm. The uncertain parameters
considered include the strength of concrete, the strength of
steel, and the cover to the reinforcement, which are randomly
generated and incorporated into the analysis. The results are
then compared with experimental observations.
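The random generation of uncertain parameters described above amounts to Monte Carlo sampling. The sketch below propagates assumed normal distributions for concrete strength, steel yield, and cover through a purely illustrative capacity index; both the distributions and the formula are assumptions for illustration, not the paper's models.

```python
import random

def sample_capacity(n=2000, seed=1):
    """Monte Carlo propagation of material uncertainty to a toy capacity
    index. Distributions and the capacity formula are purely illustrative."""
    rng = random.Random(seed)
    caps = []
    for _ in range(n):
        fck = rng.gauss(25.0, 2.5)     # concrete strength, MPa (assumed)
        fy = rng.gauss(415.0, 20.0)    # steel yield strength, MPa (assumed)
        cover = rng.gauss(25.0, 3.0)   # cover to reinforcement, mm (assumed)
        caps.append(0.5 * fck + 0.05 * fy + 0.1 * (50.0 - cover))
    mean = sum(caps) / n
    std = (sum((c - mean) ** 2 for c in caps) / n) ** 0.5
    return mean, std

mean_cap, std_cap = sample_capacity()
```

In a real study each sampled triple would drive a full pushover run, and the spread of the resulting capacity curves would quantify the uncertainty.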
Secure Multi-Party Negotiation: An Analysis for Electronic Payments in Mobile... (IDES Editor)
This document summarizes and analyzes secure multi-party negotiation protocols for electronic payments in mobile computing. It presents a framework for secure multi-party decision protocols using lightweight implementations. The main focus is on synchronizing security features to avoid agreement manipulation and reduce user traffic. The paper describes negotiation between an auctioneer and bidders, showing that the multi-party security achieved is stronger than that of existing systems. It analyzes the performance of encryption algorithms such as ECC, XTR, and RSA for use in the multi-party negotiation protocols.
Selfish Node Isolation & Incentivation using Progressive Thresholds (IDES Editor)
The problems associated with selfish nodes in
MANETs are addressed by a collaborative watchdog approach,
which reduces the detection time for selfish nodes and thereby
improves the performance and accuracy of watchdogs [1].
Related works make use of credit-based systems, reputation-based
mechanisms, and pathrater and watchdog mechanisms
to detect such selfish nodes. In this paper we follow a
collaborative watchdog approach that reduces the detection
time for selfish nodes and also removes such nodes based on
progressively assessed thresholds. The thresholds give a node
a chance to stop misbehaving before it is permanently deleted
from the network: the node passes through several isolation
stages before it is permanently removed. A modified version of
the AODV protocol is used, which allows the simulation of
selfish nodes in NS2 by adding or modifying log files in the
protocol.
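The progressive-threshold idea can be sketched as a per-node misbehaviour counter that escalates a node through warning, isolation, and removal stages, with a chance to redeem itself before removal. The stage names and threshold values below are illustrative assumptions, not figures from the paper.

```python
class ProgressiveWatchdog:
    """Sketch of progressive isolation: a node escalates through warning,
    temporary isolation, and permanent removal as misbehaviour accumulates."""
    STAGES = [("warn", 3), ("isolate", 6), ("remove", 10)]  # assumed thresholds

    def __init__(self):
        self.misbehaviour = {}
        self.removed = set()

    def report(self, node):
        """Record one observed misbehaviour and return the node's state."""
        if node in self.removed:
            return "removed"
        count = self.misbehaviour.get(node, 0) + 1
        self.misbehaviour[node] = count
        state = "ok"
        for stage, threshold in self.STAGES:
            if count >= threshold:
                state = stage
        if state == "remove":
            self.removed.add(node)
        return state

    def redeem(self, node):
        """A node that behaves again before removal gets its count decayed."""
        if node not in self.removed and self.misbehaviour.get(node, 0) > 0:
            self.misbehaviour[node] -= 1

wd = ProgressiveWatchdog()
states = [wd.report("n1") for _ in range(10)]
```

The `redeem` hook is where the incentive lives: cooperative behaviour lowers the counter, so only persistently selfish nodes reach the removal threshold.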
Various OSI Layer Attacks and Countermeasure to Enhance the Performance of WS... (IDES Editor)
Wireless sensor networks are networks with a
non-wired infrastructure and a dynamic topology. In the OSI
model each layer is prone to various attacks, which degrade
the performance of a network. In this paper several attacks on
four layers of the OSI model are discussed, and a security
mechanism is described to prevent a network-layer attack,
namely the wormhole attack. In a wormhole attack, two or
more malicious nodes create a covert channel that attracts
traffic towards itself by advertising a low-latency link, and
then start dropping and replaying packets on the multi-path
route. This paper proposes a promiscuous-mode method to
detect and isolate the malicious node during a wormhole
attack, using the Ad hoc On-demand Distance Vector (AODV)
routing protocol with an omnidirectional antenna. In the
implemented methodology, nodes that are not participating in
multi-path routing generate an alarm message upon observing
the delay, and the malicious node is then detected and isolated
from the network. We also note that not only the same kinds
of attacks but also the same kinds of countermeasures can
appear in multiple layers. For example, misbehavior detection
techniques can be applied to almost all the layers discussed.
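At its core, a promiscuous-mode monitor of the kind described compares, for each neighbour, the packets it was overheard receiving with those it actually forwarded, and raises an alarm for heavy droppers. A minimal sketch with an assumed drop-ratio threshold (the 20% figure is an illustration, not the paper's parameter):

```python
def detect_droppers(overheard_received, overheard_forwarded, drop_threshold=0.2):
    """Promiscuous-mode check: flag neighbours that forward far fewer
    packets than they were overheard receiving."""
    suspects = []
    for node, received in overheard_received.items():
        forwarded = overheard_forwarded.get(node, 0)
        if received and (received - forwarded) / received > drop_threshold:
            suspects.append(node)
    return suspects

# Node B drops 60% of what it receives and would be flagged for isolation
suspects = detect_droppers({"A": 100, "B": 100}, {"A": 98, "B": 40})
```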
Responsive Parameter based an AntiWorm Approach to Prevent Wormhole Attack in... (IDES Editor)
Recent advancements in wireless technology and
its widespread deployment have brought remarkable efficiency
gains in the corporate, industrial, and military sectors. The
increasing popularity and usage of wireless technology create
a need for more secure wireless ad hoc networks. This paper
presents a newly researched and developed protocol that
prevents wormhole attacks on an ad hoc network. A few
existing protocols detect wormhole attacks, but they require
highly specialized equipment not found on most wireless
devices. This paper aims to develop a defense against
wormhole attacks, an anti-worm protocol based on responsive
parameters, that does not require a significant amount of
specialized equipment, tight clock synchronization, or GPS
dependencies.
Cloud Security and Data Integrity with Client Accountability Framework (IDES Editor)
This document summarizes a proposed cloud security and data integrity framework that provides client accountability. The framework aims to address issues like lack of user control over cloud data, need for data transparency and tracking, and ensuring data integrity. It proposes using JAR (Java Archive) files for data sharing due to benefits like portability. The framework incorporates client-side verification using MD5 hashing, digital signature-based authentication of JAR files, and use of HMAC to ensure data integrity. It also uses password-based encryption of log files to keep them tamper-proof. The framework is intended to provide both accountability and security for data sharing in cloud environments.
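The client-side verification pieces mentioned above (MD5 checksums and HMAC integrity tags) map directly onto Python's standard hashlib and hmac modules. A minimal sketch; the key, payload, and function names are illustrative, not the framework's API:

```python
import hashlib
import hmac

def md5_digest(data: bytes) -> str:
    """Client-side MD5 checksum of a payload (e.g. the bytes of a JAR)."""
    return hashlib.md5(data).hexdigest()

def hmac_tag(key: bytes, data: bytes) -> str:
    """HMAC-SHA256 tag so integrity can be verified with a shared key."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(key: bytes, data: bytes, tag: str) -> bool:
    """Constant-time comparison of the recomputed tag with the stored one."""
    return hmac.compare_digest(hmac_tag(key, data), tag)

log_entry = b"user42 accessed object 7"
tag = hmac_tag(b"shared-secret", log_entry)
```

An HMAC, unlike a bare checksum, cannot be recomputed by a tamperer who lacks the key, which is why it (rather than MD5 alone) carries the integrity guarantee.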
Genetic Algorithm based Layered Detection and Defense of HTTP Botnet (IDES Editor)
An HTTP botnet uses the HTTP protocol to build
a chain of bots, thereby compromising other systems. By using
the HTTP protocol on port 80, attacks can not only be hidden
but can also pass through the firewall without being detected.
DPR-based detection leads to better analysis of botnet attacks
[3]; however, it provides only probabilistic detection of the
attacker and is also time-consuming and error-prone. This
paper proposes a genetic algorithm based layered approach for
detecting as well as preventing botnet attacks. The paper
reviews a p2p firewall implementation, which forms the basis
of the filtering. Performance is evaluated in terms of precision,
F-value, and probability. The layered approach reduces the
computation and the overall time requirement [7], and the
genetic algorithm promises a low false positive rate.
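The precision and F-value metrics used in the evaluation follow from the standard detection counts. A small helper, with illustrative counts that are not the paper's results:

```python
def precision_recall_f(tp, fp, fn):
    """Precision, recall, and F-value from true/false positive and
    false negative counts of a detector."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_value = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, f_value

# Illustrative counts: 90 bots caught, 10 false alarms, 10 bots missed
p, r, f = precision_recall_f(90, 10, 10)
```

A low false positive rate shows up here as high precision; the F-value balances it against recall so neither can be gamed alone.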
Enhancing Data Storage Security in Cloud Computing Through Steganography (IDES Editor)
This document summarizes a research paper that proposes a method for enhancing data security in cloud computing through steganography. The method hides user data in digital images stored on cloud servers. When data needs to be accessed, it is extracted from the images. The document outlines the cloud architecture and security issues addressed. It then describes the proposed system architecture, security model, and data storage and retrieval process. Data is partitioned and hidden in multiple images to improve security. The goal is to prevent unauthorized access to user data stored on cloud servers.
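A common way to hide data in image pixels, and a plausible reading of the scheme described, is least-significant-bit (LSB) embedding. The sketch below operates on a raw byte buffer standing in for pixel data; the paper's actual embedding and multi-image partitioning scheme may differ, so treat this as an assumed stand-in.

```python
def hide(cover: bytearray, secret: bytes) -> bytearray:
    """Hide `secret` in the least-significant bits of `cover` pixel bytes."""
    bits = []
    for byte in secret:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    if len(bits) > len(cover):
        raise ValueError("cover image too small for the payload")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # overwrite only the lowest bit
    return stego

def extract(stego: bytes, n_bytes: int) -> bytes:
    """Recover `n_bytes` of hidden data from the LSBs of the stego pixels."""
    out = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (stego[b * 8 + i] & 1)
        out.append(byte)
    return bytes(out)

cover = bytearray(range(200))          # toy "image" of 200 pixel bytes
stego = hide(cover, b"cloud")
```

Because each pixel byte changes by at most 1, the stego image is visually indistinguishable from the cover; partitioning the secret across several images, as the paper proposes, just means running `hide` on slices of the payload.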
The main tasks of a Wireless Sensor Network
(WSN) are collecting data from its nodes and communicating
that data to the base station (BS). The protocols used for
communication among the WSN nodes, and between the WSN
and the BS, must consider the resource constraints of the
nodes: battery energy, computational capability, and memory.
WSN applications involve unattended operation of the network
over an extended period of time, so efficient routing protocols
need to be adopted to extend the lifetime of a WSN. The
proposed low-power routing protocol, based on a tree-based
network structure, reliably forwards the measured data towards
the BS using TDMA. An energy consumption analysis of a
WSN using this protocol is also carried out; the network is
found to be energy efficient, with an average duty cycle of
0.7% for the WSN nodes. The OMNeT++ simulation platform
together with the MiXiM framework is used.
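The reported 0.7% duty cycle translates into average power draw and node lifetime with simple arithmetic. The active/sleep power and battery figures below are illustrative assumptions, not values from the paper:

```python
def avg_power_mw(duty_cycle, p_active_mw, p_sleep_mw):
    """Average node power draw for a given active duty cycle."""
    return duty_cycle * p_active_mw + (1.0 - duty_cycle) * p_sleep_mw

def lifetime_hours(battery_mwh, duty_cycle, p_active_mw, p_sleep_mw):
    """Expected node lifetime on a battery of the given capacity."""
    return battery_mwh / avg_power_mw(duty_cycle, p_active_mw, p_sleep_mw)

# Assumed figures: 60 mW active radio, 0.03 mW sleep, 0.7% duty cycle,
# and roughly 7500 mWh of battery (about two AA cells)
p_avg = avg_power_mw(0.007, 60.0, 0.03)
hours = lifetime_hours(7500.0, 0.007, 60.0, 0.03)
```

Under these assumed figures the average draw is under half a milliwatt, which is why such a low duty cycle stretches lifetime to the scale of years.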
Permutation of Pixels within the Shares of Visual Cryptography using KBRP for... (IDES Editor)
The security of authentication for internet-based
co-banking services should not be exposed to high risk.
Passwords are highly vulnerable to virus attacks owing to the
lack of strongly embedded security methods. To make
passwords more secure, people are generally compelled to
select jumbled character-based passwords, which are not only
less memorable but equally prone to insecurity. The use of
multiple distributed shares has been studied to solve the
authentication problem, with algorithms based on pixel
thresholding from image processing and visual cryptography,
where a subset of the shares is used to recover the original
image for authentication via a correlation function [1][2]. The
main disadvantage of that approach is the plain storage of the
shares; moreover, one of the shares is supplied to the customer,
opening the possibility of misuse by a third party. This paper
proposes a technique for scrambling the pixels within the
shares by a key-based random permutation (KBRP) before
authentication is attempted. The total number of shares to be
created depends on the multiplicity of ownership of the
account. This method reduces the customers' uncertainty
regarding the security, storage, and retrieval of their half of
the shares.
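A key-based random permutation can be sketched by deriving a deterministic seed from the key and shuffling the pixel indices; the same key then inverts the scramble. Deriving the seed with SHA-256 and using Python's seeded shuffle is an assumption made here for illustration, not necessarily the paper's KBRP construction.

```python
import hashlib
import random

def kbrp(key: str, n: int) -> list:
    """Key-Based Random Permutation of n pixel positions: the key
    deterministically seeds the shuffle, so the same key inverts it."""
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest(), "big")
    perm = list(range(n))
    random.Random(seed).shuffle(perm)
    return perm

def scramble(pixels, perm):
    """Reorder share pixels according to the permutation."""
    return [pixels[p] for p in perm]

def unscramble(scrambled, perm):
    """Invert the permutation to recover the original share."""
    out = [None] * len(scrambled)
    for i, p in enumerate(perm):
        out[p] = scrambled[i]
    return out

share = list(range(16))            # a toy 4x4 share, flattened
perm = kbrp("account-key", 16)
scrambled = scramble(share, perm)
```

A stored share scrambled this way is useless to a thief without the key, which addresses the plain-storage weakness the paper identifies.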
This paper presents a trifocal Rotman lens design
approach. The effects of the focal ratio and element spacing on
the performance of the Rotman lens are described. A three-beam
prototype feeding a 4-element antenna array working in L-band
has been simulated using the RLD v1.7 software. Simulation
results show that the lens has a return loss of -12.4 dB at
1.8 GHz. The variation of the beam-to-array-port phase error
with changes in the focal ratio and element spacing has also
been investigated.
Band Clustering for the Lossless Compression of AVIRIS Hyperspectral Images (IDES Editor)
Hyperspectral images can be efficiently compressed
with a linear predictive model, such as the one used in the
SLSQ algorithm. In this paper we exploit this predictive model
on the AVIRIS images by identifying, through an off-line
approach, a common subset of bands that are not spectrally
correlated with any other bands. These bands are not useful as
prediction references for the SLSQ 3-D predictive model, and
we need to encode them via other prediction strategies that
consider only spatial correlation. We obtained this subset by
clustering the AVIRIS bands via the clustering-by-compression
approach. The main result of this paper is the list of bands, for
the AVIRIS images, that are unrelated to the others. The
clustering trees obtained for AVIRIS, and the relationships
among bands they depict, are also an interesting starting point
for future research.
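Clustering by compression typically relies on the Normalized Compression Distance (NCD), which can be computed with any off-the-shelf compressor such as zlib. A minimal sketch with synthetic stand-ins for spectral bands (real AVIRIS bands would be the raw band buffers):

```python
import hashlib
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: near 0 for strongly related
    sequences, near 1 for unrelated ones."""
    c = lambda s: len(zlib.compress(s, 9))
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Synthetic stand-ins: band_b is a lightly perturbed copy of band_a,
# while band_c is incompressible pseudo-noise (unrelated to both)
band_a = bytes(i % 7 for i in range(4096))
band_b = bytes(bytearray(band_a[:100]) + bytes([band_a[100] ^ 0x05]) + band_a[101:])
band_c = b"".join(hashlib.sha256(bytes([k])).digest() for k in range(128))
```

Bands whose NCD to every other band stays high are exactly the "unrelated" bands the paper routes to spatial-only prediction; the pairwise NCD matrix is what the clustering tree is built from.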
Microelectronic Circuit Analogous to Hydrogen Bonding Network in Active Site ... (IDES Editor)
A microelectronic circuit of block-elements
functionally analogous to two hydrogen bonding networks is
investigated. The hydrogen bonding networks are extracted
from the β-lactamase protein and are formed in its active site.
Each hydrogen bond of the network is described in the
equivalent electrical circuit by a three- or four-terminal
block-element, and each block-element is coded in Matlab.
Static and dynamic analyses are performed. The resulting
microelectronic circuit analogous to the hydrogen bonding
network operates as a current mirror, a sine pulse source, a
triangular pulse source, and a signal modulator.
Texture Unit based Monocular Real-world Scene Classification using SOM and KN... (IDES Editor)
In this paper a method is proposed to discriminate
real-world scenes into natural and man-made scenes of similar
depth. The global roughness of a scene image varies as a
function of image depth: increasing depth leads to increasing
roughness in man-made scenes, whereas natural scenes exhibit
smooth behavior at greater image depth. This arrangement of
pixels in the scene structure is well captured by the local
texture information at each pixel and its neighborhood. Our
proposed method analyzes the local texture information of a
scene image using a texture unit matrix. For the final
classification we use both supervised and unsupervised
learning, with a K-Nearest Neighbor (KNN) classifier and a
Self-Organizing Map (SOM) respectively. The technique's very
low computational complexity makes it suitable for online
classification.
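The texture unit of a 3x3 neighbourhood, in the sense of He and Wang's texture spectrum (a common basis for texture unit matrices, assumed here to match the paper's usage), codes each of the eight neighbours as 0, 1, or 2 against the centre pixel and weights them in base 3, giving values in 0..6560:

```python
def texture_unit(patch):
    """Texture Unit number of a 3x3 patch: each of the 8 neighbours is
    coded 0 (less), 1 (equal), or 2 (greater) relative to the centre
    pixel, then base-3 weighted into a single value in 0..6560."""
    center = patch[1][1]
    # clockwise neighbour order starting at top-left (an assumed convention)
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    ntu = 0
    for i, (r, c) in enumerate(coords):
        v = patch[r][c]
        e = 0 if v < center else (1 if v == center else 2)
        ntu += e * (3 ** i)
    return ntu
```

The histogram of these values over all pixels is the texture unit matrix the classifiers (KNN and SOM) consume; smooth regions pile up near the all-equal code, rough ones spread across the spectrum.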
Generating privacy-protected synthetic data using Secludy and Milvus (Zilliz)
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers (akankshawande)
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptx (SitimaJohn)
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Monitoring and Managing Anomaly Detection on OpenShift.pdf (Tosin Akinosho)
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Skybuffer SAM4U tool for SAP license adoption (Tatiana Kojar)
Manage and optimize your license adoption and consumption with SAM4U, a free SAP software asset management tool for customers.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
5th LF Energy Power Grid Model Meet-up Slides (DanBrown980551)
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Best 20 SEO Techniques To Improve Website Visibility In SERP (Pixlogix Infotech)
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
How to Get CNIC Information System with Paksim Ga.pptx (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Building Production Ready Search Pipelines with Spark and Milvus (Zilliz)
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.