With the explosive growth of international users, distributed information, and linguistic resources accessible through the World Wide Web, information retrieval has become crucial for users to find, retrieve, and understand relevant information in any language and form. Cross-Language Information Retrieval (CLIR) is a subfield of Information Retrieval in which a query in one language is used to search document collections in one or more other languages; the term also covers the specific case of cross-language retrieval in which the document collection itself is multilingual. In the present research, we focus on query translation, disambiguation among multiple translation candidates, and query expansion in various combinations, in order to improve retrieval effectiveness. Terms that emphasize query concepts are extracted, selected, and added using expansion techniques such as pseudo-relevance feedback, domain-based feedback, and thesaurus-based expansion. This paper presents a method for information retrieval for a query expressed in a native language; it uses insights from data mining and intelligent search to formulate the query and parse the results.
This document summarizes a research paper on cross language text retrieval. It discusses the different types of translations that can occur in cross language retrieval, including query translation, document translation, and various approaches like machine translation, dictionary-based, and corpus-based. It also outlines some common ranking methods used in information retrieval for cross language documents, such as Okapi BM25, language modeling, and TF-IDF. The paper provides an overview of the key components and challenges of cross language text retrieval systems.
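As a sketch of one of the ranking methods named above, here is a minimal Okapi BM25 scorer. The toy corpus, query, and parameter values are illustrative assumptions, not material from the surveyed paper:

```python
import math

# Toy corpus: three tokenized "documents" (illustrative assumptions).
docs = {
    "d1": "cross language information retrieval with query translation".split(),
    "d2": "machine translation of documents for retrieval".split(),
    "d3": "language modeling approaches to information retrieval".split(),
}

N = len(docs)
avgdl = sum(len(d) for d in docs.values()) / N  # average document length

def bm25(query, doc, k1=1.5, b=0.75):
    """Okapi BM25 score of one document for a bag-of-words query."""
    score = 0.0
    for term in query:
        df = sum(1 for d in docs.values() if term in d)  # document frequency
        if df == 0:
            continue
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        tf = doc.count(term)
        # Term frequency saturation (k1) and length normalization (b).
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

query = "information retrieval".split()
ranked = sorted(docs, key=lambda d: bm25(query, docs[d]), reverse=True)
print(ranked)
```

The two documents containing both query terms outrank the one containing only "retrieval"; shorter documents score slightly higher via the length-normalization term.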
A NOVEL APPROACH OF CLASSIFICATION TECHNIQUES FOR CLIR (cscpconf)
Recent and continuing advances in online information systems are creating many opportunities and also new problems in information retrieval. Gathering information in different natural languages is a most difficult task, which often requires huge resources. Cross-language information retrieval (CLIR) is the retrieval of information for a query written in the user's native language. This paper deals with various classification techniques that can be used for solving the problems encountered in CLIR.
MODIFIED PAGE RANK ALGORITHM TO SOLVE AMBIGUITY OF POLYSEMOUS WORDS (IJCI JOURNAL)
The document proposes a Dynamic Page Rank algorithm to address the problem of polysemy, or multiple meanings, of words in information retrieval systems. It discusses how word sense ambiguity negatively impacts retrieval precision. The Dynamic Page Rank algorithm extends the traditional PageRank algorithm by incorporating word sense disambiguation to provide more accurate results tailored to the user's intended context. An experiment compares the proposed algorithm to PageRank and finds that it achieves a mean reciprocal rank of 1, indicating all top results were relevant, compared to 0.3167 for PageRank. The algorithm is presented as a way to improve information retrieval performance by resolving lexical ambiguity.
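To make the baseline concrete, here is a minimal PageRank power iteration on a toy link graph. The graph and damping factor are illustrative assumptions; the paper's Dynamic Page Rank additionally folds word sense disambiguation into this ranking step:

```python
# Minimal PageRank via power iteration over an out-link adjacency dict.
def pagerank(links, d=0.85, iters=50):
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}  # uniform initial rank
    for _ in range(iters):
        new = {}
        for p in pages:
            # Rank mass flowing into p from every page q that links to it.
            inbound = sum(pr[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - d) / n + d * inbound
        pr = new
    return pr

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
scores = pagerank(links)
print(max(scores, key=scores.get))  # C receives the most link mass
```

Since every page here has out-links inside the graph, total rank mass stays at 1 across iterations.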
The document proposes a system to identify six scripts including Arabic, Chinese, Cyrillic, Devnagari, Japanese, and Roman. It extracts 11 spatial and temporal features from word strokes and achieves 87.1% accuracy at the word level using 5-fold cross validation. Accuracy improves to 95% for 5-word samples and 95.5% for full text lines containing an average of 7 words. The system allows analyzing individual strokes and uses spatial and temporal information to identify scripts.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Imran Sarwar Bajwa, M. Abbas Choudhary (2006), "A Rule Based System for Speech Language Context Understanding", International Journal of Donghua University (English Edition), Vol. 23, No. 6, Jun 2006, pp. 39-42.
A Simple Information Retrieval Technique (idescitation)
The document presents a simple information retrieval technique that involves removing stop words and punctuation from documents, calculating term frequency and inverse document frequency, constructing a master document matrix, and ranking documents based on similarity to user queries. The technique is demonstrated on a sample collection of 5 documents. For a query on "information retrieval system", the documents are ranked from most similar to least similar as document 5, document 1, document 2, document 3, document 4. The technique provides an easy way to search and retrieve relevant documents from a collection.
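The pipeline described above can be sketched end to end. The five documents below are illustrative stand-ins for the paper's sample collection, chosen only so the described behaviour is reproducible:

```python
import math

# Sketch of the described pipeline: stop-word removal, tf-idf weighting,
# a term-document matrix, and cosine ranking against a query.
STOP = {"a", "an", "the", "is", "of", "for", "and", "on"}

def tokens(text):
    return [w for w in text.lower().split() if w not in STOP]

docs = [
    "an information retrieval system for text",
    "database systems and query processing",
    "web search and ranking",
    "image processing techniques",
    "retrieval of information on the information retrieval system",
]
toks = [tokens(d) for d in docs]
N = len(docs)

def tfidf(doc):
    """Sparse tf-idf vector for one tokenized document (or query)."""
    vec = {}
    for t in set(doc):
        df = sum(1 for d in toks if t in d)
        vec[t] = doc.count(t) * math.log(N / df)
    return vec

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

query = tfidf(tokens("information retrieval system"))
ranking = sorted(range(N), key=lambda i: cosine(query, tfidf(toks[i])), reverse=True)
print([i + 1 for i in ranking])  # → [5, 1, 2, 3, 4]
```

Documents sharing no query terms score zero and keep their original order, since Python's sort is stable.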
This document provides a survey of semantic web personalization techniques. It begins by defining semantic web personalization and its advantages over traditional web personalization. It then classifies semantic web personalization approaches into several categories, including ontology-based, context-based, and hybrid recommendation systems. For each category, it provides examples of approaches and compares their methods and steps for personalization. The goal of the survey is to analyze and compare different techniques used for personalization in the semantic web.
An unsupervised approach to develop IR system: the case of Urdu (ijaia)
Web search engines are among the best gifts of Information and Communication Technologies to mankind. Without search engines it would be almost impossible to access the information available on the web efficiently; they play a vital role in the accessibility and usability of internet-based information systems. As the number of internet users increases day by day, so does the amount of information available on the web. But access to information is not uniform across all language communities. Besides English and the European languages, which constitute about 60% of the information available on the web, a wide range of information is available on the internet in other languages too. In the past few years the amount of information available in Indian languages has also increased. Yet outside English and a few European languages, tools and techniques for the efficient retrieval of this information are scarce; for the Indian languages in particular, research is still at a preliminary stage.
Indian languages are very resource-poor in terms of IR test data collections, so our main focus was on developing a data set for Urdu IR, along with training and testing data for a stemmer.
We have developed a language-independent system to facilitate efficient retrieval of information available in the Urdu language, which can be used for other languages as well. The system gives a precision of 0.63 and a recall of 0.8. To this end, we first developed an unsupervised stemmer for the Urdu language [1], as stemming is very important in information retrieval.
Robust Text Watermarking Technique for Authorship Protection of Hindi Languag... (CSCJournals)
Digital text documents have become a significantly important part of the Internet, and a large number of users are attracted to this digital form of text. But some security threats arise concurrently. Digital libraries offer effective ways to access educational materials, government e-documents, financial documents, social media contents and many others. However, content authorship and tamper detection for all these digital text documents require special attention. Till now, very few digital watermarking techniques exist for text documents. In this paper, we propose a method for effective watermarking of Hindi-language text documents. Hindi stands second among all languages across the world and has widespread availability of digital content of various types. In the proposed technique, the watermark is logically embedded in the text using 'swar' (vowel) as a special feature of the Hindi language, supported by suitable encryption. In the extraction phase, the Certificate Authority (CA) plays an important role in the authorship protection process as a trusted third party: the text is decrypted and the watermark is extracted to prove genuine authorship. Our technique has been tested against various types of feasible text attacks with different embedding frequencies.
This document proposes a method for classifying Tamil web documents using neural networks with dimension reduction. It involves using a genetic algorithm to reduce the dimensionality of documents by selecting important keywords. Then, a neural network trained on predefined labels from the English domain would classify the Tamil documents. The method crawls Tamil news articles to build a corpus, applies genetic algorithm for dimension reduction, and uses backpropagation neural networks for classification. Future work may explore using alternative neural network techniques like winnow/perceptron without hidden layers.
hExarAbax makkAmasjix samayaM anni rojulu 5:00 am - 9:00 pm
(Mecca Masjid timings in Hyderabad - All days 5:00 am - 9:00 pm)
User query: makkAmasjix PIju eVMwa?
(What is the fee for Mecca Masjid?)
POS-tagger: makkAmasjix PIju/WQ eVMwa
Replace with root word: makkAmasjix PIju/WQ eMwa
Context Handler: Updates context to 'makkAmasjix'
Advanced Filter: Keywords - makkAmas
SCRIPTS AND NUMERALS IDENTIFICATION FROM PRINTED MULTILINGUAL DOCUMENT IMAGES (cscpconf)
This document presents a technique for identifying scripts (Tamil, English, Hindi) and numerals from multilingual document images using a rule-based classifier. Words are segmented and the first character of each word is represented as a 9-bit vector based on features like density, shape, and transitions. A rule-based classifier containing rules derived from training data is used to classify the script of each character. The technique aims to automatically categorize multilingual documents before applying optical character recognition and requires minimal preprocessing with high accuracy.
Speech Recognition Application for the Speech Impaired using the Android-base... (TELKOMNIKA JOURNAL)
Those who are speech impaired (tunawicara in Indonesian) have abnormalities in their delivery (articulation) of language as well as in their voice compared with normal speech, resulting in difficulty communicating verbally within their environment. Therefore, an application is required that can help and facilitate conversations for communication. In this research, the authors have developed a speech recognition application that can recognise the speech of the speech impaired and translate it into text form, with input in the form of sound detected on a smartphone. The Google Cloud Speech Application Programming Interface (API) is used to convert audio to text, and such APIs are also user-friendly; the Google Cloud Speech API integrates with Google Cloud Storage for data storage. Although speech-to-text recognition has been widely researched, this research tries to develop speech recognition specifically for the speech of the speech impaired, as well as to perform a likelihood calculation to see the effect of tone, pronunciation, and speech speed on recognition. The test was conducted by speaking the digits 1 through 10. The experimental results showed that the recognition rate for the speech impaired is about 80%, while the recognition rate for normal speech is 100%.
A language independent approach to develop Urdu IR system (csandit)
This is the era of Information Technology. Today the most important thing is how one gets the right information at the right time. More and more data repositories are now being made available online, and information retrieval systems or search engines are used to access the electronic information available on the internet. These information retrieval systems depend on the available tools and techniques for efficient retrieval of information content in response to the user's query needs. During the last few years, a wide range of information in Indian regional languages like Hindi, Urdu, Bengali, Oriya, Tamil and Telugu has been made available on the web in the form of e-data. But access to these data repositories is very low because efficient search engines and retrieval systems supporting these languages are very limited. We have developed a language-independent system to facilitate efficient retrieval of information available in the Urdu language, which can be used for other languages as well. The system gives a precision of 0.63 and a recall of 0.8.
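The precision and recall figures quoted above are the standard set-based IR measures. A small worked sketch, where the document counts are illustrative assumptions chosen only to reproduce the reported ratios:

```python
# Precision = |retrieved ∩ relevant| / |retrieved|
# Recall    = |retrieved ∩ relevant| / |relevant|
def precision_recall(retrieved, relevant):
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

retrieved = set(range(127))                                # 127 docs returned
relevant = set(range(80)) | {1000 + i for i in range(20)}  # 100 relevant docs
p, r = precision_recall(retrieved, relevant)
print(round(p, 2), r)  # → 0.63 0.8
```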
International Journal of Engineering Research and Development (IJERD Editor)
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
Milind Honrao is seeking a co-op or full-time position in computer science. He has an MS in Computer Science from UT Dallas with a 3.82 GPA and a bachelor's degree in Computer Engineering from India. He has skills in programming languages like C++, Java, and databases like Oracle. His experience includes contractor work developing tools for Oracle and research assisting in question answering. He has also taught computer courses and completed projects in areas like coreference resolution, text categorization, and distributed systems.
This document summarizes a research article that proposes using a convolutional neural network (CNN) to detect malware in PDF files. The researchers collected benign and malicious PDF files, extracted byte sequences, and manually labeled the data. They designed a CNN model to interpret patterns in the byte sequence data and predict whether a file contains malware. Their experimental results showed that the proposed CNN model outperformed other machine learning models in malware detection of PDF files.
IRJET - Querying Database using Natural Language Interface (IRJET Journal)
This document presents a proposed natural language interface system to allow users to query a database using English queries instead of SQL. The system aims to make database access easier for non-technical users. It discusses the architecture of the system, which includes modules for natural language processing, query translation to SQL, and speech conversion. It also reviews related work and discusses advantages and disadvantages of natural language interfaces for databases. The proposed system uses techniques like tokenization, parsing, and semantic analysis to understand queries and map them to equivalent SQL queries to retrieve results from the database.
NAMED ENTITY RECOGNITION FROM BENGALI NEWSPAPER DATA (ijnlc)
Due to the dramatic growth of internet use, the amount of unstructured Bengali text data has increased enormously, so it is essential to extract events from it intelligently. Progress in natural language processing (NLP) technologies for information extraction makes it possible to locate and classify content in news data according to predefined categories such as person name, place name, organization name, date, time, etc. Named entity recognition (NER), a subtask of NLP, plays a vital role in achieving human-level performance on specific documents such as newspapers by effectively identifying entities. The purpose of this research is to introduce an NER system for Bengali news data that identifies specified entities in running text based on regular expressions and Bengali grammar. In doing so, I have designed and evaluated part-of-speech (POS) tags to recognize proper nouns. In this thesis, I explain a Hidden Markov Model (HMM) based approach for developing an NER system from Bengali news data.
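Decoding the most probable tag sequence under an HMM, as in the approach above, is usually done with the Viterbi algorithm. A minimal sketch follows; the states, vocabulary, and probabilities are illustrative assumptions, not values from the thesis:

```python
# Minimal Viterbi decoder for a two-state HMM tagger (PER = person, O = other).
states = ["PER", "O"]
start = {"PER": 0.3, "O": 0.7}
trans = {"PER": {"PER": 0.4, "O": 0.6}, "O": {"PER": 0.2, "O": 0.8}}
emit = {
    "PER": {"rahim": 0.5, "said": 0.0, "dhaka": 0.5},
    "O":   {"rahim": 0.1, "said": 0.8, "dhaka": 0.1},
}

def viterbi(words):
    """Return the most probable tag sequence for `words`."""
    # Column of path probabilities for the first word.
    V = [{s: start[s] * emit[s].get(words[0], 1e-6) for s in states}]
    back = []  # backpointers per position
    for w in words[1:]:
        col, ptr = {}, {}
        for s in states:
            prev, p = max(
                ((q, V[-1][q] * trans[q][s]) for q in states), key=lambda x: x[1]
            )
            col[s] = p * emit[s].get(w, 1e-6)
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    # Trace the best path backwards.
    best = max(states, key=lambda s: V[-1][s])
    path = [best]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["rahim", "said"]))  # → ['PER', 'O']
```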
Electronic mail is the most widely used and convenient method of transferring messages electronically from one person to another, to and from any part of the world. The main features of electronic mail are its speed, dependability, well-equipped storage options, and a large number of added services, which make it highly popular among people from all sectors of business and society. But with its popularity comes a negative side too: electronic mails are a preferred medium for a large number of attacks over the internet, among the most popular of which is spam. Some methods exist for the detection of spam-related mails, but they have high false-positive rates. A number of filters, such as checksum-based filters, Bayesian filters, machine-learning-based filters, and memory-based filters, are commonly used to recognize spam. As spammers constantly try to find ways to evade existing filters, new filters need to be developed to catch spam. This paper proposes a resourceful spam mail filtering method using a user-profile-based ontology. Ontologies permit machine-understandable semantics of data, and exchanging such information between systems enables more efficient spam filtering. Thus, it is essential to build an ontology and a framework for capable email filtering. Using an ontology specifically designed to filter spam, bunches of useless bulk email can be filtered out on the system. We propose a user-profile-based spam filter that classifies email based on the likelihood that the user-profile terms within it have appeared in spam or in valid email.
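Of the filter families listed above, the Bayesian one is easy to sketch. The tiny training mails below are made-up illustrations; the paper's actual method is ontology- and user-profile-based, not this classifier:

```python
import math

# Naive Bayes spam filtering on word counts with Laplace smoothing.
spam = ["win money now", "free money offer", "win free prize"]
ham = ["meeting notes attached", "project status update", "lunch plans today"]

def train(mails):
    counts = {}
    for m in mails:
        for w in m.split():
            counts[w] = counts.get(w, 0) + 1
    return counts

sc, hc = train(spam), train(ham)
vocab = set(sc) | set(hc)

def log_odds(mail):
    """log P(words|spam) - log P(words|ham); equal class priors cancel."""
    s = h = 0.0
    for w in mail.split():
        s += math.log((sc.get(w, 0) + 1) / (sum(sc.values()) + len(vocab)))
        h += math.log((hc.get(w, 0) + 1) / (sum(hc.values()) + len(vocab)))
    return s - h  # positive → classify as spam

print(log_odds("free money") > 0)     # classified as spam
print(log_odds("meeting today") > 0)  # classified as ham, so False
```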
This document provides an overview of word sense disambiguation (WSD). It discusses how ambiguity is a challenge for natural language processing by computers. WSD is the process of identifying the correct sense of a word in a particular context. The document reviews different approaches to WSD, including knowledge-based, supervised learning, and unsupervised learning. It also discusses applications of WSD and concludes that WSD remains a difficult problem due to the complexity of natural languages.
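Among the knowledge-based approaches the survey covers, the simplified Lesk algorithm is compact enough to sketch. The two-sense mini-dictionary here is an illustrative assumption, not a real lexical resource:

```python
# Simplified Lesk: pick the sense whose dictionary gloss shares the most
# words with the ambiguous word's context.
senses = {
    "bank_1": "sloping land beside a body of water river shore",
    "bank_2": "financial institution that accepts deposits money loans",
}

def lesk(context):
    """Return the sense key with the largest gloss/context word overlap."""
    ctx = set(context.lower().split())
    return max(senses, key=lambda s: len(ctx & set(senses[s].split())))

print(lesk("he sat on the bank of the river"))  # → bank_1
print(lesk("she deposited money at the bank"))  # → bank_2
```

Real implementations use stemming and stop-word removal so that, e.g., "deposited" can match the gloss word "deposits"; this sketch relies on exact overlaps only.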
A NOVEL APPROACH FOR WORD RETRIEVAL FROM DEVANAGARI DOCUMENT IMAGES (ijnlc)
A large amount of information lies dormant in historical documents and manuscripts. This information would go to waste if not stored in digital form. Searching for relevant information in these scanned images would ideally require converting the document images to text by optical character recognition (OCR). For the indigenous scripts of India, there are very few OCRs that can successfully recognize printed text images of varying quality, size, style, and font. An alternative approach using word spotting can be effective for accessing large collections of document images. We propose a word spotting technique based on codes for matching word images of the Devanagari script. Shape information is used to generate integer codes for the words in a document image, and these codes are matched for the final retrieval of relevant documents. The technique is illustrated using Marathi document images.
A decision tree based word sense disambiguation system in Manipuri language (acijjournal)
This paper presents a first attempt at building a word sense disambiguation system for the Manipuri language. It discusses related attempts made for Manipuri, followed by the proposed plan. A database consisting of 650 sentences was collected in the Manipuri language in the course of the study. Conventional positional and context-based features are suggested to capture the sense of words that are ambiguous or have multiple senses. The proposed work is expected to predict the senses of polysemous words with high accuracy with the help of suitable knowledge acquisition techniques. The system produces an accuracy of 71.75%.
International Journal of Engineering Research and Development (IJERD), IJERD Editor
This document discusses font and size identification in Telugu printed documents. It provides background on the Telugu script, which contains a large number of compound characters formed from combinations of vowels and consonants. The document then discusses the need for font and size identification as a preprocessing step for optical character recognition (OCR) systems to improve accuracy. It presents an approach using zonal analysis and connected component analysis to extract features from text images like aspect ratio and pixel ratio to identify the font and size by comparing to a database. Results showed this approach could accurately identify different fonts and sizes in Telugu text images.
Performance analysis on secured data method in natural language steganography (journalBEEI)
The rapid growth in exchanged information that drove the expansion of the internet during the last decade has motivated research in this field, and steganography approaches have recently received unexpected attention. Hence, the aim of this paper is to review different performance metrics, covering the decoding, decrypting, and extracting performance metrics. Data decoding interprets the received hidden message into a code word. Data encryption is the best way to provide secure communication, and decrypting takes an encrypted text and converts it back into the original text. Data extracting is the reverse of the data embedding process. The success of an evaluation is mainly determined by the performance analysis aspect, and the intention of researchers is to improve performance metric characteristics. The objective of this paper is therefore to present a review of the study of steganography in natural language based on the criteria of performance analysis. The findings of the review clarify the preferred performance metric aspects in use. This review is hoped to help future research in evaluating the performance analysis of natural language in general, and of the proposed secured-data methods in natural language steganography in particular.
USER AUTHENTICATION USING NATIVE LANGUAGE PASSWORDS (IJNSA Journal)
Information security is necessary for any organization. Intrusion prevention is the basic level of security which requires user authentication. User can be authenticated to a machine by passwords. Traditional textual passwords are vulnerable to many attacks. Graphical passwords are introduced as alternatives to textual passwords to overcome these problems. This paper introduces native language passwords for authentication. Native language character set consists of characters with single or multiple strokes. User can select one (or more) character(s) for his password. The shape and strokes of the characters are used for authentication.
MULTILINGUAL INFORMATION RETRIEVAL BASED ON KNOWLEDGE CREATION TECHNIQUES (ijcseit)
As information access across languages increases, the importance of a system that supports query-based searching over multilingual content also grows. Gathering information in different natural languages is a most difficult task, which requires huge resources such as databases and digital libraries. Cross-language information retrieval (CLIR) enables searching in multilingual document collections using the native language, and can be supported by different data mining techniques. This paper deals with various data mining techniques that can be used for solving the problems encountered in CLIR.
An information retrieval (IR) system aims to retrieve documents relevant to a user query, where the query is a set of keywords. Cross-language information retrieval (CLIR) is a retrieval process in which the user issues queries in one language to retrieve information in another language. The growing requirement on the Internet for users to access information expressed in languages other than their own has led to Cross-Language Information Retrieval (CLIR) becoming established as a major topic in IR.
An unsupervised approach to develop ir system the case of urduijaia
Web Search Engines are best gifts to the mankind by Information and Communication Technologies.
Without the search engines it would have been almost impossible to make the efficient access of the
information available on the web today. They play a very vital role in the accessibility and usability of the
internet based information systems. As the internet users are increasing day by day so is the amount of
information being available on web increasing. But the access of information is not uniform across all the
language communities. Besides English and European languages that constitutes to the 60% of the
information available on the web, there is still a wide range of the information available on the internet in
different languages too. In the past few years the amount of information available in Indian Languages
has also increased. Besides English and few European Languages, there are no tools and techniques
available for the efficient retrieval of this information available on the internet. Especially in the case of
the Indian Languages the research is still in the preliminary steps. There are no sufficient amount of tools
and techniques available for the efficient retrieval of the information for Indian Languages.
As we know that Indian Languages are very resource poor languages in terms of IR test data collection.
So my main focus was mainly on developing the data set for URDU IR, training and testing data for
Stemmer.
We have developed a language-independent system to facilitate efficient retrieval of information available in the Urdu language, which can be used for other languages as well. The system gives a precision of 0.63 and a recall of 0.8. As a first step, an unsupervised stemmer for the Urdu language [1] was developed, stemming being very important in information retrieval.
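The precision and recall figures reported above can be computed from sets of retrieved and relevant documents. A minimal sketch follows; the document ids and counts are toy values, not the Urdu IR data set:

```python
def precision_recall(retrieved, relevant):
    """Compute set-based precision and recall for one query."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Toy run: 8 documents retrieved, 10 relevant overall, 5 in common.
p, r = precision_recall(range(8), [0, 1, 2, 3, 4, 20, 21, 22, 23, 24])
```

A system-level score such as the 0.63/0.8 reported above would average these values over a set of evaluation queries.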
Robust Text Watermarking Technique for Authorship Protection of Hindi Languag... (CSCJournals)
Digital text documents have become a significantly important part of the Internet, and a large number of users are attracted to this digital form of text. But some security threats arise concurrently. Digital libraries offer effective ways to access educational materials, government e-documents, financial documents, social media content and much more; however, content authorship and tamper detection for all these digital text documents require special attention. Until now, very few digital watermarking techniques have existed for text documents. In this paper, we propose a method for effective watermarking of Hindi-language text documents. Hindi stands second among all languages across the world and has widespread availability of digital content of various types. In the proposed technique, the watermark is logically embedded in the text using the 'swar' (vowel) as a special feature of the Hindi language, supported by suitable encryption. In the extraction phase, a Certificate Authority (CA) plays an important role in the authorship protection process as a trusted third party: the text is decrypted and the watermark is extracted to prove genuine authorship. Our technique has been tested against various feasible text attacks with different embedding frequencies.
This document proposes a method for classifying Tamil web documents using neural networks with dimension reduction. It involves using a genetic algorithm to reduce the dimensionality of documents by selecting important keywords. Then, a neural network trained on predefined labels from the English domain would classify the Tamil documents. The method crawls Tamil news articles to build a corpus, applies genetic algorithm for dimension reduction, and uses backpropagation neural networks for classification. Future work may explore using alternative neural network techniques like winnow/perceptron without hidden layers.
hExarAbax makkAmasjix samayaM anni rojulu 5:00 am - 9:00 pm
(Mecca Masjid timings in Hyderabad - All days 5:00 am - 9:00 pm)
User query: makkAmasjix PIju eVMwa?
(What is the fee for Mecca Masjid?)
POS-tagger: makkAmasjix PIju/WQ eVMwa
Replace with root word: makkAmasjix PIju/WQ eMwa
Context Handler: Updates context to 'makkAmasjix'
Advanced Filter: Keywords - makkAmas
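The query-processing steps shown above can be sketched in code. This is a hypothetical toy pipeline, not the system's implementation; the token-to-root mapping and the WQ tag are taken only from the single transliterated example shown:

```python
ROOTS = {"eVMwa": "eMwa"}        # inflected form -> root word (from the example)
QUESTION_TAGS = {"eMwa": "WQ"}   # question words receive the WQ tag

def process_query(tokens, context):
    """POS-tag a query, replace inflected forms with roots,
    update the conversational context, and filter keywords."""
    tagged = []
    for tok in tokens:
        tok = ROOTS.get(tok, tok)              # root-word replacement
        tag = QUESTION_TAGS.get(tok, "N")      # toy POS tagger: default noun
        tagged.append((tok, tag))
    nouns = [t for t, tag in tagged if tag == "N"]
    if nouns:
        context["topic"] = nouns[0]            # context handler keeps the entity
    keywords = [t for t, tag in tagged if tag != "WQ"]  # advanced filter
    return tagged, keywords, context

ctx = {}
tagged, keywords, ctx = process_query(["makkAmasjix", "PIju", "eVMwa"], ctx)
```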
SCRIPTS AND NUMERALS IDENTIFICATION FROM PRINTED MULTILINGUAL DOCUMENT IMAGES (cscpconf)
This document presents a technique for identifying scripts (Tamil, English, Hindi) and numerals from multilingual document images using a rule-based classifier. Words are segmented and the first character of each word is represented as a 9-bit vector based on features like density, shape, and transitions. A rule-based classifier containing rules derived from training data is used to classify the script of each character. The technique aims to automatically categorize multilingual documents before applying optical character recognition and requires minimal preprocessing with high accuracy.
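The rule-based classification step described above can be sketched as a lookup from a character's 9-bit feature vector to a script label. The feature vectors and rules below are invented placeholders; in the paper the rules are derived from training data on density, shape and transition features:

```python
RULES = {
    # hypothetical 9-bit feature vector -> script label
    (1, 0, 1, 1, 0, 0, 1, 0, 1): "Tamil",
    (0, 1, 0, 0, 1, 1, 0, 1, 0): "English",
    (1, 1, 1, 0, 0, 1, 0, 0, 1): "Hindi",
}

def classify_script(feature_vector, rules=RULES):
    """Return the script whose rule matches the character's 9-bit vector."""
    return rules.get(tuple(feature_vector), "unknown")
```

Each segmented word would be classified by the vector of its first character; unmatched vectors fall through to "unknown".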
Speech Recognition Application for the Speech Impaired using the Android-base... (TELKOMNIKA JOURNAL)
The speech impaired (tunawicara in Indonesian) have abnormalities in their articulation of language, as well as in their voice relative to normal speech, which makes verbal communication within their environment difficult. An application is therefore needed that can help and facilitate conversation. In this research, the authors have developed a speech recognition application that can recognise the speech of the speech impaired and translate it into text, with input in the form of sound detected on a smartphone. The Google Cloud Speech Application Programming Interface (API) is used to convert audio to text, and such APIs are also user-friendly; the Google Cloud Speech API integrates with Google Cloud Storage for data storage. Although speech-to-text recognition has been widely studied, this research develops recognition specifically for the speech of the speech impaired, and performs a likelihood calculation to examine the effect of tone, pronunciation and speaking speed on recognition. The test was conducted by speaking the digits 1 through 10. The experimental results showed a recognition rate of about 80% for the speech impaired, compared with 100% for normal speech.
A language independent approach to develop Urdu IR system (csandit)
This is the era of Information Technology, and the most important thing today is getting the right information at the right time. More and more data repositories are being made available online, and information retrieval systems, or search engines, are used to access this electronic information. These systems depend on the available tools and techniques for efficiently retrieving content in response to users' queries. During the last few years, a wide range of information in Indian regional languages such as Hindi, Urdu, Bengali, Oriya, Tamil and Telugu has been made available on the web in the form of e-data, but access to these repositories is very low because efficient search engines and retrieval systems supporting these languages are very limited. We have developed a language-independent system to facilitate efficient retrieval of information available in the Urdu language, which can be used for other languages as well. The system gives a precision of 0.63 and a recall of 0.8.
International Journal of Engineering Research and Development (IJERD Editor)
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
Milind Honrao is seeking a co-op or full-time position in computer science. He has an MS in Computer Science from UT Dallas with a 3.82 GPA and a bachelor's degree in Computer Engineering from India. He has skills in programming languages like C++, Java, and databases like Oracle. His experience includes contractor work developing tools for Oracle and research assisting in question answering. He has also taught computer courses and completed projects in areas like coreference resolution, text categorization, and distributed systems.
This document summarizes a research article that proposes using a convolutional neural network (CNN) to detect malware in PDF files. The researchers collected benign and malicious PDF files, extracted byte sequences, and manually labeled the data. They designed a CNN model to interpret patterns in the byte sequence data and predict whether a file contains malware. Their experimental results showed that the proposed CNN model outperformed other machine learning models in malware detection of PDF files.
IRJET- Querying Database using Natural Language Interface (IRJET Journal)
This document presents a proposed natural language interface system to allow users to query a database using English queries instead of SQL. The system aims to make database access easier for non-technical users. It discusses the architecture of the system, which includes modules for natural language processing, query translation to SQL, and speech conversion. It also reviews related work and discusses advantages and disadvantages of natural language interfaces for databases. The proposed system uses techniques like tokenization, parsing, and semantic analysis to understand queries and map them to equivalent SQL queries to retrieve results from the database.
NAMED ENTITY RECOGNITION FROM BENGALI NEWSPAPER DATA (ijnlc)
Due to the dramatic growth of internet use, the amount of unstructured Bengali text data has increased enormously, and it is therefore essential to extract events from it intelligently. Progress in natural language processing (NLP) technologies for information extraction makes it possible to locate and classify content in news data according to predefined categories such as person name, place name, organization name, date, time, etc. Named entity recognition (NER), a subtask of NLP, plays a vital role in achieving human-level performance on specific documents such as newspapers by effectively identifying entities. The purpose of this research is to introduce an NER system for Bengali news data that identifies occurrences of such entities in running text based on regular expressions and Bengali grammar. In doing so, I have designed and evaluated part-of-speech (POS) tags to recognize proper nouns. In this thesis, I explain a Hidden Markov Model (HMM) based approach for developing an NER system from Bengali news data.
Electronic mail is the most widely used and convenient method of transferring messages electronically from one person to another, to and from any part of the world. Its main features, namely speed, dependability, well-equipped storage options and a large number of added services, make it highly popular among people from all sectors of business and society. But its popularity also has a negative side: electronic mail is a preferred medium for a large number of attacks over the internet, the most common of which is spam. Some methods exist for detecting spam-related mail, but they have high false-positive rates. A number of filters, such as checksum-based filters, Bayesian filters, machine-learning-based filters and memory-based filters, are commonly used to recognize spam. As spammers constantly find ways to evade existing filters, new filters need to be developed to catch spam. This paper proposes an efficient spam-mail filtering method using a user-profile-based ontology. Ontologies permit machine-understandable semantics of data, and exchanging such information is key to more efficient spam filtering; it is therefore essential to build an ontology and a framework for capable email filtering. Using an ontology specifically designed to filter spam, bunches of useless bulk email can be filtered out of the system. We propose a user-profile-based spam filter that classifies email based on the likelihood that the user-profile terms within it have appeared in spam or in valid email.
This document provides an overview of word sense disambiguation (WSD). It discusses how ambiguity is a challenge for natural language processing by computers. WSD is the process of identifying the correct sense of a word in a particular context. The document reviews different approaches to WSD, including knowledge-based, supervised learning, and unsupervised learning. It also discusses applications of WSD and concludes that WSD remains a difficult problem due to the complexity of natural languages.
A NOVEL APPROACH FOR WORD RETRIEVAL FROM DEVANAGARI DOCUMENT IMAGES (ijnlc)
A large amount of information lies dormant in historical documents and manuscripts, and this information would be lost if not stored in digital form. Searching for relevant information in these scanned images would ideally require converting the document images to text through optical character recognition (OCR). For the indigenous scripts of India, there are very few OCRs that can successfully recognize printed text images of varying quality, size, style and font. An alternative approach using word spotting can be effective for accessing large collections of document images. We propose a word-spotting technique based on codes for matching word images in the Devanagari script: shape information is used to generate integer codes for the words in a document image, and these codes are matched for the final retrieval of relevant documents. The technique is illustrated using Marathi document images.
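The matching stage of such a word-spotting scheme can be sketched as follows, assuming the integer shape codes have already been extracted from the word images (the extraction itself needs image processing and is not shown; the codes and index below are invented):

```python
def spot_word(query_code, documents):
    """Return the ids of documents whose word codes contain the query's code."""
    return [doc_id for doc_id, codes in documents.items() if query_code in codes]

# Hypothetical index: document id -> set of integer codes of its words.
index = {"doc1": {4127, 5583, 9024}, "doc2": {5583, 7761}, "doc3": {9024}}
hits = spot_word(5583, index)
```

In practice the comparison would be a tolerant similarity between codes rather than exact membership, but the retrieval shape is the same.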
A decision tree based word sense disambiguation system in Manipuri language (acijjournal)
This paper presents a first attempt at building a word sense disambiguation system for the Manipuri language. The paper discusses related attempts made for Manipuri, followed by the proposed plan. A database of 650 Manipuri sentences was collected in the course of the study. Conventional positional and context-based features are used to capture the sense of words that are ambiguous or have multiple senses. The proposed work is expected to predict the senses of polysemous words with high accuracy with the help of suitable knowledge acquisition techniques. The system produces an accuracy of 71.75%.
International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
This document discusses font and size identification in Telugu printed documents. It provides background on the Telugu script, which contains a large number of compound characters formed from combinations of vowels and consonants. The document then discusses the need for font and size identification as a preprocessing step for optical character recognition (OCR) systems to improve accuracy. It presents an approach using zonal analysis and connected component analysis to extract features from text images like aspect ratio and pixel ratio to identify the font and size by comparing to a database. Results showed this approach could accurately identify different fonts and sizes in Telugu text images.
Performance analysis on secured data method in natural language steganography (journalBEEI)
The rapid growth in the amount of information exchanged over the expanding internet during the last decade has motivated research in this field, and steganography approaches have recently received unexpected attention. The aim of this paper is therefore to review different performance metrics, covering decoding, decrypting and extracting. Data decoding interprets a received hidden message into a code word; data encryption is the best way to provide secure communication; decrypting takes an encrypted text and converts it back into the original text; and data extracting is the reverse of the data-embedding process. The effectiveness evaluation is mainly determined by the performance metrics, and researchers aim to improve their characteristics. The objective of this paper is to present a review of studies of natural language steganography against these performance-analysis criteria. The findings clarify which performance-metric aspects are preferred, and the review is intended to help future research evaluate the performance of natural language steganography in general and of proposed secured-data methods in particular.
USER AUTHENTICATION USING NATIVE LANGUAGE PASSWORDS (IJNSA Journal)
Information security is necessary for any organization. Intrusion prevention is the basic level of security which requires user authentication. User can be authenticated to a machine by passwords. Traditional textual passwords are vulnerable to many attacks. Graphical passwords are introduced as alternatives to textual passwords to overcome these problems. This paper introduces native language passwords for authentication. Native language character set consists of characters with single or multiple strokes. User can select one (or more) character(s) for his password. The shape and strokes of the characters are used for authentication.
MULTILINGUAL INFORMATION RETRIEVAL BASED ON KNOWLEDGE CREATION TECHNIQUES (ijcseit)
As information access across languages increases, so does the importance of a system that supports query-based searching over multilingual collections. Gathering information in different natural languages is a very difficult task, requiring huge resources such as databases and digital libraries. Cross language information retrieval (CLIR) enables searching in multilingual document collections using the native language, and can be supported by different data mining techniques. This paper deals with the various data mining techniques that can be used to solve the problems encountered in CLIR.
An information retrieval (IR) system aims to retrieve documents relevant to a user query, where the query is a set of keywords. Cross-language information retrieval (CLIR) is a retrieval process in which the user issues queries in one language to retrieve information in another language. The growing requirement on the Internet for users to access information expressed in a language other than their own has led to CLIR becoming established as a major topic in IR.
The document describes an intelligent query processing system for the Malayalam language. It presents a model for developing such a system, focusing on time inquiries for different transportation modes. The system performs shallow syntactic and semantic analysis of queries. It determines the query type and required result slots. SQL queries are generated to retrieve answers from the database. The system architecture includes morphological analysis, shallow parsing, query frame identification, SQL generation, and answer retrieval. It was evaluated on 70 queries with 87.5% precision.
A SURVEY ON CROSS LANGUAGE INFORMATION RETRIEVAL (IJCI JOURNAL)
Nowadays, the number of web users accessing information over the Internet is increasing day by day. A huge amount of information is available on the Internet in different languages, and it can be accessed by anybody at any time. Information Retrieval (IR) deals with finding useful information in large collections of unstructured, structured and semi-structured data, and can be classified into monolingual information retrieval, cross language information retrieval, multilingual information retrieval (MLIR), and so on. In the current scenario, the diversity of information and language barriers are serious obstacles to communication and cultural exchange across the world; cross language information retrieval (CLIR) systems, which overcome such barriers, are therefore in strong demand. CLIR refers to information retrieval activities in which the query and documents may appear in different languages. This paper gives an overview of the new application areas of CLIR and reviews the approaches used in CLIR research for query and document translation. Further, based on the available literature, a number of challenges and issues in CLIR are identified and discussed.
Ontology Based Approach for Semantic Information Retrieval System (IJTET Journal)
Abstract - Information retrieval plays an important role in current search engines, which perform searching based on keywords. This results in an enormous amount of data being returned, from which the user cannot figure out the essential and most important information. This limitation may be overcome by a new web architecture, the semantic web, whose conceptual (or semantic) search technique overcomes the limitations of keyword-based search. Natural language processing techniques are mostly implemented in QA systems for taking users' questions, and several steps are followed to convert a question into query form for retrieving an exact answer. In conceptual search, the search engine interprets the meaning of the user's query and the relations among the concepts a document contains with respect to a particular domain, producing specific answers instead of lists of answers. In this paper, we propose an ontology-based semantic information retrieval system built on the Jena semantic web framework: the user enters an input query, which is parsed by the Stanford Parser, and a triplet extraction algorithm is applied. For each input query a SPARQL query is formed and fired against the knowledge base (ontology), which finds the appropriate RDF triples in the knowledge base and retrieves the relevant information using the Jena framework.
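The triplet-to-SPARQL step described in that pipeline can be sketched as follows: given an extracted (subject, relation) pair, a query string is built and would then be fired at the ontology via Jena or a similar engine. The namespace and property names below are assumptions, not the paper's actual schema:

```python
def triplet_to_sparql(subject, relation):
    """Build a SPARQL query retrieving the object slot of an extracted triplet."""
    return (
        "PREFIX ex: <http://example.org/onto#>\n"
        "SELECT ?object WHERE {\n"
        f"  ex:{subject} ex:{relation} ?object .\n"
        "}"
    )

# Hypothetical triplet extracted from "Where is the Taj Mahal located?"
query = triplet_to_sparql("Taj_Mahal", "locatedIn")
```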
Survey on Indian CLIR and MT systems in Marathi Language (Editor IJCATR)
Cross Language Information Retrieval (CLIR) deals with retrieving relevant information stored in a language different from the language of the user's query, helping users express their information need in their native language. The machine-translation-based (MT-based) approach to CLIR uses existing machine translation techniques to translate queries automatically. This paper covers the research work done on CLIR and MT systems for the Marathi language in India.
This document discusses the use of fuzzy queries to retrieve information from databases. Fuzzy queries allow for imprecise or vague terms to be used in queries, similar to natural language. The document first provides background on limitations of traditional database queries. It then discusses how fuzzy set theory and membership functions can be applied to queries and data to handle uncertain terms. The proposed approach applies fuzzy queries to a relational database, defining linguistic variables and membership functions. This allows information to be retrieved based on fuzzy criteria and improves the ability to query databases using human-like terms. Benefits of fuzzy queries include more natural interaction and accounting for real-world data imperfections.
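The membership-function idea described above can be sketched as follows. The linguistic term "young", its trapezoid bounds and the threshold are illustrative assumptions, not taken from the document:

```python
def young(age):
    """Membership of `age` in the fuzzy set 'young':
    1.0 up to 25, fading linearly to 0.0 at 40."""
    if age <= 25:
        return 1.0
    if age >= 40:
        return 0.0
    return (40 - age) / 15

def fuzzy_select(rows, term, threshold=0.5):
    """Keep rows whose membership in the linguistic term meets the threshold."""
    return [(r, term(r["age"])) for r in rows if term(r["age"]) >= threshold]

people = [{"name": "A", "age": 22}, {"name": "B", "age": 33}, {"name": "C", "age": 45}]
matches = fuzzy_select(people, young)
```

A query such as "find young customers" thus returns graded matches rather than a hard age cut-off, which is the natural-interaction benefit the document describes.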
INTELLIGENT INFORMATION RETRIEVAL WITHIN DIGITAL LIBRARY USING DOMAIN ONTOLOGY (cscpconf)
A digital library is a type of information retrieval (IR) system. Existing information retrieval methodologies generally have problems with keyword searching. We propose a model that solves this problem using a concept-based (ontology) approach and a metadata case base. The model consists of identifying domain concepts in the user's query and applying expansion to them; the system aims to improve the relevance of results retrieved from digital libraries through conceptual query expansion for intelligent concept-based retrieval. We import the concept of ontology, exploiting its advantages of rich semantics and standardized concepts: a domain-specific ontology can lift information retrieval from the traditional keyword level to the knowledge (or concept) level, and change the retrieval process from traditional keyword matching to semantic matching. One approach is query expansion using domain ontology; the other introduces a case-based similarity measure for metadata information retrieval using the Case Based Reasoning (CBR) approach. Results show improvements over the classic method, over query expansion using a general-purpose ontology, and over a number of other approaches.
Performance Evaluation of Query Processing Techniques in Information Retrieval (idescitation)
The first element of the search process is the query. Since the user query is on average restricted to two or three keywords, it is ambiguous to the search engine. Given the user query, the goal of an Information Retrieval (IR) system is to retrieve information that might be useful or relevant to the user's information need; hence query processing plays an important role in an IR system. Query processing can be divided into four categories: query expansion, query optimization, query classification and query parsing. In this paper an attempt is made to evaluate the performance of query processing algorithms in each category. The evaluation was based on the dataset specified by the Forum for Information Retrieval Evaluation [FIRE15], using precision and relative recall as criteria, and the analysis is based on the importance of each step in query processing. The experimental results show the significance of each step in query processing, as well as the relevance of web semantics and spelling correction in the user query.
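The two evaluation criteria named above can be sketched as set computations. Relative recall scores one system against the pool of relevant documents found by all compared systems; the document ids below are toy values, not the FIRE data:

```python
def precision(retrieved, relevant):
    """Fraction of retrieved documents that are relevant."""
    return len(set(retrieved) & set(relevant)) / len(retrieved) if retrieved else 0.0

def relative_recall(system_relevant, all_systems_relevant):
    """Relevant documents found by one system, over the pooled
    relevant documents found by all systems being compared."""
    pooled = set().union(*all_systems_relevant)
    return len(set(system_relevant)) / len(pooled) if pooled else 0.0

# System A found relevant docs {1,2,3}; system B found {3,4}.
a_rel, b_rel = {1, 2, 3}, {3, 4}
rr_a = relative_recall(a_rel, [a_rel, b_rel])
```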
A Domain Based Approach to Information Retrieval in Digital Libraries (Fulvio Rotella)
The current abundance of electronic documents requires automatic techniques that support the users in understanding their content and extracting useful information. To this aim, improving the retrieval performance must necessarily go beyond simple lexical interpretation of the user queries, and pass through an understanding of their semantic content and aims. It goes without saying that any digital library would take enormous advantage from the availability of effective Information Retrieval techniques to provide to their users. This paper proposes an approach to Information Retrieval based on a correspondence of the domain of discourse between the query and the documents in the repository. Such an association is based on standard general-purpose linguistic resources (WordNet and WordNet Domains) and on a novel similarity assessment technique. Although the work is at a preliminary stage, interesting initial results suggest to go on extending and improving the approach.
A Review on the Cross and Multilingual Information Retrieval (dannyijwest)
In this paper we explore some of the most important areas of information retrieval, in particular Cross-lingual Information Retrieval (CLIR) and Multilingual Information Retrieval (MLIR). CLIR deals with asking questions in one language and retrieving documents in a different language; MLIR deals with asking questions in one or more languages and retrieving documents in one or more different languages. With an increasingly globalized economy, the ability to find information in other languages is becoming a necessity. We also present the evaluation initiatives of the information retrieval domain, and finally give an overall review of the research work in Indian and foreign languages.
An Improved Mining of Biomedical Data from Web Documents Using Clustering (Kelly Lipiec)
This document summarizes a research paper that proposes an improved method for mining biomedical data from web documents using clustering. Specifically, it develops an optimized k-means clustering algorithm to group similar biomedical documents together based on identifying relevant terms using the Unified Medical Language System (UMLS). The approach aims to more efficiently retrieve relevant biomedical documents for users. It compares the proposed method to the original k-means algorithm and finds it achieves an average F-measure of 99.06%, indicating more accurate clustering of biomedical web documents.
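One common way to score a clustering against reference classes with an F-measure, as in the comparison above, is sketched below: each class takes the F value of its best-matching cluster, weighted by class size. The clusters and classes here are toy stand-ins, not the biomedical corpus:

```python
def clustering_f_measure(clusters, classes):
    """clusters/classes: dicts mapping label -> set of document ids.
    Returns the size-weighted best-match F-measure of the clustering."""
    n = sum(len(docs) for docs in classes.values())
    total = 0.0
    for cls_docs in classes.values():
        best = 0.0
        for clu_docs in clusters.values():
            hits = len(cls_docs & clu_docs)
            if not hits:
                continue
            p, r = hits / len(clu_docs), hits / len(cls_docs)
            best = max(best, 2 * p * r / (p + r))  # harmonic mean of P and R
        total += len(cls_docs) / n * best
    return total

clusters = {"c1": {1, 2, 3}, "c2": {4, 5}}          # algorithm output
classes = {"disease": {1, 2, 3}, "gene": {4, 5}}    # reference grouping
score = clustering_f_measure(clusters, classes)
```

A perfect clustering, as in this toy case, scores 1.0; a figure like the 99.06% reported above would come from averaging such scores over the test collection.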
Marathi-English CLIR using detailed user query and unsupervised corpus-based WSD (IJERA Editor)
With the rapid growth of multilingual information on the Internet, Cross Language Information Retrieval (CLIR) is becoming the need of the day. It helps users query in their native language and retrieve information in any language. But the performance of CLIR is poor compared to monolingual retrieval, due to lexical ambiguity, mismatching of query terms and out-of-vocabulary words. In this paper, we propose an algorithm for improving the performance of a Marathi-English CLIR system. The system first finds possible translations of the input query in the target language, disambiguates them, and then gives the English queries to a search engine for relevant document retrieval. The disambiguation is based on an unsupervised corpus-based method which uses an English dictionary as an additional resource. The experiment is performed on the FIRE 2011 (Forum for Information Retrieval Evaluation) dataset using the "Title" and "Description" fields as inputs. The experimental results show that the proposed approach improves the performance of the Marathi-English CLIR system with a good precision level.
Research Inventy : International Journal of Engineering and Science (researchinventy)
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
QUrdPro: Query processing system for Urdu Language (IJERA Editor)
This document describes QUrdPro, a query processing system for the Urdu language. It proposes an ontology-based architecture that uses natural language processing to analyze user queries in Urdu, formulate queries based on the domain ontology, search documents to extract relevant answers, and return results to the user. The system aims to improve information retrieval for the Urdu language by leveraging ontologies and avoiding users having to sift through large amounts of unstructured text. It discusses related work on question answering systems and outlines the proposed architecture and four-phase process model of QUrdPro.
Enhanced Performance of Search Engine with Multitype Feature Co-Selection of ... (IJASCSE)
The information world meets many challenges nowadays, and one such challenge is data retrieval from multidimensional and heterogeneous data sets. Han et al. made an attempt at this challenge, proposing a novel feature co-selection for web document clustering called Multitype Features Co-selection for Clustering (MFCC). MFCC uses intermediate clustering results in one type of feature space to help the selection in other types of feature spaces; it effectively reduces the noise introduced by the "pseudoclass" and further improves clustering performance. This efficiency can also be exploited in data retrieval by implementing the MFCC algorithm in the ranking algorithm of a search engine. The proposed work applies the MFCC algorithm within a search engine architecture, so that information is retrieved from the dataset effectively and the relevant results are shown.
Mining Opinion Features in Customer Reviews (IJCERT Journal)
Nowadays, e-commerce systems have become extremely important. Large numbers of customers choose online shopping for its convenience, reliability, and cost. Customer-generated information, especially product reviews, is a significant source of data for consumers making informed purchase decisions and for manufacturers keeping track of customers' opinions. It is difficult for customers to make purchasing decisions based only on pictures and short product descriptions. Meanwhile, mining product reviews has become a hot research topic, and prior research is mostly based on pre-specified product features for analysing opinions. Natural Language Processing (NLP) techniques, such as NLTK for Python, can be applied to raw customer reviews to extract keywords. This paper presents a survey of the techniques used for designing software that mines opinion features in reviews. Eleven IEEE papers are selected and compared; they are representative of the significant improvements in opinion mining over the past decade.
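As a rough illustration of extracting candidate opinion features from raw reviews, here is a minimal frequency-based sketch. It avoids an NLTK dependency; the reviews, the stopword list, and the simplification of treating frequent non-stopwords as candidate product features are all assumptions for the example:

```python
import re
from collections import Counter

# Illustrative stopword list and reviews; a real system would use a full
# stopword set and part-of-speech tagging to keep only noun phrases.
STOPWORDS = {"the", "is", "a", "and", "but", "i", "it", "this", "very", "of"}

reviews = [
    "The battery life is great and the screen is sharp",
    "Battery drains fast but the screen is very bright",
    "I love the camera and the battery",
]

def candidate_features(reviews, top_n=3):
    """Tokenize, drop stopwords, and return the most frequent terms as
    candidate product features."""
    tokens = []
    for r in reviews:
        tokens += [t for t in re.findall(r"[a-z]+", r.lower())
                   if t not in STOPWORDS]
    return [w for w, _ in Counter(tokens).most_common(top_n)]

features = candidate_features(reviews)
```

On this toy corpus, "battery" and "screen" surface as the top candidates because customers mention them repeatedly, which is the intuition behind frequency-based opinion-feature mining.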
Multilingualism in Information Retrieval System (Ariel Hess)
The document discusses multilingualism in information retrieval systems. It examines challenges in creating systems that allow users to input queries and receive results in multiple languages. Several existing systems are described that have implemented multilingual features, such as translating queries or documents. However, challenges remain around accurate translation and handling diverse languages and regions. Future research areas discussed include developing better translation tools and testing systems on a wider range of languages and users.
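One of the approaches mentioned, translating the query rather than the documents, can be sketched with a dictionary-based method. The toy Spanish-English dictionary below is hypothetical, and a real system would still need to disambiguate among the translation candidates:

```python
# A minimal sketch of dictionary-based query translation. The bilingual
# dictionary is a toy example; note the deliberate ambiguity of "banco".
ES_EN = {
    "banco": ["bank", "bench"],   # ambiguous: financial bank vs. seat
    "rio": ["river"],
    "cuenta": ["account", "bill"],
}

def translate_query(query, dictionary):
    """Replace each source-language term with all of its translation
    candidates; unknown terms pass through untranslated."""
    translated = []
    for term in query.lower().split():
        translated.extend(dictionary.get(term, [term]))
    return translated

candidates = translate_query("banco rio", ES_EN)
```

The query "banco rio" expands to three English candidates, including the spurious "bench"; this is exactly the translation-ambiguity problem that disambiguation and query-expansion techniques in CLIR try to resolve.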
IRJET: Concept Extraction from Ambiguous Text Document using K-Means (IRJET Journal)
This document discusses using a K-means clustering algorithm to extract concepts from ambiguous text documents. The text is preprocessed by tokenizing, removing stop words, and stemming; the words are then represented as vectors, and dimensionality reduction using PCA is applied. Finally, K-means clustering groups similar words into clusters to identify the document's overall concepts, helping users grasp its key topics in a time-efficient manner without reading the full text.
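The clustering step can be sketched in pure Python. The documents are toy data, each word's vector is simply its per-document occurrence count, and the PCA step is omitted for brevity:

```python
# A tiny k-means over word vectors, illustrating how words that co-occur
# in the same documents fall into the same concept cluster.
docs = [
    "cat dog cat pet",
    "dog pet cat",
    "stock market stock trade",
    "market trade stock",
]

vocab = sorted({w for d in docs for w in d.split()})
# Each word's vector: how often it appears in each document.
vectors = {w: [d.split().count(w) for d in docs] for w in vocab}

def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=10):
    """Plain k-means: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its cluster."""
    centroids = points[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centroids[i]))].append(p)
        centroids = [[sum(col) / len(c) for col in zip(*c)] if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

words = list(vectors)
centroids = kmeans([vectors[w] for w in words], k=2)
assign = {w: min(range(2), key=lambda i: dist2(vectors[w], centroids[i]))
          for w in words}
```

With k=2 the pet-related words and the finance-related words separate into two clusters, so a reader can infer the document collection's two underlying concepts without reading every document.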
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.