The document proposes an algorithm for dynamically assigning papers to reviewers based on keywords. It discusses:
1) Existing exact string matching algorithms like brute force, KMP, and Boyer-Moore are ineffective for this problem since keyword phrases may be similar but not exact matches.
2) The algorithm uses dynamic programming to calculate an "expertise distance" between each paper's keywords and a reviewer's keywords based on the edit distance between the strings. Reviewers with lower expertise distances are better matches.
3) This approach accounts for minor differences in keyword phrases while still capturing the underlying semantic similarity, making it more suitable than exact string matching for the paper-reviewer assignment problem.
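The edit-distance computation at the core of this approach can be sketched with the standard dynamic-programming recurrence; the `expertise_distance` aggregation below is a hypothetical illustration of how per-keyword distances might be combined, not the paper's exact scoring formula:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming (Levenshtein) edit distance."""
    m, n = len(a), len(b)
    # dp[i][j] = minimum edits to turn a[:i] into b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

def expertise_distance(paper_keywords, reviewer_keywords):
    """Hypothetical aggregate: for each paper keyword, take the closest
    reviewer keyword; the mean of those distances scores the match
    (lower = better reviewer fit)."""
    return sum(min(edit_distance(p, r) for r in reviewer_keywords)
               for p in paper_keywords) / len(paper_keywords)
```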
AN ALGORITHM FOR OPTIMIZED SEARCHING USING NON-OVERLAPPING ITERATIVE NEIGHBOR...IJCSEA Journal
The document presents an algorithm for optimized searching using non-overlapping iterative neighbor intervals. The algorithm aims to reduce the number of checked conditions by saving the frequency of replicated words and using non-overlapping intervals based on the plane sweep algorithm. It does this by focusing the search on a smaller subspace of relevant intervals near the minimum frequency keyword. The algorithm iterates through ranges, eliminating unsatisfied keywords to detect relevant ranges. This improves efficiency and reduces the number of comparisons compared to previous methods.
The document summarizes string matching algorithms. It discusses the naive string matching algorithm which compares characters at each shift to find matches. It also discusses the Rabin-Karp algorithm which uses hashing to match the hash value of the pattern to the hash value of substrings in the text. If the hash values match, it then checks for an exact character match. The Rabin-Karp algorithm has better average-case performance than the naive algorithm but the same worst-case performance of O((n-m+1)m) time.
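A minimal sketch of the Rabin-Karp idea: a rolling hash makes each window update O(1), and characters are only compared when the hashes collide. The base and modulus here are illustrative choices:

```python
def rabin_karp(text: str, pattern: str, base: int = 256, mod: int = 101):
    """Return all start indices where pattern occurs in text."""
    n, m = len(text), len(pattern)
    if m > n:
        return []
    h = pow(base, m - 1, mod)  # weight of the window's leading character
    p_hash = t_hash = 0
    for i in range(m):
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    hits = []
    for s in range(n - m + 1):
        # Only on a hash match do we verify character by character.
        if p_hash == t_hash and text[s:s + m] == pattern:
            hits.append(s)
        if s < n - m:  # roll the window one position to the right
            t_hash = ((t_hash - ord(text[s]) * h) * base
                      + ord(text[s + m])) % mod
    return hits
```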
Computing probabilistic queries in the presence of uncertainty via probabilis...Konstantinos Giannakis
1) The document proposes using probabilistic automata to compute probabilistic queries on RDF-like data structures that contain uncertainty. It shows how to assign a probabilistic automaton corresponding to a particular query.
2) An example query is provided that finds all nodes influenced by a starting node with a probability above a threshold. The probabilistic automata calculations allow filtering results by probability.
3) Benefits cited include leveraging well-studied probabilistic automata results and efficient handling of uncertainty. Future work could expand the models to infinite data and provide more empirical results.
The document describes a system for semantic textual similarity (STS) that uses various techniques to estimate the semantic similarity between texts. The system combines lexical, syntactic, and semantic information sources using state-of-the-art algorithms. In SemEval 2016 tasks, the system achieved a mean Pearson correlation of 75.7% on the monolingual English task and 86.3% on the cross-lingual Spanish-English task, ranking first in the cross-lingual task. The system utilizes techniques such as word embeddings, paragraph vectors, tree-structured LSTMs, and word alignment to capture semantic similarity.
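The word-embedding component of such a system can be illustrated with a toy averaged-vector baseline; the `embeddings` dictionary and the plain averaging are illustrative assumptions, far simpler than the system's actual models:

```python
import math

def sentence_similarity(s1, s2, embeddings):
    """Toy STS baseline: average the word vectors of each sentence and
    compare by cosine. `embeddings` is a hypothetical word->vector dict;
    every content word is assumed to have a vector."""
    def avg_vec(sentence):
        vecs = [embeddings[w] for w in sentence.lower().split()
                if w in embeddings]
        return [sum(c) / len(vecs) for c in zip(*vecs)]
    u, v = avg_vec(s1), avg_vec(s2)
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u))
            * math.sqrt(sum(b * b for b in v)))
    return dot / norm
```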
TSD2013.AUTOMATIC MACHINE TRANSLATION EVALUATION WITH PART-OF-SPEECH INFORMATIONLifeng (Aaron) Han
Proceedings of the 16th International Conference of Text, Speech and Dialogue (TSD 2013). Plzen, Czech Republic, September 2013. LNAI Vol. 8082, pp. 121-128. Volume Editors: I. Habernal and V. Matousek. Springer-Verlag Berlin Heidelberg 2013. Open tool https://github.com/aaronlifenghan/aaron-project-hlepor
A Survey of String Matching AlgorithmsIJERA Editor
String matching algorithms play an important role in finding the places where one or several strings (patterns) occur within a larger body of text (e.g., a data stream, a sentence, a paragraph, or a book). Their applications cover a wide range, including intrusion detection systems (IDS) in computer networks, bioinformatics, plagiarism detection, information security, pattern recognition, document matching, and text mining. In this paper we present a short survey of well-known, recently updated, and hybrid string matching algorithms. These algorithms fall into two major categories: exact string matching and approximate string matching. The classification criteria were selected to highlight important features of matching strategies, in order to identify challenges and vulnerabilities.
Enhanced Methodology for supporting approximate string search in Geospatial ...IJMER
In recent years many websites have started providing keyword search services on maps. In these systems, users may have difficulty finding the entities they are looking for if they do not know the exact spelling, such as the name of a restaurant. In this paper, we present a novel index structure and a corresponding search algorithm for answering map-based approximate-keyword queries in a Euclidean space, so that users get their desired results even when their keywords contain typos. This work mainly focuses on investigating range queries in Euclidean space.
This document describes an implementation of a polynomial abstract data type (ADT) using a linked list in Python. It discusses representing polynomials as terms in a linked list, with each node storing a term. Operations like addition, multiplication, evaluation are supported. The implementation uses a tail pointer for efficient appends when adding terms during arithmetic operations. Key methods like constructors, degree, get, evaluate are implemented, with addition shown as an example of iterating over the polynomials and appending new terms to the end of the linked list.
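A minimal sketch of such a linked-list polynomial ADT, with a tail pointer for O(1) appends and addition implemented by merging the two sorted term lists; the method names are illustrative, not necessarily those of the original implementation:

```python
class _Node:
    def __init__(self, coef, exp):
        self.coef, self.exp, self.next = coef, exp, None

class Polynomial:
    """Linked-list polynomial: terms kept in descending exponent order,
    with a tail pointer so appending a term is O(1)."""
    def __init__(self):
        self.head = self.tail = None

    def append_term(self, coef, exp):
        node = _Node(coef, exp)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def degree(self):
        return self.head.exp if self.head else 0

    def evaluate(self, x):
        total, node = 0, self.head
        while node:
            total += node.coef * x ** node.exp
            node = node.next
        return total

    def __add__(self, other):
        # Merge the two sorted term lists, combining equal exponents
        # and appending each new term at the tail.
        result, a, b = Polynomial(), self.head, other.head
        while a and b:
            if a.exp == b.exp:
                if a.coef + b.coef != 0:
                    result.append_term(a.coef + b.coef, a.exp)
                a, b = a.next, b.next
            elif a.exp > b.exp:
                result.append_term(a.coef, a.exp)
                a = a.next
            else:
                result.append_term(b.coef, b.exp)
                b = b.next
        for node in (a, b):  # drain whichever list has terms left
            while node:
                result.append_term(node.coef, node.exp)
                node = node.next
        return result
```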
Learning Collaborative Agents with Rule Guidance for Knowledge Graph ReasoningDeren Lei
Walk-based models have shown their advantages in knowledge graph (KG) reasoning by achieving decent performance while providing interpretable decisions. However, the sparse reward signals offered by the KG during traversal are often insufficient to guide a sophisticated walk-based reinforcement learning (RL) model. An alternate approach is to use traditional symbolic methods (e.g., rule induction), which achieve good performance but can be hard to generalize due to the limitation of symbolic representation. In this paper, we propose RuleGuider, which leverages high-quality rules generated by symbolic-based methods to provide reward supervision for walk-based agents. Experiments on benchmark datasets show that RuleGuider improves the performance of walk-based models without losing interpretability.
An Index Based K-Partitions Multiple Pattern Matching AlgorithmIDES Editor
The study of pattern matching is one of the
fundamental applications and emerging area in computational
biology. Searching DNA related data is a common activity for
molecular biologists. In this paper we explore the applicability
of a new pattern matching technique called Index based Kpartition
Multiple Pattern Matching algorithm (IKPMPM), for
DNA sequences. Current approach avoids unnecessary
comparisons in the DNA sequence. Due to this, the number of
comparisons gradually decreases and comparison per character
ratio of the proposed algorithm reduces accordingly when
compared to other existing popular methods. The experimental
results show that there is considerable amount of performance
improvement.
In the conventional transportation problem (TP), all parameters are assumed to be certain. In many real-life situations in industry or organizations, however, the parameters of the TP (supply, demand, and cost) are imprecise, owing to factors such as market conditions, variations in diesel prices, traffic jams, weather in hilly areas, the capacity of men and machines, long power cuts, labourers' overtime work, unexpected machine failures, seasonal changes, and many more. To counter these problems, depending on the nature of the parameters, the TP is classified into two categories, namely type-2 and type-4 fuzzy transportation problems (FTPs) under an uncertain environment; the paper formulates the problem and utilizes trapezoidal fuzzy numbers (TrFNs) to solve it. The existing ranking procedure of Liou and Wang (1992) is used to transform the type-2 and type-4 FTPs into crisp problems so that the conventional method can be applied. Moreover, the solution procedure differs from the TP for type-2 and type-4 FTPs in the allocation step only. A simple and efficient method, denoted the PSK (P. Senthil Kumar) method, is therefore proposed to obtain an optimal solution in terms of TrFNs. From this fuzzy solution, the decision maker (DM) can decide the level of acceptance for the transportation cost or profit. The major applications of fuzzy set theory include areas such as inventory control, communication networks, aggregate planning, employment scheduling, and personnel assignment.
ACL-WMT2013.A Description of Tunable Machine Translation Evaluation Systems i...Lifeng (Aaron) Han
The document describes two machine translation evaluation systems, nLEPOR_baseline and LEPOR_v3.1, that were submitted to the WMT13 Metrics Task. nLEPOR_baseline is an n-gram based metric that considers modified sentence length penalty, position difference penalty, and n-gram precision and recall. LEPOR_v3.1 is an enhanced version that uses a harmonic mean to combine factors and includes part-of-speech information. Evaluation results showed LEPOR_v3.1 had the highest correlation of 0.86 with human judgments for English to other language pairs.
This presentation is on a recommender system for question paper prediction using machine learning techniques. We conducted a literature survey and implemented the system using the same techniques.
The document discusses probabilistic information retrieval and Bayesian networks for modeling document collections and queries. It introduces concepts like conditional probability, Bayes' theorem, and the probability ranking principle. It then describes the binary independence retrieval model and how Bayesian networks can model dependencies between document terms and concepts to improve upon the independence assumption. The use of Bayesian networks allows modeling both documents and queries as networks to estimate the probability that a document satisfies an information need.
The document provides an introduction to Probabilistic Latent Semantic Analysis (PLSA). It discusses how PLSA improves on previous Latent Semantic Analysis methods by incorporating a probabilistic framework. PLSA models documents as mixtures of topics and allows words to have multiple meanings. The parameters of the PLSA model, including the topic distributions and word-topic distributions, are estimated using an expectation-maximization algorithm to find the parameters that best explain the observed word-document co-occurrence data.
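The expectation-maximization estimation described above can be sketched as follows, assuming a document-word count matrix as input; this is a minimal, unoptimized illustration of the PLSA updates, not the original implementation:

```python
import numpy as np

def plsa(counts, k, iters=50, seed=0):
    """Minimal PLSA via EM on a document-word count matrix (D x W).
    Returns P(topic|doc) and P(word|topic)."""
    rng = np.random.default_rng(seed)
    D, W = counts.shape
    # Random initialization of the two distributions, rows summing to 1.
    p_z_d = rng.random((D, k)); p_z_d /= p_z_d.sum(1, keepdims=True)
    p_w_z = rng.random((k, W)); p_w_z /= p_w_z.sum(1, keepdims=True)
    for _ in range(iters):
        # E-step: posterior P(z|d,w) proportional to P(z|d) * P(w|z)
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]  # D x k x W
        post = joint / joint.sum(1, keepdims=True)
        # M-step: reweight the posterior by the observed counts
        weighted = counts[:, None, :] * post           # D x k x W
        p_w_z = weighted.sum(0)
        p_w_z /= p_w_z.sum(1, keepdims=True)
        p_z_d = weighted.sum(2)
        p_z_d /= p_z_d.sum(1, keepdims=True)
    return p_z_d, p_w_z
```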
A survey of Stemming Algorithms for Information Retrievaliosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document discusses text summarization using machine learning. It begins by defining text summarization as reducing a text to create a summary that retains the most important points. There are two main types: single-document summarization and multi-document summarization. Extractive summarization creates summaries by extracting phrases or sentences from the source text, while abstractive summarization expresses the same ideas using different words. Supervised machine learning approaches use labeled training data to train classifiers to select content, while unsupervised approaches select content based on metrics like term frequency-inverse document frequency. ROUGE is commonly used to automatically evaluate summaries by comparing them to human references. Query-focused multi-document summarization aims to answer a user's information need by summarizing relevant documents.
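The unsupervised TF-IDF route can be sketched as a simple extractive scorer; scoring a sentence by the summed TF-IDF weight of its words is an illustrative choice, one of several used in practice:

```python
import math
from collections import Counter

def tfidf_summarize(sentences, top_n=2):
    """Extractive sketch: score each sentence by the TF-IDF weight of
    its words, then keep the top-scoring sentences in original order."""
    docs = [s.lower().split() for s in sentences]
    n = len(docs)
    # Document frequency: in how many sentences does each word appear?
    df = Counter(w for d in docs for w in set(d))
    def score(doc):
        tf = Counter(doc)
        return sum(tf[w] / len(doc) * math.log(n / df[w]) for w in tf)
    ranked = sorted(range(n), key=lambda i: score(docs[i]), reverse=True)
    keep = sorted(ranked[:top_n])  # restore original sentence order
    return [sentences[i] for i in keep]
```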
AN INVESTIGATION OF THE SAMPLING-BASED ALIGNMENT METHOD AND ITS CONTRIBUTIONSijaia
This document summarizes an investigation into improving the performance of a sampling-based alignment method for statistical machine translation. It proposes two contributions: 1) A method to enforce alignment of n-grams in distinct translation subtables to increase the number of longer n-grams, and 2) Examining combining phrase translation tables from the sampling method and MGIZA++, finding it slightly outperforms MGIZA++ alone and helps reduce out-of-vocabulary words. The method divides the parallel corpus into "unigramized" source-target n-gram subtables, runs the sampling aligner on each, and merges the subtables' phrase tables.
This document discusses using convolutional neural networks and self-organized maps to visualize knowledge graphs. It presents a compositional model for embedding nodes and relationships in a knowledge graph into a vector space. A self-organized map is used to cluster the embeddings and extract semantic fingerprints. The fingerprints are useful for knowledge discovery and classification tasks. The technique is applied to a subset of the CTD knowledge graph containing compound-gene/protein interactions, and the results are comparable to structural models.
Adaptive relevance feedback in information retrievalYI-JHEN LIN
Adaptive relevance feedback aims to optimize the balance between the original query and feedback documents. The paper proposes learning an adaptive feedback coefficient based on query and feedback document characteristics. These include query and feedback document discrimination and divergence between the query and feedback. Logistic regression is used to learn weights mapping query-feedback pairs to coefficients. Experiments show the approach improves retrieval performance compared to fixed coefficients, especially when training and test data are in the same domain.
Fast and Accurate Spelling Correction Using Trie and Damerau-levenshtein Dist...TELKOMNIKA JOURNAL
This research was intended to create a fast and accurate spelling correction system able to handle both kinds of spelling errors: non-word and real-word errors. An existing spelling correction system was analyzed, and modifications were then applied to improve its accuracy and speed. The proposed spelling correction system was built from the method and intuition of the existing system together with those modifications, yielding several spelling correction systems using different methods. The best result is achieved by the system that uses bigrams with a Trie and Damerau-Levenshtein distance, with a word-level accuracy of 84.62% and an average processing speed of 18.89 ms per sentence.
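The Trie component pairs naturally with the distance computation: a minimal dictionary trie, as sketched below, lets a corrector prune whole candidate branches early instead of scoring every dictionary word. This is a generic illustration, not the authors' code:

```python
class Trie:
    """Minimal trie for dictionary lookup in a spelling corrector."""
    def __init__(self):
        self.children = {}
        self.is_word = False

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def contains(self, word):
        node = self
        for ch in word:
            node = node.children.get(ch)
            if node is None:  # whole branch absent: prune immediately
                return False
        return node.is_word
```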
This chapter discusses clustering connections on LinkedIn based on job title to find similarities. It covers standardizing job titles, common similarity metrics like edit distance and Jaccard distance, and clustering algorithms like greedy clustering, hierarchical clustering and k-means clustering. It also discusses fetching extended profile information using OAuth authorization to access private LinkedIn data without credentials. The goal is to answer questions about connections by clustering them based on attributes like job title, company or location.
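The Jaccard distance and greedy clustering mentioned above can be sketched in a few lines; the word-set tokenization and the 0.5 threshold are illustrative choices:

```python
def jaccard_distance(a: str, b: str) -> float:
    """Jaccard distance over word sets: 1 - |A & B| / |A | B|."""
    s, t = set(a.lower().split()), set(b.lower().split())
    return 1 - len(s & t) / len(s | t)

def greedy_cluster(titles, threshold=0.5):
    """Greedy clustering sketch: each title joins the first cluster whose
    representative is within the distance threshold, else starts a new one."""
    clusters = []
    for title in titles:
        for cluster in clusters:
            if jaccard_distance(title, cluster[0]) <= threshold:
                cluster.append(title)
                break
        else:
            clusters.append([title])
    return clusters
```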
Ranking nodes in growing networks: when PageRank failsPietro De Nicolao
PageRank is a popular algorithm for ranking nodes in networks, but it can fail in growing networks with temporal properties. The document presents a growing network model called the Relevance Model (RM) that incorporates preferential attachment, fitness, and temporal decay of relevance and activity. Numerical simulations of the RM show that PageRank is biased towards old nodes when relevance decays slowly, and recent nodes when relevance decays quickly, failing to provide an unbiased ranking. Analysis of real citation networks finds PageRank strongly biased towards old papers compared to indegree ranking. The findings suggest PageRank is inappropriate for networks where temporal patterns affect linking behavior.
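For reference, the standard PageRank whose temporal bias the paper analyzes can be sketched as a power iteration over an adjacency dictionary:

```python
def pagerank(links, damping=0.85, iters=100):
    """Power-iteration PageRank on an adjacency dict {node: [targets]}.
    Dangling nodes spread their mass uniformly over all nodes."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}  # teleportation term
        for v in nodes:
            out = links[v]
            if out:
                share = damping * rank[v] / len(out)
                for w in out:
                    new[w] += share
            else:  # dangling node: distribute its rank uniformly
                for w in nodes:
                    new[w] += damping * rank[v] / n
        rank = new
    return rank
```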
The Russian Doll Search algorithm improves upon the Depth First Branch and Bound algorithm for solving constraint optimization problems. It does this by performing n successive searches on nested subproblems, where n is the number of variables in the problem. Each search solves a subproblem involving a subset of the variables and records the optimal solution. This recorded information is then used to improve the lower bound estimate for partial assignments during subsequent searches on larger subproblems, allowing earlier pruning of search branches. On benchmark problems, this approach yields better results than a standard Depth First Branch and Bound.
An Application of Pattern matching for Motif IdentificationCSCJournals
Pattern matching is one of the central and most widely studied problems in theoretical computer science. Solutions to the problem play an important role in many areas of science and information processing, and their performance has great impact on applications including database query, text processing, and DNA sequence analysis. In general, pattern matching algorithms are characterized by the shift value, the direction of the sliding window, and the order in which comparisons are made; performance can be enhanced considerably by a larger shift value and fewer comparisons to compute it. In this paper we propose an algorithm for finding motifs in DNA sequences. The algorithm preprocesses the pattern string (motif) and, on a mismatch between the pattern and the DNA sequence, considers the four consecutive nucleotides that immediately follow the aligned pattern window. Theoretically, we found that the proposed algorithm works efficiently for motif identification.
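The shift rule in this abstract resembles Sunday's Quick Search, which on each attempt shifts by the bad-character rule applied to the text character just past the window; the motif algorithm generalizes that idea to four following nucleotides. A sketch of the classical single-character version:

```python
def quick_search(text, pattern):
    """Sunday's Quick Search: shift by the rightmost occurrence in the
    pattern of the text character immediately after the current window."""
    n, m = len(text), len(pattern)
    # For each pattern character, distance from its rightmost occurrence
    # to one position past the pattern's end.
    shift = {c: m - i for i, c in enumerate(pattern)}
    hits, s = [], 0
    while s <= n - m:
        if text[s:s + m] == pattern:
            hits.append(s)
        if s + m >= n:
            break
        s += shift.get(text[s + m], m + 1)
    return hits
```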
Discovering Novel Information with sentence Level clustering From Multi-docu...irjes
The document presents a novel fuzzy clustering algorithm called FRECCA that clusters sentences from multi-documents to discover new information. FRECCA uses fuzzy relational eigenvector centrality to calculate page rank scores for sentences within clusters, treating the scores as likelihoods. It uses expectation maximization to optimize cluster membership values and mixing coefficients without a parameterized likelihood function. An evaluation shows FRECCA achieves superior performance to other clustering algorithms on a quotations dataset, identifying overlapping clusters of semantically related sentences.
International Journal of Computational Engineering Research(IJCER)ijceronline
The International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
A COMPARISON OF DOCUMENT SIMILARITY ALGORITHMSgerogepatton
Document similarity is an important part of Natural Language Processing and is most commonly used for plagiarism detection and text summarization. Thus, finding the overall most effective document similarity algorithm could have a major positive impact on the field of Natural Language Processing. This report sets out to examine the numerous document similarity algorithms and determine which ones are the most useful. It addresses the question by categorizing them into three types: statistical algorithms, neural networks, and corpus/knowledge-based algorithms. The most effective algorithms in each category are then compared using a series of benchmark datasets and evaluations that test every area in which each algorithm could be used.
This document describes an implementation of a polynomial abstract data type (ADT) using a linked list in Python. It discusses representing polynomials as terms in a linked list, with each node storing a term. Operations like addition, multiplication, evaluation are supported. The implementation uses a tail pointer for efficient appends when adding terms during arithmetic operations. Key methods like constructors, degree, get, evaluate are implemented, with addition shown as an example of iterating over the polynomials and appending new terms to the end of the linked list.
Learning Collaborative Agents with Rule Guidance for Knowledge Graph ReasoningDeren Lei
Walk-based models have shown their advantages in knowledge graph (KG) reasoning by achieving decent performance while providing interpretable decisions. However, the sparse reward signals offered by the KG during traversal are often insufficient to guide a sophisticated walk-based reinforcement learning (RL) model. An alternate approach is to use traditional symbolic methods (e.g., rule induction), which achieve good performance but can be hard to generalize due to the limitation of symbolic representation. In this paper, we propose RuleGuider, which leverages high-quality rules generated by symbolic-based methods to provide reward supervision for walk-based agents. Experiments on benchmark datasets show that RuleGuider improves the performance of walk-based models without losing interpretability.
An Index Based K-Partitions Multiple Pattern Matching AlgorithmIDES Editor
The study of pattern matching is one of the
fundamental applications and emerging area in computational
biology. Searching DNA related data is a common activity for
molecular biologists. In this paper we explore the applicability
of a new pattern matching technique called Index based Kpartition
Multiple Pattern Matching algorithm (IKPMPM), for
DNA sequences. Current approach avoids unnecessary
comparisons in the DNA sequence. Due to this, the number of
comparisons gradually decreases and comparison per character
ratio of the proposed algorithm reduces accordingly when
compared to other existing popular methods. The experimental
results show that there is considerable amount of performance
improvement.
In conventional transportation problem (TP), all the parameters are always certain. But, many of the real life situations in industry or organization, the parameters (supply, demand and cost) of the TP are not precise which are imprecise in nature in different factors like the market condition, variations in rates of diesel, traffic jams, weather in hilly areas, capacity of men and machine, long power cut, labourer’s over time work, unexpected failures in machine, seasonal changesandmanymore. Tocountertheseproblems,dependingonthenatureoftheparameters, theTPisclassifiedintotwocategoriesnamelytype-2andtype-4fuzzytransportationproblems (FTPs) under uncertain environment and formulates the problem and utilizes the trapezoidal fuzzy number (TrFN) to solve the TP. The existing ranking procedure of Liou and Wang (1992)isusedtotransformthetype-2andtype-4FTPsintoacrisponesothattheconventional method may be applied to solve the TP. Moreover, the solution procedure differs from TP to type-2 and type-4 FTPs in allocation step only. Therefore a simple and efficient method denoted by PSK (P. Senthil Kumar) method is proposed to obtain an optimal solution in terms of TrFNs. From this fuzzy solution, the decision maker (DM) can decide the level of acceptance for the transportation cost or profit. Thus, the major applications of fuzzy set theory are widely used in areas such as inventory control, communication network, aggregate planning, employment scheduling, and personnel assignment and so on.
ACL-WMT2013.A Description of Tunable Machine Translation Evaluation Systems i...Lifeng (Aaron) Han
The document describes two machine translation evaluation systems, nLEPOR_baseline and LEPOR_v3.1, that were submitted to the WMT13 Metrics Task. nLEPOR_baseline is an n-gram based metric that considers modified sentence length penalty, position difference penalty, and n-gram precision and recall. LEPOR_v3.1 is an enhanced version that uses a harmonic mean to combine factors and includes part-of-speech information. Evaluation results showed LEPOR_v3.1 had the highest correlation of 0.86 with human judgments for English to other language pairs.
This Presentation is on recommended system on question paper predication using machine learning techniques. We did literature survey and implement using same technique.
The document discusses probabilistic information retrieval and Bayesian networks for modeling document collections and queries. It introduces concepts like conditional probability, Bayes' theorem, and the probability ranking principle. It then describes the binary independence retrieval model and how Bayesian networks can model dependencies between document terms and concepts to improve upon the independence assumption. The use of Bayesian networks allows modeling both documents and queries as networks to estimate the probability that a document satisfies an information need.
The document provides an introduction to Probabilistic Latent Semantic Analysis (PLSA). It discusses how PLSA improves on previous Latent Semantic Analysis methods by incorporating a probabilistic framework. PLSA models documents as mixtures of topics and allows words to have multiple meanings. The parameters of the PLSA model, including the topic distributions and word-topic distributions, are estimated using an expectation-maximization algorithm to find the parameters that best explain the observed word-document co-occurrence data.
A survey of Stemming Algorithms for Information Retrievaliosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document discusses text summarization using machine learning. It begins by defining text summarization as reducing a text to create a summary that retains the most important points. There are two main types: single document summarization and multiple document summarization. Extractive summarization creates summaries by extracting phrases or sentences from the source text, while abstractive summarization expresses ideas using different words. Supervised machine learning approaches use labeled training data to train classifiers to select content, while unsupervised approaches select content based on metrics like term frequency-inverse document frequency. ROUGE is commonly used to automatically evaluate summaries by comparing them to human references. Query-focused multi-document summarization aims to answer a user's information need by summarizing relevant documents
AN INVESTIGATION OF THE SAMPLING-BASED ALIGNMENT METHOD AND ITS CONTRIBUTIONS (ijaia)
This document summarizes an investigation into improving the performance of a sampling-based alignment method for statistical machine translation. It proposes two contributions: 1) A method to enforce alignment of n-grams in distinct translation subtables to increase the number of longer n-grams, and 2) Examining combining phrase translation tables from the sampling method and MGIZA++, finding it slightly outperforms MGIZA++ alone and helps reduce out-of-vocabulary words. The method divides the parallel corpus into "unigramized" source-target n-gram subtables, runs the sampling aligner on each, and merges the subtables' phrase tables.
This document discusses using convolutional neural networks and self-organized maps to visualize knowledge graphs. It presents a compositional model for embedding nodes and relationships in a knowledge graph into a vector space. A self-organized map is used to cluster the embeddings and extract semantic fingerprints. The fingerprints are useful for knowledge discovery and classification tasks. The technique is applied to a subset of the CTD knowledge graph containing compound-gene/protein interactions, and the results are comparable to structural models.
Adaptive relevance feedback in information retrieval (YI-JHEN LIN)
Adaptive relevance feedback aims to optimize the balance between the original query and feedback documents. The paper proposes learning an adaptive feedback coefficient based on query and feedback document characteristics. These include query and feedback document discrimination and divergence between the query and feedback. Logistic regression is used to learn weights mapping query-feedback pairs to coefficients. Experiments show the approach improves retrieval performance compared to fixed coefficients, especially when training and test data are in the same domain.
Fast and Accurate Spelling Correction Using Trie and Damerau-Levenshtein Distance (TELKOMNIKA JOURNAL)
This research aimed to create a fast and accurate spelling correction system able to handle both kinds of spelling errors, non-word and real-word. An existing spelling correction system was analyzed, and modifications were then applied to improve its accuracy and speed. The proposed system was built on the method and intuition of the existing system together with those modifications, yielding several spelling correction systems using different methods. The best result was achieved by the system that combines a bigram model with a Trie and Damerau-Levenshtein distance, reaching a word-level accuracy of 84.62% and an average processing speed of 18.89 ms per sentence.
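The Damerau-Levenshtein distance named above can be sketched as follows. This is the standard restricted (adjacent-transposition) dynamic-programming version, not the code of the system described in the paper:

```python
def damerau_levenshtein(a: str, b: str) -> int:
    """Edit distance counting insertions, deletions, substitutions,
    and transpositions of adjacent characters (restricted variant)."""
    m, n = len(a), len(b)
    # d[i][j] = distance between prefixes a[:i] and b[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]
```

Unlike plain Levenshtein distance, a swap such as "teh" to "the" costs 1 rather than 2, which matches how typing errors actually occur.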
This chapter discusses clustering connections on LinkedIn based on job title to find similarities. It covers standardizing job titles, common similarity metrics like edit distance and Jaccard distance, and clustering algorithms like greedy clustering, hierarchical clustering and k-means clustering. It also discusses fetching extended profile information using OAuth authorization to access private LinkedIn data without credentials. The goal is to answer questions about connections by clustering them based on attributes like job title, company or location.
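The Jaccard distance mentioned above can be sketched over lower-cased token sets. This is a minimal illustration; the whitespace tokenization and lower-casing are assumptions, not the chapter's exact recipe:

```python
def jaccard_distance(title_a: str, title_b: str) -> float:
    """1 - |A ∩ B| / |A ∪ B| over lower-cased word sets of two job titles."""
    sa = set(title_a.lower().split())
    sb = set(title_b.lower().split())
    if not sa and not sb:
        return 0.0  # two empty titles are treated as identical
    return 1.0 - len(sa & sb) / len(sa | sb)
```

Titles sharing most words score close to 0 and cluster together; for example, "Senior Software Engineer" vs "Software Engineer" shares 2 of 3 distinct words, giving a distance of 1/3.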
Ranking nodes in growing networks: when PageRank fails (Pietro De Nicolao)
PageRank is a popular algorithm for ranking nodes in networks, but it can fail in growing networks with temporal properties. The document presents a growing network model called the Relevance Model (RM) that incorporates preferential attachment, fitness, and temporal decay of relevance and activity. Numerical simulations of the RM show that PageRank is biased towards old nodes when relevance decays slowly, and recent nodes when relevance decays quickly, failing to provide an unbiased ranking. Analysis of real citation networks finds PageRank strongly biased towards old papers compared to indegree ranking. The findings suggest PageRank is inappropriate for networks where temporal patterns affect linking behavior.
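For reference, the plain PageRank baseline that the Relevance Model is compared against can be sketched by power iteration. The adjacency-dict representation, damping factor 0.85, and uniform handling of dangling nodes are conventional assumptions, not details from this document:

```python
def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank on {node: [out-neighbors]};
    every neighbor is assumed to appear as a key of adj."""
    nodes = list(adj)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        # Teleportation term distributed uniformly.
        new = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            out = adj[u]
            if out:
                share = damping * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:
                # Dangling node: spread its rank over all nodes.
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank
```

Note that this static computation has no notion of node age, which is exactly the temporal blindness the document's analysis exploits.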
The Russian Doll Search algorithm improves upon the Depth First Branch and Bound algorithm for solving constraint optimization problems. It does this by performing n successive searches on nested subproblems, where n is the number of variables in the problem. Each search solves a subproblem involving a subset of the variables and records the optimal solution. This recorded information is then used to improve the lower bound estimate for partial assignments during subsequent searches on larger subproblems, allowing earlier pruning of search branches. On benchmark problems, this approach yields better results than a standard Depth First Branch and Bound.
The document summarizes string matching algorithms. It discusses the naive string matching algorithm which compares characters at each shift to find matches. It also describes the Rabin-Karp algorithm which uses hashing to match the hash value of the pattern to the hash value of substrings in the text. If the hash values match, it then checks for an exact character match. The Rabin-Karp algorithm has better average-case performance than the naive algorithm but the same worst-case performance of O((n-m+1)m) time.
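The Rabin-Karp scheme summarized above can be sketched as follows. The base and modulus are illustrative choices; the character-by-character check performed on every hash match is what yields the O((n-m+1)m) worst case:

```python
def rabin_karp(text: str, pattern: str, base: int = 256, mod: int = 101):
    """Return all indices where pattern occurs in text, using a
    rolling hash to skip most non-matching shifts."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    high = pow(base, m - 1, mod)  # weight of the leftmost window char
    ph = th = 0
    for i in range(m):            # initial hashes of pattern and first window
        ph = (ph * base + ord(pattern[i])) % mod
        th = (th * base + ord(text[i])) % mod
    hits = []
    for s in range(n - m + 1):
        # On a hash match, verify character by character (collisions possible).
        if ph == th and text[s:s + m] == pattern:
            hits.append(s)
        if s < n - m:
            # Roll the hash: drop text[s], append text[s + m].
            th = ((th - ord(text[s]) * high) * base + ord(text[s + m])) % mod
    return hits
```

In the average case the hash filter rejects most shifts in O(1), giving the better expected performance the summary mentions.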
An Application of Pattern matching for Motif Identification (CSCJournals)
Pattern matching is one of the central and most widely studied problems in theoretical computer science. Solutions to it play an important role in many areas of science and information processing, and its performance has a great impact on applications including database querying, text processing, and DNA sequence analysis. In general, pattern matching algorithms are characterized by the shift value, the direction of the sliding window, and the order in which comparisons are made; performance can be enhanced considerably by a larger shift value and by fewer comparisons needed to compute it. In this paper we propose an algorithm for finding motifs in DNA sequences. The algorithm preprocesses the pattern string (motif) by considering the four consecutive nucleotides of the DNA that immediately follow the aligned pattern window in the event of a mismatch between the pattern (motif) and the DNA sequence. Theoretically, we found that the proposed algorithm works efficiently for motif identification.
Discovering Novel Information with Sentence-Level Clustering From Multi-documents (irjes)
The document presents a novel fuzzy clustering algorithm called FRECCA that clusters sentences from multi-documents to discover new information. FRECCA uses fuzzy relational eigenvector centrality to calculate page rank scores for sentences within clusters, treating the scores as likelihoods. It uses expectation maximization to optimize cluster membership values and mixing coefficients without a parameterized likelihood function. An evaluation shows FRECCA achieves superior performance to other clustering algorithms on a quotations dataset, identifying overlapping clusters of semantically related sentences.
International Journal of Computational Engineering Research (IJCER) (ijceronline)
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
A COMPARISON OF DOCUMENT SIMILARITY ALGORITHMS (gerogepatton)
Document similarity is an important part of Natural Language Processing and is most commonly used for plagiarism detection and text summarization. Finding the overall most effective document similarity algorithm could therefore have a major positive impact on the field. This report examines numerous document similarity algorithms and determines which are the most useful, categorizing them into three types: statistical algorithms, neural networks, and corpus/knowledge-based algorithms. The most effective algorithms in each category are then compared using a series of benchmark datasets and evaluations covering the areas in which each algorithm could be used.
The document discusses various algorithms for pattern searching in a text, including:
1. Naive pattern searching, which slides the pattern over the text and checks for a match at each shift, taking O(nm) time in the worst case.
2. The KMP algorithm, which uses a preprocessing step to construct a longest-proper-prefix-suffix (lps) array so characters are never re-matched, improving the worst case to O(n).
3. The Rabin-Karp algorithm, which computes hashes of the pattern and of text substrings to quickly eliminate non-matching candidates before character-by-character matching.
4. A finite-automaton-based algorithm, which preprocesses the pattern to construct a state machine, allowing searches in O(n) time.
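The lps preprocessing of item 2 and the resulting O(n) search can be sketched as follows (a minimal version for illustration, not taken from the document):

```python
def build_lps(pattern: str):
    """lps[i] = length of the longest proper prefix of pattern[:i+1]
    that is also a suffix of it; lets KMP avoid re-matching characters."""
    lps = [0] * len(pattern)
    k = 0  # length of the currently matched prefix
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = lps[k - 1]      # fall back to the next shorter border
        if pattern[i] == pattern[k]:
            k += 1
        lps[i] = k
    return lps

def kmp_search(text: str, pattern: str):
    """Return all match positions in O(n + m) time."""
    if not pattern:
        return []
    lps, k, hits = build_lps(pattern), 0, []
    for i, c in enumerate(text):
        while k > 0 and c != pattern[k]:
            k = lps[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = lps[k - 1]      # continue search for overlapping matches
    return hits
```

Because the text index `i` never moves backwards, the search itself is linear in the text length.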
Proposed Method for String Transformation using Probabilistic Approach (Editor IJMTER)
This document discusses a probabilistic approach to string transformation and describes four modules: 1) approximate string search, 2) candidate selection for spelling error correction, 3) candidate generation for spelling error correction, and 4) query reformulation. It presents a new statistical learning approach to string transformation and describes how this approach can be applied to spelling error correction of queries and query reformulation for web search.
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering science and technology, including new teaching methods, assessment, validation, and the impact of new technologies, and it continues to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in the journal can be accessed online.
This document presents an algorithm for semantic-based similarity measure (SBSM) to improve text clustering. The algorithm assigns semantic weights to documents terms and phrases based on their use as arguments in proposition bank notation. It calculates similarity between a document and query based on matching weighted terms and phrases. Experimental results on a dataset show the SBSM using proposition bank notation improves performance over traditional measures like cosine and Jaccard similarity. The algorithm captures semantic information within documents for more accurate similarity assessment and clustering.
A SURVEY ON SIMILARITY MEASURES IN TEXT MINING (mlaij)
The volume of text resources in digital libraries and on the internet has been increasing, and organizing these text documents has become a practical need. Clustering is used to automatically organize a great number of objects into a small number of coherent groups, and such documents are widely used for information retrieval and natural language processing tasks. Different clustering algorithms require a metric quantifying how dissimilar two given documents are; this difference is often measured by a similarity measure such as Euclidean distance or cosine similarity. The similarity measure process in text mining can be used to identify the suitable clustering algorithm for a specific problem. This survey discusses existing work on text similarity, partitioning it into three significant approaches: string-based, knowledge-based, and corpus-based similarities.
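The cosine similarity mentioned above can be sketched over raw term-frequency vectors; this is a minimal illustration, with whitespace tokenization and lower-casing as assumptions rather than any particular survey's setup:

```python
import math
from collections import Counter

def cosine_similarity(doc_a: str, doc_b: str) -> float:
    """Cosine of the angle between raw term-frequency vectors."""
    va = Counter(doc_a.lower().split())
    vb = Counter(doc_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Identical documents score 1.0 and documents sharing no terms score 0.0, which is why cosine similarity is a common default metric for the clustering algorithms the survey compares.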
A Comparison of Serial and Parallel Substring Matching Algorithms (zexin wan)
This document summarizes a study comparing serial and parallel substring matching algorithms. It describes implementing a naive serial algorithm and dynamic programming algorithm in parallel using RPI's supercomputer. Testing on randomly generated and real-world text data showed the parallel naive algorithm outperformed the parallel dynamic algorithm for smaller files, while the dynamic algorithm scaled better to larger files. Analysis found the parallel algorithms grew exponentially with input size due to blocking MPI calls, though the dynamic implementation had memory issues for large files due to its 2D array structure. On average, the parallel implementations provided a 25x speedup over the fastest serial algorithm.
Seeds Affinity Propagation Based on Text Clustering (IJRES Journal)
The objective is to find, among all partitions of the data set, the best partitioning according to some quality measure. Affinity propagation is a low-error, high-speed, flexible, and remarkably simple clustering algorithm that may be used in forming teams of participants for business simulations and experiential exercises, and in organizing participants' preferences for the parameters of simulations. This paper proposes an efficient affinity propagation algorithm that guarantees the same clustering result as the original algorithm after convergence. The heart of the approach is (1) to prune unnecessary message exchanges in the iterations and (2) to compute the convergence values of pruned messages after the iterations to determine clusters.
The International Journal of Engineering and Science (The IJES) (theijes)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, and educators to publish their original research results, exchange new ideas, and disseminate information on innovative designs, engineering experiences, and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Sentence Validation by Statistical Language Modeling and Semantic Relations (Editor IJCATR)
This paper deals with sentence validation, a sub-field of Natural Language Processing. It finds applications in many areas, since it involves understanding and manipulating natural language (English in most cases); the effort is to understand and extract the important information delivered to the computer and so make efficient human-computer interaction possible. Sentence validation is approached in two ways: a statistical approach and a semantic approach. In both, a database is trained with the help of sample sentences from the Brown corpus of NLTK. The statistical approach uses a trigram technique based on the N-gram Markov model with modified Kneser-Ney smoothing to handle zero probabilities. As a second statistical test, tagging and chunking of sentences containing named entities is carried out using pre-defined grammar rules and semantic tree parsing, and the chunked sentences are fed into another database on which testing is performed. Finally, semantic analysis is carried out by extracting entity-relation pairs, which are then tested. After the results of all three approaches are compiled, graphs are plotted and the variations are studied; a comparison of the three models is thus calculated and formulated, with graphs of their probabilities clearly demarcating them and throwing light on the findings of the project.
EXPERT OPINION AND COHERENCE BASED TOPIC MODELING (ijnlc)
In this paper, we propose a novel algorithm that rearranges the topic assignments produced by topic modeling algorithms such as NMF and LDA. The effectiveness of the algorithm is measured by how closely the results conform to expert opinion, represented by a data structure we define called a TDAG, which encodes the probability that a pair of highly correlated words appear together. To ensure the internal structure is not changed too much by the rearrangement, coherence, a well-known metric for measuring the effectiveness of topic modeling, is used to control the balance. We develop two ways to systematically obtain the expert opinion from data, depending on whether the data has relevant expert writing or not. The final algorithm, which takes both coherence and expert opinion into account, is presented, and we compare the amount of adjustment needed for each topic modeling method, NMF and LDA.
IMAGE REGISTRATION USING ADVANCED TOPOLOGY PRESERVING RELAXATION LABELING (cscpconf)
This paper presents a relaxation labeling technique with newly defined compatibility measures for solving a general non-rigid point matching problem. A point matching method using relaxation labeling exists in the literature, but its compatibility coefficients always take a binary value, zero or one, depending on whether a point and a neighboring point have corresponding points. Our approach generalizes this relaxation labeling approach: the compatibility coefficients take n discrete values that measure the correlation between edges, computed using a log-polar diagram. Through simulations, we show that this topology-preserving relaxation method improves matching performance significantly compared to other state-of-the-art algorithms such as shape context, thin plate spline-robust point matching, robust point matching by preserving local neighborhood structures, and coherent point drift.
Similar to Algorithm of Dynamic Programming for Paper-Reviewer Assignment Problem (20)
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...IRJET Journal
1) The document discusses the Sungal Tunnel project in Jammu and Kashmir, India, which is being constructed using the New Austrian Tunneling Method (NATM).
2) NATM involves continuous monitoring during construction to adapt to changing ground conditions, and makes extensive use of shotcrete for temporary tunnel support.
3) The methodology section outlines the systematic geotechnical design process for tunnels according to Austrian guidelines, and describes the various steps of NATM tunnel construction including initial and secondary tunnel support.
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTUREIRJET Journal
This study examines the effect of response reduction factors (R factors) on reinforced concrete (RC) framed structures through nonlinear dynamic analysis. Three RC frame models with varying heights (4, 8, and 12 stories) were analyzed in ETABS software under different R factors ranging from 1 to 5. The results showed that displacement increased as the R factor decreased, indicating less linear behavior for lower R factors. Drift also decreased proportionally with increasing R factors from 1 to 5. Shear forces in the frames decreased with higher R factors. In general, R factors of 3 to 5 produced more satisfactory performance with less displacement and drift. The displacement variations between different building heights were consistent at different R factors. This study evaluated how R factors influence
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...IRJET Journal
This study compares the use of Stark Steel and TMT Steel as reinforcement materials in a two-way reinforced concrete slab. Mechanical testing is conducted to determine the tensile strength, yield strength, and other properties of each material. A two-way slab design adhering to codes and standards is executed with both materials. The performance is analyzed in terms of deflection, stability under loads, and displacement. Cost analyses accounting for material, durability, maintenance, and life cycle costs are also conducted. The findings provide insights into the economic and structural implications of each material for reinforcement selection and recommendations on the most suitable material based on the analysis.
Effect of Camber and Angles of Attack on Airfoil CharacteristicsIRJET Journal
This document discusses a study analyzing the effect of camber, position of camber, and angle of attack on the aerodynamic characteristics of airfoils. Sixteen modified asymmetric NACA airfoils were analyzed using computational fluid dynamics (CFD) by varying the camber, camber position, and angle of attack. The results showed the relationship between these parameters and the lift coefficient, drag coefficient, and lift to drag ratio. This provides insight into how changes in airfoil geometry impact aerodynamic performance.
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...IRJET Journal
This document reviews the progress and challenges of aluminum-based metal matrix composites (MMCs), focusing on their fabrication processes and applications. It discusses how various aluminum MMCs have been developed using reinforcements like borides, carbides, oxides, and nitrides to improve mechanical and wear properties. These composites have gained prominence for their lightweight, high-strength and corrosion resistance properties. The document also examines recent advancements in fabrication techniques for aluminum MMCs and their growing applications in industries such as aerospace and automotive. However, it notes that challenges remain around issues like improper mixing of reinforcements and reducing reinforcement agglomeration.
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...IRJET Journal
This document discusses research on using graph neural networks (GNNs) for dynamic optimization of public transportation networks in real-time. GNNs represent transit networks as graphs with nodes as stops and edges as connections. The GNN model aims to optimize networks using real-time data on vehicle locations, arrival times, and passenger loads. This helps increase mobility, decrease traffic, and improve efficiency. The system continuously trains and infers to adapt to changing transit conditions, providing decision support tools. While research has focused on performance, more work is needed on security, socio-economic impacts, contextual generalization of models, continuous learning approaches, and effective real-time visualization.
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...IRJET Journal
This document summarizes a research project that aims to compare the structural performance of conventional slab and grid slab systems in multi-story buildings using ETABS software. The study will analyze both symmetric and asymmetric building models under various loading conditions. Parameters like deflections, moments, shears, and stresses will be examined to evaluate the structural effectiveness of each slab type. The results will provide insights into the comparative behavior of conventional and grid slabs to help engineers and architects select appropriate slab systems based on building layouts and design requirements.
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...IRJET Journal
This document summarizes and reviews a research paper on the seismic response of reinforced concrete (RC) structures with plan and vertical irregularities, with and without infill walls. It discusses how infill walls can improve or reduce the seismic performance of RC buildings, depending on factors like wall layout, height distribution, connection to the frame, and relative stiffness of walls and frames. The reviewed research paper analyzes the behavior of infill walls, effects of vertical irregularities, and seismic performance of high-rise structures under linear static and dynamic analysis. It studies response characteristics like story drift, deflection and shear. The document also provides literature on similar research investigating the effects of infill walls, soft stories, plan irregularities, and different
This document provides a review of machine learning techniques used in Advanced Driver Assistance Systems (ADAS). It begins with an abstract that summarizes key applications of machine learning in ADAS, including object detection, recognition, and decision-making. The introduction discusses the integration of machine learning in ADAS and how it is transforming vehicle safety. The literature review then examines several research papers on topics like lightweight deep learning models for object detection and lane detection models using image processing. It concludes by discussing challenges and opportunities in the field, such as improving algorithm robustness and adaptability.
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...IRJET Journal
The document analyzes temperature and precipitation trends in Asosa District, Benishangul Gumuz Region, Ethiopia from 1993 to 2022 based on data from the local meteorological station. The results show:
1) The average maximum and minimum annual temperatures have generally decreased over time, with maximum temperatures decreasing by a factor of -0.0341 and minimum by -0.0152.
2) Mann-Kendall tests found the decreasing temperature trends to be statistically significant for annual maximum temperatures but not for annual minimum temperatures.
3) Annual precipitation in Asosa District showed a statistically significant increasing trend.
The conclusions recommend development planners account for rising summer precipitation and declining temperatures in
P.E.B. Framed Structure Design and Analysis Using STAAD ProIRJET Journal
This document discusses the design and analysis of pre-engineered building (PEB) framed structures using STAAD Pro software. It provides an overview of PEBs, including that they are designed off-site with building trusses and beams produced in a factory. STAAD Pro is identified as a key tool for modeling, analyzing, and designing PEBs to ensure their performance and safety under various load scenarios. The document outlines modeling structural parts in STAAD Pro, evaluating structural reactions, assigning loads, and following international design codes and standards. In summary, STAAD Pro is used to design and analyze PEB framed structures to ensure safety and code compliance.
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...IRJET Journal
This document provides a review of research on innovative fiber integration methods for reinforcing concrete structures. It discusses studies that have explored using carbon fiber reinforced polymer (CFRP) composites with recycled plastic aggregates to develop more sustainable strengthening techniques. It also examines using ultra-high performance fiber reinforced concrete to improve shear strength in beams. Additional topics covered include the dynamic responses of FRP-strengthened beams under static and impact loads, and the performance of preloaded CFRP-strengthened fiber reinforced concrete beams. The review highlights the potential of fiber composites to enable more sustainable and resilient construction practices.
Survey Paper on Cloud-Based Secured Healthcare SystemIRJET Journal
This document summarizes a survey on securing patient healthcare data in cloud-based systems. It discusses using technologies like facial recognition, smart cards, and cloud computing combined with strong encryption to securely store patient data. The survey found that healthcare professionals believe digitizing patient records and storing them in a centralized cloud system would improve access during emergencies and enable more efficient care compared to paper-based systems. However, ensuring privacy and security of patient data is paramount as healthcare incorporates these digital technologies.
Review on studies and research on widening of existing concrete bridgesIRJET Journal
This document summarizes several studies that have been conducted on widening existing concrete bridges. It describes a study from China that examined load distribution factors for a bridge widened with composite steel-concrete girders. It also outlines challenges and solutions for widening a bridge in the UAE, including replacing bearings and stitching the new and existing structures. Additionally, it discusses two bridge widening projects in New Zealand that involved adding precast beams and stitching to connect structures. Finally, safety measures and challenges for strengthening a historic bridge in Switzerland under live traffic are presented.
React based fullstack edtech web applicationIRJET Journal
The document describes the architecture of an educational technology web application built using the MERN stack. It discusses the frontend developed with ReactJS, backend with NodeJS and ExpressJS, and MongoDB database. The frontend provides dynamic user interfaces, while the backend offers APIs for authentication, course management, and other functions. MongoDB enables flexible data storage. The architecture aims to provide a scalable, responsive platform for online learning.
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...IRJET Journal
This paper proposes integrating Internet of Things (IoT) and blockchain technologies to help implement objectives of India's National Education Policy (NEP) in the education sector. The paper discusses how blockchain could be used for secure student data management, credential verification, and decentralized learning platforms. IoT devices could create smart classrooms, automate attendance tracking, and enable real-time monitoring. Blockchain would ensure integrity of exam processes and resource allocation, while smart contracts automate agreements. The paper argues this integration has potential to revolutionize education by making it more secure, transparent and efficient, in alignment with NEP goals. However, challenges like infrastructure needs, data privacy, and collaborative efforts are also discussed.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE - IRJET Journal
This document provides a review of research on the performance of coconut fibre reinforced concrete. It summarizes several studies that tested different volume fractions and lengths of coconut fibres in concrete mixtures with varying compressive strengths. The studies found that coconut fibre improved properties like tensile strength, toughness, crack resistance, and spalling resistance compared to plain concrete. Volume fractions of 2-5% and fibre lengths of 20-50mm produced the best results. The document concludes that using a 4-5% volume fraction of coconut fibres 30-40mm in length with M30-M60 grade concrete would provide benefits based on previous research.
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi... - IRJET Journal
The document discusses optimizing business management processes through automation using Microsoft Power Automate and artificial intelligence. It provides an overview of Power Automate's key components and features for automating workflows across various apps and services. The document then presents several scenarios applying automation solutions to common business processes like data entry, monitoring, HR, finance, customer support, and more. It estimates the potential time and cost savings from implementing automation for each scenario. Finally, the conclusion emphasizes the transformative impact of AI and automation tools on business processes and the need for ongoing optimization.
Multistoried and Multi Bay Steel Building Frame by using Seismic Design - IRJET Journal
The document describes the seismic design of a G+5 steel building frame located in Roorkee, India according to the Indian codes IS 1893-2002 and IS 800. The frame was analyzed using the equivalent static load method and the response spectrum method, and its responses in terms of displacements and shear forces were compared. Based on the analysis, the frame was designed as a seismic-resistant steel structure according to IS 800:2007. The software STAAD Pro was used for the analysis and design.
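In the equivalent static load method of IS 1893 (Part 1):2002, the design reduces to a horizontal seismic coefficient and a base shear. A minimal sketch follows; the zone factor, importance factor, response reduction factor, spectral coefficient, and seismic weight are illustrative assumptions, not values from the paper.

```python
def design_base_shear(Z, I, R, Sa_g, W):
    """Base shear per IS 1893 (Part 1):2002, equivalent static method:
    Ah = (Z/2) * (I/R) * (Sa/g);  Vb = Ah * W."""
    Ah = (Z / 2.0) * (I / R) * Sa_g
    return Ah * W

# Illustrative values (assumed): seismic Zone IV (Z = 0.24),
# importance factor I = 1.0, steel moment frame R = 5,
# spectral acceleration coefficient Sa/g = 2.5, seismic weight W = 12000 kN.
Vb = design_base_shear(Z=0.24, I=1.0, R=5.0, Sa_g=2.5, W=12000.0)
# Vb = 720.0 kN for these assumed inputs
```

The base shear is then distributed over the storeys, typically in proportion to W_i * h_i^2, before member design per IS 800:2007.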
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr... - IRJET Journal
This research paper explores using plastic waste as a sustainable and cost-effective construction material. The study focuses on manufacturing pavers and bricks using recycled plastic and partially replacing concrete with plastic alternatives. Initial results found that pavers and bricks made from recycled plastic demonstrate comparable strength and durability to traditional materials while providing environmental and cost benefits. Additionally, preliminary research indicates incorporating plastic waste as a partial concrete replacement significantly reduces construction costs without compromising structural integrity. The outcomes suggest adopting plastic waste in construction can address plastic pollution while optimizing costs, promoting more sustainable building practices.
An improved modulation technique suitable for a three level flying capacitor ... - IJECEIAES
This research paper introduces an innovative modulation technique for controlling a three-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process compared with conventional methods. By combining sinusoidal pulse width modulation (SPWM) with a high-frequency square-wave pulse, the technique attains energy equilibrium across the coupling capacitor. The modulation scheme incorporates a simplified switching pattern and a reduced count of voltage references, thereby simplifying the control algorithm and paving the way for more straightforward, efficient control of multilevel inverters and their wider adoption in modern power electronic systems.
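As a generic illustration of carrier-based multilevel modulation, a three-level switching function can be sketched with phase-disposition SPWM: two stacked triangular carriers compared against one sinusoidal reference. This is a textbook baseline, not the paper's proposed square-wave-assisted scheme.

```python
import math

def three_level_spwm(m, f_ref, f_carrier, t):
    """Phase-disposition SPWM for one leg of a three-level inverter.
    Two stacked triangular carriers span [0, 1] and [-1, 0]; the reference
    m*sin(2*pi*f_ref*t) is compared against both to select one of three
    output levels: +1 (=+Vdc/2), 0 (midpoint), or -1 (=-Vdc/2)."""

    def triangle(phase):  # triangle wave in [0, 1]
        frac = phase - math.floor(phase)
        return 2 * frac if frac < 0.5 else 2 * (1 - frac)

    ref = m * math.sin(2 * math.pi * f_ref * t)
    upper = triangle(f_carrier * t)  # carrier occupying [0, 1]
    lower = upper - 1.0              # carrier occupying [-1, 0]
    if ref > upper:
        return 1    # upper switch pair conducts
    if ref < lower:
        return -1   # lower switch pair conducts
    return 0        # output clamped via the flying-capacitor path
```

Sampling one fundamental period of a 50 Hz reference against a 2 kHz carrier pair yields the familiar three-level stepped waveform; the flying-capacitor balancing that the paper addresses determines which redundant switch states realize the 0 level.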
Software Engineering and Project Management - Introduction, Modeling Concepts... - Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is object orientation? What is OO development? OO themes; evidence for the usefulness of OO development; OO modeling history. Modeling as a design technique: modeling, abstraction, the three models. Class modeling: object and class concepts, link and association concepts, generalization and inheritance, a sample class model, navigation of class models, and UML diagrams.
Building the Analysis Models: requirement analysis, analysis model approaches, data modeling concepts, object-oriented analysis, scenario-based modeling, flow-oriented modeling, class-based modeling, creating a behavioral model.
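The class-modeling concepts listed above (generalization/inheritance, links and associations) can be illustrated with a small, hypothetical class model; the domain and class names below are invented for illustration.

```python
class Account:
    """Generalization: the common features shared by all account types."""

    def __init__(self, number: str, balance: float = 0.0):
        self.number = number
        self.balance = balance

    def deposit(self, amount: float) -> None:
        self.balance += amount

class SavingsAccount(Account):
    """Specialization: inherits Account and adds interest behaviour."""

    def __init__(self, number: str, rate: float):
        super().__init__(number)
        self.rate = rate

    def add_interest(self) -> None:
        self.balance += self.balance * self.rate

class Customer:
    """Association: a Customer is linked to many Account objects;
    the list entries play the role of the links in a class model."""

    def __init__(self, name: str):
        self.name = name
        self.accounts: list[Account] = []

    def open(self, account: Account) -> None:
        self.accounts.append(account)
```

In UML terms, Customer--Account is a one-to-many association, and SavingsAccount -> Account is the generalization arrow; navigating `customer.accounts` is an example of class-model navigation.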
Null Bangalore | Pentesters Approach to AWS IAM - Divyanshu
# Abstract:
- Learn real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. We begin with a brief overview of IAM, then walk through typical misconfigurations and their potential exploits to reinforce IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
# Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
- Allows a user to pass a specific IAM role to an AWS service (e.g. EC2), typically used for service access delegation; exploiting this PassRole misconfiguration grants unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation: create a role with administrative privileges and allow a user to assume it.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole and AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
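The least-privilege and PassRole scenarios above can be sketched as IAM policy documents. This is illustrative only: the bucket name is a placeholder, and the talk's lab steps on killercoda may differ in detail.

```python
import json

def least_privilege_s3_policy(bucket: str) -> dict:
    """Identity policy granting only object read/write on one bucket:
    no bucket deletion, no access to any other bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }

def overly_broad_passrole_policy() -> dict:
    """The misconfiguration from the PassRole scenario: a wildcard
    Resource lets the user pass ANY role, including an admin role,
    to EC2 at launch time."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["iam:PassRole", "ec2:RunInstances"],
            "Resource": "*",  # should be scoped to one specific role ARN
        }],
    }

print(json.dumps(least_privilege_s3_policy("demo-bucket"), indent=2))
```

A policy like the first can be attached with `aws iam put-user-policy --user-name <user> --policy-name <name> --policy-document file://policy.json`; the second illustrates why `iam:PassRole` must always be scoped to specific role ARNs.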
artificial intelligence and data science contents.pptx - GauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
Design and optimization of ion propulsion drone - bjmsejournal
Electric propulsion technology has been widely used in many kinds of vehicles in recent years, and aircraft are no exception. Conventional UAVs are electrically propelled but tend to produce a significant amount of noise and vibration; ion propulsion technology for drones is a potential solution to this problem, and it has been proven feasible in the earth's atmosphere. The study presented in this article covers the design of EHD (electrohydrodynamic) thrusters and the power supply for ion propulsion drones, along with performance optimization of the high-voltage power supply for endurance in the earth's atmosphere.
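A common first-order model for EHD thrust relates thrust to corona current, electrode gap, and ion mobility. This is the standard one-dimensional estimate, not design figures from this article; the numbers in the example are illustrative.

```python
def ehd_thrust(current_A, gap_m, mobility=2e-4):
    """One-dimensional EHD thrust estimate: T = I * d / mu,
    where I is the corona current, d the electrode gap, and mu the
    ion mobility in air (roughly 2e-4 m^2/(V*s))."""
    return current_A * gap_m / mobility

def thrust_to_power(gap_m, voltage_V, mobility=2e-4):
    """Thrust-to-power ratio T/P = d / (mu * V): larger gaps and lower
    voltages improve efficiency, at the cost of lower thrust density."""
    return gap_m / (mobility * voltage_V)

# Illustrative point: 0.5 mA corona current across a 5 cm gap.
T = ehd_thrust(5e-4, 0.05)        # 0.125 N
eta = thrust_to_power(0.05, 20e3)  # 0.0125 N/W at 20 kV
```

The T/P expression is why endurance-oriented designs, like the power-supply optimization discussed above, push toward the highest gap and lowest voltage that still sustain a stable corona discharge.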
Batteries: Introduction – types of batteries – discharging and charging of a battery – characteristics of a battery – battery rating – various tests on batteries – primary battery: silver button cell – secondary battery: Ni-Cd battery – modern battery: lithium-ion battery – maintenance of batteries – choice of batteries for electric vehicle applications.
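The battery-rating concepts listed above (capacity, C-rate, stored energy) can be sketched numerically; the cell values in the example are typical illustrative figures, not data from this syllabus.

```python
def c_rate_current(capacity_Ah, c_rate):
    """Discharge current implied by a C-rate: a 2.5 Ah cell discharged
    at 0.5C delivers 1.25 A and lasts roughly 2 hours."""
    return capacity_Ah * c_rate

def cell_energy_Wh(nominal_V, capacity_Ah):
    """Nominal stored energy of a cell: E = V * Ah (watt-hours)."""
    return nominal_V * capacity_Ah

# Illustrative lithium-ion cell: 3.7 V nominal, 2.5 Ah.
I_half_C = c_rate_current(2.5, 0.5)   # 1.25 A
E = cell_energy_Wh(3.7, 2.5)          # 9.25 Wh
```

Scaling these per-cell figures by series and parallel counts gives the pack voltage and energy used when sizing batteries for electric vehicle applications.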
Fuel Cells: Introduction – importance and classification of fuel cells – description, principle, components, and applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell, and direct methanol fuel cells.
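For the H2-O2 fuel cell named above, the ideal cell voltage and the thermodynamic efficiency limit follow from standard data. The constants below are textbook values for the reaction H2 + 1/2 O2 -> H2O(l) at 25 C, not figures from this syllabus.

```python
F = 96485.0    # Faraday constant, C/mol of electrons
dG = 237.1e3   # Gibbs free energy released per mol H2O(l), J/mol
dH = 285.8e3   # enthalpy released (higher heating value), J/mol

def reversible_cell_voltage(dG=dG, n=2):
    """Ideal (open-circuit) voltage E = dG / (n * F), with n = 2
    electrons transferred per H2 molecule: about 1.23 V."""
    return dG / (n * F)

def max_thermal_efficiency(dG=dG, dH=dH):
    """Thermodynamic efficiency limit dG / dH: about 83% at 25 C,
    well above the Carnot limit of comparable heat engines."""
    return dG / dH
```

Real stacks operate at 0.6 to 0.8 V per cell because of activation, ohmic, and concentration losses, which is why the course's "description, principle, components" topics matter in practice.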