This document presents a proposed system for detecting plagiarism in student assignments submitted online. The system would use data mining algorithms and natural language processing to compare submitted assignments against each other and identify plagiarized content, analyzing assignments at both the syntactic and semantic levels. The proposed system is intended to detect plagiarism more efficiently and accurately than teachers manually reviewing all submissions. The document describes the system's workflow, including preprocessing of assignments, text analysis, similarity measurement, and the algorithms that would be used, such as Rabin-Karp, KMP, and SCAM.
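The string-matching step can be illustrated with Rabin-Karp, one of the algorithms the summary names. The sketch below is a minimal, generic Python illustration of the rolling-hash idea, not the proposed system's implementation; the base and modulus values are arbitrary choices.

```python
def rabin_karp(text, pattern, base=256, mod=101):
    """Find all start indices of `pattern` in `text` using a rolling hash."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    high = pow(base, m - 1, mod)           # weight of the window's leading character
    p_hash = t_hash = 0
    for i in range(m):                     # initial hashes: pattern and first window
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    matches = []
    for i in range(n - m + 1):
        # equal hashes are only a candidate match; verify to rule out collisions
        if p_hash == t_hash and text[i:i + m] == pattern:
            matches.append(i)
        if i < n - m:                      # slide the window one character right
            t_hash = ((t_hash - ord(text[i]) * high) * base + ord(text[i + m])) % mod
    return matches
```

For plagiarism detection the same rolling hash is typically applied to many fixed-length substrings (fingerprints) of each assignment rather than a single pattern, so that overlapping fingerprints between two submissions can be counted.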
Predicting cyber bullying on Twitter using machine learning - MirXahid1
The document discusses a project that aims to predict cyberbullying on Twitter using machine learning. The objectives are to use natural language processing techniques like sentiment analysis and part-of-speech tagging on tweets to extract features and train a machine learning model using these features to predict if a tweet contains cyberbullying or not. The proposed system collects tweets, converts emojis and audio to text, preprocesses the data by removing stop words and punctuation, and trains a recurrent neural network model for prediction. The current status is that the team has collected datasets and is working with them. Hardware and software requirements for the project are also outlined.
The Plagiarism Detection Systems for Higher Education - A Case Study in Saudi... - ijseajournal
Plagiarism, cheating, and other types of academic misconduct are critical issues in higher education. In this study, we conducted two questionnaires, one for Saudi universities and another for Saudi students at different Saudi universities, to investigate their beliefs and perceptions about plagiarism tools. The first questionnaire investigated the degree to which Saudi universities use plagiarism tools. Four universities responded to our questionnaire (KSU, Immamu, PSAU, and Shaqra University). The second questionnaire investigated user perceptions of the plagiarism tools at their universities. Forty students responded to this questionnaire, and each student answered 20 questions. Part of these questions measures students' confidence in referencing; another part measures overall confidence in the system and evaluates the students' experience. The responses indicate that the respondents believe some of the questions reflect their own experiences with plagiarism during their educational lifetime.
Smart detection of offensive words in social media using the soundex algorith... - IJECEIAES
Offensive posts on social media that are inappropriate for a given age, level of maturity, or impression often reach underage rather than adult participants. The growth in the number of masked offensive words on social media is an ethically challenging problem, so there has been growing interest in developing methods that can automatically detect posts containing such words. This study aimed to develop a method that detects masked offensive words, in which partial alteration of a word may trick conventional monitoring systems when it is posted on social media. The proposed method proceeds in a series of phases: a pre-processing phase, which includes filtering, tokenization, and stemming; an offensive-word extraction phase, which relies on the soundex algorithm and a permuterm index; and a post-processing phase that classifies users' posts in order to highlight offensive content. The method thus detects masked offensive words in written text, preventing certain types of offensive words from being published. Evaluation results indicate a 99% accuracy in detecting offensive words.
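The soundex phase groups words that sound alike under a shared code, which is what lets a partially altered ("masked") word collapse onto the same code as its unmasked form. Below is a minimal Python sketch of the classic Soundex coding — an illustration of the standard algorithm, not necessarily the paper's exact variant.

```python
def soundex(word):
    """Classic Soundex: keep the first letter, map the rest to digit codes."""
    groups = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
              "l": "4", "mn": "5", "r": "6"}

    def code(ch):
        for letters, digit in groups.items():
            if ch in letters:
                return digit
        return ""                          # vowels, h, w, y carry no code

    word = word.lower()
    result = word[0].upper()
    prev = code(word[0])
    for ch in word[1:]:
        digit = code(ch)
        if digit and digit != prev:        # skip repeats of the same code
            result += digit
        if ch not in "hw":                 # h/w do not break a run of equal codes
            prev = digit
    return (result + "000")[:4]            # pad with zeros to four characters
```

Because "smith" and "smyth" both map to S530, a masked spelling that preserves the sound can still be matched against a blocklist of precomputed codes.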
Irjet v4 i73 - A Survey on Student's Academic Experiences using Social Media Data - IRJET Journal
This document summarizes a research paper that analyzed social media posts by engineering students to understand their academic experiences. It developed a workflow that integrated qualitative analysis and data mining techniques. By applying classification algorithms to tweets with hashtags, it identified three main problems engineering students discussed: heavy study loads, lack of social engagement, and sleep deprivation. The analysis revealed students struggled with managing heavy course loads, which led to other issues affecting their health, motivation, and emotions. It also found posts discussing diversity issues and cultural stereotypes among engineering students.
This document presents a system for detecting semantically similar questions in online forums like Quora to reduce duplicate content. It proposes using natural language processing techniques like tagging questions with keywords, vectorizing text with Google News vectors, and calculating similarity with Word Mover's Distance. The system cleans and preprocesses questions before generating tags and calculating similarity between questions to identify duplicates. An evaluation of the system achieved accurate detection of matching and non-matching question pairs.
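The described pipeline relies on pretrained Google News vectors and Word Mover's Distance, which need external models. As a dependency-free stand-in that shows the same compare-and-threshold shape, here is a cosine similarity over raw term-frequency vectors; the 0.6 threshold is purely illustrative.

```python
import math
import re
from collections import Counter

def tokens(question):
    """Lowercase word tokens of a question."""
    return re.findall(r"[a-z0-9]+", question.lower())

def cosine(q1, q2):
    """Cosine similarity between term-frequency vectors of two questions."""
    a, b = Counter(tokens(q1)), Counter(tokens(q2))
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_duplicate(q1, q2, threshold=0.6):
    """Flag two questions as duplicates when similarity crosses the threshold."""
    return cosine(q1, q2) >= threshold
```

Unlike Word Mover's Distance, this stand-in cannot match synonyms ("car" vs "automobile"); that is exactly the gap the embedding-based approach in the paper is meant to close.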
1) The document presents a framework called Lexical Syntactic Feature (LSF) architecture to detect offensive content and identify potentially offensive users in social media. It uses word lists, syntactic rules, and user profile features to analyze content and predict offensiveness at both the sentence and user level.
2) Experiments show the LSF framework performs significantly better than baseline methods in detecting offensive content and identifying offensive users, with over 70% recall and high precision.
3) The framework incorporates features like word strength, punctuation, capitalization, and message/user histories to analyze content in context and predict user behaviors, achieving better results than methods using only individual messages or basic machine learning.
Developing an Arabic plagiarism detection corpus - csandit
A corpus is a collection of documents. It is a valuable resource in linguistics research for performing statistical analysis and testing hypotheses about different linguistic rules. An annotated corpus consists of documents or entities annotated with task-related labels such as part-of-speech tags, sentiment, etc. One such task is plagiarism detection, which seeks to identify whether a given document is plagiarized. This paper describes our efforts to build a plagiarism detection corpus for Arabic. The corpus consists of about 350 plagiarized-source document pairs and more than 250 documents in which no plagiarism was found. The plagiarized documents consist of students' submitted assignments. For each plagiarized document, the source document was located on the Web and downloaded for further investigation. We report corpus statistics, including the number of documents, sentences, and tokens for each of the plagiarized and source categories.
Detecting the presence of cyberbullying using computer software - Ashish Arora
This document discusses methods for detecting cyberbullying using machine learning techniques. It proposes using a machine learning online patrol crawler that utilizes support vector machines to classify social media posts as containing cyberbullying or not. The crawler is trained on manually identified examples of cyberbullying comments involving negativity, profanity, and attacks on attributes. It also discusses using sentiment analysis and social graphs to identify bullies and victims on Twitter by analyzing tweets containing gender-based terms. The goal is to accurately classify sentiment and identify instances of bullying to increase visibility of the problem.
The document presents a method for automatically evaluating handwritten student essays using longest common subsequence. It involves 3 phases: 1) constructing a reference material from multiple study materials by comparing sentences semantically, 2) extracting text from scanned handwritten essays using image processing techniques, and 3) grading essays by comparing the extracted text to the reference material using longest common subsequence to calculate a score and assign a grade. The method aims to automatically evaluate essays similar to human evaluation and explores the accuracy and effectiveness of using longest common subsequence for semantic comparison between texts.
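The grading phase can be sketched with the standard longest-common-subsequence dynamic program over word tokens. This is a generic illustration, not the paper's implementation, and the ratio-based score is an assumed scoring rule.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two token sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            # extend a match, or carry over the best of the two shorter problems
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def essay_score(answer, reference):
    """Score an answer as the LCS overlap ratio against the reference tokens."""
    a, r = answer.lower().split(), reference.lower().split()
    return lcs_length(a, r) / len(r) if r else 0.0
```

Because a subsequence need not be contiguous, the score tolerates inserted or omitted words while still rewarding the reference's word order, which is why LCS is a plausible fit for free-form essay text.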
As the Internet grows, users of search providers continually demand search results that match their wishes. Personalized search is one option available to users for shaping the results returned to them, based on personal data supplied to the search provider. This raises privacy concerns, however, as users are typically anxious about revealing personal information to an often faceless service provider on the Internet. This work addresses the privacy issues surrounding personalized search and discusses ways that privacy can be improved, so that users become more comfortable releasing their personal information in order to obtain more precise search results.
VIDEO OBJECTS DESCRIPTION IN HINDI TEXT LANGUAGE - ijmpict
Video activity recognition has become an active area of research in recent years. This paper presents a broadly data-driven approach that produces textual descriptions of video content in the Hindi language. The method combines the output of modern object detection with "real-world" knowledge to select the best subject-verb-object triplet for describing a video. Using this triplet-selection technique, a video is tagged by the trainer with a subject, verb, and object (SVO), and this data is mined to improve the results of video description through activity as well as object identification. Unlike previous approaches, this method can annotate arbitrary videos without requiring the collection and annotation of a large corpus of similar training videos. The proposed work provides an initial text description in Hindi using simple words and sentence formation. The fundamental challenge in this work, however, is to extract grammatically accurate and expressive Hindi text describing the video content.
IRJET- An Effective Analysis of Anti Troll System using Artificial Intell... - IRJET Journal
This document discusses various techniques for detecting trolls using artificial intelligence and machine learning. It first reviews related work on sentiment analysis, supervised machine learning for troll detection, real-time sentiment analysis, and analyzing vulnerabilities in social networks. It then analyzes the limitations of current troll detection systems and how AI/ML solutions can help overcome these. The literature survey covers key approaches used for troll detection, including sentiment analysis, supervised learning models, and analyzing post vulnerabilities.
This document reviews research on predicting personality from Twitter users' tweets using machine learning algorithms. It discusses how tweets have attracted research interest from diverse fields. Different techniques have been used to predict personality from tweets, but there are still shortcomings to address. The aim is to consider the current state of this research area and explore personality prediction from tweets by reviewing past literature and discussing approaches to issues researchers face. It provides an overview of machine learning methods used for personality prediction from tweets, including data collection, preprocessing, model training and evaluation.
A Review on Grammar-Based Fuzzing Techniques - CSCJournals
Fuzzing has become one of the most interesting software testing techniques because it can find different types of bugs and vulnerabilities in many target programs. Grammar-based fuzzing tools have proven effective at finding bugs and generating good fuzzing files. Fuzzing techniques are usually guided by different methods to improve their effectiveness; however, they have limitations as well. In this paper, we present an overview of grammar-based fuzzing tools and of the techniques used to guide them, which include mutation, machine learning, and evolutionary computing. Few studies have been conducted on this approach, and they show its effectiveness and quality in exploring new vulnerabilities in a program. Here we summarize the studied fuzzing tools and explain each one's method, input format, strengths, and limitations. Some experiments are conducted on two of the fuzzing tools, comparing them based on the quality of the generated fuzzing files.
The document summarizes a workshop on Service-Oriented Programming (SOP). SOP is a new programming methodology that allows developing software applications by connecting and composing existing services, facilitating software reuse. The workshop is divided into two parts: the first part describes SOP concepts and motivation, and the second introduces teaching materials through a demonstration of SOP techniques. The qualifications of the three presenters are also provided, including their research interests and experience in computer science education.
Automatic Generation of Multiple Choice Questions using Surface-based Semanti... - CSCJournals
Multiple Choice Questions (MCQs) are a popular large-scale assessment tool. MCQs make it much easier for test-takers to take tests and for examiners to interpret their results; however, they are very expensive to compile manually, and they often need to be produced on a large scale and within short iterative cycles. We examine the problem of automated MCQ generation with the help of unsupervised Relation Extraction, a technique used in a number of related Natural Language Processing problems. Unsupervised Relation Extraction aims to identify the most important named entities and terminology in a document and then recognize semantic relations between them, without any prior knowledge as to the semantic types of the relations or their specific linguistic realization. We investigated a number of relation extraction patterns and tested a number of assumptions about linguistic expression of semantic relations between named entities. Our findings indicate that an optimized configuration of our MCQ generation system is capable of achieving high precision rates, which are much more important than recall in the automatic generation of MCQs. Its enhancement with linguistic knowledge further helps to produce significantly better patterns. We furthermore carried out a user-centric evaluation of the system, where subject domain experts from biomedical domain evaluated automatically generated MCQ items in terms of readability, usefulness of semantic relations, relevance, acceptability of questions and distractors and overall MCQ usability. The results of this evaluation make it possible for us to draw conclusions about the utility of the approach in practical e-Learning applications.
This study used multilevel analysis to determine the predictive value of selected intrinsic factors (gender, computer ownership, mathematics background, and computer experience) and institutional type (an extrinsic factor) on undergraduates' Self-Efficacy in Java Computer Programming (SEiJCP) in South-West Nigeria. The study adopted a correlational design. Purposive sampling was used to select 254 computer science undergraduates from four universities (three federal-owned and one state-owned) in south-west Nigeria. Three research questions were answered. Two research instruments, namely the Computer Experience Scale (r = 0.84) and the Java Programming Self-Efficacy Scale (JPSES, r = 0.96), were used to collect data. Data were analysed using descriptive statistics, and null and linear growth model (LGM) procedures. The intercorrelation coefficients among the extrinsic factor, the intrinsic factors, and SEiJCP were moderate. The null model shows that 99.0% of the variation in SEiJCP was accounted for by institution-level differences. The fixed part of the LGM of intrinsic factors showed that only mathematics background contributed significantly (p < 0.05) to the prediction of SEiJCP. The random part of the LGM showed no significant contributions of the interactions of the intrinsic factors to the prediction of SEiJCP. About 60.0% of the student-level variation in SEiJCP is explained by differences in intrinsic factors. The institution-level variable had a large predictive value on programming self-efficacy. Computer science departments should increase the number of mathematics courses in their curricula.
WEB SEARCH ENGINE BASED SEMANTIC SIMILARITY MEASURE BETWEEN WORDS USING PATTE... - cscpconf
Semantic similarity measures play an important role in information retrieval, natural language processing, and various tasks on the web such as relation extraction, community mining, document clustering, and automatic metadata extraction. In this paper, we propose a Pattern Retrieval Algorithm (PRA) to compute the semantic similarity between words by combining both the page-count method and the web-snippets method. Four association measures are used to find semantic similarity between words in the page-count method using web search engines. We use Sequential Minimal Optimization (SMO) support vector machines (SVM) to find the optimal combination of page-count-based similarity scores and top-ranking patterns from the web-snippets method. The SVM is trained to classify synonymous and non-synonymous word pairs. The proposed approach aims to improve correlation values, precision, recall, and F-measures compared to existing methods. The proposed algorithm outperforms existing methods with a correlation value of 89.8%.
IRJET- Automated Exam Question Generator using Genetic Algorithm - IRJET Journal
The document describes a proposed system for automatically generating exam questions using genetic algorithms. The system would take in previous exam questions categorized by Bloom's Taxonomy levels and chapters selected by instructors. It would then use genetic algorithms to generate new exam questions that cover different Bloom's levels and avoid repeating questions from the past two years. This aims to ease instructor workload while producing high-quality exam questions at different difficulty levels to evaluate students. The proposed system is described to be implemented using Java, with questions and details stored in a MySQL database.
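A genetic algorithm for assembling a paper can be sketched as evolving subsets of a question bank. The fitness function below (Bloom-level coverage minus a penalty for recently used questions), the field names, and all parameter values are illustrative assumptions, not the paper's actual encoding.

```python
import random

def generate_paper(questions, paper_size, generations=100, pop_size=30, seed=0):
    """Evolve a set of question indices that covers many Bloom levels
    and avoids questions flagged as used in recent exams (assumed fields:
    `bloom` level and a 0/1 `recent` flag)."""
    rng = random.Random(seed)
    idx = list(range(len(questions)))

    def fitness(paper):
        levels = {questions[i]["bloom"] for i in paper}       # Bloom coverage
        repeats = sum(questions[i]["recent"] for i in paper)  # reused questions
        return len(levels) - 2 * repeats    # reward coverage, punish reuse

    def crossover(a, b):
        pool = list(dict.fromkeys(a + b))   # union of parent genes, no dupes
        return rng.sample(pool, paper_size)

    pop = [rng.sample(idx, paper_size) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]    # elitist selection: keep best half
        children = []
        while len(survivors) + len(children) < pop_size:
            child = crossover(*rng.sample(survivors, 2))
            if rng.random() < 0.3:          # mutate: swap in an unused question
                outside = [i for i in idx if i not in child]
                if outside:
                    child[rng.randrange(paper_size)] = rng.choice(outside)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)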
This document proposes using Word2Vec and decision trees to extract keywords from textual documents and classify the documents. It reviews related work on keyword extraction and text classification techniques. The proposed approach involves preprocessing text, representing words as vectors with Word2Vec, calculating frequently occurring keywords for each category, and using decision trees to classify documents based on keyword similarity. Experiments using different preprocessing and Word2Vec settings achieved an F-score of up to 82% for document classification.
TALASH: A SEMANTIC AND CONTEXT BASED OPTIMIZED HINDI SEARCH ENGINE - IJCSEIT Journal
This document summarizes a research paper that proposes three models for query expansion in a Hindi search engine: 1) Using lexical resources like HindiWordNet to find synonyms and related terms, 2) Using user context information like location, interests and profession, 3) Combining lexical resources and user context. An experiment compares the precision of results from simple Google searches to searches using each model. Precision was highest using the combined Model III at 0.79, showing that integrating lexical and user context information improves search quality in Hindi.
This document provides an overview and requirements for the Stat project, an open source machine learning framework for text analysis. It describes the background, motivation, scope, and stakeholders of the project. Key requirements for the framework include being simplified, reusable, and providing built-in capabilities to naturally support text representation and processing tasks.
Question Retrieval in Community Question Answering via NON-Negative Matrix Fa... - IRJET Journal
The document proposes using statistical machine translation via non-negative matrix factorization to address word ambiguity and mismatch problems in question retrieval for community question answering systems. It translates questions into other languages using Google Translate to leverage contextual information, representing the original and translated questions together in a matrix. Experimental results on a real CQA dataset show this approach improves over methods relying only on surface text matching.
Improving Annotations in Digital Documents using Document Features and Fuzzy ... - IRJET Journal
The document proposes a system to automatically annotate digital documents using document features extracted via natural language processing techniques and fuzzy logic. It aims to improve on existing annotation systems by maintaining semantic accuracy while annotating large amounts of documents. The system first extracts features from documents like titles, sentence length, proper nouns etc. It then uses fuzzy logic to apply the best possible annotations based on weighted feature values. The approach is meant to accurately annotate documents in all conditions while preserving semantic meaning.
This document summarizes a research paper that proposes using a logistic regression classifier trained with stochastic gradient descent to predict Twitter users' personalities from their tweets. It begins with an abstract of the paper and an introduction to personality prediction from social media. It then details the anatomy of the research, including defining personality prediction from Twitter, its applications, and the general process of using machine learning for the task. Next, it reviews several previous studies on personality prediction from Twitter and social networks, noting their approaches, findings, and limitations. It identifies remaining research gaps, such as the need for improved linguistic analysis of tweets and more robust, scalable predictive models. Finally, it proposes using a logistic regression classifier as the personality prediction model to address these gaps.
This document describes a plagiarism checker system that uses the Levenshtein algorithm to detect plagiarism in text documents. The system allows users to import paragraphs to check for plagiarism against a database of stored paragraphs. It analyzes the imported text and ranks how similar it is to texts in the database using the Levenshtein algorithm, which measures the minimum number of single-character edits needed to change one word into another. The system aims to efficiently detect plagiarism in medium to large volumes of student work with less human effort than other plagiarism checkers.
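The Levenshtein distance itself is standard; below is a compact Python version with a similarity ratio on top. The normalization by the longer text's length is an assumed ranking rule, not necessarily the checker's own.

```python
def levenshtein(s, t):
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn `s` into `t` (two-row dynamic program)."""
    prev = list(range(len(t) + 1))         # distances from the empty prefix of s
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,              # delete cs
                           cur[j - 1] + 1,           # insert ct
                           prev[j - 1] + (cs != ct)))  # substitute if different
        prev = cur
    return prev[-1]

def similarity(a, b):
    """Rank similarity in [0, 1]: 1.0 means identical texts."""
    longest = max(len(a), len(b)) or 1
    return 1.0 - levenshtein(a, b) / longest
```

A checker of the kind described would compute `similarity` between the imported paragraph and each stored paragraph, then rank the database entries by that score.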
A Literature Review on Plagiarism Detection in Computer Programming Assignments - IRJET Journal
This document summarizes research on plagiarism detection in computer programming assignments. It provides an abstract of the research topic and reviews several existing approaches for detecting plagiarism, including similarity-based, logic-based, and machine learning-based methods. Feature-based, string-matching, and algorithm-based techniques are discussed. The review identifies areas for further development, such as improving accuracy and supporting additional programming languages. The goal is to determine the best algorithm and methodology for developing a system to detect plagiarism in code.
This document summarizes various plagiarism detection techniques. It discusses detecting plagiarism in documents using web-enabled systems like Turnitin and SafeAssign or stand-alone systems like EVE and WCopyFind. It also covers detecting plagiarism in computer code using structure-based methods like Plague, YAP, and JPlag. Common plagiarism techniques discussed include string tiling and parse tree comparison. Algorithms are based on string comparisons and handle different levels of code modification. Existing tools use fingerprints, stylometry, or integrate search APIs to detect plagiarism.
The sarcasm detection with the method of logistic regression (EditorIJAERD)
The document discusses sarcasm detection using logistic regression. It compares the performance of logistic regression and SVM classification for sarcasm detection. Logistic regression achieved higher accuracy of 93.5% for sarcasm detection, with lower execution time compared to SVM classification. The proposed approach uses data preprocessing, feature extraction using N-grams, and trains a logistic regression classifier on a manually labeled dataset to classify text as sarcastic or non-sarcastic. Accuracy and execution time analysis shows logistic regression performs better than SVM for this task.
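As a rough sketch of the feature-extraction step the summary mentions (the paper's exact tokenization and choice of N are not given, so this is an assumption), word-level N-grams can be counted like this:

```python
from collections import Counter

def ngram_features(text: str, n: int = 2) -> Counter:
    """Bag of word-level n-grams, usable as input features for a
    linear classifier such as logistic regression."""
    tokens = text.lower().split()
    grams = [' '.join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return Counter(grams)

feats = ngram_features("oh great another monday morning")
# each bigram such as 'oh great' becomes one feature dimension
```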
The document presents a method for automatically evaluating handwritten student essays using longest common subsequence. It involves 3 phases: 1) constructing a reference material from multiple study materials by comparing sentences semantically, 2) extracting text from scanned handwritten essays using image processing techniques, and 3) grading essays by comparing the extracted text to the reference material using longest common subsequence to calculate a score and assign a grade. The method aims to automatically evaluate essays similar to human evaluation and explores the accuracy and effectiveness of using longest common subsequence for semantic comparison between texts.
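A minimal sketch of the longest-common-subsequence comparison used in the grading phase (the scoring formula below is an illustrative assumption, not the paper's exact one):

```python
def lcs_length(a: list[str], b: list[str]) -> int:
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ta in enumerate(a, 1):
        for j, tb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ta == tb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def essay_score(essay: str, reference: str) -> float:
    """Grade as the fraction of reference tokens the essay covers."""
    e, r = essay.lower().split(), reference.lower().split()
    return lcs_length(e, r) / len(r) if r else 0.0
```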
As the size of the Internet grows, users of search providers continually demand search results that are accurate to their wishes. Personalized search is one option available to users for sculpting search results around the personal data they provide to the search provider. This raises privacy concerns, however, as users are typically anxious about revealing personal information to an often faceless service provider on the Internet. This work proposes to address the privacy issues surrounding personalized search and discusses ways that privacy can be improved, so that users can become more comfortable with the release of their personal information in order to obtain more precise search results.
VIDEO OBJECTS DESCRIPTION IN HINDI TEXT LANGUAGE (ijmpict)
Video activity recognition has become a dynamic area of research in recent years. This paper presents a broadly data-driven approach that produces textual descriptions of video content in the Hindi language. The method combines the output of modern object detection with "real-world" knowledge to pick the most suitable subject-verb-object (SVO) triplet for describing a video. Using this triplet-selection technique, a video is tagged by the trainer with a Subject, Verb, and Object, and this data is then mined to improve the results of video description testing through activity as well as object identification. Unlike previous approaches, this method can annotate arbitrary videos without requiring the collection and annotation of a large corpus of similar training videos. The proposed work provides an initial, basic text description in Hindi, producing simple words and sentence formation. The main remaining challenge in this work is to extract grammatically accurate and expressive Hindi text describing the video content.
IRJET- An Effective Analysis of Anti Troll System using Artificial Intell... (IRJET Journal)
This document discusses various techniques for detecting trolls using artificial intelligence and machine learning. It first reviews related work on sentiment analysis, supervised machine learning for troll detection, real-time sentiment analysis, and analyzing vulnerabilities in social networks. It then analyzes the limitations of current troll detection systems and how AI/ML solutions can help overcome these. The literature survey covers key approaches used for troll detection, including sentiment analysis, supervised learning models, and analyzing post vulnerabilities.
This document reviews research on predicting personality from Twitter users' tweets using machine learning algorithms. It discusses how tweets have attracted research interest from diverse fields. Different techniques have been used to predict personality from tweets, but there are still shortcomings to address. The aim is to consider the current state of this research area and explore personality prediction from tweets by reviewing past literature and discussing approaches to issues researchers face. It provides an overview of machine learning methods used for personality prediction from tweets, including data collection, preprocessing, model training and evaluation.
A Review on Grammar-Based Fuzzing Techniques (CSCJournals)
Fuzzing has become one of the most interesting software testing techniques because it can find different types of bugs and vulnerabilities in many target programs. Grammar-based fuzzing tools have been shown to be effective at finding bugs and generating good fuzzing files. Fuzzing techniques are usually guided by different methods to improve their effectiveness; however, they have limitations as well. In this paper, we present an overview of grammar-based fuzzing tools and of the techniques used to guide them, including mutation, machine learning, and evolutionary computing. The few studies conducted on this approach show its effectiveness and quality in exploring new vulnerabilities in a program. We summarize the studied fuzzing tools and explain each one's method, input format, strengths, and limitations. Experiments are conducted on two of the fuzzing tools, comparing them based on the quality of the generated fuzzing files.
The document summarizes a workshop on Service-Oriented Programming (SOP). SOP is a new programming methodology that allows developing software applications by connecting and composing existing services, facilitating software reuse. The workshop is divided into two parts: the first part describes SOP concepts and motivation, and the second introduces teaching materials through a demonstration of SOP techniques. The qualifications of the three presenters are also provided, including their research interests and experience in computer science education.
Automatic Generation of Multiple Choice Questions using Surface-based Semanti... (CSCJournals)
Multiple Choice Questions (MCQs) are a popular large-scale assessment tool. MCQs make it much easier for test-takers to take tests and for examiners to interpret their results; however, they are very expensive to compile manually, and they often need to be produced on a large scale and within short iterative cycles. We examine the problem of automated MCQ generation with the help of unsupervised Relation Extraction, a technique used in a number of related Natural Language Processing problems. Unsupervised Relation Extraction aims to identify the most important named entities and terminology in a document and then recognize semantic relations between them, without any prior knowledge as to the semantic types of the relations or their specific linguistic realization. We investigated a number of relation extraction patterns and tested a number of assumptions about linguistic expression of semantic relations between named entities. Our findings indicate that an optimized configuration of our MCQ generation system is capable of achieving high precision rates, which are much more important than recall in the automatic generation of MCQs. Its enhancement with linguistic knowledge further helps to produce significantly better patterns. We furthermore carried out a user-centric evaluation of the system, where subject domain experts from biomedical domain evaluated automatically generated MCQ items in terms of readability, usefulness of semantic relations, relevance, acceptability of questions and distractors and overall MCQ usability. The results of this evaluation make it possible for us to draw conclusions about the utility of the approach in practical e-Learning applications.
This study used multilevel analysis to determine the predictive value of selected intrinsic factors (gender, computer ownership, mathematics background, and computer experience) and institutional type (an extrinsic factor) on undergraduates' Self-Efficacy in Java Computer Programming (SEiJCP) in South-West Nigeria. The study adopted a correlational design. Purposive sampling was used to select 254 computer science undergraduates from four universities (three federal-owned and one state-owned) in south-west Nigeria. Three research questions were answered. Two research instruments, a Computer Experience Scale (r = 0.84) and the Java Programming Self-Efficacy Scale (JPSES, r = 0.96), were used to collect data. Data were analysed using descriptive statistics and null and linear growth model (LGM) procedures. The intercorrelation coefficients among the extrinsic factor, the intrinsic factors, and SEiJCP were moderate. The null model shows that 99.0% of the variation in SEiJCP was accounted for by institution-level differences. The fixed part of the LGM of intrinsic factors showed that only mathematics background contributed significantly (p < 0.05) to the prediction of SEiJCP. The random part of the LGM showed no significant contributions from interactions of the intrinsic factors to the prediction of SEiJCP. About 60.0% of the student-level variation in SEiJCP is explained by differences in intrinsic factors. The institution-level variable had a large predictive value on programming self-efficacy. Computer science departments should increase the number of mathematics courses in their curriculum.
WEB SEARCH ENGINE BASED SEMANTIC SIMILARITY MEASURE BETWEEN WORDS USING PATTE... (cscpconf)
Semantic similarity measures play an important role in information retrieval, natural language processing, and various web tasks such as relation extraction, community mining, document clustering, and automatic meta-data extraction. In this paper, we propose a Pattern Retrieval Algorithm (PRA) to compute the semantic similarity between words by combining a page-count method with a web-snippets method. Four association measures are used to find the semantic similarity between words in the page-count method using web search engines. We use Sequential Minimal Optimization (SMO) support vector machines (SVMs) to find the optimal combination of page-count-based similarity scores and top-ranking patterns from the web-snippets method. The SVM is trained to classify synonymous and non-synonymous word pairs. The proposed approach aims to improve correlation, precision, recall, and F-measure values compared to existing methods, and achieves a correlation value of 89.8%, outperforming them.
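The four association measures are not named in the summary; two measures commonly computed from search-engine page counts, WebJaccard and pointwise mutual information, can serve as a hedged illustration of the page-count side of such a system:

```python
import math

def web_jaccard(count_p: int, count_q: int, count_pq: int) -> float:
    """WebJaccard association score from page counts: co-occurrence
    hits divided by hits for either word alone."""
    denom = count_p + count_q - count_pq
    return count_pq / denom if denom else 0.0

def pmi(count_p: int, count_q: int, count_pq: int, total_pages: int) -> float:
    """Pointwise mutual information over page-count probabilities."""
    if 0 in (count_p, count_q, count_pq):
        return 0.0
    return math.log2((count_pq * total_pages) / (count_p * count_q))
```

In a PRA-style pipeline these scores, together with snippet-pattern features, would form the feature vector the SVM combines.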
IRJET- Automated Exam Question Generator using Genetic Algorithm (IRJET Journal)
The document describes a proposed system for automatically generating exam questions using genetic algorithms. The system would take in previous exam questions categorized by Bloom's Taxonomy levels and chapters selected by instructors. It would then use genetic algorithms to generate new exam questions that cover different Bloom's levels and avoid repeating questions from the past two years. This aims to ease instructor workload while producing high-quality exam questions at different difficulty levels to evaluate students. The proposed system is described to be implemented using Java, with questions and details stored in a MySQL database.
This document proposes using Word2Vec and decision trees to extract keywords from textual documents and classify the documents. It reviews related work on keyword extraction and text classification techniques. The proposed approach involves preprocessing text, representing words as vectors with Word2Vec, calculating frequently occurring keywords for each category, and using decision trees to classify documents based on keyword similarity. Experiments using different preprocessing and Word2Vec settings achieved an F-score of up to 82% for document classification.
TALASH: A SEMANTIC AND CONTEXT BASED OPTIMIZED HINDI SEARCH ENGINE (IJCSEIT Journal)
This document summarizes a research paper that proposes three models for query expansion in a Hindi search engine: 1) Using lexical resources like HindiWordNet to find synonyms and related terms, 2) Using user context information like location, interests and profession, 3) Combining lexical resources and user context. An experiment compares the precision of results from simple Google searches to searches using each model. Precision was highest using the combined Model III at 0.79, showing that integrating lexical and user context information improves search quality in Hindi.
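A toy sketch of the combined Model III idea, lexical synonyms plus user context (the SYNONYMS table here is a hypothetical stand-in for HindiWordNet lookups, and the context handling is simplified):

```python
# Hypothetical stand-in for HindiWordNet; the real resource maps Hindi terms.
SYNONYMS = {
    "film": ["movie", "cinema"],
    "buy": ["purchase"],
}

def expand_query(query: str, user_interests: list[str]) -> list[str]:
    """Model III sketch: original terms + lexical synonyms + context terms."""
    terms = query.lower().split()
    expanded = list(terms)
    for t in terms:
        expanded.extend(SYNONYMS.get(t, []))   # lexical expansion
    expanded.extend(user_interests)            # user-context expansion
    return list(dict.fromkeys(expanded))       # dedupe, keep order
```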
Dr. William Allan Kritsonis, PhD - Editor-in-Chief, NATIONAL FORUM JOURNALS (Established 1982). Dr. Kritsonis earned his PhD from The University of Iowa, Iowa City, Iowa; M.Ed., Seattle Pacific University; Seattle, Washington; BA Central Washington University, Ellensburg, Washington. He was also named as the Distinguished Alumnus for the College of Education and Professional Studies at Central Washington University.
This document provides an overview and requirements for the Stat project, an open source machine learning framework for text analysis. It describes the background, motivation, scope, and stakeholders of the project. Key requirements for the framework include being simplified, reusable, and providing built-in capabilities to naturally support text representation and processing tasks.
Question Retrieval in Community Question Answering via NON-Negative Matrix Fa... (IRJET Journal)
The document proposes using statistical machine translation via non-negative matrix factorization to address word ambiguity and mismatch problems in question retrieval for community question answering systems. It translates questions into other languages using Google Translate to leverage contextual information, representing the original and translated questions together in a matrix. Experimental results on a real CQA dataset show this approach improves over methods relying only on surface text matching.
Improving Annotations in Digital Documents using Document Features and Fuzzy ... (IRJET Journal)
The document proposes a system to automatically annotate digital documents using document features extracted via natural language processing techniques and fuzzy logic. It aims to improve on existing annotation systems by maintaining semantic accuracy while annotating large amounts of documents. The system first extracts features from documents like titles, sentence length, proper nouns etc. It then uses fuzzy logic to apply the best possible annotations based on weighted feature values. The approach is meant to accurately annotate documents in all conditions while preserving semantic meaning.
Application of hidden markov model in question answering systems (ijcsa)
With the growing volume of information stored on the web, Question Answering (QA) systems have become very important for Information Retrieval (IR). QA systems are a specialized form of information retrieval: given a collection of documents, a QA system attempts to retrieve correct answers to questions posed in natural language. A web QA system is a QA system that retrieves answers from the web environment. In contrast to databases, the information stored on the web does not follow a distinct structure and is generally not well defined. Web QA is the task of automatically answering a question posed in natural language, drawing on Natural Language Processing (NLP). NLP techniques are used in applications that query databases, extract information from text, retrieve relevant documents from a collection, translate from one language to another, generate text responses, or recognize spoken words and convert them into text. To find the needed information in a mass of unstructured information, we have to use techniques in which the accuracy and retrieval factors are implemented well. In this paper, to support effective IR in the web environment, a QA system is designed and implemented based on the Hidden Markov Model (HMM).
IRJET- Design and Development of Web Application for Student Placement Tr... (IRJET Journal)
This document describes a web application called Knack Track that is designed to provide student placement training. The application aims to be a standardized platform for students to receive training and assessments in various areas like programming languages, numerical skills, vocabulary, and reasoning. It allows administrators to activate assessments and programming courses, and monitor student activities. Students can take assessments and practice programming questions. Their progress can be tracked using reports. The goal is to make placement preparation resources easily and freely available to students in one online location.
This document presents a summary of an online plagiarism checker tool. It discusses how plagiarism detection software works by identifying content similarity matches between documents. The paper then describes the problem statement of high plagiarism rates in academic papers that motivated the need for automated checking. It reviews related work on plagiarism detection methods and tools. The document outlines the proposed online plagiarism checker tool, which would use string matching algorithms like Karp-Rabin to compare documents and detect plagiarized content. If developed, this tool could help reduce human effort needed to check for plagiarism.
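The Karp-Rabin string matching the proposed tool would rely on can be sketched as follows (an illustrative implementation with an arbitrary base and modulus, not the tool's actual code):

```python
def rabin_karp(text: str, pattern: str, base: int = 256, mod: int = 10**9 + 7) -> list[int]:
    """Return start indices of every occurrence of pattern in text,
    using a rolling hash to skip most character-by-character checks."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    high = pow(base, m - 1, mod)  # weight of the window's leading char
    p_hash = t_hash = 0
    for i in range(m):
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    hits = []
    for i in range(n - m + 1):
        # verify on hash match to rule out collisions
        if t_hash == p_hash and text[i:i + m] == pattern:
            hits.append(i)
        if i < n - m:  # slide the window one character right
            t_hash = ((t_hash - ord(text[i]) * high) * base + ord(text[i + m])) % mod
    return hits
```

A plagiarism checker would run this over hashed n-gram windows of a suspect document against a corpus, rather than over single patterns.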
The document discusses the development of a computer-based grading system for the Technological University of the Philippines. It aims to replace the current manual grading system by automatically importing grades from teacher records and printing them in different formats. The proposed system would also allow storage and access of old student data. It seeks to address problems with the current system like delays in grade submission and issuance. The computer-based grading system would create a more user-friendly interface using a database to store student information. It is intended to benefit faculty by reducing effort, students by lessening delays, and the university by improving processing of grade reports. The scope is limited to implementation of the system using Visual Basic and Microsoft Access.
A COMPREHENSIVE STUDY ON E-LEARNING PORTAL (IRJET Journal)
This document summarizes a research paper on developing an e-learning portal. The paper aims to establish better relationships between teachers, students, and parents through an integrated online learning system. It describes building a web-based portal using the Django framework and SQLite database to provide online courses, study materials like presentations and notes, and a way for teachers and students to communicate via comments. The system is designed to be user-friendly, secure, and allow personalized access via login credentials. It also aims to save students time and money compared to traditional coaching classes. The paper reviews several aspects of online learning systems from other literature and discusses how its proposed model would work and flow of data through login/registration, course access, and logout phases.
This document provides an agenda for research on knowledge discovery from web search. It begins with an introduction on knowledge discovery and how search engines can help extract information. It then outlines the goals and objectives, provides a literature review on related work, and discusses some common limitations observed, such as models achieving low accuracy and WSD approaches not being efficient enough. The document serves to provide background and planning for a research study on improving knowledge discovery through web search.
The document describes a proposed web application for automating project management tasks at an engineering institute. The application would allow students to form groups, get project approvals, submit work, and receive feedback and evaluations. It consists of two modules - one for online project work and another to evaluate student and project progress. The goal is to streamline project activities and provide a centralized platform for communication between students and guides.
Exploring the Efficiency of the Program using OOAD Metrics (IRJET Journal)
This document proposes a methodology to analyze the efficiency of object-oriented programs using OOAD (Object Oriented Analysis and Design) metrics. The methodology involves compiling a program successively until it is error-free, recording the error rate at each compilation. These results are then compared to determine how many compilations were needed for the program to be error-free, indicating its efficiency. The methodology is experimentally validated on a sample Java program, with results showing the error rate decreasing with each compilation until the program is error-free after the 8th compilation, demonstrating good efficiency.
A hybrid composite features based sentence level sentiment analyzer (IAESIJAI)
Current lexicon- and machine learning-based sentiment analysis approaches still suffer from a two-fold limitation. First, manual lexicon construction and machine training are time consuming and error-prone. Second, prediction accuracy requires that sentences and their corresponding training text fall under the same domain. In this article, we experimentally evaluate four sentiment classifiers, namely support vector machines (SVMs), Naive Bayes (NB), logistic regression (LR), and random forest (RF). We quantify the quality of each of these models using three real-world datasets comprising 50,000 movie reviews, 10,662 sentences, and 300 generic movie reviews. Specifically, we study the impact of a variety of natural language processing (NLP) pipelines on the quality of the predicted sentiment orientations. Additionally, we measure the impact of incorporating lexical semantic knowledge captured by WordNet to expand the original words in sentences. The findings demonstrate that using different NLP pipelines and semantic relationships affects the quality of the sentiment analyzers. In particular, the results indicate that coupling lemmatization with knowledge-based n-gram features produces higher-accuracy results. With this coupling, the accuracy of the SVM classifier improved to 90.43%, compared with 86.83%, 90.11%, and 86.20%, respectively, for the three other classifiers.
This document describes a student project to develop a plagiarism detection application for assignment submissions. The objectives are to help teachers efficiently check assignments for similarity, reduce grading time, and familiarize students with natural language processing. It will analyze assignments submitted online to calculate a text similarity percentage and flag potential plagiarism cases for the teacher. The application will allow students and teachers to login and submit/view assignments. Future enhancements may include additional modules and live streaming based on user feedback.
Online Examination and Evaluation System (IRJET Journal)
This document summarizes research on existing online examination and evaluation systems. It reviews 20 papers on different approaches for objective and subjective answer evaluation, including keyword matching, cosine similarity, machine learning algorithms, and natural language processing. The papers describe systems that automate the grading of exams through online proctoring, question banks, and tools to analyze student responses against model answers. The document concludes that a comprehensive examination system is needed that incorporates proctoring, online testing, and evaluation of both subjective and objective question types.
AI Based Student's Assignments Plagiarism Detector (Asia Smith)
This document describes an AI-based student assignment plagiarism detector. It analyzes assignments uploaded in a folder using cosine similarity to detect plagiarism. The system architecture involves uploading an assignment folder, comparing files using cosine similarity, and generating a results table with file names, plagiarism percentages, and remarks. It analyzes past approaches and determines cosine similarity is better suited than alternatives like Euclidean distance and Jaccard similarity for plagiarism detection. Screenshots show the system's interface and features. The conclusion is that the system solves plagiarism by uniquely comparing multiple files with a single folder upload using cosine similarity.
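A minimal sketch of the cosine-similarity comparison the detector performs, using term-frequency vectors over whitespace tokens (the 0.8 flagging threshold and the report format are assumptions, not the system's):

```python
import math
from collections import Counter

def cosine_similarity(doc_a: str, doc_b: str) -> float:
    """Cosine of the angle between two term-frequency vectors."""
    va, vb = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    shared = set(va) & set(vb)
    dot = sum(va[t] * vb[t] for t in shared)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def plagiarism_report(files: dict[str, str], threshold: float = 0.8):
    """Compare every pair of uploaded files, flagging high overlap."""
    names = sorted(files)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            score = cosine_similarity(files[a], files[b])
            remark = "plagiarised" if score >= threshold else "ok"
            yield a, b, round(score * 100, 1), remark
```

Unlike Euclidean distance, the cosine score ignores document length, which is why the summary argues it suits assignment comparison.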
This document discusses a system for detecting suspicious emails using machine learning algorithms. It presents a model that filters emails containing images and PDFs to improve performance over existing text-based spam filtering systems. The model uses data collection, preprocessing, machine learning classification algorithms like Naive Bayes, Support Vector Machines, and Decision Trees. It aims to achieve acceptable recall and precision values. The document also reviews related work on email spam detection using machine learning and discusses the future scope of applying the algorithm to embedded images/PDFs and larger datasets.
Twitter Sentiment Analysis: An Unsupervised Approach (IRJET Journal)
The document describes a study that performs sentiment analysis on Twitter data using an unsupervised machine learning technique. It discusses how Twitter data was collected and preprocessed, including removing stopwords and lemmatizing words. It then used the FastText word embedding model to represent words as vectors, which is suitable for unlabeled data. The K-Means clustering algorithm was implemented to group the Twitter data into clusters in an unsupervised manner and classify the tweets as positive, negative, or neutral sentiment.
The document describes an automated process for bug triage that uses text classification and data reduction techniques. It proposes using Naive Bayes classifiers to predict the appropriate developers to assign bugs to by applying stopword removal, stemming, keyword selection, and instance selection on bug reports. This reduces the data size and improves quality. It predicts developers based on their history and profiles while tracking bug status. The goal is to more efficiently handle software bugs compared to traditional manual triage processes.
An efficient approach for web query preprocessing edit sat (IAESIJEECS)
The emergence of web technology has generated a massive amount of raw data by enabling Internet users to post their opinions, comments, and reviews on the web. Extracting useful information from this raw data can be a very challenging task, and search engines play a critical role in these circumstances. User queries are becoming a main issue for search engines, so a preprocessing operation is essential. In this paper, we present a framework for natural language preprocessing for efficient data retrieval, covering some of the processing required for effective retrieval such as elongated-word handling, stop-word removal, and stemming. The manuscript starts by building a manually annotated dataset and then takes the reader through the detailed steps of the process. Experiments are conducted for specific stages of this process to examine the accuracy of the system.
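The preprocessing steps listed, elongated-word handling, stop-word removal, and stemming, can be sketched as a single pipeline (the stop-word list and suffix rules below are toy placeholders, not the paper's):

```python
import re

# Toy stop-word list; a real system would use a full resource.
STOP_WORDS = {"the", "a", "an", "is", "are", "of", "to", "and"}

def preprocess_query(query: str) -> list[str]:
    """Normalise a raw web query: collapse elongated words,
    strip punctuation, drop stop words, and apply a crude
    suffix-stripping stem."""
    query = query.lower()
    query = re.sub(r'(.)\1{2,}', r'\1\1', query)   # collapse elongations
    tokens = re.findall(r'[a-z]+', query)          # drop punctuation
    tokens = [t for t in tokens if t not in STOP_WORDS]
    stemmed = []
    for t in tokens:
        for suffix in ("ing", "ed", "s"):          # crude stemmer
            if t.endswith(suffix) and len(t) > len(suffix) + 2:
                t = t[: -len(suffix)]
                break
        stemmed.append(t)
    return stemmed
```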
Similar to IRJET - Online Assignment Plagiarism Checking using Data Mining and NLP (20)
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE... (IRJET Journal)
1) The document discusses the Sungal Tunnel project in Jammu and Kashmir, India, which is being constructed using the New Austrian Tunneling Method (NATM).
2) NATM involves continuous monitoring during construction to adapt to changing ground conditions, and makes extensive use of shotcrete for temporary tunnel support.
3) The methodology section outlines the systematic geotechnical design process for tunnels according to Austrian guidelines, and describes the various steps of NATM tunnel construction including initial and secondary tunnel support.
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE (IRJET Journal)
This study examines the effect of response reduction factors (R factors) on reinforced concrete (RC) framed structures through nonlinear dynamic analysis. Three RC frame models with varying heights (4, 8, and 12 stories) were analyzed in ETABS software under different R factors ranging from 1 to 5. The results showed that displacement increased as the R factor decreased, indicating less linear behavior for lower R factors. Drift also decreased proportionally with increasing R factors from 1 to 5. Shear forces in the frames decreased with higher R factors. In general, R factors of 3 to 5 produced more satisfactory performance with less displacement and drift. The displacement variations between different building heights were consistent at different R factors. This study evaluated how R factors influence
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A... (IRJET Journal)
This study compares the use of Stark Steel and TMT Steel as reinforcement materials in a two-way reinforced concrete slab. Mechanical testing is conducted to determine the tensile strength, yield strength, and other properties of each material. A two-way slab design adhering to codes and standards is executed with both materials. The performance is analyzed in terms of deflection, stability under loads, and displacement. Cost analyses accounting for material, durability, maintenance, and life cycle costs are also conducted. The findings provide insights into the economic and structural implications of each material for reinforcement selection and recommendations on the most suitable material based on the analysis.
Effect of Camber and Angles of Attack on Airfoil Characteristics (IRJET Journal)
This document discusses a study analyzing the effect of camber, position of camber, and angle of attack on the aerodynamic characteristics of airfoils. Sixteen modified asymmetric NACA airfoils were analyzed using computational fluid dynamics (CFD) by varying the camber, camber position, and angle of attack. The results showed the relationship between these parameters and the lift coefficient, drag coefficient, and lift to drag ratio. This provides insight into how changes in airfoil geometry impact aerodynamic performance.
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos... (IRJET Journal)
This document reviews the progress and challenges of aluminum-based metal matrix composites (MMCs), focusing on their fabrication processes and applications. It discusses how various aluminum MMCs have been developed using reinforcements like borides, carbides, oxides, and nitrides to improve mechanical and wear properties. These composites have gained prominence for their lightweight, high-strength and corrosion resistance properties. The document also examines recent advancements in fabrication techniques for aluminum MMCs and their growing applications in industries such as aerospace and automotive. However, it notes that challenges remain around issues like improper mixing of reinforcements and reducing reinforcement agglomeration.
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-... (IRJET Journal)
This document discusses research on using graph neural networks (GNNs) for dynamic optimization of public transportation networks in real-time. GNNs represent transit networks as graphs with nodes as stops and edges as connections. The GNN model aims to optimize networks using real-time data on vehicle locations, arrival times, and passenger loads. This helps increase mobility, decrease traffic, and improve efficiency. The system continuously trains and infers to adapt to changing transit conditions, providing decision support tools. While research has focused on performance, more work is needed on security, socio-economic impacts, contextual generalization of models, continuous learning approaches, and effective real-time visualization.
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape... (IRJET Journal)
This document summarizes a research project that aims to compare the structural performance of conventional slab and grid slab systems in multi-story buildings using ETABS software. The study will analyze both symmetric and asymmetric building models under various loading conditions. Parameters like deflections, moments, shears, and stresses will be examined to evaluate the structural effectiveness of each slab type. The results will provide insights into the comparative behavior of conventional and grid slabs to help engineers and architects select appropriate slab systems based on building layouts and design requirements.
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg... (IRJET Journal)
This document summarizes and reviews a research paper on the seismic response of reinforced concrete (RC) structures with plan and vertical irregularities, with and without infill walls. It discusses how infill walls can improve or reduce the seismic performance of RC buildings, depending on factors like wall layout, height distribution, connection to the frame, and relative stiffness of walls and frames. The reviewed research paper analyzes the behavior of infill walls, effects of vertical irregularities, and seismic performance of high-rise structures under linear static and dynamic analysis. It studies response characteristics like story drift, deflection and shear. The document also provides literature on similar research investigating the effects of infill walls, soft stories, plan irregularities, and different
This document provides a review of machine learning techniques used in Advanced Driver Assistance Systems (ADAS). It begins with an abstract that summarizes key applications of machine learning in ADAS, including object detection, recognition, and decision-making. The introduction discusses the integration of machine learning in ADAS and how it is transforming vehicle safety. The literature review then examines several research papers on topics like lightweight deep learning models for object detection and lane detection models using image processing. It concludes by discussing challenges and opportunities in the field, such as improving algorithm robustness and adaptability.
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,... (IRJET Journal)
The document analyzes temperature and precipitation trends in Asosa District, Benishangul Gumuz Region, Ethiopia from 1993 to 2022 based on data from the local meteorological station. The results show:
1) The average maximum and minimum annual temperatures have generally decreased over time, with trend slopes of -0.0341 for maximum and -0.0152 for minimum temperature.
2) Mann-Kendall tests found the decreasing temperature trends to be statistically significant for annual maximum temperatures but not for annual minimum temperatures.
3) Annual precipitation in Asosa District showed a statistically significant increasing trend.
The conclusions recommend development planners account for rising summer precipitation and declining temperatures in
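The Mann-Kendall test used in the study above is a rank-based trend test. A minimal sketch of its S statistic and normal z approximation follows (tie correction omitted for brevity; illustrative only, not the study's code):

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test: return (S, z).

    S sums the signs of all later-minus-earlier pairwise differences;
    z uses the normal approximation (tie correction omitted).
    """
    n = len(series)
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A strictly increasing 8-point series gives the maximum S = 8*7/2 = 28.
s, z = mann_kendall([1, 2, 3, 4, 5, 6, 7, 8])
```

A |z| above 1.96 indicates a trend significant at the 5% level, which is the kind of significance claim the Mann-Kendall results above report.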
P.E.B. Framed Structure Design and Analysis Using STAAD Pro (IRJET Journal)
This document discusses the design and analysis of pre-engineered building (PEB) framed structures using STAAD Pro software. It provides an overview of PEBs, noting that they are designed off-site, with building trusses and beams produced in a factory. STAAD Pro is identified as a key tool for modeling, analyzing, and designing PEBs to ensure their performance and safety under various load scenarios. The document outlines modeling structural parts in STAAD Pro, evaluating structural reactions, assigning loads, and following international design codes and standards. In summary, STAAD Pro is used to design and analyze PEB framed structures to ensure safety and code compliance.
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre... (IRJET Journal)
This document provides a review of research on innovative fiber integration methods for reinforcing concrete structures. It discusses studies that have explored using carbon fiber reinforced polymer (CFRP) composites with recycled plastic aggregates to develop more sustainable strengthening techniques. It also examines using ultra-high performance fiber reinforced concrete to improve shear strength in beams. Additional topics covered include the dynamic responses of FRP-strengthened beams under static and impact loads, and the performance of preloaded CFRP-strengthened fiber reinforced concrete beams. The review highlights the potential of fiber composites to enable more sustainable and resilient construction practices.
Survey Paper on Cloud-Based Secured Healthcare System (IRJET Journal)
This document summarizes a survey on securing patient healthcare data in cloud-based systems. It discusses using technologies like facial recognition, smart cards, and cloud computing combined with strong encryption to securely store patient data. The survey found that healthcare professionals believe digitizing patient records and storing them in a centralized cloud system would improve access during emergencies and enable more efficient care compared to paper-based systems. However, ensuring privacy and security of patient data is paramount as healthcare incorporates these digital technologies.
Review on studies and research on widening of existing concrete bridges (IRJET Journal)
This document summarizes several studies that have been conducted on widening existing concrete bridges. It describes a study from China that examined load distribution factors for a bridge widened with composite steel-concrete girders. It also outlines challenges and solutions for widening a bridge in the UAE, including replacing bearings and stitching the new and existing structures. Additionally, it discusses two bridge widening projects in New Zealand that involved adding precast beams and stitching to connect structures. Finally, safety measures and challenges for strengthening a historic bridge in Switzerland under live traffic are presented.
React based fullstack edtech web application (IRJET Journal)
The document describes the architecture of an educational technology web application built using the MERN stack. It discusses the frontend developed with ReactJS, backend with NodeJS and ExpressJS, and MongoDB database. The frontend provides dynamic user interfaces, while the backend offers APIs for authentication, course management, and other functions. MongoDB enables flexible data storage. The architecture aims to provide a scalable, responsive platform for online learning.
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ... (IRJET Journal)
This paper proposes integrating Internet of Things (IoT) and blockchain technologies to help implement objectives of India's National Education Policy (NEP) in the education sector. The paper discusses how blockchain could be used for secure student data management, credential verification, and decentralized learning platforms. IoT devices could create smart classrooms, automate attendance tracking, and enable real-time monitoring. Blockchain would ensure integrity of exam processes and resource allocation, while smart contracts automate agreements. The paper argues this integration has potential to revolutionize education by making it more secure, transparent and efficient, in alignment with NEP goals. However, challenges like infrastructure needs, data privacy, and collaborative efforts are also discussed.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE. (IRJET Journal)
This document provides a review of research on the performance of coconut fibre reinforced concrete. It summarizes several studies that tested different volume fractions and lengths of coconut fibres in concrete mixtures with varying compressive strengths. The studies found that coconut fibre improved properties like tensile strength, toughness, crack resistance, and spalling resistance compared to plain concrete. Volume fractions of 2-5% and fibre lengths of 20-50mm produced the best results. The document concludes that using a 4-5% volume fraction of coconut fibres 30-40mm in length with M30-M60 grade concrete would provide benefits based on previous research.
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi... (IRJET Journal)
The document discusses optimizing business management processes through automation using Microsoft Power Automate and artificial intelligence. It provides an overview of Power Automate's key components and features for automating workflows across various apps and services. The document then presents several scenarios applying automation solutions to common business processes like data entry, monitoring, HR, finance, customer support, and more. It estimates the potential time and cost savings from implementing automation for each scenario. Finally, the conclusion emphasizes the transformative impact of AI and automation tools on business processes and the need for ongoing optimization.
Multistoried and Multi Bay Steel Building Frame by using Seismic Design (IRJET Journal)
The document describes the seismic design of a G+5 steel building frame located in Roorkee, India, according to Indian codes IS 1893-2002 and IS 800. The frame was analyzed using the equivalent static load method and the response spectrum method, and its responses in terms of displacements and shear forces were compared. Based on the analysis, the frame was designed as a seismic-resistant steel structure according to IS 800:2007. STAAD Pro software was used for the analysis and design.
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr... (IRJET Journal)
This research paper explores using plastic waste as a sustainable and cost-effective construction material. The study focuses on manufacturing pavers and bricks using recycled plastic and partially replacing concrete with plastic alternatives. Initial results found that pavers and bricks made from recycled plastic demonstrate comparable strength and durability to traditional materials while providing environmental and cost benefits. Additionally, preliminary research indicates incorporating plastic waste as a partial concrete replacement significantly reduces construction costs without compromising structural integrity. The outcomes suggest adopting plastic waste in construction can address plastic pollution while optimizing costs, promoting more sustainable building practices.
International Conference on NLP, Artificial Intelligence, Machine Learning an... (gerogepatton)
The International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings on the theory, methodology, and applications of NLP, artificial intelligence, and machine learning. The conference seeks substantial contributions across all key domains of these fields, aiming to foster both theoretical advances and real-world implementations. By facilitating collaboration between researchers and practitioners from academia and industry, it serves as a nexus for sharing the latest developments in the field.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw... (IJECEIAES)
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of the proposed model. These findings underscore the model's competence in precise brain tumor localization and its potential to advance medical image analysis and healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing false positives and resource efficiency.
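Mean intersection over union, one of the headline metrics in the abstract above, compares predicted and ground-truth label masks class by class. A minimal pure-Python sketch (illustrative only, not the paper's evaluation code):

```python
def mean_iou(pred, target, num_classes):
    """Mean IoU over classes for flat lists of integer class labels.

    For each class c: IoU = |pred==c AND target==c| / |pred==c OR target==c|.
    Classes absent from both masks are skipped rather than counted as 0.
    """
    ious = []
    for c in range(num_classes):
        inter = sum(p == c and t == c for p, t in zip(pred, target))
        union = sum(p == c or t == c for p, t in zip(pred, target))
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy 2-class example: class 0 has IoU 1/2, class 1 has IoU 2/3.
pred   = [0, 0, 1, 1]
target = [0, 1, 1, 1]
print(mean_iou(pred, target, 2))
```

Skipping classes absent from both masks keeps the mean defined only over classes that actually occur, which is a common convention in segmentation evaluation.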
Batteries: introduction; types of batteries; discharging and charging of a battery; characteristics of a battery; battery rating; various tests on batteries. Primary battery: silver button cell. Secondary battery: Ni-Cd battery. Modern battery: lithium-ion battery. Maintenance of batteries; choice of batteries for electric vehicle applications.
Fuel cells: introduction; importance and classification of fuel cells; description, principle, components, and applications of fuel cells: the H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell, and direct methanol fuel cell.
The CBC machine is a common diagnostic tool used by doctors to measure a patient's red blood cell count, white blood cell count and platelet count. The machine uses a small sample of the patient's blood, which is then placed into special tubes and analyzed. The results of the analysis are then displayed on a screen for the doctor to review. The CBC machine is an important tool for diagnosing various conditions, such as anemia, infection and leukemia. It can also help to monitor a patient's response to treatment.
Introduction: e-waste definition; sources of e-waste; hazardous substances in e-waste; effects of e-waste on the environment and human health; need for e-waste management; e-waste handling rules; waste minimization techniques for managing e-waste; recycling of e-waste; disposal and treatment methods of e-waste; mechanism of extraction of precious metals from leaching solution; global scenario of e-waste; e-waste in India; case studies.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024 (Sinan KOZAK)
Sinan, from the Delivery Hero mobile infrastructure engineering team, shares a deep dive into performance acceleration through Gradle build cache optimization. He walks through the team's journey of solving complex build-cache problems that affect Gradle builds, aiming to demonstrate what is possible for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up to numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.