This document summarizes an automated system for generating question papers and checking answers. The system allows faculty to generate randomized question papers that cover selected chapters from a database of questions tagged by type, topic, difficulty level, and cognitive skill. It also proposes an automated scoring approach using Jaro-Winkler string similarity to compare a student's answer to a model answer and assign a score. The system aims to reduce the time and effort required for manual question paper generation and marking by faculty.
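The Jaro-Winkler scoring step described above can be sketched in plain Python. The implementation below is a standard formulation of the metric (match window, transposition count, and a prefix bonus capped at four characters); the `to_marks` helper and its `max_marks` parameter are illustrative assumptions about how similarity might be mapped to a mark, not something the summary specifies.

```python
def jaro(s1, s2):
    """Jaro similarity: fraction of matching characters within a sliding
    window, penalized by transpositions among the matched characters."""
    if s1 == s2:
        return 1.0
    if not s1 or not s2:
        return 0.0
    window = max(0, max(len(s1), len(s2)) // 2 - 1)
    m1 = [False] * len(s1)
    m2 = [False] * len(s2)
    matches = 0
    for i, c in enumerate(s1):
        lo, hi = max(0, i - window), min(len(s2), i + window + 1)
        for j in range(lo, hi):
            if not m2[j] and s2[j] == c:
                m1[i] = m2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Count transpositions: matched characters that appear in a different order.
    t, k = 0, 0
    for i, c in enumerate(s1):
        if m1[i]:
            while not m2[k]:
                k += 1
            if c != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / len(s1) + matches / len(s2) + (matches - t) / matches) / 3

def jaro_winkler(s1, s2, p=0.1):
    """Boost the Jaro score for strings sharing a common prefix (max 4 chars)."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

def to_marks(student_answer, model_answer, max_marks=10):
    """Hypothetical mark assignment: scale similarity to the question's marks."""
    return round(jaro_winkler(student_answer.lower(), model_answer.lower()) * max_marks, 1)
```

On the classic example pair "MARTHA"/"MARHTA" this yields the well-known similarity of about 0.961.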
Comparison Intelligent Electronic Assessment with Traditional Assessment for ... (CSEIJ Journal)
In education, the use of electronic (E) examination systems is not a novel idea: E-examination systems have been used to conduct objective assessments for years. This research deals with randomly designed E-examinations and proposes an E-assessment system that can be used for subjective questions. The system assesses answers to subjective questions by computing a matching ratio between the keywords in the instructor's and the student's answers; the matching ratio is based on semantic and document similarity. The assessment system is composed of four modules: preprocessing, keyword expansion, matching, and grading. A survey and a case study were used in the research design to validate the proposed system. The assessment system helps instructors save time, costs, and resources, while increasing the efficiency and productivity of exam setting and assessment.
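As a rough illustration of how the four modules named in this abstract could fit together, the sketch below uses a hand-made synonym table in place of a real keyword-expansion resource and plain token overlap in place of the paper's semantic and document similarity; every name and value here is illustrative, not taken from the paper.

```python
# Hypothetical synonym table standing in for the keyword-expansion module;
# a real system would consult a thesaurus or word embeddings.
SYNONYMS = {"quick": {"fast", "rapid"}, "error": {"mistake", "fault"}}

def preprocess(text):
    """Preprocessing module: lowercase and split into a keyword set."""
    return set(text.lower().split())

def matching_ratio(instructor_answer, student_answer):
    """Matching module: fraction of instructor keywords found in the student
    answer, counting a keyword as found if any known synonym appears."""
    ref = preprocess(instructor_answer)
    ans = preprocess(student_answer)
    if not ref:
        return 0.0
    matched = sum(1 for kw in ref if kw in ans or (SYNONYMS.get(kw, set()) & ans))
    return matched / len(ref)

def grade(ratio, max_marks=10):
    """Grading module: scale the matching ratio to marks."""
    return round(ratio * max_marks, 1)
```

With the toy table above, a student answer using "fast" and "mistake" fully matches an instructor answer containing "quick" and "error".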
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS (cscpconf)
In education, the use of electronic (E) examination systems is not a novel idea, as E-examination systems have been used to conduct objective assessments for the last few years. This research deals with randomly designed E-examinations and proposes an E-assessment system that can be used for subjective questions. This system assesses answers to subjective questions by finding a matching ratio for the keywords in instructor and student answers. The matching ratio is achieved based on semantic and document similarity. The assessment system is composed of four modules: preprocessing, keyword expansion, matching, and grading. A survey and case study were used in the research design to validate the proposed system. The examination assessment system will help instructors to save time, costs, and resources, while increasing efficiency and improving the productivity of exam setting and assessments.
An Adaptive Approach for Subjective Answer Evaluation (vivatechijri)
In the current academic environment, assignments and homework are essential for students to improve their final grades. These assignments are checked manually by teachers, which consumes a great deal of time and effort, and manual checking is prone to human error that can affect students' grades. Students may also misplace hard copies of assignments and be forced to rewrite them. To overcome these problems, the proposed system converts this manual work into a digital workflow: students submit their assignments to the system, and the system generates and assigns appropriate grades. The proposed system uses the K-Nearest Neighbor algorithm to collect keywords, check for similarity, and generate a similarity score; it also checks each keyword's relation to its sentence. A Semantic Similarity Measure algorithm is used to compare keywords against synonyms and words with similar meanings. After the similarity score is obtained, grades are assigned accordingly.
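A minimal sketch of the nearest-neighbour idea in this abstract: grade a submission by the majority grade among its k most similar reference answers. Bag-of-words cosine similarity stands in for the paper's combined KNN and semantic-similarity measures, and the reference answers and grades are invented for illustration.

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector as a token-count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_grade(student_answer, graded_refs, k=3):
    """Assign the most common grade among the k most similar
    (reference_answer, grade) pairs."""
    ranked = sorted(graded_refs,
                    key=lambda rg: cosine(bow(student_answer), bow(rg[0])),
                    reverse=True)
    top = [g for _, g in ranked[:k]]
    return max(set(top), key=top.count)
```

For example, with references [("the cpu executes instructions", "A"), ("the cpu runs programs", "A"), ("i do not know", "F")], the answer "the cpu executes programs" lands nearest the two "A" references.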
DEVELOPING A FRAMEWORK FOR ONLINE PRACTICE EXAMINATION AND AUTOMATED SCORE GE... (ijcsit)
ABSTRACT
Examination is the process by which the ability and quality of examinees can be measured, and it is necessary to ensure that quality. An online examination system lets participants take an examination regardless of location by connecting to the examination site over the Internet using desktop computers, laptops, or smartphones. Automated score generation is the process by which answer scripts are evaluated automatically to produce scores. Although there are many existing online examination systems, their main drawback is that they cannot compute automated scores accurately, especially for text-based answers. Moreover, most of them are unilingual, so examinees can take the examination only in one particular language. Considering this, we present a framework that can administer Multiple Choice Question (MCQ) and written examinations in two languages, English and Bangla. We develop a database in which the questions and answers are stored; the questions are displayed on the web page with answer options for MCQ questions and text boxes for written questions. To generate scores for the written questions, we performed several types of analysis on the written answers, whereas for the MCQ questions we simply compared the database answers with the user's answers. We conducted several experiments to check the accuracy of score generation and found that our system generates 100% accurate scores for MCQ questions and more than 90% accurate scores for text-based questions.
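The contrast this abstract draws between MCQ and written scoring can be sketched as follows. MCQ scoring really is a direct comparison with the stored key; the `score_written` function is a deliberately crude stand-in for the paper's unspecified text analysis, awarding marks in proportion to token overlap with a model answer. All function names and the marks scale are assumptions.

```python
def score_mcq(key, response):
    """MCQ scoring: exact (case- and whitespace-insensitive) comparison
    with the answer stored in the database."""
    return 1 if response.strip().lower() == key.strip().lower() else 0

def score_written(model_answer, response, full_marks=5):
    """Illustrative written scoring: marks proportional to the fraction
    of model-answer tokens the response covers."""
    model = set(model_answer.lower().split())
    resp = set(response.lower().split())
    if not model:
        return 0
    return round(full_marks * len(model & resp) / len(model))
```

This also illustrates why the paper reports 100% accuracy on MCQs but only "more than 90%" on text answers: exact comparison is deterministic, while any overlap-based measure is an approximation.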
This paper presents a review and comparative evaluation of a few well-known machine learning algorithms in terms of their suitability and code performance on a given dataset of any size. We describe our Machine Learning ToolBox, built in the Python programming language. The toolbox implements supervised classification algorithms such as Naïve Bayes, Decision Trees, SVM, K-Nearest Neighbors, and a Neural Network (backpropagation). The algorithms are tested on the iris and diabetes datasets and compared on the basis of their accuracy under different conditions; using our tool, however, one can apply any of the implemented ML algorithms to any dataset of any size. The main goal of the toolbox is to provide users with a platform to test their datasets on different machine learning algorithms and use the accuracy results to determine which algorithm fits the data best. The toolbox lets users choose a dataset, structured or unstructured, and then select the features to use for training. We give concluding remarks on the performance of the implemented algorithms based on experimental analysis.
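The core of such a toolbox is a harness that runs several classifiers on the same split and compares accuracies. The sketch below keeps everything in the standard library, with a majority-class baseline and a 1-nearest-neighbour classifier on a toy dataset standing in for the toolbox's real algorithms and the iris/diabetes data; names and data are illustrative.

```python
import math
from collections import Counter

def majority_predict(train, _x):
    """Baseline: always predict the most common training label."""
    return Counter(lbl for _, lbl in train).most_common(1)[0][0]

def nn_predict(train, x):
    """1-nearest-neighbour by Euclidean distance."""
    return min(train, key=lambda p: math.dist(p[0], x))[1]

def accuracy(predict, train, test):
    """Fraction of test points the classifier labels correctly."""
    return sum(predict(train, x) == y for x, y in test) / len(test)

# Tiny linearly separable toy data standing in for a real dataset.
train = [((0.0, 0.0), "a"), ((0.2, 0.1), "a"), ((0.1, 0.2), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
test = [((0.1, 0.0), "a"), ((1.1, 0.9), "b")]

for name, clf in [("majority", majority_predict), ("1-NN", nn_predict)]:
    print(name, accuracy(clf, train, test))
```

The printed accuracies are exactly the comparison table the abstract describes, just reduced to two classifiers.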
An Adaptive Evaluation System to Test Student Caliber using Item Response Theory (Editor IJMTER)
Computational creativity research has produced many computational systems that are described as creative [1]. A comprehensive literature survey reveals that although such systems are labelled as creative, there is a distinct lack of evaluation of the creativity of creative systems [1]. A number of online testing websites exist today, but their drawback is that every student taking a particular test is always given the same set of questions, irrespective of caliber. Thus a student with a very high Intelligence Quotient (IQ) may be forced to answer basic questions, while weaker students may be asked very challenging questions they cannot answer. This method of testing wastes the time of high-IQ students, can be quite frustrating for weaker students, and does not help a teacher understand a particular student's caliber in the subject under consideration. Each learner has a different learning status, so different test items should be used in their evaluation. This paper proposes an Adaptive Evaluation System based on Item Response Theory, built for mobile end users so that students have the flexibility to attempt the test from anywhere. The application not only dynamically customizes questions for each student based on the previously answered question, but also, by adjusting the difficulty of test questions to student ability, lets a teacher acquire a valid and reliable measurement of student competency.
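The adaptive loop this abstract describes can be sketched with the simplest Item Response Theory model, the one-parameter (Rasch) model: each item has a difficulty, the next item is the one most informative at the current ability estimate, and ability moves up or down after each response. The crude fixed-step ability update below is an illustrative stand-in for the maximum-likelihood estimation a real IRT engine would use.

```python
import math

def p_correct(theta, b):
    """Rasch (1PL) model: probability that a student of ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def pick_item(theta, difficulties):
    """Select the item whose success probability is closest to 0.5 --
    the most informative item at the current ability estimate."""
    return min(difficulties, key=lambda b: abs(p_correct(theta, b) - 0.5))

def update_ability(theta, b, correct, step=0.5):
    """Illustrative update: nudge theta up after a correct answer,
    down after an incorrect one."""
    return theta + step if correct else theta - step
```

For a student currently estimated at theta = 1.0, an item bank with difficulties [-2, 0, 1, 3] yields item 1: easy items would waste the strong student's time and the hardest would likely frustrate, exactly the failure modes the abstract attributes to fixed tests.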
A New Active Learning Technique Using Furthest Nearest Neighbour Criterion fo... (ijcsa)
Active learning is a supervised learning method based on the idea that a machine learning algorithm can achieve greater accuracy with fewer labelled training images if it is allowed to choose the images from which it learns. Facial age classification is a technique for classifying face images into one of several predefined age groups. The proposed study applies an active learning approach to facial age classification that allows the classifier to select the data from which it learns. The classifier is initially trained on a small pool of labelled training images using bilateral two-dimensional linear discriminant analysis. The most informative unlabelled image is then found in the unlabelled pool using the furthest-nearest-neighbour criterion, labelled by the user, and added to the appropriate class in the training set. Incremental learning is performed using an incremental version of bilateral two-dimensional linear discriminant analysis. This active learning paradigm is applied to the k-nearest-neighbour classifier and the support vector machine classifier, and the performance of the two classifiers is compared.
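Reading the furthest-nearest-neighbour criterion literally, the query step selects the unlabelled point whose distance to its nearest labelled point is largest, i.e. the point the current training set explains worst. A sketch on raw feature vectors (the paper works in a discriminant subspace, which is omitted here for brevity):

```python
import math

def furthest_nearest_neighbour(labeled, unlabeled):
    """Return the unlabelled point whose nearest labelled point is
    furthest away -- the most informative point to ask the user to label."""
    def nearest_dist(x):
        return min(math.dist(x, l) for l in labeled)
    return max(unlabeled, key=nearest_dist)
```

Given labelled points near the origin, a far-away unlabelled point is selected first, which matches the intuition that it would add the most new information.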
A Study on Learning Factor Analysis – An Educational Data Mining Technique fo... (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
Analysis of sms feedback and online feedback using sentiment analysis for ass... (eSAT Journals)
Abstract: The system collects SMS and online feedback from students in free-text format and produces an instant response on lecturer feedback by analyzing the free-text messages with sentiment analysis. The approach is very low cost and platform independent. It classifies each message into one of three outputs, i.e. Good, Bad, or Average/Neutral, and presents the results in graphical format. Key Words: sentiment analysis, online feedback.
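A lexicon-based classifier is the simplest way to produce the three outputs this abstract names. The word lists below are tiny illustrative stand-ins for a real sentiment lexicon, and the thresholding is an assumption; the abstract does not describe the actual method.

```python
# Toy lexicons; a real system would use a full sentiment dictionary.
POSITIVE = {"good", "great", "excellent", "clear", "helpful"}
NEGATIVE = {"bad", "boring", "unclear", "poor", "confusing"}

def classify_feedback(text):
    """Count lexicon hits in the message and map the balance to
    the three outputs: Good, Bad, or Average/Neutral."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "Good"
    if score < 0:
        return "Bad"
    return "Average/Neutral"
```

Aggregating these labels per lecturer gives the counts behind the graphical summary the abstract mentions.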
A scoring rubric for automatic short answer grading system (TELKOMNIKA Journal)
Over the past decades, research on automatic grading has become an interesting issue. These studies focus on how to enable machines to help humans assess students' learning outcomes. Automatic grading enables teachers to assess students' answers more objectively, consistently, and quickly. The essay model has two types, i.e. the long essay and the short answer. Most previous research developed automatic essay grading (AEG) rather than automatic short answer grading (ASAG). This study aims to assess the sentence similarity of short answers against questions and reference answers in Indonesian, without any semantic language tool. The research uses pre-processing steps consisting of case folding, tokenization, stemming, and stopword removal. The proposed approach is a scoring rubric obtained by measuring sentence similarity with string-based similarity methods and a keyword-matching process. The dataset consists of 7 questions, 34 alternative reference answers, and 224 student answers. The experimental results show that the proposed approach achieves a Pearson correlation between 0.65419 and 0.66383, with a Mean Absolute Error (MAE) between 0.94994 and 1.24295. The proposed approach also improves the correlation and decreases the error of each method.
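The four pre-processing steps this abstract lists, followed by one string-based similarity measure, can be sketched as below. The stopword list and the suffix-stripping rule are toy stand-ins (a real Indonesian pipeline would use a proper stemmer and stopword list), and `difflib.SequenceMatcher` is one example of a string-based similarity, not necessarily the one the paper uses.

```python
import difflib
import re

STOPWORDS = {"the", "a", "an", "is", "are", "of", "and"}  # toy list

def preprocess(text):
    """Case folding, tokenization, stopword removal, and a naive
    suffix-stripping stand-in for stemming."""
    text = text.lower()                                   # case folding
    tokens = re.findall(r"[a-z0-9]+", text)               # tokenization
    tokens = [t for t in tokens if t not in STOPWORDS]    # stopword removal
    tokens = [re.sub(r"(ing|ed|s)$", "", t) for t in tokens]  # "stemming"
    return " ".join(tokens)

def string_similarity(reference, answer):
    """One string-based measure (difflib's ratio) applied after pre-processing."""
    return difflib.SequenceMatcher(None, preprocess(reference),
                                   preprocess(answer)).ratio()
```

After pre-processing, "The cats are running" and "cat run" reduce to nearly identical strings, so their similarity is high even though the surface forms differ.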
Similar to Automated Question Paper Generator And Answer Checker Using Information Retrieval Approach (20)
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
Palestine last event orientationfvgnh .pptxRaedMohamed3
An EFL lesson about the current events in Palestine. It is intended to be for intermediate students who wish to increase their listening skills through a short lesson in power point.
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptxEduSkills OECD
Andreas Schleicher presents at the OECD webinar ‘Digital devices in schools: detrimental distraction or secret to success?’ on 27 May 2024. The presentation was based on findings from PISA 2022 results and the webinar helped launch the PISA in Focus ‘Managing screen time: How to protect and equip students against distraction’ https://www.oecd-ilibrary.org/education/managing-screen-time_7c225af4-en and the OECD Education Policy Perspective ‘Students, digital devices and success’ can be found here - https://oe.cd/il/5yV
How to Make a Field invisible in Odoo 17Celine George
It is possible to hide or invisible some fields in odoo. Commonly using “invisible” attribute in the field definition to invisible the fields. This slide will show how to make a field invisible in odoo 17.
Model Attribute Check Company Auto PropertyCeline George
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
We all have good and bad thoughts from time to time and situation to situation. We are bombarded daily with spiraling thoughts(both negative and positive) creating all-consuming feel , making us difficult to manage with associated suffering. Good thoughts are like our Mob Signal (Positive thought) amidst noise(negative thought) in the atmosphere. Negative thoughts like noise outweigh positive thoughts. These thoughts often create unwanted confusion, trouble, stress and frustration in our mind as well as chaos in our physical world. Negative thoughts are also known as “distorted thinking”.
Unit 8 - Information and Communication Technology (Paper I).pdfThiyagu K
This slides describes the basic concepts of ICT, basics of Email, Emerging Technology and Digital Initiatives in Education. This presentations aligns with the UGC Paper I syllabus.
Unit 8 - Information and Communication Technology (Paper I).pdf
Automated Question Paper Generator And Answer Checker Using Information Retrieval Approach
1. International Journal of Computer Trends and Technology ( IJCTT ) - Volume 67 Issue 4 – April 2019
ISSN: 2231 – 2803 http://www.ijcttjournal.org Page 5
Automated Question Paper Generator and Answer Checker Using Information Retrieval Approach

Mansi Palav, Pranjal Singh, Pratik Vishwakarma, Pooja Pandit, Prof. Neelam Phadnis
B.E, Department of Computer Engineering, Mumbai University, Mumbai, Maharashtra, India
Abstract

With the rapid growth of the field of computer science and the demand it faces today, examinations play a vital role in testing student performance. It is therefore important to have a well-designed question model that supports student growth, tests learning skills, and keeps a check on student performance. Generating an effective question paper is a task of great importance for any educational institute, and the traditional method, in which lecturers prepare question papers manually, is tedious and challenging. Our system allows faculty to generate question papers with random questions covering the chapters selected by the faculty. In this study, we also propose an automated scoring approach for descriptive answers using Jaro-Winkler similarity.

Keywords - Question Paper, Answer Checker, Randomized Algorithm, Jaro-Winkler.
I. INTRODUCTION

Generating an effective question paper is a task of great importance for any educational institute. The traditional method, in which lecturers prepare question papers manually, is tedious and challenging. An Automated Question Paper Generator and Answer Checker System can reduce time consumption by replacing this conventional method of question paper generation. The system fully automates question paper generation and selective answer checking. It generates question papers from a database in which all types of questions and answers (MCQs, theory-based, and objective) are stored. The system randomly selects questions from the database and generates a question paper that covers all the selected chapters, allowing students to attempt the examination and be scored accordingly. The Answer Checker marks the paper and generates a score for the student by measuring the difference between the model answer and the student answer using Jaro-Winkler distance.
II. LITERATURE REVIEW

This section presents the significant approaches in the field of information retrieval and the techniques used for text similarity, and examines the need for automatic question paper generation and answer checking. The models used to evaluate the results of these techniques are also reviewed.

A. Background Study

1. Question Paper Generation

A literature survey was conducted to understand the need for automatic question paper generation. As mentioned in [5], many existing LMSs support a tagging feature, but users may not utilize it fully. A comparative study shows that Moodle is the best LMS for an educational institution that must support a large number of users, but it allows the user to define only the question type. Hence the questions in the repository may carry only basic tags or no tags at all, and it becomes an overhead for teachers to tag these questions before using them. Properly tagged questions can be retrieved efficiently from the repository, so it is necessary to tag questions before adding them. An existing system that generates question papers from user-given input parameters considers only a fixed range of values. Our system supports not only upper and lower bounds for inputs but also a more granular level of topics than chapters, and more question types than the three offered by that system. We use an automatically tagged question repository as input instead of untagged questions. [5]
2. Answer Checker Systems

The following models used to evaluate answers are reviewed.

a. Intelligent Essay Assessor (IEA)
IEA uses a statistical model to compare descriptive answers and checks the semantic similarity between two answer sets. It is also used to analyse and score essay-type answers.

b. E-rater
E-rater is used to analyse essay-type answers and to identify syntactical and lexical issues in the text. [3]

c. C-rater
C-rater is primarily used for assigning marks according to the correctness of the student's answers. It also accounts for similar word choices, spelling errors, and syntax variations, which are checked automatically. [2]
3. Text-to-Text Similarity Approaches

The primary similarity methods are classified as knowledge-based, corpus-based, and string-based similarity measures. [4]

a. Knowledge-based Similarity
It applies text-to-text similarity to determine the shortest path of similarity by detecting lexical chains between pairs in a text using the WordNet hierarchy.

b. Corpus-based Similarity
It finds similarity between words according to sets of documents called a corpus, checking the occurrences of each word in the particular answer.

c. String-based Similarity
String-based similarity evaluates measures of similarity or dissimilarity between two text strings. There are two types of string-based algorithms for evaluating similarity between the student's answer (SA) and the model answer (MA).

c.1 Character-based Similarity
Character-based similarity finds the distance between two strings as the minimum number of operations, such as insertion, deletion, substitution, and transposition of a single character, needed to transform one into the other.

c.2 Term-based Similarity
Term-based similarity measures the distance between two items as the sum of the distances of their corresponding terms.
d. Cosine Similarity

The SA and MA are represented as vectors, where the student's answer and the model answer are sets of terms; each term has a weight that reflects its importance in that MA or SA. There are several ways to calculate this weight, such as Term Frequency-Inverse Document Frequency (TF-IDF), where TF refers to the term's frequency in the answer and IDF represents the importance of a term with respect to the entire corpus, calculated from the number of answers in the corpus divided by the number of answers containing the term. The cosine similarity measure is based on this TF-IDF term weighting scheme, which is widely used as a weighting factor in information retrieval and text mining. The formulas for TF, IDF, and TF-IDF are as follows:

TF = (number of occurrences of the term in the answer) / (number of terms in the answer)

IDF = log(N / (n_j + 1))

where N is the total number of answers and n_j is the number of answers containing the term.

TF-IDF = TF * IDF

The main idea behind this model is to calculate the weight of each term in each answer with respect to the entire corpus.
TF-IDF is used to compare a student's answer vector with a model answer vector using the cosine similarity measure, which is the cosine of the angle between the two vectors. For two vectors of attributes SA and MA, the cosine similarity cos(θ) is expressed with a dot product and magnitudes as follows:

cosine similarity(SA, MA) = DotProduct(SA, MA) / (|SA| * |MA|)

where the dot product is

DotProduct(SA, MA) = SA[0]*MA[0] + ... + SA[n]*MA[n]

and the magnitudes |SA| and |MA| are

|SA| = sqrt(SA[0]^2 + SA[1]^2 + ... + SA[n]^2)
|MA| = sqrt(MA[0]^2 + MA[1]^2 + ... + MA[n]^2)

After the cosine similarity between the model answer and the student answer is calculated, marks are assigned.
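As a minimal Python sketch of the TF-IDF weighting and cosine comparison described above (tokenization is assumed to have been done already; the n_j + 1 denominator in the IDF is the smoothing used in the formula, which also avoids division by zero):

```python
import math

def tf(term, answer_tokens):
    # Term frequency: occurrences of the term divided by total terms in the answer.
    return answer_tokens.count(term) / len(answer_tokens)

def idf(term, corpus):
    # Inverse document frequency: log of total answers over answers containing the term.
    n_j = sum(1 for doc in corpus if term in doc)
    return math.log(len(corpus) / (n_j + 1))

def tfidf_vector(answer_tokens, vocabulary, corpus):
    # Weight each vocabulary term in this answer with respect to the entire corpus.
    return [tf(t, answer_tokens) * idf(t, corpus) for t in vocabulary]

def cosine_similarity(sa, ma):
    # cos(theta) = DotProduct(SA, MA) / (|SA| * |MA|)
    dot = sum(a * b for a, b in zip(sa, ma))
    mag = math.sqrt(sum(a * a for a in sa)) * math.sqrt(sum(b * b for b in ma))
    return dot / mag if mag else 0.0
```

Identical student and model answers produce identical vectors, so their cosine similarity is 1.0, and the similarity decreases as the shared weighted terms diverge.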
4. Automated Tagging

The following four tags were identified for the automatic generation of the question paper, based on Bloom's Taxonomy.

a. Cognitive-Level Identification
This identifies the cognitive skill a question exercises: Recall, Understand, Apply, Analyse, Evaluate, or Create.

b. Question-Type Identification
This identifies whether a question is of objective or subjective type.
Tags             | Values
-----------------|--------------------------------------------------------------
Cognitive Level  | Recall, Understand, Apply, Analyse, Evaluate, Create
Question Type    | Fill in the blanks, Multiple choice, Match the following, True/False, Answer in one word, Definition
Content          | Topics and subtopics from the syllabus
Difficulty Level | Low, Medium, High
c. Content Identification
This identifies the topics as well as the subtopics a question covers.

d. Difficulty-Level Identification
The difficulty level of a question depends on the concept involved, the type of question, and the cognitive level, and is classified as low, medium, or high.
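A question carrying the four tags above might be stored as a record like the following sketch (the field names and values are illustrative assumptions, not the paper's actual schema):

```python
# Hypothetical tagged question record combining the four tags described above.
question = {
    "text": "Define normalization in the context of databases.",
    "cognitive_level": "Understand",       # Recall / Understand / Apply / ...
    "question_type": "Definition",         # MCQ, Fill in the blanks, ...
    "content": ("DBMS", "Normalization"),  # topic and subtopic from the syllabus
    "difficulty": "Medium",                # Low / Medium / High
}

def matches(q, difficulty=None, qtype=None):
    # Retrieve a question only if it carries the requested tag values;
    # a tag left as None is not filtered on.
    return ((difficulty is None or q["difficulty"] == difficulty) and
            (qtype is None or q["question_type"] == qtype))
```

With tags stored this way, the generator can filter the repository by difficulty or question type before drawing random questions.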
III. PROPOSED SYSTEM

We present an Automated Question Paper Generator and Answer Checker System that can reduce time consumption by replacing the traditional method of question paper generation.
System Architecture
A. Question Paper Generation System

The examiner inputs questions into the respective database records and can manipulate (add, delete, or change) the data through a GUI; the questions are updated according to the operation performed. As this is a web-based application, faculty can set the difficulty level, the structure of the answers, the chapters they want to include for their subject, and the total marks for which the paper is set. The system then generates random questions from the specified chapters by extracting them from the database using the algorithm below, so that the questions are well organized.
1. Randomized Algorithm

The randomized algorithm checks for duplicate questions and is used to display random questions. With N the total number of questions in the database, the algorithm randomly generates the questions as follows:

Step 1: Create an array of N locations.
Step 2: Generate a random number.
Step 3: If the array is still empty (lock == 0), store the generated number. Otherwise, compare the generated number with the previous numbers in the array; if a matching value is found, go to Step 2, else store the number in the next location.
Step 4: Repeat Step 2 until N numbers are stored.
Step 5: Select questions from the database matching the values in the array locations, one by one. [1]
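The duplicate-avoiding random draw above can be sketched as follows (a simplified Python illustration of the steps, not the paper's implementation; in practice `random.sample` performs the same de-duplicated draw in one call):

```python
import random

def pick_random_questions(question_ids, n):
    # Mirrors Steps 1-5 above: draw random picks, re-draw on duplicates,
    # until n distinct locations are filled, then return the selected ids.
    chosen = []
    while len(chosen) < n:
        candidate = random.choice(question_ids)  # Step 2: generate a random pick
        if candidate in chosen:                  # Step 3: duplicate found, retry
            continue
        chosen.append(candidate)                 # store in the next location
    return chosen
```

The returned list contains n distinct question ids, which the generator then resolves against the database records.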
B. Answer Checker System

The student appearing for the test needs to log in and take the test (the test contains both subjective and objective questions). The more accurate the answers, the more marks the student earns. The difference between the model answer and the student answer is computed using Jaro-Winkler distance, which accounts for the mistakes the student has made, and the final score of the test is produced.
Answer Checking Process

1. Pre-processing

Pre-processing plays a very important role in answer checking. The pre-processing operations needed are segmentation, stop-word removal, normalization, finding synonyms, and root extraction.
a. Tokenization
Tokenization is needed to identify the end of each sentence. Sentences are delimited by various punctuation marks, such as the dot (.), comma (,), and colon (:).
b. Stop-word Removal
Stop-words are the most frequent words used in an answer, such as prepositions, articles, and conjunctions, and are not very useful for automatic scoring. Removing these stop-words improves the performance of the system.

c. Normalization
Normalization modifies the text into a definite form by removing unnecessary and non-alphanumeric characters, which improves the performance of the system.
d. Root Extraction
The roots of the keywords are extracted from both the student answer and the model answer.
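The tokenization, normalization, and stop-word removal steps can be sketched in Python as follows (a minimal illustration; the stop-word list is a hypothetical stand-in for a fuller one, and synonym finding and root extraction are omitted):

```python
import re

# A small, hypothetical stop-word list; a real system would use a fuller set.
STOP_WORDS = {"a", "an", "the", "is", "of", "and", "in", "to"}

def preprocess(answer):
    # Normalization: lower-case the text and strip non-alphanumeric characters.
    text = re.sub(r"[^a-z0-9\s]", " ", answer.lower())
    # Tokenization: split the cleaned text into words.
    tokens = text.split()
    # Stop-word removal: drop high-frequency function words.
    return [t for t in tokens if t not in STOP_WORDS]
```

For example, `preprocess("The CPU is a part of the computer.")` keeps only the content words `cpu`, `part`, and `computer`.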
e. Jaro-Winkler Distance

Jaro-Winkler compares strings by measuring the edit distance between them: the smaller the distance, the more similar the strings. The distance is normalized so that 0 means an exact match and 1 means no similarity; Jaro-Winkler similarity is 1 minus the Jaro-Winkler distance.

The Jaro similarity sim_j between two strings s1 and s2 is

sim_j = 0, if m = 0
sim_j = (1/3) * (m/|s1| + m/|s2| + (m - t)/m), otherwise

where |s_i| is the length of string s_i, m is the number of matching characters, and t is half the number of transpositions.

Two characters from s1 and s2 are considered matching only if they are the same and not farther apart than

floor(max(|s1|, |s2|) / 2) - 1.

Jaro-Winkler additionally uses a prefix scale p, which gives higher ratings to strings that match from the very start. The Jaro-Winkler similarity is therefore

sim_w = sim_j + l * p * (1 - sim_j)

where l is the length of the common prefix, up to a maximum of 4 characters, and p is a constant scaling factor that adjusts the score upwards for a common prefix. p should not be greater than 0.25, otherwise the similarity can exceed 1; the standard value is p = 0.1.

The Jaro-Winkler distance d_w is then

d_w = 1 - sim_w
2. Automatic Scoring

The similarity measure value is converted into a score using the following formula:

Mark = Similarity Value * Maximum Mark

where the similarity value is calculated by the Jaro-Winkler method.
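The Jaro-Winkler computation and the scoring formula can be sketched in Python as follows (a from-scratch illustration of the standard algorithm with p = 0.1, not the paper's implementation; the 5-mark question in the usage note is a hypothetical example):

```python
def jaro(s1, s2):
    # Jaro similarity: sim_j = (m/|s1| + m/|s2| + (m - t)/m) / 3, or 0 if m = 0.
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if len1 == 0 or len2 == 0:
        return 0.0
    # Characters match only within this window of each other.
    window = max(len1, len2) // 2 - 1
    match1, match2 = [False] * len1, [False] * len2
    m = 0
    for i, c in enumerate(s1):
        for j in range(max(0, i - window), min(len2, i + window + 1)):
            if not match2[j] and s2[j] == c:
                match1[i] = match2[j] = True
                m += 1
                break
    if m == 0:
        return 0.0
    # t = half the number of transpositions among matched characters.
    t, k = 0, 0
    for i in range(len1):
        if match1[i]:
            while not match2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (m / len1 + m / len2 + (m - t) / m) / 3

def jaro_winkler(s1, s2, p=0.1):
    # sim_w = sim_j + l * p * (1 - sim_j), with l the common prefix (max 4).
    sim_j = jaro(s1, s2)
    l = 0
    for a, b in zip(s1[:4], s2[:4]):
        if a != b:
            break
        l += 1
    return sim_j + l * p * (1 - sim_j)

def score(student_answer, model_answer, max_mark):
    # Mark = similarity value * maximum mark for the question.
    return jaro_winkler(student_answer, model_answer) * max_mark
```

For the classic example pair "MARTHA" and "MARHTA", the similarity works out to about 0.961, so a student whose answer scored that similarity on a 5-mark question would receive roughly 4.8 marks.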
IV. IMPLEMENTATION OF THE SYSTEM
1. Login Page

In our system, the login page is common to admin, staff, and students. All usernames and passwords are inserted in the database by default, uniquely for each type of user (admin, faculty, and student). Users do not need to register for a session; they only need to authenticate themselves through the login page and then start their respective sessions.
2. Add Question and Answer

Faculty can insert additional questions into the database if required. They only need to enter the subject name and the chapter number to which the question belongs, then enter the question along with the model answer, since the faculty must provide their model answer in the database. They must also select the marks (1, 2, or 5) to allocate to that particular question, the difficulty level (Low, Medium, or High), and the type of question (MCQ, FIB, Define, or Brief).
3. Generate Question Paper
Faculty need to enter the Subject name that they need
to generate. Pattern of the paper (20, 40, 60,
80).Chapter names are displayed faculty need to
select at least twoChapters to generate the paper
pattern.
Exam Date is selected when the exam is
conducted.By clicking on generate question paper the
question paper is generated with random questions as
per the selected chapters.
3.1 Sample Paper Generated
Inputs
Output
4. Student - Start Exam

A student logs in and starts a session when appearing for the exam. The student selects the hosted question paper, and the exam session starts on clicking "Let's Begin". Using the marks of the individual questions, the total marks are then calculated automatically and the result is displayed.
5. Answer Sheet
5.1 Sample Answer Generated
Inputs
Output
After Submission
V. CONCLUSIONS

We have implemented automatic question paper generation using a randomized algorithm. The system can generate question papers of 20, 40, 60, and 80 marks with various types of questions, such as MCQs (Multiple Choice Questions), fill in the blanks, answer in one sentence, and short answers. The answer checking system generates marks for each individual question based on the similarity measure between the student answer and the model answer. This system can be used in various colleges and schools to reduce their workload and to use time effectively.
REFERENCES

[1] Kapil Naik, Shreyas Sule, Shruti Jadhav, Surya Pandey, "Automatic Question Paper Generation System Using Randomization Algorithm", (IJETR), ISSN: 2321-0869, Volume 2, Issue 12, December 2014.
[2] C. Leacock and M. Chodorow, "C-Rater: Automated Scoring of Short-Answer Questions", Computers and the Humanities, vol. 37, no. 4, pp. 389-405, 2003.
[3] Y. Attali and J. Burstein, "Automated Essay Scoring with E-Rater V.2.0", paper presented at the Conference of the International Association for Educational Assessment (IAEA), Philadelphia, 13 June.
[4] M. Mohler and R. Mihalcea, "Text-to-Text Semantic Similarity for Automatic Short Answer Grading", Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pp. 567-575, March.
[5] H. Rababah and A. T. Al-Taani, "An Automated Scoring Approach for Arabic Short Answers Essay Questions", 2017 8th International Conference on Information Technology (ICIT), 2017.
[6] Noor Hasimah Ibrahim Teo, Nordin Abu Bakar and Moamed Rezduan Abd Rashid, "Representing Examination Question Knowledge into Genetic Algorithm", IEEE Global Engineering Education Conference (EDUCON), 2014.
[7] Munesh Chandra, Vikrant Gupta and Santosh Kr. Paul, "A Statistical Approach for Automatic Text Summarization by Extraction", 2011 International Conference on Communication Systems and Network Technologies.
[8] R. Deshpande, K. Vaze, S. Rathod and T. Jarhad, "Comparative Study of Document Similarity Algorithms and Clustering Algorithms for Sentiment Analysis", International Journal of Emerging Trends & Technology in Computer Science, Volume 3, Issue 5, September-October 2014.
[9] Dae-Won Kim and Kwang H. Lee, "A New Fuzzy Information Retrieval System Based on User Preference Model", Department of Computer Science, Korea Advanced Institute of Science and Technology, Kusung-dong, Yusung-gu, 305-701, Daejon, Korea.