The Essay Scoring Tool (TEST)
B.E. Project Presentation
Submitted by: Abhinav Gupta 201/CO/03, Danish Contractor 233/CO/03, Gaurav Singh 238/CO/03, Himanshu Mehrotra 241/CO/03
Under the guidance of: Dr. Shampa Chakraverty, COE Dept., NSIT
Date of presentation: 1st June 2007
PRIOR WORK
Overview of the Software
INPUTS: Student Essay, Training Essays, Corpus, Facts, Spelling & Grammatical Checks
→ TEST →
OUTPUTS: Score, Feedback to student
Scoring Parameters
Scoring Engine: Quality of Content, Local Coherence, Global Coherence, Factual Accuracy
SINGULAR VALUES (K) RETAINED
Study Undertaken
Set of essays given to human graders
Essays rated as: Good Essays, Bad Essays
LOCAL COHERENCE – Good Essays: average variance from gold standard = 0.0219
LOCAL COHERENCE – Other Essays: average variance from gold standard = 0.212
LOCAL COHERENCE – Combined Essays (Series 1: Good Essays, Series 2: Other Essays)
LOCAL COHERENCE – MARKING SCHEME
LOCAL COHERENCE – MARKS
CONTENT – ESSAYS TO BE MARKED
CONTENT – Good Essays
CONTENT – Other Essays
CONTENT – COMBINED (Series 1: Good Essays, Series 5: Other Essays)
CONTENT – NORMALIZED MARKS
GLOBAL COHERENCE
Essays are classified as having a: Good Structure, Average Structure, or Bad Structure
WELL-STRUCTURED ESSAY
AVERAGELY STRUCTURED ESSAY
BADLY STRUCTURED ESSAY
GLOBAL COHERENCE MARKS
Fact Evaluation Module
Inputs: Topic-Specific Keywords, List of Essays, Correct Facts List, Incorrect Facts List
Outputs: Individual Essay Reports & Scores; N x 1 Score Matrix (for internal use by TEST)
Fact Evaluation
No. of facts matched: 4
No. of incorrect facts matched: 1
SCORE: 0.8
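One formula consistent with these numbers, offered only as a guess since the deck does not state it: score = correct matches / (correct + incorrect matches) = 4 / (4 + 1) = 0.8.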
Breakup of Essay Scores
Human scores vs. TEST scores
Performance of TEST
Adjacent agreement with human graders: around 77%
Agreement among human graders: around 73%
TIME COMPLEXITY
Pre-processing for Global Coherence: O(N^3), where N = no. of sentences in the corpus
Global Coherence evaluation: O(t*n^2), where t = no. of themes, n = no. of sentences in the evaluation essay
Fact Module: O(k^4), where k = no. of keywords
COMPARISON OF TEST WITH OTHER AES TOOLS
Evaluation parameters:
  PEG: essay length, sentence complexity, word length
  IEA: similarity with gold standard
  E-Rater: lexical complexity, vocabulary, essay organization, and more
  TEST: similarity with gold standard, essay organization, factual accuracy
Feedback: PEG: No; IEA: Yes; E-Rater: Yes; TEST: Yes
Essay content checking: PEG: No; IEA: Yes; E-Rater: Yes; TEST: Yes
Fact checking: PEG: No; IEA: No; E-Rater: Yes; TEST: Yes
Training phase: time-consuming & inexpensive for PEG, IEA and TEST; time-consuming & expensive for E-Rater
Language of essays: English for PEG, IEA and E-Rater; Hindi for TEST
Performance (correlation with human raters): PEG 0.87; IEA 0.85; E-Rater 0.87; TEST 0.7652
FUTURE WORK
- Include OCR (Optical Character Recognition).
- Increase the size and variety of the corpus.
- Incorporate modules for spelling and grammar evaluation.
- Use Random Indexing (RI) techniques to reduce the size of the matrix input to the SVD procedure, and thus reduce time complexity.
LIMITATIONS
- No grammatical checking
- No spell-check
- The tool cannot account for individualistic styles of writing
- Domain-specific knowledge is required before an essay can be checked
CONTRIBUTIONS
- First AES tool for Hindi
- Local coherence at the granularity of sentences
- Good correlation with human raters
- SVD performed only once for both Local and Global Coherence
References
1. Landauer, T. K., Foltz, P. W., & Laham, D. "An Introduction to Latent Semantic Analysis." Discourse Processes, 1998.
2. Foltz, P. W., Kintsch, W., & Landauer, T. K. "The Measurement of Textual Coherence with Latent Semantic Analysis." Discourse Processes, 1998.
3. Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., & Harshman, R. "Indexing by Latent Semantic Analysis." Journal of the American Society for Information Science, 1990.
4. Kintsch, W. "On the notions of theme and topic in psychological process models of text comprehension." Interdisciplinary Studies, 2002.
5. Landauer, T. K., Laham, D., Rehder, B., & Schreiner, M. E. "How Well Can Passage Meaning be Derived without Using Word Order? A Comparison of Latent Semantic Analysis and Humans." 1996.
6. Wong, K. C., Mørch, A. I., Cheung, W. K., Lam, M. H., & Tang, J. P. "A Critiquing System to Support English Composition through the Use of Latent Semantic Analysis." 2005 IEEE International Conference on e-Technology, e-Commerce and e-Service (EEE'05), pp. 576–581.
7. Burstein, J., Marcu, D., & Knight, K. "Finding the WRITE stuff: Automatic identification of discourse structure in student essays." IEEE Intelligent Systems, 18(1):32–39, 2003.
WE WOULD LIKE TO THANK
- Dr. Shampa Chakraverty: without her constant guidance and support we would have given up long ago.
- Dr. Niladri Chatterjee, Dept. of Mathematics, IIT Delhi, for sharing his experience in the NLP field.
- Ms. Yasmin Contractor, Principal, Summerfields School, Gurgaon, for providing us with the student essays.
- Faculty of the COE Dept. and fellow students.
Q & A
Automatic Essay Evaluation Software
B.E. Final Year Project: Final Evaluation
Project Guide: Dr. Shampa Chakraverty
Team: Abhinav Gupta 201/CO/03, Danish Contractor 233/CO/03, Gaurav Singh 238/CO/03, Himanshu Mehrotra 241/CO/03
Aim of the software
- To score students’ essays on a specific topic.
- To give feedback to the student on deficiencies in his/her essay.
Need for this software
- Teachers these days are overburdened with the evaluation of answer scripts.
- Teachers are unable to give personalized attention to students’ needs.
- Students feel the need to practice writing essays in a non-test environment.
- Many factors influence the scoring of essays and introduce error.
Overview of the Software
Parameters used for evaluation
- Similarity with the gold standard
- Local coherence of the essay
- Global and theme coherence checking, with feedback generation
- Fact checking
Latent Semantic Analysis (LSA)
- LSA is a statistical technique in natural language processing for analyzing relationships between a set of documents and the terms they contain, by producing a set of concepts related to the documents and terms.
- LSA derives a high-dimensional semantic space; words and passages are represented as vectors in this space.
- Similarities measured by LSA have been shown to closely mimic human judgments of meaning similarity.
LSA: Steps involved
1. Assemble the training corpus (gold standard essay plus other articles and essays on the same topic) together with the essay under evaluation.
2. Build the term-document matrix M.
3. Apply singular-value decomposition to obtain three matrices T, S and D (T = term matrix, S = singular-values matrix, D = document matrix).
4. Reduce dimensionality by preserving only the 2 largest singular values in S, giving S-improved.
5. Multiply T, S-improved and D to obtain the new term-by-document matrix.
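As a minimal sketch of steps 2–5 in code (a NumPy illustration, not the tool's actual implementation; the function name and the k=2 default are ours):

```python
# Minimal sketch of the LSA reduction described above, using NumPy's SVD.
import numpy as np

def lsa_reduce(M: np.ndarray, k: int = 2) -> np.ndarray:
    """Return the rank-k approximation of a term-document matrix M."""
    T, s, Dt = np.linalg.svd(M, full_matrices=False)  # M = T * S * D^T
    s[k:] = 0.0                     # keep only the k largest singular values
    return T @ np.diag(s) @ Dt      # the new (reduced) term-by-document matrix
```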
LSA Example: Titles of Some Technical Memos
c1: Human machine interface for ABC computer applications
c2: A survey of user opinion of computer system response time
c3: The EPS user interface management system
c4: System and human system engineering testing of EPS
c5: Relation of user perceived response time to error measurement
m1: The generation of random, binary, ordered trees
m2: The intersection graph of paths in trees
m3: Graph minors IV: Widths of trees and well-quasi-ordering
m4: Graph minors: A survey
LSA Example: Term-by-document matrix
LSA Example: After SVD
LSA Example: Results
Similarity between documents (in the reduced space):
- C1 and C2 = 0.91 (high)
- C1 and C3 = 1.00 (very high)
- C1 and C5 = 0.85 (high)
- C2 and C3 = 0.91 (high)
- C1 and M1 = -0.85 (low)
- M1 and M2 = 1.00 (very high)
- M2 and M3 = 1.00 (very high)
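Numbers of this kind can be approximated in a few lines. The sketch below builds the classic 12-term term-document matrix for the nine titles (the term list follows Deerwester et al.), reduces it to rank 2, and compares document vectors by cosine similarity; exact values depend on weighting choices, so treat the output as illustrative rather than the deck's exact computation.

```python
import numpy as np

titles = {
    "c1": "human machine interface for abc computer applications",
    "c2": "a survey of user opinion of computer system response time",
    "c3": "the eps user interface management system",
    "c4": "system and human system engineering testing of eps",
    "c5": "relation of user perceived response time to error measurement",
    "m1": "the generation of random binary ordered trees",
    "m2": "the intersection graph of paths in trees",
    "m3": "graph minors iv widths of trees and well quasi ordering",
    "m4": "graph minors a survey",
}
# Index terms: the content words that occur in more than one title.
terms = ["human", "interface", "computer", "user", "system", "response",
         "time", "eps", "survey", "trees", "graph", "minors"]

# Term-document count matrix (12 terms x 9 documents).
M = np.array([[t.split().count(w) for t in titles.values()] for w in terms], float)

T, s, Dt = np.linalg.svd(M, full_matrices=False)
docs = (np.diag(s[:2]) @ Dt[:2]).T        # rank-2 document vectors (9 x 2)

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

names = list(titles)
print(cos(docs[names.index("c1")], docs[names.index("c2")]))  # high
print(cos(docs[names.index("c1")], docs[names.index("m1")]))  # low / negative
```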
Local Coherence Estimation
What is coherence? Each sentence in an essay is connected to the sentences before it; the degree of this connection measures the coherence of the sentence pair.
Coherence estimation using LSA: by comparing the vectors for two adjoining segments of text in the semantic space, LSA measures the degree of semantic relatedness between the segments.
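A minimal sketch of this idea, assuming sentence vectors are formed by summing reduced term vectors (the deck does not say how sentences are projected, so that step is our assumption):

```python
import numpy as np

def local_coherence(sentences, term_index, term_vectors):
    """Mean cosine similarity between adjacent sentence vectors in LSA space.

    sentences    : list of sentence strings
    term_index   : dict mapping a word to its row in term_vectors
    term_vectors : (num_terms x k) reduced term matrix, e.g. T * S from the SVD
    """
    k = term_vectors.shape[1]
    vecs = []
    for s in sentences:
        rows = [term_vectors[term_index[w]] for w in s.lower().split() if w in term_index]
        vecs.append(np.sum(rows, axis=0) if rows else np.zeros(k))
    sims = []
    for a, b in zip(vecs, vecs[1:]):        # adjoining sentence pairs
        d = np.linalg.norm(a) * np.linalg.norm(b)
        sims.append(a @ b / d if d else 0.0)
    return float(np.mean(sims)) if sims else 0.0
```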
Global and theme coherence checker and feedback generator
The global structure of an essay: Introduction → Ideas in individual paragraphs → Conclusion
Ideas in an essay are presented in the following way:
1. Main idea
2. Supporting idea
3. Explanation of 1 and 2
Global and theme coherence checker and feedback generator
- A set of possible introductions, conclusions and ideas is extracted from the gold standard and other training essays.
- The similarity of the student essay's introduction to this set of introductions is measured using LSA; the same is done for ideas and conclusions.
- From these similarity measures, the presence or absence of ideas, introductions and conclusions can be determined (see the sketch below).
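A hedged sketch of that check: compare a candidate section of the student essay against a bank of reference sections in the LSA space and report whether any reference is similar enough. The threshold value and the helper name are illustrative assumptions, not the tool's published parameters.

```python
import numpy as np

def section_present(candidate_vec, reference_vecs, threshold=0.6):
    """Return (present, best_score): does the candidate match any reference
    section (introduction, idea, or conclusion) in the LSA space?"""
    def cos(a, b):
        d = np.linalg.norm(a) * np.linalg.norm(b)
        return a @ b / d if d else 0.0
    best = max((cos(candidate_vec, r) for r in reference_vecs), default=0.0)
    return best >= threshold, best

# Hypothetical usage: flag a missing conclusion for feedback.
# present, score = section_present(essay_conclusion_vec, conclusion_bank)
# if not present:
#     feedback.append("Your essay appears to lack a conclusion.")
```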
Fact Evaluation
To facilitate this, two sets of facts are maintained per essay topic: correct facts and incorrect facts.
The following guidelines are used to evaluate facts:
- A set of "keywords" is checked at the sentence level in the text.
- Detection of two or more keywords invokes the checking module.
- Two databases of facts (correct and incorrect) contain the sets of keywords that form a "fact".
- Each sentence is assumed to contain at most one fact.
- Connectives within sentences are treated as "end-of-sentence" markers for fact evaluation purposes.
Fact Evaluation
The detected keywords are paired and matched to form candidate "facts", which are then checked in the databases. Three cases may arise:
1. A positive match in both databases.
2. A positive match in the correct-facts database.
3. A positive match in the incorrect-facts database.
The time complexity of factual evaluation is around O(m * (log p)^2), where p = no. of keywords and m = average sentence length. This could be a significant overhead, given that fact evaluation is only a small aspect of the entire process; the use of SQL (for reading facts) and other database optimizations should reduce the computation time.
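A minimal sketch of the pairing-and-lookup step, under stated assumptions: a "fact" is represented as an unordered pair of keywords, and (per the guideline above) a sentence contributes at most one fact. None of these names come from the tool itself.

```python
from itertools import combinations

def check_sentence(sentence, keywords, correct_facts, incorrect_facts):
    """Classify one sentence: (1, 0) for a correct fact, (0, 1) for an
    incorrect one, (0, 0) for none. Facts are frozensets of two keywords."""
    found = [w for w in sentence.lower().split() if w in keywords]
    if len(found) < 2:                  # module is invoked only with 2+ keywords
        return 0, 0
    for pair in map(frozenset, combinations(found, 2)):
        # A pair matching both databases is counted as correct here; the deck
        # lists that case but does not say how it is resolved.
        if pair in correct_facts:       # at most one fact per sentence
            return 1, 0
        if pair in incorrect_facts:
            return 0, 1
    return 0, 0
```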
Local Coherence Module
Inputs: the reduced term-document matrix after LSA; the evaluation essay's column number in the term-document matrix
Outputs: score on local coherence; feedback to student
Local Coherence Results
Content Evaluation Module
Inputs: set of domain-specific gold standard essays; set of essays to be evaluated
Output: normalized scores on the basis of content
Content Evaluation Results
Content Evaluation: Normalized Results
Global Coherence Module
Inputs: gold standard essays; evaluation essay(s)
Outputs: score; feedback
Global Coherence Evaluation: Effect of K
