This document summarizes a proposed AI-based system that generates questions for students and evaluates their answers. Key points:
- The system would generate objective (multiple-choice) and subjective (free-response) questions from course materials uploaded by instructors, enabling automated assessment.
- It would use natural language processing and machine learning models to analyze the text, extract keywords around which to frame questions, and generate answer options (including distractors) for multiple-choice items.
- Student responses would be scored by comparing them with ideal (reference) answers using text-similarity metrics such as cosine similarity and Jaccard similarity.
- A prototype implementation tested on sample course materials generated accurate questions and evaluated student knowledge, though it could not handle diagrams, mathematical notation, or subjective literary analysis.
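The keyword-extraction step above could be sketched as follows. This is a minimal illustration, not the document's actual pipeline: it ranks candidate keywords by raw term frequency after stopword removal, then masks a keyword to produce a fill-in-the-blank question. The stopword list, function names, and the fill-in-the-blank pattern are all assumptions for illustration.

```python
from collections import Counter
import re

# Tiny illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "in",
             "and", "for", "on", "with", "that", "as", "by", "it"}

def extract_keywords(passage, top_n=5):
    # Rank candidate keywords by term frequency after stopword removal;
    # a simple stand-in for the unspecified NLP keyword extractor.
    words = re.findall(r"[a-zA-Z]+", passage.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(top_n)]

def fill_in_blank(sentence, keyword):
    # Turn a source sentence into a fill-in-the-blank question by
    # masking the chosen keyword (a common objective-question pattern).
    return re.sub(rf"\b{re.escape(keyword)}\b", "_____",
                  sentence, flags=re.IGNORECASE)
```

For example, given a passage about photosynthesis, `extract_keywords` would surface "photosynthesis" as a top term, and `fill_in_blank` would convert a source sentence into an objective question with that term blanked out.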
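The two similarity metrics named above can be sketched over bag-of-words representations. This is a minimal stand-in for the proposed evaluator, assuming token-frequency vectors for cosine similarity and distinct-token sets for Jaccard; the `score_response` blend and its equal weighting are hypothetical, since the document does not specify how the metrics are combined.

```python
from collections import Counter
import math
import re

def tokens(text):
    # Lowercase word tokens; a simple stand-in for real NLP preprocessing.
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine_similarity(a, b):
    # Cosine similarity over bag-of-words term-frequency vectors.
    va, vb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def jaccard_similarity(a, b):
    # Jaccard index over the sets of distinct tokens.
    sa, sb = set(tokens(a)), set(tokens(b))
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def score_response(student_answer, ideal_answer, weight=0.5):
    # Hypothetical blend of the two metrics into a 0-10 score;
    # the weighting is an assumption, not stated in the document.
    sim = (weight * cosine_similarity(student_answer, ideal_answer)
           + (1 - weight) * jaccard_similarity(student_answer, ideal_answer))
    return round(10 * sim, 1)
```

An answer identical to the reference scores 10, and scores fall toward 0 as the token overlap shrinks; bag-of-words metrics like these ignore word order, which is one reason such systems struggle with subjective literature answers.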