The document discusses quality assurance in large-scale e-assessments. It outlines a quality assurance process that involves (1) planning assessments according to constructive alignment principles by defining learning outcomes and designing an assessment blueprint, (2) developing assessments by creating test items and compiling tests, and (3) analyzing and evaluating assessments through item-level metrics such as difficulty and discrimination and through test-level reliability. The process aims to ensure assessments are valid, objective, and reliable. Quality assurance involves more than technical issues: it also requires communication with, and buy-in from, students and faculty.
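The item-level and test-level metrics mentioned above can be sketched in a few lines of Python. This is a minimal illustration, assuming a 0/1-scored response matrix (rows are students, columns are items); the function names are illustrative, not taken from the document.

```python
# Sketch of classical item analysis: difficulty (p-value), upper-lower
# discrimination, and KR-20 reliability for dichotomously scored items.

def item_difficulty(responses, item):
    """Proportion of students answering the item correctly."""
    scores = [row[item] for row in responses]
    return sum(scores) / len(scores)

def item_discrimination(responses, item):
    """Upper-lower index: p(top 27% by total score) - p(bottom 27%)."""
    ranked = sorted(responses, key=sum)
    k = max(1, round(len(ranked) * 0.27))
    low, high = ranked[:k], ranked[-k:]
    return (sum(r[item] for r in high) - sum(r[item] for r in low)) / k

def kr20(responses):
    """KR-20 reliability estimate (a Cronbach's alpha special case)."""
    n_items = len(responses[0])
    p = [item_difficulty(responses, i) for i in range(n_items)]
    totals = [sum(row) for row in responses]
    mean = sum(totals) / len(totals)
    var = sum((t - mean) ** 2 for t in totals) / len(totals)
    pq = sum(pi * (1 - pi) for pi in p)
    return (n_items / (n_items - 1)) * (1 - pq / var)

responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(item_difficulty(responses, 0))
print(item_discrimination(responses, 0))
print(round(kr20(responses), 3))
```

In practice these statistics feed the evaluation step of the QA cycle: items with very low difficulty or near-zero discrimination are flagged for review before the next test compilation.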
The document discusses formative assessment and its contribution to teaching and learning foreign languages. It provides definitions of key terms related to assessment such as assessment, testing, performance, competence, measurement, evaluation, reliability, and validity. It explains the differences between assessment for learning and assessment of learning. The document also discusses the Common European Framework of Reference for Languages and its use of descriptive levels and illustrative descriptors to define language proficiency standards.
This document discusses the evolution of programmatic assessment in UK medical training over the past 30 years. It outlines how assessment has shifted from high-stakes exit exams to integrated programs that use workplace-based assessments like mini-CEX, DOPS, and CbD. Key organizations like the GMC, PMETB, and foundation program have developed principles of good assessment including assessing multiple competencies through various methods. The foundation program initially piloted four assessment tools but has since refined these to better provide feedback and identify trainees needing support. Overall, the document traces the progression towards valid programmatic assessment across medical education in the UK.
This document outlines a dissertation that investigates whether un-moderated or moderated group participation is better for knowledge creation and convergence within an online community of practice (CoP). The study uses a mixed methods sequential explanatory design including a quasi-experimental pre-post test and qualitative interviews and content analysis. Results from the pre-post tests and qualitative analysis indicate that both moderated and un-moderated groups showed learning, but the moderated group performed better. The study suggests online collaboration can help build legal skills and minimize degraded judgments by facilitating knowledge convergence within the organization.
Peering through the Looking Glass: Towards a Programmatic View of the Qualify... (MedCouncilCan)
André De Champlain presented on developing a programmatic view of the MCC Qualifying Examination. Key points include:
1) The Assessment Review Task Force recommended validating and updating the blueprint for MCC examinations and exploring a more integrated, continuous model of assessment along the physician's educational continuum.
2) A proposed Medical Education Assessment Advisory Committee would provide guidance on incorporating authentic, linked assessments throughout training and practice.
3) Validating a program of assessment would require evaluating the reliability of individual elements as well as the entire program, and gathering multiple types of evidence to support the validity of score interpretations.
Overall, assessments serve either programmatic assessment or learning assessment. One of the most familiar learning assessments is the multiple-choice test, which reflects the typical pen-and-paper classroom test (Popham, 2006). However, such tests are difficult to construct in a way that ensures validity, owing to unclear directions, ambiguous statements, unintended clues, complicated syntax, and difficult vocabulary (Popham, 2006). Other learning assessments with construct validity, such as the essay and the reflective journal, tend toward student-centered pedagogy. These assessments are well suited to assessing an individual's learning outcomes and increase students' personal responsibility for their own learning. This reading document provides a brief summary of assessment tools available for both programmatic and learning assessment.
This document proposes a model for programmatic assessment that optimizes assessment for learning while arriving at robust decisions about learner progress. The model distinguishes between learning activities, assessment activities, and learner support activities throughout an ongoing curriculum. Individual assessments are designed to be maximally informative for learning, while a longitudinal program of various assessment methods contributes to certification decisions. The principles discussed include ensuring validity in standardized and non-standardized assessments, using both quantitative and qualitative data, and relying on expert judgement at various evaluation points. An example is provided of how this model could be applied to a blended TeleGeriatrics Nurse Training Course.
A journey towards programmatic assessment (MedCouncilCan)
The document discusses programmatic assessment in medical education. It begins by outlining various assessment methods and frameworks for evaluating competencies. It then discusses research findings on the validity, reliability, and educational impact of assessment methods. Key findings include that no single method can adequately measure all competencies, and that both standardized and unstandardized methods are needed. Reliability increases with larger samples and aggregation of data from multiple methods and assessors. Assessment works best when it provides meaningful feedback to support learning. The document concludes by describing examples of programmatic assessment approaches that integrate various longitudinal methods to provide rich data for high-stakes decisions.
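The finding that reliability increases with larger samples and aggregation across methods and assessors is classically quantified by the Spearman-Brown prophecy formula. The sketch below illustrates the effect; the single-observation reliability value is made up for the example.

```python
# Spearman-Brown prophecy formula: predicted reliability after aggregating
# k parallel measurements, given single-measurement reliability r.

def spearman_brown(r, k):
    """Reliability of a composite of k parallel components."""
    return k * r / (1 + (k - 1) * r)

# A single workplace-based observation with modest reliability...
r_single = 0.30
# ...becomes far more dependable when observations are aggregated:
for k in (1, 4, 8, 12):
    print(k, round(spearman_brown(r_single, k), 2))
```

This is the arithmetic behind the programmatic principle: no single observation needs to be highly reliable, provided the program samples broadly enough before a high-stakes decision.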
Introduction to the e-Learning network in mathematics in Saxony - E-Assessment... (metamath)
This document introduces an e-learning network in mathematics across universities in Saxony, Germany. The network shares electronic assessments created using ONYX and MAXIMA. Over 50 authors have created more than 1000 questions across various topics in mathematics. The assessments provide interactive practice and feedback for students and inform instructors. OPAL is the central learning platform used by 80,000 members across 11 universities. ONYX allows for different question types and MAXIMA can analyze student responses with random parameters and expressions as answers. The speaker's university courses use 4 online tests throughout a semester to provide practice for approximately 200 students.
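The MAXIMA-backed items described above accept algebraic expressions as answers and randomize parameters per student. The sketch below reproduces that idea in plain Python (it is not the actual ONYX or MAXIMA interface): equivalence of the student's expression to the expected one is checked by evaluating both at random sample points.

```python
# Sketch of expression-as-answer checking with randomized parameters.
# Names and the numeric-equivalence strategy are illustrative assumptions.
import math
import random

def equivalent(expr_a, expr_b, var="x", samples=10, tol=1e-9):
    """Numerically compare two single-variable expressions."""
    env = {"sqrt": math.sqrt, "sin": math.sin, "cos": math.cos}
    for _ in range(samples):
        env[var] = random.uniform(1, 5)
        if abs(eval(expr_a, {"__builtins__": {}}, env)
               - eval(expr_b, {"__builtins__": {}}, env)) > tol:
            return False
    return True

# Randomly parameterised question: differentiate a*x**2 for a random a.
a = random.randint(2, 9)
question = f"d/dx of {a}*x**2"
correct = f"{2*a}*x"
student_answer = f"2*{a}*x"   # algebraically equal, written differently
print(question, "->", equivalent(correct, student_answer))
```

A computer algebra system like MAXIMA does this symbolically rather than numerically, which avoids false positives at unlucky sample points; the numeric version is only a compact stand-in.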
Presentation of examples of modern scenarios with digital media (metamath)
The document discusses modern teaching and learning methods using digital media. It presents examples of how professors in Saxonian universities are using technologies like digital texts, videos, simulations, and online surveys. Specific examples are given of uses like central distribution of materials, flexibility in timing with video lectures, and demonstrations with digital media. Implications for constructive alignment of learning outcomes, assessments and teaching activities are discussed. The use of social learning technologies like wikis, blogs, and video conferencing are also examined. Throughout, implications for integrating these methods into teaching projects are highlighted.
Intelligent Adaptive Services for Workplace-Integrated Learning on Shop Floors (metamath)
The document discusses intelligent adaptive services to support workplace-integrated learning on the shop floor. It provides context on Industry 4.0 and the transformation of manufacturing workplaces through digitalization and cyber-physical systems. The APPsist project aims to develop assistance and knowledge acquisition services for smart production environments. Services select appropriate work processes, learning content, and assistance based on the user and machine context to guide operators and support flexible on-the-job learning. The services were implemented and tested in pilot scenarios at industry partners.
The document describes a project to develop self-directed e-learning mathematics courses on an online platform to help students from diverse educational backgrounds succeed in their university studies, with features like entry tests, short instructional videos, interactive examples, and online exercises with personalized feedback to support learning both individually and collaboratively before classroom lessons.
This document provides guidance for authoring advanced math exercises in Math-Bridge, an education solution. It explains that exercise steps should include different interaction types and that the order of transition conditions is important, with the first matching the final correct answer and the default transition last. It also recommends using syntactic comparison for the exact correct answer and semantic comparison for other conditions like correct but simplified answers or typical errors. Partial credit can be given based on syntactic and semantic analysis of responses.
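The transition-condition ordering described above can be sketched as follows. Math-Bridge's real authoring format is not shown in the summary, so the condition list, credit values, and feedback strings here are hypothetical; only the evaluation order and the syntactic/semantic split follow the text.

```python
# Ordered transition conditions: exact (syntactic) match first,
# semantic matches for equivalent or typical-error answers next,
# and the default transition last. Partial credit per condition.
from fractions import Fraction

def syntactic(answer, expected):
    """Exact textual match, ignoring surrounding whitespace."""
    return answer.strip() == expected.strip()

def semantic(answer, expected):
    """Equal value after evaluation (e.g. an unsimplified fraction)."""
    try:
        return Fraction(answer) == Fraction(expected)
    except (ValueError, ZeroDivisionError):
        return False

CONDITIONS = [
    (lambda a: syntactic(a, "3/4"), 1.0, "Correct."),
    (lambda a: semantic(a, "3/4"), 0.8, "Correct value, not simplified."),
    (lambda a: semantic(a, "4/3"), 0.0, "Typical error: fraction inverted."),
    (lambda a: True, 0.0, "Incorrect."),   # default transition, always last
]

def grade(answer):
    for condition, credit, feedback in CONDITIONS:
        if condition(answer):
            return credit, feedback

print(grade("3/4"))   # exact match
print(grade("6/8"))   # semantically equal, partial credit
print(grade("4/3"))   # typical error branch
```

The ordering matters exactly as the document says: if the semantic check came first, an exact "3/4" would be caught by the broader condition and the author could not attach distinct feedback to the exact answer.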
Math-Bridge is an education solution that uses an event framework to facilitate communication between its components. The event framework allows components to publish events about actions taken, which other interested components can subscribe to and listen for. Events contain information about the action, timestamp, and source. Example events include a page being presented in a book, an exercise being started or completed, and individual exercise steps. The event framework supports listening, subscribing, and publishing events to allow components like the student model and exercise subsystem to share information and update each other.
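The publish/subscribe pattern described above can be sketched in a few lines. The event fields (action type, timestamp, source) follow the summary; the class and method names are illustrative, not Math-Bridge's actual API.

```python
# Minimal event framework: components subscribe to event types and are
# notified when another component publishes a matching event.
import time
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, listener):
        self._subscribers[event_type].append(listener)

    def publish(self, event_type, source, **data):
        event = {"type": event_type, "source": source,
                 "timestamp": time.time(), **data}
        for listener in self._subscribers[event_type]:
            listener(event)

# The student model listens for exercise events and updates itself...
student_model = []
bus = EventBus()
bus.subscribe("exercise_completed",
              lambda e: student_model.append((e["exercise_id"], e["score"])))

# ...while the exercise subsystem publishes events about actions taken.
bus.publish("exercise_completed", source="exercise_subsystem",
            exercise_id="ex-42", score=0.9)
print(student_model)
```

The decoupling is the point: the exercise subsystem never calls the student model directly, so new listeners (logging, analytics) can be added without touching the publisher.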
The document describes the architecture of a math training program called Math-Bridge. It includes components like Apache Tomcat for web delivery, Maverick as a model-view-controller framework, and a core component for system functionality. Content is stored and indexed in a ContentDB using technologies like Java, Lucene and OMDoc. A learner model tracks user progress. Other components include presentation of content, user accounts, exercises that interface with a computer algebra system, and a tutorial component. The program uses technologies like Java, databases, XMLRPC and XSLT to power its functionality.
This document provides instructions for translating the user interface of a math training program called Math-Bridge. It explains that interface phrases are saved in Java properties files labeled Phrases_LANG.properties using UTF-8 encoding, and that translators should open the file, translate the phrases, save it, and run an 'ant i18n' target to restart the system with the new translations.
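A translation file in that format might look like the fragment below. The keys and phrases are hypothetical examples of the described layout, not actual Math-Bridge entries; per the instructions above, the file must be saved as UTF-8.

```properties
# Phrases_de.properties - German translation of interface phrases
menu.home=Startseite
exercise.submit=Antwort abschicken
exercise.feedback.correct=Richtig!
```

After saving, running the `ant i18n` target restarts the system with the new translations, as the document describes.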
Math-Bridge is an education solution that allows for the creation of static learning objects (LOs). It features a WYSIWYG authoring tool that allows editing of LO metadata and inclusion of mathematical formulas. Different types of LOs can be defined with their own applicable metadata. The tool allows users to create, edit, translate, and publish LOs. Published LOs are moved from the local workspace to collections for sharing.
This document describes the Math-Bridge education solution which provides pre-recorded math courses covering basic to advanced mathematics topics. The courses include videos, exercises, and assessments. Content areas include numbers, arithmetic, algebra, functions, geometry, trigonometry, calculus, probability, and more. The content is organized into collections and designed to be reusable across programs and institutions. Some content has been implemented at Leuphana University in Lüneburg, Germany for a bridging mathematics course.
This research paper offers an introduction to software testing, presenting a complete step-by-step testing procedure and surveying the testing techniques it covers.
Exploratory testing is an approach that emphasizes the freedom and responsibility of individual testers, with continuous learning, test design, and test execution occurring simultaneously. Despite this freedom, it is a disciplined, planned, and controlled form of testing. Research has shown no significant difference in results between exploratory testing and preplanned test cases, although exploratory testing requires significantly less effort overall. Effective exploratory testing requires skills such as building mental models, keeping an open mind, and applying risk-based approaches. Both the strengths and the potential blind spots of exploratory testing are discussed.
The curriculum development cycle has three main stages: design, implementation, and validation. During the design stage, learning objectives, content, strategies, and assessments are planned. The implementation stage involves instructors delivering training based on the curriculum. Finally, the validation stage evaluates the curriculum through expert review, pre-/post-testing, and the CIPP method to provide feedback for revisions.
This document discusses different types of validity including content validity, criterion validity, and construct validity. It provides definitions and steps for establishing each type of validity. Specifically, it explains that content validity determines if a test adequately measures the intended content area. Criterion validity compares test scores to an external outcome measure concurrently or predictively. Construct validity establishes if a test measures a theoretical construct through examining correlations between various measures of that construct. The document also notes factors that can impact a test's validity such as length, ability range, and ambiguous directions. Overall, the document provides an overview of establishing and interpreting different aspects of test validity.
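Criterion validity, as described above, is typically quantified by correlating test scores with the external outcome measure. A minimal Pearson correlation sketch, using made-up illustrative data:

```python
# Pearson correlation between a new test and an established criterion
# measure; values near 1 indicate strong criterion validity evidence.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

test_scores = [55, 62, 70, 74, 81, 90]   # scores on the new test
criterion   = [50, 60, 68, 70, 85, 88]   # external outcome measure
print(round(pearson_r(test_scores, criterion), 3))
```

Whether the criterion is measured at the same time (concurrent validity) or later (predictive validity), the statistic is the same; only the timing of the criterion data differs.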
Designing useful evaluations - An online workshop for the Jisc AF programme_I... (Rachel Harris)
This document summarizes a workshop on designing useful evaluations for projects funded by the JISC and Becta Curriculum Delivery Programme. The workshop covered the evaluation cycle, identifying intended outcomes and impact, determining what to evaluate, developing evaluation questions, and methods for undertaking evaluations. Examples of evaluation approaches used by different projects were discussed, including action research, external evaluators, and using both qualitative and quantitative data sources. Participants were guided in developing evaluation plans for their own projects by considering stakeholders, measures, sources of evidence, and refining evaluation questions.
Dr. Nick Saville, language assessment specialist with University of Cambridge ESOL Examinations, presents at the 2011 Language Teaching Research Colloquium in Ann Arbor, MI.
Academic Recruitment Best Practices - Project Report - Final 7.8.15 (Brian Groeschel, MA)
The document provides a summary of a project conducted by GLW Consultants for the UC Davis Academic Affairs department to develop best practices for academic recruitment. It includes:
1. An introduction describing the objectives of identifying common practices and inconsistencies across UC Davis schools to develop recruitment best practices.
2. A description of the challenges faced, including limiting participation in focus groups and the decentralized structure of academic recruiting at UC Davis.
3. The outcomes of best practices charts, tip sheets for the UC Recruit system, and a draft online toolkit.
4. Recommendations to conduct additional focus groups to develop a more comprehensive best practices list, and to organize resources into an online toolkit to increase accessibility.
The document discusses several key principles of language assessment:
1) Practicality refers to the logistical issues of administering an assessment, such as time, costs, and ease of scoring. A practical test stays within budget, can be completed in the allotted time, and has clear administration directions.
2) Objectivity means different scorers will obtain the same results. Objective tests like multiple choice aim for this.
3) Washback effect refers to how a test influences teaching and learning. A test with beneficial washback positively impacts both and provides useful feedback.
4) Authenticity is the correspondence between a test task and real-world language use. An authentic test uses natural language and simulates realistic language-use situations.
Surveying the landscape: An overview of tools for direct observation and asse... (MedCouncilCan)
This document provides an overview of a framework and tools for direct observation and assessment in high-stakes settings. It discusses the challenges of incorporating workplace-based assessments into high-stakes evaluations due to issues with sampling, training, and measurement error. While there is limited psychometric evidence to support the use of workplace-based assessments in high-stakes contexts, jointly attesting to direct observation data may provide useful information to inform licensing decisions and identify gaps in assessment blueprints. The document advocates for more research on the reliability and validity of workplace-based assessments before incorporating local scores into high-stakes evaluations.
Van der Vleuten - Twelve tips for programmatic assessment (cnmcmeu)
This document provides 12 tips for implementing programmatic assessment. Programmatic assessment aims to optimize assessment's learning, decision-making, and quality assurance functions by purposefully choosing individual assessments and aggregating information across assessments. The tips include: developing a master assessment plan aligned with the curriculum; promoting feedback over pass/fail decisions for individual assessments; and adopting a robust electronic portfolio system to collect and aggregate assessment information.
This document discusses key concepts in developing effective tests. It defines a test as a method for measuring ability or knowledge in a domain. Five criteria for evaluating tests are discussed: practicality, reliability, validity, authenticity, and washback effect. Practicality refers to a test being reasonably priced, timed, and easy to administer and score. Reliability means a test consistently measures what it intends to regardless of conditions. Validity is the degree to which a test actually measures the targeted ability or knowledge. Authenticity means tasks resemble real-world applications. Washback effect refers to how a test impacts teaching and learning.
The document summarizes a study examining the validity of assessments for the Barbados NVQ-B Level I in Amenity Horticulture. The study had two phases: (1) an exploratory study examining how the NVQ-B is used to measure employee competence, and (2) an assessment validation evaluating the validity of the NVQ-B assessment system, content, and decisions. Key findings from Phase I were that the NVQ-B assessment process involves gathering evidence of competence and that conducting assessments presented practical challenges. Phase II found generally good coherence between assessment content and standards, but identified threats to validity in the measurement of some constructs and in ambiguous assessment criteria. The study concluded with validity arguments that the assessments provide a reasonable indication of employee competence.
The document discusses key qualities of measurement devices: validity, reliability, practicality, and backwash effect. It defines each quality and provides examples. Validity refers to what a test measures, and includes content, construct, criterion-related, concurrent, and predictive validity. Reliability is how consistent measurements are, including equivalency, stability, internal, and inter-rater reliability. Practicality means a test is easy to construct, administer, score and interpret. Backwash effect is a test's influence on teaching and learning.
Similar to Quality Assurance in Large Scale E-Assessments (20)
Probability Theory and Mathematical Statistics in Tver State University (metamath)
Project MetaMath outlines a probability theory and mathematical statistics course offered at Tver State University. The course is offered over two semesters for a total of 9 credits. It includes lectures, laboratory work, seminars, course projects each semester, and exams. The goal of the course is to present basic information about probability models that account for random factors. Upon completing the course, students should have mastered key probability and statistics concepts and techniques. The course also discusses modernizing elements like pre-testing students and incorporating online homework assignments.
This document compares the Discrete Mathematics curricula and courses between OMSU (National Research Ogarev Mordovia State University) in Russia and TUT (Tampere University of Technology) in Finland. It analyzes the competencies, topics, and learning outcomes covered in the Discrete Mathematics courses based on three levels of difficulty. Overall, the OMSU course covers more topics like set theory, combinatorics, algebraic structures, and coding theory over a longer duration, while the TUT course focuses more on number theory over a shorter period. The document proposes increasing engineering applications and using an online learning system to help modernize the Discrete Mathematics courses.
This document outlines a course of calculus for IT students at Lobachevsky State University of Nizhni Novgorod. The course is divided into 3 terms covering sequences, differential calculus, integral calculus, and series. Tests and exams are given throughout each term to assess student competency in mathematical thinking and problem solving. The course aims to develop skills in applying modern mathematical tools. Plans are discussed to modernize the course by adding an introductory section to address low student preparation, using online tools like METAMATH to support independent work, and testing key concepts to address educational problems.
The document discusses the discrete mathematics curriculum at Saint-Petersburg Electrotechnical University. It provides an overview of which discrete math topics are covered in each year of study for different degree programs. It also compares course parameters like credits and hours between the university and TUT. Key modules covered in the second year Math Logic and Algorithm Theory course are outlined. Competencies addressed in the curriculum are mapped to SEFI levels, with additional competencies covered uniquely at the university. Suggested modifications to improve the curriculum structure are presented.
Probability Theory and Mathematical Statistics (metamath)
This document provides information about a Probability Theory and Mathematical Statistics course taught at KNITU, Russia. It includes details about the course such as the number of students, preliminary courses required, distribution of working time, topics covered in lectures and workshops/laboratories. It also compares the methodology and topics studied in this course to a similar course taught at TUT, Finland. Key differences highlighted include the use of Matlab at TUT and more emphasis on practical work/tutorials versus lectures. Overall competencies covered are also summarized and compared between the two courses based on the SEFI framework.
This document compares the optimization methods courses between KNITU (Russia) and TUT (Finland).
The KNITU course is mandatory, has fewer credits (3 vs 5), and less time spent (108 student hours vs 138). Key topics are similar but KNITU spends less time on lectures (10 vs 28) and nonlinear optimization.
The main difference is KNITU has fewer lectures, almost half that of TUT. This could be addressed by using an online math platform like Math-Bridge to provide additional lecture material and practice problems. Mid-term tests on Math-Bridge could help evaluate knowledge gained from the extra online content.
This document summarizes the course content and structure for Discrete Mathematics at the National Research Ogarev Mordovia State University. The course is divided into 4 modules covering set theory, graph theory, algebraic structures, and coding theory. Students take exams and write 3 essays throughout the semester to assess their understanding of each module. Pedagogical methods include lectures, practice problems, subgroup work, computer programming assignments, and a final exam to evaluate students on a 100 point scale.
SEFI comparative study: Course - Algebra and Geometry (metamath)
The document describes a course in Algebra and Geometry for Informatics and Computer Science (ICS) and Programming Engineering (PE) majors. It analyzes the course content based on the SEFI framework and finds that the course covers most competencies in linear algebra and geometry at the core and level 1 levels. Some level 2 and 3 competencies are also covered. However, not all competencies are addressed as some assume knowledge from secondary school, others are covered in other courses, and some are not necessary for the ICS and PE profiles.
This document discusses the mathematical foundations of fuzzy systems, including:
- The curriculum covers theory of fuzzy sets, theory of possibility, crisp vs. fuzzy values, model tasks, and possibilistic optimization tasks over two semesters for a total of 324 hours.
- The theory of possibility introduced in 1978 uses axiomatic approach and possibility measures to define possibilistic space and possibilistic (fuzzy) variables characterized by possibility distributions.
- Model tasks and possibilistic optimization tasks are presented, where the coefficients can be crisp or possibilistic variables.
Calculus - St. Petersburg Electrotechnical University "LETI" (metamath)
This document provides an overview of the calculus concepts covered in school and in various university courses at the Electrotechnical University “LETI” in Saint Petersburg, Russia. It outlines the key competencies developed in functions, sequences, series, logarithmic/exponential functions, rates of change, differentiation, integration, and other topics. The levels of mastery increase across the core courses in Calculus, Computing Mathematics, and some additional advanced topics covered in only two specialized groups.
1. The document outlines discrete mathematics competencies covered at different levels in the undergraduate curriculum at Saint-Petersburg Electrotechnical University.
2. Many competencies are covered in the discrete mathematics course in the first year, while others are covered in courses like mathematical logic and algorithm theory in later years.
3. LETI aims to develop additional competencies beyond the SEFI levels, such as skills in mathematical logic, graphs, algorithms, and finite state machines.
Probability Theory and Mathematical Statistics (metamath)
This document discusses a computer tutorial on probability theory and mathematical statistics that was developed for a bachelor's degree program in computer science and engineering. It provides details on the course, including the typical number and gender of students, prerequisite courses, and time allocation. It also outlines the history of the degree program and standards from 1990 to 2014. The document describes the contents, structure, and development of the computer tutorial, and shows some screenshots of different learning management systems used to deliver the tutorial over time, including Lotus Learning Space, IBM Workplace Collaborative Learning, and Blackboard.
This document provides an overview of optimization methods. It discusses both single-variable and multi-variable optimization techniques, including necessary and sufficient conditions for local minima. Specific optimization methods covered include golden section search, dichotomous search, gradient descent, Newton's method, the simplex method for linear programming problems, and the method of Lagrange multipliers for constrained optimization problems. The document is intended to provide information about an optimization methods course, including preliminary courses, time distribution, and types of optimization techniques taught.
Math Education for STEM disciplines in the EU (metamath)
The document discusses math education reforms in the EU. It notes declining math skills among students and describes efforts across Europe to shift from a content-focused approach to developing mathematical competencies. Recommendations include changing curricula to emphasize real-world problem solving, improving teacher training, and leveraging technology as a teaching tool while maintaining the important role of educators. Overall, the document outlines the need for pedagogical reforms to address shortcomings identified by assessments like PISA and better prepare students for STEM careers.
International Activities of the University in academic field (metamath)
The document summarizes the international activities of Kazan National Research Technical University (KNRTU-KAI) in academic fields. It outlines several milestones in the university's international relations starting from the 1950s when it first hosted foreign students. It then discusses KNRTU-KAI's participation in international projects, associations, and TEMPUS programs. The document also provides details on international accreditation of academic programs, the new German-Russian Institute of Advanced Technologies, and KNRTU-KAI's approach to developing new curricula/modules based on the qualifications framework of the European Higher Education Area.
1. Prof. Dr. H.-W. Wollersheim
Quality assurance
in Large Scale E-Assessments
Tempus
Workshop
Laubusch 2016
QUALITY ASSURANCE
IN LARGE SCALE E-ASSESSMENTS
Prof. Dr. Heinz-Werner Wollersheim
Content
1) Quality assurance: more than a technical problem
2) Aspects of quality assurance in MCA
3) Shortcut: important terms
4) Quality assurance process in MCA
5) Prospects
1) Quality assurance: more than a technical problem
1) More than a technical problem
quality management / quality assurance:
§ analysis of the current state
§ implementing improvements
§ measuring effects
§ documentation
1. detecting processes & raising awareness of them
2. detecting & fixing frictions
1) Quality assurance: more than a technical problem
optimization circle of quality development (quality assurance):
§ analysis of the current state
§ implementing improvements
§ measuring effects
§ documentation
1) Quality assurance: more than a technical problem
quality
management
Goals to achieve:
Doing the right things. (effectiveness)
Doing the things in the right manner. (efficiency)
Doing the things at the right time. (efficiency)
"Our mission is to do the right things right at the right time."
1) Quality assurance: more than a technical problem
Beyond technical "controlmania"
§ technical control works only in addition to social control (not as a substitute)
§ reaching high acceptance: university students, colleagues
§ needs lots of communication: the right things, done right, at the right time?
§ But: assessing verifiable objectives indicates a shift from holistic "education" to specific "training"
1) Quality assurance: more than a technical problem
Doing the right things: validity
kinds of validity, their relevance, and how they are determined (indicators):
§ content validity: best way to operationalize contents; determined by ratings by experts
§ construct validity: Does the test measure the intended learning outcomes and competences?
a) convergent validity: data of tests designed to measure similar competences correlate highly
b) divergent validity: data of tests designed to measure different competences correlate lowly
§ criterion validity: correlation between the measurement instrument and empirical criteria
a) diagnostic validity
b) prognostic validity
1) Quality assurance: more than a technical problem
Academic teaching: typical issues
§ common practice at high schools and universities: at most, modules and courses are designed
§ challenge: developing quality-assured assessments in this context
§ validity reduced to content validity
§ Modified Constructive Alignment (MCA)
2) Aspects of quality assurance in MCA
2) Aspects of quality assurance in MCA
§ objectivity: objectivity of implementation; objectivity of analysis
§ validity: content validity → ratings by experts; → assessment plan (blueprint)
§ reliability: internal consistency → consistency analysis
2) Aspects of quality assurance in MCA
analysis of items:
§ empirical difficulty: index of difficulty 20 ≤ Pi ≤ 80
§ discrimination: coefficient of discrimination r ≥ .20
2) Aspects of quality assurance in MCA
§ objectivity: objectivity of implementation; objectivity of analysis
§ validity: content validity → ratings by experts, → assessment plan (blueprint)
§ reliability: internal consistency → consistency analysis
§ empirical difficulty: index of difficulty → 20 ≤ Pi ≤ 80
§ discrimination: coefficient of discrimination → r ≥ .20
3) Shortcut: important terms
3) Shortcut: important terms
§ selected response
– selected response: multiple possible answers, choosing the right one(s)
– legal term (Germany): Antwort-Wahl-Verfahren
– terms that describe the way to give an answer: multiple-choice question type (MCQ), matching, sequence, hotspot
– assessment question: item
3) Shortcut: important terms
§ item
“The smallest separately identified question or task within an assessment, plus its associated information (for example mark scheme, curriculum reference, media content, performance information etc), usually a single objective question. Distinguished from a ‘question’, which may be a longer and less-objective task but often used synonymously.”
(Qualifications and Curriculum Authority 2007, 107)
3) Shortcut: important terms
§ selected response
“The selected-response item format is the best choice
for test developers interested in efficient, effective measurement
of cognitive achievement or ability.”
(Downing 2006, 287)
3) Shortcut: important terms
§ item structure
A high-quality item consists of:
– item stem: vignette and question
– options: the correct answer and several incorrect answers (distractors), e.g. (A)–(E)
Prefer a more detailed stem with short options; avoid a short stem with long options.
3) Shortcut: important terms
§ constructive alignment: the german view
Ø "The core of the 'constructive alignment' concept is that the intended outcomes of the learning process are clearly defined and made explicit to the students, and that the assessment and learning activities are stringently aligned with the learning outcomes."
(Schaper 2012, 62; translated from German)
→ In nuce, C.A. means coherence between learning outcomes, assessment, and the learning process
3) Shortcut: important terms
§ constructive alignment
“In constructive alignment, we start with the outcomes we intend students to learn, and align
teaching and assessment to those outcomes.
The outcome statements contain a learning activity, a verb, that students need to perform to best
achieve the outcome, such as “apply expectancy-value theory of motivation”, or “explain the
concept of … “. That verb says what the relevant learning activities are that the students need to
undertake in order to attain the intended learning outcome.
Learning is constructed by what activities the students carry out; learning is about what they do,
not about what we teachers do. Likewise, assessment is about how well they achieve the intended
outcomes, not about how well they report back to us what we have told them. […]
Constructive alignment can be used for individual courses, for degree programmes, and at the
institutional level, for aligning all teaching to graduate attributes.”
Source: http://www.johnbiggs.com.au/academic/constructive-alignment/
3) Shortcut: important terms
§ learning outcomes
Ø “Learning outcomes describe what a learner is expected to know,
understand and be able to do after successful completion of a process
of learning.”
(ECTS Users‘ Guide, Europäische Gemeinschaft 2009, 11)
4) Quality assurance process in MCA
4) Quality assurance process in MCA
4) Quality assurance process in MCA
2 dimensions of quality assurance
§ planning model for didactics in higher education: constructive alignment
§ process stages: development, implementation, analysis, evaluation
4) Quality assurance process in MCA
Planning stages according to constructive alignment
learning outcomes
§ defining intended learning outcomes
before starting the course
§ align teaching and assessment to those
outcomes
designing tests
§ developing assessment according to
intended learning outcomes
§ adapting those outcomes if necessary
designing learning process
§ designing learning process according to
intended learning outcomes after
developing assessment
[Diagram: alignment triangle of (1) learning outcomes, (2) assessment, (3) learning process and learning activities]
4) Quality assurance process in MCA
Workflow:
Creating e-assessments according to constructive alignment

planning levels:
§ module: workload / ECTS
§ course (separate course unit): content structure design of the course
§ topic of a course: indexing of topics and contents; previous knowledge; competence descriptions (DQR / HQR)

issues to be considered:
§ learning outcomes that are relevant for assessments
§ format and method of assessment (if necessary)

planning stages:
§ assessment plan (blueprint) of the target state (contents, performance levels, types of test items) for topic areas, specific contents, learning outcomes
§ creating items → peer-review process → creating test item pools → blueprint of the current state
§ designing tests → compiling tests → creating tests
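The assessment plan (blueprint) at the centre of this workflow can be held as a simple mapping from (topic, performance level) to a target item count; compiling a test then means drawing that many reviewed items per cell from the item pool. A minimal sketch, with all topic and level names purely illustrative (not from the slides):

```python
import random
from collections import Counter

# Target-state blueprint: (topic, performance level) -> items wanted in the test.
# Cell names are illustrative only.
blueprint_target = {
    ("topic A", "remember"): 2,
    ("topic A", "apply"):    1,
    ("topic B", "remember"): 1,
    ("topic B", "analyze"):  2,
}

def compile_test(item_pool, target, rng=random):
    """Draw items from the pool so the compiled test matches the blueprint."""
    test = []
    for (topic, level), n in target.items():
        cell = [it for it in item_pool
                if it["topic"] == topic and it["level"] == level]
        if len(cell) < n:
            raise ValueError(f"item pool cannot cover blueprint cell ({topic}, {level})")
        test.extend(rng.sample(cell, n))
    return test

# Current-state check: a pool with three reviewed items per blueprint cell.
pool = [{"id": f"{t}/{l}/{j}", "topic": t, "level": l}
        for (t, l) in blueprint_target for j in range(3)]
exam = compile_test(pool, blueprint_target)
coverage = Counter((it["topic"], it["level"]) for it in exam)
```

Comparing `coverage` against `blueprint_target` is exactly the "blueprint of current state vs. target state" check in the workflow.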
4) Quality assurance process in MCA
2 dimensions of quality assurance
§ planning model for didactics in higher education: constructive alignment
§ process stages: development, implementation, analysis, evaluation
specific implementation?
4) Quality assurance process in MCA
simplified process model for e-assessment:
developing an e-exam → implementing an e-exam → analysing and evaluating an e-exam → adapting an e-exam
developing an e-exam:
§ constructive alignment (Biggs and Tang 2007)
§ taxonomy for learning, teaching, and assessing (Anderson and Krathwohl 2001)
§ learning outcomes
§ assessment plan (blueprint)
§ scoring model
§ content and form of test items
§ peer review

implementing an e-exam:
§ practice e-exam session
§ instructions for working with test items
§ risk management

analysing and evaluating an e-exam:
§ analysis of tests and items: empirical difficulty, discrimination, reliability
§ renormalization

adapting an e-exam, on 2 levels:
§ content criteria for organizing the course and developing the tests
§ ideas for innovating the electronic assessment system
developing an e-exam
§ knowledge of constructive alignment (Biggs and Tang 2007) and the taxonomy for learning, teaching, and assessing (Anderson and Krathwohl 2001)
§ knowledge of the form and content of correct test items
§ different work-assistance tools to organize the development process
§ defining learning outcomes for courses
§ designing the assessment plan (blueprint): number of items, performance levels, forms of knowledge, topics; ensuring one-dimensionality
§ calculating the guessing probability
§ organizing item development (work-assistance tools)
§ organizing the peer-review process
§ creating tests: selecting items according to the assessment plan (blueprint); equal opportunities in case of more than one test per course
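The "calculating the guessing probability" step can be made concrete. A small sketch (my own illustration; the assumed guessing behaviour, a uniformly random answer pattern, is a simplification that the later "dealing with guessing" slides themselves question):

```python
def p_guess_one_of_n(n_options):
    """'1 of n' item: a blind guesser picks one of n options at random."""
    return 1 / n_options

def p_guess_x_of_n_all_or_nothing(n_options=5):
    """'x of n' item under all-or-nothing scoring, assuming the guesser marks
    a uniformly random non-empty subset of the options; exactly one of the
    2**n - 1 possible answer patterns earns the mark."""
    return 1 / (2 ** n_options - 1)

# Rationale behind "1 of 6", "1 of 7", "1 of 8": more distractors, less guessing.
p5 = p_guess_one_of_n(5)               # 0.2
p8 = p_guess_one_of_n(8)               # 0.125
px = p_guess_x_of_n_all_or_nothing(5)  # 1/31, about 0.032
```

These probabilities can feed directly into the scoring model, e.g. when setting a pass mark above the expected score of pure guessing.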
implementing an e-exam
§ training version of the e-exam session: university students get to know the test procedure (e.g. navigation) and the kinds of item types and designs, to become familiar with undertaking e-exams
§ instructions for working with test items
§ transparency of the scoring model, e.g. marks awarded according to the all-or-nothing principle or for each part of the right answer
§ risk management: controlling the examination server, back-up server, log files, bug documentation
analysing and evaluating an e-exam
analysis of items and test:
item level:
§ empirical difficulty: index of difficulty 20 ≤ Pi ≤ 80
§ selectivity (corrected item-total discrimination): coefficient of item discrimination r ≥ .20
test level:
§ reliability: internal consistency → consistency analysis, .80 ≤ KR-20 ≤ .90
renormalization
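The test-level criterion can likewise be computed from the scored matrix. A minimal sketch (my own illustration, assuming dichotomous items and population variances) of the Kuder-Richardson formula 20:

```python
def kr20(X):
    """Kuder-Richardson formula 20 for dichotomous (0/1) item scores.

    X: list of rows (one per examinee), each a list of 0/1 item scores.
    KR-20 = k/(k-1) * (1 - sum(p*q) / var(total)), with population variances.
    """
    k = len(X[0])                                    # number of items
    n = len(X)                                       # number of examinees
    p = [sum(row[i] for row in X) / n for i in range(k)]
    sum_pq = sum(pi * (1 - pi) for pi in p)          # sum of item variances
    totals = [sum(row) for row in X]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - sum_pq / var_t)

X = [[1, 1, 1],
     [1, 1, 0],
     [0, 0, 0],
     [1, 1, 1],
     [0, 0, 0]]
rel = kr20(X)
```

The slide's band penalizes both ends: values below .80 suggest inconsistent measurement, while values well above .90 often signal redundant items.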
4) Quality assurance process in MCA
simplified process model for e-assessment:
developing an e-exam → implementing an e-exam → analysing and evaluating an e-exam → adapting an e-exam
5) Prospects
5) Prospects
quality criteria / quality features (with status columns: actually solved; implemented in ILIAS 4.3.7):
§ objectivity
§ validity
§ reliability
§ empirical difficulty
§ selectivity / item-total discrimination
§ security against miscalculation
§ scaling
§ dealing with guessing by university students
§ dealing with different valuation modes in item analysis
5) Prospects
§ descriptive statistics + inferential statistics are necessary to verify the one-dimensionality of the items
"The only appropriate way of dealing with the problem of guessing is the use of quite specific IRT methods which, when estimating the ability parameter sought for a person, factor in the actual degree of success of attempts to guess the solution on an item-specific basis (these are, in particular, the 3-PL model and the Difficulty plus Guessing PL model; cf. again Kubinger, 2009)."
(Kubinger 2014, 170; translated from German)
§ Example: a scoring rule applied during scaling (e.g. number of solved items)
→ should be checked for empirical adequacy with the help of item response theory
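For reference, the 3-PL model Kubinger points to writes the probability of a correct response as a logistic function with a lower asymptote for guessing; in standard IRT notation, with θ the person's ability and a_i, b_i, c_i the discrimination, difficulty, and pseudo-guessing parameters of item i:

```latex
P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}}
```

The estimated c_i absorbs the item-specific success rate of guessing, which is exactly the correction the quoted passage calls for.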
5) Prospects
dealing with guessing – proposal for solution I:
§ guessing probability included in the calculation of the scoring model
→ issue: different behaviour of university students in relation to guessing
5) Prospects
dealing with guessing – proposal for solution II:
§ increasing the number of distractors
§ “1 of 6”, “1 of 7”, “1 of 8”
§ “x of 5”, scoring model with all-or-nothing principle
→ issues:
1. valid assessment requires appropriate distractors (e.g. plausible, homogeneous) → analysing distractor quality
2. a scoring model with the all-or-nothing principle (applied to “x of 5”) assumes that partial knowledge is not enough (cf. Kubinger 2014)
5) Prospects
minimum goals to reach validity:
creating a workflow to manage the working process, observe the standards, and improve the usability
literature reference
Amtsblatt der Europäische Union (AblEU) Nr. 2008/C 111/01 v. 6.5.2008.
Source: http://eur-lex.europa.eu/LexUriServ/LexUri-Serv.do?uri=OJ:C:2008:111:0001:0007:DE:PDF
Downing, Steven M. (2006): Selected-Response Item Formats in Test Development. In: Downing, Steven M. / Haladyna, Thomas
M.: Handbook of Test Development. Mahwah, N.J., S. 287-301.
Europäische Gemeinschaft (2009): ECTS-Leitfaden.
Source: http://ec.europa.eu/education/tools/docs/ects-guide_de.pdf
Fisseni, H.-J. (1990): Lehrbuch der psychologischen Diagnostik. Göttingen: Hogrefe.
Kubinger, K. D. (2009): Psychologische Diagnostik - Theorie und Praxis psychologischen Diagnostizierens. Göttingen: Hogrefe.
Kubinger, Klaus D. (2014): Gutachten zur Erstellung „gerichtsfester” Multiple-Choice-Prüfungsaufgaben.
In: Psychologische Rundschau 65 (3), S. 169–178.
Lienert, G. A. & Raatz, U. (1998): Testaufbau und Testanalyse. Weinheim: Beltz PVU.
Schaper, N. (2012): Fachgutachten zur Kompetenzorientierung in Studium und Lehre.
Source: http://www.hrk-nexus.de/fileadmin/redaktion/hrk-nexus/07-Downloads/07-02-Publikationen/fachgutachten_kompetenzorientierung.pdf
Qualifications and Curriculum Authority (2007): e-Assessment. Guide to effective practice.
Source: http://www.e-assessment.com/wp-content/uploads/2014/08/e-assessment_-_guide_to_effective_practice_full_version.pdf
THANK YOU FOR LISTENING.
Prof. Dr. Heinz-Werner Wollersheim
Leipzig University
Institute of Educational Sciences
Chair for General Pedagogy
wollersheim@uni-leipzig.de