The document discusses three main challenges of modern computer-based assessment: usability, scoring, and assessing digital natives. It summarizes the COGSIM test development process, which included multiple usability studies and pilot tests to improve usability, identify scoring approaches, and ensure the assessment engaged digital-native test-takers. The process led to enhancements in areas like the graphical user interface, how performance was measured in terms of both products and processes, and design elements to attract and maintain the interest of digital natives accustomed to technology.
This document summarizes a research paper on full-reference metrics for image quality assessment. It discusses different types of image quality assessment methods, including subjective (based on human observers) and objective methods. Full-reference metrics require the original reference image and compare it to the distorted image to evaluate quality; common full-reference metrics like PSNR and MSE are discussed. The document also briefly outlines no-reference and reduced-reference metrics that don't require the original image for comparison.
This document summarizes a study on the impact of classes playing roles in design patterns. The study analyzed classes playing zero, one, or two roles across six Java programs. It found that on average 8.24% of classes played one role and 17.81% played two roles. Classes playing roles, especially two roles, had significantly higher values for internal metrics like coupling and cohesion. Classes playing two roles also changed significantly more than other classes. The results confirm previous findings and justify further study into ranking design pattern occurrences based on class roles and metrics.
The ItemBuilder is an authoring tool that allows non-technical users to create computer-based assessment items through a graphical user interface. It was developed to empower item authors and separate item design from execution. Items developed in ItemBuilder can be integrated into different delivery platforms like CBA Server or via a web delivery environment. ItemBuilder supports developing a wide range of static and dynamic item types including simulations, problem solving scenarios, and interactions through clicking, highlighting, or text/number entry. It provides templates and an easy to use WYSIWYG editor to streamline the item development process.
TAO is an open-source computer-based assessment platform that allows users to create, manage, deliver, and analyze tests and assessments. It includes features for item development, test development, test taker management, group management, test delivery, results management, and process management. TAO uses standard web technologies, is customizable, and has no annual license fees. It aims to provide full control and interoperability for computer-based assessment needs.
The document summarizes the evaluation of a home automation prototype using multiple methods. It describes conducting cognitive walkthroughs, user testing, and heuristic evaluations to discover usability issues. Testing uncovered both strengths and weaknesses in the prototype's design. Evaluators observed users completing tasks and gathered feedback to identify problems and areas for improvement. The results will be used to redesign the prototype to have better usability and meet user expectations.
The talk was given at the InfinIT event "Usability-evaluering i softwareudvikling" (usability evaluation in software development), held on 16 September 2010. Read more about the event here: http://infinit.dk/dk/hvad_kan_vi_goere_for_dig/viden/reportager/usability-evaluering_paa_forkant_er_rigtig_god_business.htm
The document describes the Goal Question Metric (GQM) approach for defining and interpreting operational and measurable software goals. The GQM approach defines goals at the conceptual level, then refines each goal into questions at the operational level, and finally associates metrics at the quantitative level to answer each question. An example GQM model is provided to illustrate how to structure goals, questions, and metrics in a hierarchical manner to measure a specific goal of improving the timeliness of change request processing. The GQM approach combines product, process, and resource measurement to provide a framework for defining measurable goals tailored to an organization.
User Testing talk by Chris Rourke of User Vision (techmeetup)
This document provides an overview of usability testing and discusses key aspects of conducting usability tests, including:
1) Defining usability in terms of effectiveness, efficiency and satisfaction from the user's perspective.
2) Explaining the importance of usability testing and incorporating direct user feedback throughout the development process.
3) Detailing essential elements of usability testing such as recruiting appropriate users, designing test tasks, metrics, and observation techniques.
4) Discussing when during the design/development process usability testing should occur for maximum impact and the relationship between time of testing and impact on design.
Web-Based Self- and Peer-Assessment of Teachers’ Educational Technology Compe... (Hans Põldoja)
This document summarizes a research project that aimed to develop a web-based tool for assessing teachers' educational technology competencies through self-assessment and peer assessment. It outlines existing competency frameworks, the design challenges, and the methodology used, which included personas, scenarios, and participatory design sessions. Prototypes were created including a competency test, profile, grouping and requirements features. Future work includes expanding assessment tasks and integrating the tool into other digital platforms.
The document provides information about a course on software engineering taught by Dr. P. Visu at Velammal Engineering College. It includes the course objectives, outcomes, syllabus, and learning resources. The key points are:
- The course aims to teach students about software processes, requirements engineering, object-oriented concepts, software design, testing, and project management.
- The outcomes include comparing process models, formulating requirements engineering concepts, understanding object-oriented fundamentals, applying design procedures, and evaluating testing techniques and project management.
- The syllabus covers topics like software processes, requirements analysis, object-oriented concepts, software design, and testing across 5 units over 45 periods.
- Recomm
This document provides information on a course titled "Software Engineering" taught by Dr. P. Visu at Velammal Engineering College. The objectives of the course are outlined, including understanding software project phases, requirements engineering, object-oriented concepts, enterprise integration, and testing and project management techniques. Six course outcomes are also listed relating to comparing process models, requirements engineering, object-oriented fundamentals, software design, testing techniques, and project estimation and scheduling. The document then provides details on the 5 course units covering software process and agile development, requirements analysis, object-oriented concepts, software design, and testing and project management. Learning resources including textbooks and online links are also listed.
A lecture on evaluating AR interfaces, from the graduate course on Augmented Reality, taught by Mark Billinghurst from the HIT Lab NZ at the University of Canterbury.
1. What is it? Philosophy and Principles.
2. How to use it? Methodology and basic tools.
3. Beyond UCD. Alternative methodologies: Activity Centered Design and Goal Directed Design.
This document discusses the design principles of advanced task elicitation systems. It begins with an introduction that outlines the motivation and challenges of manual task elicitation in software development. It then reviews related work on task elicitation systems and the need to evaluate their design principles empirically. The methodology section describes a design science research approach used to conceptualize and evaluate an artifact called REMINER. Evaluation results show that semi-automatic task elicitation and leveraging imported knowledge bases can significantly increase elicitation productivity compared to manual elicitation. The discussion covers limitations and opportunities for future research at the intersection of task elicitation and software development processes.
This document discusses human-computer interaction (HCI) and usability engineering. It covers HCI in the software development process, including design rules, evaluation techniques, and universal design. Specific topics covered include the software life cycle, usability engineering, iterative design and prototyping, design rationale, and evaluation methods. Prototyping techniques like storyboards and simulations are also discussed. The goal of the document is to provide an overview of how usability and user experience are incorporated into the software engineering process.
This document discusses issues with existing software quality models and proposes a new approach using design patterns. The proposed approach focuses on flexibility, reusability, robustness, scalability, and usability. It involves identifying programs that use certain patterns, assessing the quality of pattern usage, computing metrics on the programs, and linking the metrics to quality assessments using machine learning. This allows evaluating subsets of a program's design based on patterns rather than evaluating entire programs or single classes.
The Catena® Launch package from OpEx provides tools and training to help organizations improve their business processes. It includes a Catena® software license, camcorder, and 24 hours of e-training for $3,000, with the goal of saving more than the cost through a user's first improvement project. The 12-module training program teaches process analysis, layout design, workstation design, scheduling, and metrics to establish continuous improvement. A money-back guarantee is offered if cost savings do not exceed the package price.
Usability testing involves identifying users and understanding their needs and goals, as well as client needs. It includes conceptual design research, prototyping, and production testing. Methods include interviews, surveys, observations, card sorting, focus groups and user testing. Tests involve an opening, a pre-session questionnaire, tasks with pre- and post-task questionnaires, and a post-session questionnaire to collect performance, issue-based, behavioral and self-reported metrics. Planning considers equipment, location, questionnaires, tasks and forms.
The document discusses the importance of usability testing in technology product development. It defines usability and outlines several key aspects of usability including learnability, efficiency, errors and satisfaction. The document also describes different methods of usability testing such as heuristic evaluation, formative evaluation and testing prototypes with representative users and tasks. It notes that usability testing is particularly important during the design and development phases of a project. Finally, it discusses how emerging technologies are presenting new challenges for usability testing.
Agile2012 presentation miki_konno (aug2012) (drewz lin)
Miki Konno presented agile UX research practices that can provide user feedback to development teams on a sprint cadence. These include RITE studies that allow continuous design iteration and testing in a single day, online customer panels run bi-weekly by product owners, and quick pulse studies that can be completed in a week with findings provided to the team. Other approaches include creating personas to represent target users and involving the team through field visits and persona happy hours to build empathy for users. These agile UX research methods aim to provide faster feedback to teams compared to traditional research.
This document discusses using machine learning to objectively assess quality of experience (QoE). It begins with a brief introduction to machine learning and outlines the steps to set up an ML-based objective metric: defining the feature space, selecting an ML paradigm, and robust model selection and testing. It then provides an example of using features related to image structure and color to select an algorithm for image restoration. The document concludes with a SWOT analysis of using machine learning for objective QoE assessment.
This document discusses using agile methodologies for requirement determination in system analysis. It describes continual user involvement, agile usage-centered design, and eXtreme Programming's planning game as agile methods. Continual user involvement removes stereotypes by involving users throughout analysis and design through iterative feedback. Agile usage-centered design develops paper prototypes of user interfaces through a 9 step process. eXtreme Programming's planning game involves a business player and development player who collaborate through exploration, commitment, and steering phases to choose tasks and adjust plans. The outcome is a system requirement specification document describing features, behavior, and requirements of the system.
This document discusses analyzing problems and designing object-oriented solutions. It includes exercises to identify objects, attributes, and operations in a soccer league case study, and design class diagrams using UML notation. The objectives are to analyze problems using object-oriented analysis and design classes from which objects can be created.
ALE 2012 session description: In this highly collaborative workshop, we will apply a couple of UX practices and techniques, such as empathy maps, stakeholder maps, storyboards, sketchboards and paper prototype usability testing that will allow teams to focus on quick validation and delivery of killer apps that will work for users.
QoE E2 E5 User Centric Approach – Katrien De Moor (imec.archive)
The document summarizes presentations from a closing event on Quality of Experience (QoE) held at IMEC, Leuven on January 29, 2009. It discusses three main topics: 1) Evaluating QoE by bridging the gap between technical parameters and human experience factors. 2) Situating network neutrality in context and developing an analytical framework for distributing internet content. 3) The European response to network neutrality in the context of electronic communications reform. It also outlines challenges in conceptualizing and measuring QoE, and the need for interdisciplinary and anticipatory approaches.
Beyond Usability Testing: Assessing the Usefulness of Your Design (Dan Berlin)
This document discusses how usability testing can be adapted to assess the usefulness of a design when the goals differ from just finding usability problems. It proposes conducting usability tests with three components: 1) Pre-task questions that set the context of usefulness instead of just demographics, 2) Participant-directed tasks instead of predefined tasks, and 3) Post-task questions that compare expectations and value instead of just satisfaction. This adapted approach leverages the strengths of usability testing while allowing different objectives of understanding usefulness rather than just usability problems.
TAO is an open-source computer-based assessment platform that allows users to develop, manage, deliver, and analyze tests and assessments. It provides tools for item development, test development, test taker management, group management, test delivery, and results management. Content can be developed using standard web technologies and customized stylesheets. Test and item data, along with user and group information, can be stored in a database and results can be transformed and exported to other formats. TAO is freely available and its open-source community encourages contributions to its continued development.
The document discusses the challenges and opportunities of e-assessment for learning, including balancing constructivist learning approaches with institutional reliability needs. It provides examples of formative and summative computer-assisted assessment tools and strategies across various subjects. The findings suggest that formative assessment may not significantly improve outcomes but has potential with further optimization of assessment strategies.
More Related Content
Similar to TAO DAYS - Challenges of Modern Computer Based Assessment
This document summarizes the goals and progress of a project to develop an online diagnostic assessment system in Hungary. The project is led by the Center for Research on Learning and Instruction at the University of Szeged and has received funding from the Hungarian Development Agency. The first phase involved developing frameworks and item banks for reading, mathematics, and science. The second phase will expand the system's reach to 20% of students and integrate it more fully into schools. Ultimately, the system aims to serve 600,000 students, communicate with 60,000 teachers, and provide sophisticated feedback to help personalize learning.
This document discusses GeoGebra, a dynamic mathematics software, and TAO 2.0, an assessment platform. It provides an overview of GeoGebra's history from its origins in 2001 as a tool for analytic geometry to its current community of over 50 institutes and millions of users. GeoGebra can be used for teaching mathematics through visualization, representations, and experimentation. Current projects include GeoGebra 4.0 with new tools and GeoGebraMobile to make applets work on mobile devices. The document also describes a project using GeoGebra in primary schools and how TAO collects data on tool usage to analyze learning.
This document discusses GeoGebra and TAO 2.0 and their synergy for assessment in geometry. It introduces TAO's technological shift from version 1 to 2.0, opening up to web standards like HTML5 and becoming more interactive through features like JavaScript. It also discusses how TAO 2.0 integrates GeoGebra, a major mathematics education program, to provide dynamic geometry assessments that go beyond static paper-and-pencil tests. The document argues this combination leverages each program's strengths and prepares assessment for the evolving demands of education technology.
The document discusses the roadmap for future versions of TAO. Key points include:
1) TAO is built on knowledge technologies from Generis and will benefit from Generis' roadmap.
2) Main focuses are addressing scalability issues, supporting advanced tests and results, improving security, and supporting new forms of testing and devices.
3) Methods to improve scalability include tools for benchmarking, optimizing code and workflows, experimenting with knowledge representation layers and databases.
4) Enhancing security involves improving authentication, controlling test delivery, managing item exposure and analyzing user behaviors.
5) Contributions to the roadmap are welcome and can be made through the TAO
The document discusses moving assessment items to a web-based format. It suggests that items could be developed as simple web applications using standard web technologies that are easy to learn, widely adopted, and can integrate with existing platforms. Items would essentially be a collection of web applications that could leverage open web standards like XHTML, CSS, and JavaScript. This would allow items to have interactivity and be developed with familiar tools. It would also allow standalone items to interact with assessment platforms through programming interfaces. Overall, the document proposes a more open and flexible approach to online assessment items using web technologies.
This document summarizes the development of an advanced item for a learning platform. It discusses selecting the right format for communication and interaction, including using APIs. It also covers formally developing the item using technologies like XHTML, CSS, and JavaScript. The document concludes by discussing how to download, run, and import the item into the platform.
This document describes a TAO Result Extension tool. The tool allows users to gather test results from multiple tests, organize the results into tables, and provide clear presentations of the data. It features the ability to select the appropriate test results, create tables with test variables and scores, apply filters and searches, export data, and add custom columns. The document walks through using the tool by selecting a test, loading result tables, naming columns, and contacting support for any additional questions.
This document discusses the TAO authoring tool and standard. It introduces TAO as a way to design high-level items easily using an intuitive authoring tool. It describes how TAO and the authoring tool allow for reusable content through templates and widgets. Examples of item types that can be created include multiple choice questionnaires, maps, media players and more. The goal of the tool is to make authoring such complex interactive items simpler and faster.
This document discusses supporting organizational processes with workflows and TAO's workflow engine. It begins by introducing workflows and the CBA process, then discusses designing organizational processes using a top-down and bottom-up approach. It explains that TAO uses one workflow engine for orchestrating tasks, people, tools, and time under various constraints. Specific examples are provided around item creation, testing, delivery, and results analysis. It concludes by asking where the workflow engine is employed and what the future roadmap includes.
The document describes the Test Authoring Tool (TAO) process extension, which provides a way to organize and manage the collaborative activities involved in large-scale computer-based assessment projects beyond just test execution and content creation. TAO extends these activities by defining workflows and integrating customizable web services. It allows users to design processes by adding activities, services, and logic to control workflow transitions. This provides a structured approach for organizing all the components of an assessment project.
This document discusses TAO APIs for integrating standalone items into computer-based assessment platforms. It provides an overview of the available APIs, including client-side and server-side APIs, and how they can be used for item I/O, backend setup, event logging, item state, and more. It also describes how to run a standalone item and which features are needed to integrate it with a CBA platform using the Item Runtime and Workflow Runtime APIs. The document encourages contributions to the TAO APIs by discussing support resources on the TAO Forge.
The document discusses QTI (Question and Test Interoperability), which is a specification that uses XML to represent questions, tests, and results. It allows sharing and reuse of assessment content across systems. The document outlines that QTI has 16 interaction types, supports standard web technologies, and can describe almost any assessment item. It provides examples of item types like multiple choice, ordering, and hotspot. The conclusion section describes a workshop that includes sample QTI items on topics like the Apollo program, historical figures, planets and moons, geography, and Shakespeare plays.
TAO is an e-assessment platform that was updated to version 2.0. The new version features two new item types (QTI and open web item types), advanced workflow capabilities for tests and organizational processes, and improved interoperability through CSV, QTI packaging and XLS formats. The architecture of TAO 2.0 separates the application layer from the persistence layer with a generis API.
TAO is an open source web platform for developing and delivering computer-based assessments. It allows users to manage test takers and groups, develop tests as item sequences, and deliver tests to collect results. The platform uses standard web technologies and supports importing and exporting items and tests using QTI format. TAO also has a community for sharing resources and providing support through forums, social media, and translation assistance.
TAO is an open-source computer-based assessment platform that allows users to create, deliver, and analyze tests and assessments. It includes modules for developing test items and tests, managing test takers and groups, delivering tests online or offline, and analyzing results. The platform is open-source, supports multiple languages and item types, and has been used successfully in education and research projects in Europe. Future updates will provide additional features and compatibility with standards like QTI IMS.
TAO DAYS - Challenges of Modern Computer Based Assessment
1. Challenges of modern
Computer Based Assessment:
Usability, Scoring and “Digital Natives”
Sonnleitner, P.a, Brunner, M.a, Keller, U.a, Martin, R.a,
Latour, T.b, Hazotte, C.b, Mayer, H.b
a… University of Luxembourg
b… Centre de Recherche Henri Tudor
TAO-Days 2011
30.03.2011
2. What is meant by "modern" Computer Based Assessment?
Kubinger (1995) differentiates between two types of CBA…
• Computerized administration of paper-pencil tests
• Tests originally developed for CBA like:
- Objective Personality Tests
(Ortner, Proyer & Kubinger, 2006)
- Complex Problem Solving Scenarios
(Greiff & Funke, 2009; Sonnleitner et al., 2010)
Advantages of the latter are manifold (Kyllonen, 2009;
Martin, 2008; Ridgway & McCusker, 2003):
• process measures as well as product measures
• assessment of more complex cognitive abilities
• ICT-Literacy is covered
•…
4. The COGSIM project:
Assessing General Cognitive Ability by means of
Complex Problem Solving Scenarios
developed in close collaboration with…
• Centre de Recherche Public Henri Tudor
• University of Heidelberg
Aims
• Development of computer-based assessment of GCA
based on complex problem solving scenarios
• Investigation of psychometric quality and fairness
of the assessment with a large, representative sample of students
• Free distribution of the assessment (via open-source licence)
6. Traditional Test Development Process (Shum, 2006):
Specify Construct
Check Literature for existing Test
Choose a Measurement Model
Write and Edit Items
Administer and Analyse Responses (2 possible feedback loops)
Select "Best" Items for Test
Check Reliability and Validity
Norm
Prepare Test Manual
Publish Test
7. COGSIM Test Development Process:
The traditional steps (Specify Construct, Check Literature for existing Test, Choose a Measurement Model, Write and Edit Items, Administer and Analyse Responses, Select "Best" Items for Test, Check Reliability and Validity, Norm, Prepare Test Manual, Publish Test) were interleaved with 4 development cycles:
• 1st Usability Study, n = 8 → Redesign & Programming
• 2nd Usability Study, n = 8 → Modification & Programming
• 1st Pilot Study, n = 59 → Modification & Programming
• 2nd Pilot Study, n = 79, and 3rd Usability Study incl. Focus group, n = 7 → Modification & Programming
8. COGSIM Test Development Process:
• 1st Usability Study, n = 8 – qualitative analysis → Redesign & Programming
• 2nd Usability Study, n = 8 – qualitative analysis → Modification & Programming
• 1st Pilot Study, n = 59 – quantitative analysis → Modification & Programming
• 2nd Pilot Study, n = 79, and 3rd Usability Study incl. Focus group, n = 7 – qualitative analysis → Modification & Programming
3 main challenges were identified:
- Usability
- Scoring
- Digital natives
9. These elements are interconnected…
• Assessment Instrument (Scoring of Performance)
• Target Population (Digital Natives)
• Usability (Instructions + GUI)
11. The role of Usability…
[Diagram: Usability mediates between the Assessment Instrument (design, scoring/validity, semantics, etc.) and the Target Population (instructions).]
The aim of good interface design is to reduce construct-irrelevant variance that could be attributed to the test method (Fulcher, 2003; Messick, 1989).
12. Identifying Usability-Problems:
Qualitative data (1st Usability Study, n = 8; 2nd Usability Study, n = 8; 3rd Usability Study incl. Focus group, n = 7):
• Think-aloud protocols
• Observation protocols
• Interviews
• + Focus group in SSUS 3
Quantitative data (1st Pilot Study, n = 59; 2nd Pilot Study, n = 79):
• Usability Questionnaire incl.
- Functionality of each element
- Comprehensibility
- Subjective Difficulty
- Attractiveness
13. Classifying Usability-Problems:
Identified problems are either:
Construct-related:
• due to difficulty of tasks → possible change of the construction rationale
Usability-related – 3 categories:
• basic level (e.g. size of letters, use of colors, …)
• medium level (e.g. navigation within instruction/between items, guidance of attention, …)
• high level (e.g. working on the task, using the concept map, …)
14. "Evolution" of some elements of the GUI:
Basic level problem – position of variable value:
[Screenshots: SSUS 1 → Pilot study 1 → Main study, final version]
15. "Evolution" of some elements of the GUI:
Medium level problem – navigating within a task:
[Screenshots: SSUS 1 → Pilot study 1 → Main study, final version]
16. "Evolution" of some elements of the GUI:
High level problem – using the concept map:
[Screenshots: SSUS 1 → Pilot study 1 → Main study, final version]
17. Indicators for improved usability?
[Two SPSS correlation tables, one for Pilot study 1 and one for Pilot study 2, joined by the annotation "Enhancement of Usability": they report Pearson correlations between computer-gaming/ICT variables (frequency of PC gaming per week, days and hours per week spent on computer games, ICT sum score) and the performance scores sc.sysex.rel.tot, sc.gdk.global.tot and sc.stars2.ctrl.raw.tot (n = 36–61). * Correlation is significant at the 0.05 level (2-tailed); ** at the 0.01 level (2-tailed).]
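To make the idea behind these tables concrete, here is a minimal Python sketch (an illustration with invented data, not the original SPSS analysis; the variable names and values are hypothetical): it computes the Pearson correlation between self-reported gaming frequency and a raw test score, the statistic the slide uses as an indicator of construct-irrelevant variance.

```python
# Illustrative sketch, not the original analysis: if the GUI introduces
# construct-irrelevant variance, ICT-savvy test-takers score higher and
# gaming experience correlates with the score; after a usability redesign
# this correlation should shrink.
from scipy.stats import pearsonr

games_per_week = [0, 2, 5, 7, 1, 3, 9, 4]   # hypothetical questionnaire answers
control_score  = [3, 4, 6, 7, 3, 5, 8, 5]   # hypothetical raw control scores

r, p = pearsonr(games_per_week, control_score)
print(f"r = {r:.2f}, p = {p:.3f} (2-tailed)")
```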
18. But losing weight is definitely positive…
Close interaction between Usability and Validity!
20. Challenge 2: Scoring – or what happens if we consider the process?
Traditional MC item:
• correct or false (product); the process is unknown
Modern CBA item:
• both the product and the process itself get measured
• nearly endless possibilities to score (time, …)
21. Example Control Phase: Product or Process?
The task: to achieve certain target values within 3 steps
23. Comparison of Product (target achievement) and Process (way to target) Score:
Item 4: [Diagram: input variable A acts positively on output X, input variable B acts negatively on output Y; time T also affects X; target deviations Dx = −2, Dy = 0.]
System dynamics: X(t+1) = X(t) + 1·A(t) + (−1)·T (with T = 1)
Walk 1: 0,1 / 0,1 / 1,1 → Product Score: 5/5, Process Score: 3/3
Walk 2: 1,0 / 0,1 / 0,1 → Product Score: 5/5, Process Score: 2/3
Walk 3: 0,1 / 0,1 / 1,0 → Product Score: 5/5, Process Score: 2/3
Walk 4: 0,1 / 0,1 / 0,0 → Product Score: 3/5, Process Score: 1/3
→ The product score alone overestimates performance: Walks 2 and 3 reach the full product score with suboptimal processes.
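A minimal Python sketch of this distinction (illustrative only – the starting value, target, optimal strategy and both scoring rules are assumptions, not the COGSIM rules): it applies the slide's dynamics X(t+1) = X(t) + 1·A(t) + (−1)·T and scores one walk once by its end state (product) and once step by step (process).

```python
# Minimal sketch under stated assumptions, not the COGSIM scoring code.
def trajectory(x0, a_inputs, T=1):
    """Apply the slide's dynamics X(t+1) = X(t) + 1*A(t) + (-1)*T."""
    xs = [x0]
    for a in a_inputs:
        xs.append(xs[-1] + 1 * a + (-1) * T)
    return xs

def product_score(xs, target):
    """Assumed product rule: did the final value hit the target?"""
    return int(xs[-1] == target)

def process_score(a_inputs, optimal_inputs):
    """Assumed process rule: count steps that match an optimal strategy."""
    return sum(a == opt for a, opt in zip(a_inputs, optimal_inputs))

walk    = [2, 0, 1]        # hypothetical inputs A(t) over the 3 allowed steps
optimal = [1, 1, 1]        # hypothetical optimal strategy (also reaches 0)
xs = trajectory(0, walk)   # -> [0, 1, 0, 0]
print(f"product {product_score(xs, target=0)}/1, "
      f"process {process_score(walk, optimal)}/3")
```

The detour walk earns the full product score but only 1/3 on the process score, mirroring the overestimation shown for Walks 2 and 3 above.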
25. Challenge 3: "Digital Natives" (Prensky, 2001; Veen & Vrakking, 2006)
Who are they?
• the generation born since 1990
• grown up in a world in which ICT is permanently available
What makes them special?
• used to processing huge amounts of information
• permanent information overload – filtering strategies
• strongly rely on images and symbols
• deal with new technology in a non-linear way (start to play before reading instructions)
• technology is there to solve their problems
• if problems occur, the technology is blamed
• used to video games
• used to learning by discovery and by experimenting
• they possess iconic skills (use symbols, icons and color-codes to navigate)
26. Challenge 3: "Digital Natives" (Prensky, 2001; Veen & Vrakking, 2006)
Why is this important for test developers?
• they want to be active from the first minute on (like in video games)
• they expect perfectly functioning technology
• they expect an appealing GUI
• they are used to actively exploring and learning, not being told
• they are most likely motivated when they feel attracted by the design / when the task seems interesting and challenging
→ e.g. static instructions are not a good idea
27. Challenge 3: "Digital Natives" (Prensky, 2001; Veen & Vrakking, 2006)
How to react?
• extensive usability studies with digital natives as experts
• ensure perfect functioning
• ensure an appealing design
• keep them active and in exercises from the beginning
• keep text to an absolute minimum
• include game-like characteristics
• explain using images/animations rather than text
• use symbols, icons and color-codes in an expected way
28. Challenge 3: "Digital Natives" (Prensky, 2001; Veen & Vrakking, 2006)
Evidence from our studies:
• questions arising during the instruction phase
• ignored written information
• exercises including feedback improved understanding
• game-like characteristics were appreciated
• …
→ it is wise to consider the characteristics of the target population
29. These elements are interconnected…
• Assessment Instrument (Scoring of Performance)
• Target Population (Digital Natives)
• Usability (Instructions + GUI)
30. Modified Test Development Process for CBA:
Specify Construct + Specify Target Population
Check Literature for existing Test
Choose a Measurement Model
Write and Edit Items + Design of User Interface
Administer and Analyse Responses – regarding Construct and regarding Usability
Select "Best" Items for Test
Check Reliability and Validity
Norm
Publish Test
→ Integration of 2 new feedback loops
35. "Take home" messages:
When dealing with modern CBA:
- pay attention to usability and consider it during the development process
- think about more complex ways to score performance
- think about the special needs of your target population