This is a presentation given by Peter Shea from the "What You Should Know About Learning Analytics" NERCOMP workshop on Friday, January 22nd in Southbridge, MA.
This document describes a case study evaluating the Vimeo website using the five-second test methodology. Two test formats were used: a memory dump to measure memorability and an attitudinal test to measure visual appeal. Tests were conducted both online and in person. Results found that most users recognized the video streaming purpose but few identified sharing features. While visual appeal was rated highly, the logo was rarely remembered. Recommendations included emphasizing upload/sharing features and reworking the logo design. Insights highlighted best practices for crafting instructions, optimizing images, wording questions, and using both moderated and unmoderated testing.
This document provides an overview of assessment tools in the Angel 7.3 learning management system. It discusses questions that students and instructors have about assessments, learning objectives about using discussion forum scoring rubrics and navigating the gradebook. It then covers topics like assessment settings, configuring quizzes and exams, overriding scores, scoring test items and discussion forums, and using scoring rubrics to grade discussions. The goal is to help instructors understand and effectively use the various assessment and gradebook features in Angel.
Usability is the degree to which a person can use a product, document, website, or app to achieve goals effectively, efficiently, and satisfactorily. Good usability is measured by five factors: learnability, efficiency, memorability, errors, and satisfaction.
This document provides guidance on conducting DIY usability testing in three easy steps:
1. Recruit 5-10 participants per user group that match the actual or potential users. Schedule testing sessions and backups.
2. Test users using scenarios, tasks, or a script. Observe their performance and have them complete tasks while tracking metrics. Make sure to have the necessary equipment and follow best practices for the testing environment.
3. Analyze the results by identifying usability issues, compiling the data, comparing to goals, and prioritizing issues. Create a report to communicate findings.
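The metric compilation in step 3 can be sketched in a few lines of Python. The task names, timings, and the 80% completion goal below are invented for illustration, not part of the original guidance:

```python
# Hypothetical sketch: compiling DIY usability-test results per task.
# Session data and the 80% completion goal are illustrative assumptions.
sessions = [
    {"task": "find pricing", "completed": True,  "seconds": 42},
    {"task": "find pricing", "completed": False, "seconds": 95},
    {"task": "sign up",      "completed": True,  "seconds": 61},
    {"task": "sign up",      "completed": True,  "seconds": 58},
]

GOAL = 0.80  # target completion rate to compare results against

def summarize(sessions):
    """Group sessions by task, then compute completion rate and mean time."""
    tasks = {}
    for s in sessions:
        tasks.setdefault(s["task"], []).append(s)
    report = {}
    for task, runs in tasks.items():
        rate = sum(r["completed"] for r in runs) / len(runs)
        mean_time = sum(r["seconds"] for r in runs) / len(runs)
        report[task] = {
            "completion_rate": rate,
            "mean_seconds": mean_time,
            "meets_goal": rate >= GOAL,
        }
    return report

for task, stats in summarize(sessions).items():
    print(task, stats)
```

Tasks falling below the goal surface first when prioritizing issues for the report.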
This document discusses methods for evaluating eLearning programs, including formative evaluation during development and usability testing with end users. It describes the Kirkpatrick model for evaluating learning programs at different levels. Formative evaluation seeks to improve a program and ensure it is effective, efficient, and responsive to user needs. Methods include expert reviews, user reviews, usability testing, and alpha/beta testing. Usability testing involves observing representative users performing tasks to evaluate ease of use, speed, errors, and satisfaction. Multiple evaluators find more usability problems than a single evaluator.
The document discusses whether coaching is effective and how its effectiveness can be evaluated. It notes that while coaching is a large industry, the research on its effectiveness has limitations. Specifically, much of the research only looks at client satisfaction rather than objective outcomes, and there is a lack of long-term or comparative studies. The document argues that systematic reflection, such as providing feedback and examples for learners to analyze, can help improve performance by promoting deeper cognitive processing. One study found that participants who received feedback plus an opportunity for reflection showed greater improvement between two tasks than those in other conditions. Overall, the document advocates for more rigorous research on coaching that incorporates systematic reflection techniques.
This document discusses a collaborative educational tool for helping classrooms work on Venn diagrams. It provides user data from a test of the tool that shows students found the drag and drop and delete functions easy to use but had issues with the color scheme and font. User feedback suggests making it more colorful, allowing images, and optimizing it for touch devices. The tool aims to better engage students in the learning process compared to traditional whiteboards.
The document provides information about conducting usability testing. It discusses what usability testing involves, including setting tasks for test participants and noting any problems they encounter. It provides tips for testing, such as teaming up with a partner, selecting 3-5 test participants, having them complete 2-3 tasks in 30-50 minutes, and one person acting as note-taker and moderator. The document also discusses how to find participants, what to tell them, questions to ask as moderator, common testing errors to avoid, and metrics to capture like completion rates, time on task, errors and satisfaction.
Introduction to Usability Testing for Survey Research - Caroline Jarrett
This document provides guidance on planning and preparing for usability testing of surveys. It discusses determining what aspects of a survey to test, who to recruit as participants, and where to conduct the testing. Key recommendations include deciding what to test at least a month before testing, recruiting 5-10 participants to represent intended users, and conducting testing in rounds with revisions between rounds rather than one large test. Locations for testing can either be at the organization conducting the test or in participants' natural environments.
The cognitive walkthrough is a usability inspection method that evaluates how easily users can learn to use an interface by exploring it. It involves defining tasks, expected action sequences, and users. Evaluators then walk through each task step-by-step to identify any issues like mismatches between actions and effects or inadequate feedback. The goal is to catch problems that could hinder a user's ability to learn through exploration.
This document provides a checklist for students to ensure they have included all key elements in blog posts for their Year 12 ICT Wordpress assignment. The checklist includes sections for students to document their e-portfolio pages covering topics from each term between 2011-2013, including the software covered, concepts learned, assessment tasks, examples of work, and a personal review. It also includes a section for a 2013 social justice issue persuasive prose blog post where students select an issue to discuss and provide opinions and potential solutions.
This document provides guidance for students taking various IMPACT technology surveys. It outlines key details about each survey such as expected duration, participation statements, and clarification on certain survey questions and vocabulary. Key points include: the surveys should take 20-25 minutes; participation is required as the school receives IMPACT funding; clarify that certain questions do not apply to the school; and explanations of technology-related terms are provided.
The document reviews and compares three mobile assessment software options: Kahoot, Lino, and MasteryConnect. Kahoot is recommended due to its engaging design, free price, ability to connect globally, and the reviewer's successful personal experience using it in the classroom to keep students participating and learning. While Lino promotes collaboration, it requires a paid subscription and was difficult to set up. MasteryConnect allows real-time data analysis but has the highest annual fee per teacher.
The document discusses techniques for designing engaging eLearning courses using gamification principles. It suggests moving from a "push" model, where all learners receive the same information, to a "pull" model where learners are motivated to access content based on their individual needs. Specific techniques mentioned include: setting goals and objectives with different levels of difficulty; providing frequent feedback; measuring and displaying progress; rewarding effort; and using pedagogical agents. Examples are given comparing a traditional compliance training to an updated interactive version that applies these gamification design techniques.
Socrative and Kahoot are student response systems used for formative assessment. While both are free, each has strengths and weaknesses. Socrative allows for quick impromptu questions, supports different question types including short answer, and provides individual reports. However, students can't review questions after class. Kahoot has a game-like interface that motivates students, but lacks per-question and individual reports. After reviewing features, the document recommends Socrative for its enhanced student interaction and participation capabilities.
Brian A. Morris - EDIT 5395 Kahoot! v PollEverywhere - bmorrisatsayre
This document discusses a mobile assessment software. It has features like game-based classroom response, timed responses from 0-30 seconds, and providing instant feedback. The author has used the software previously with sophomore, junior, and senior classes, who enjoyed it. The goal was for remediation in algebra, geometry, and English. Pros include it being easy to use, allowing any device to access it, and being fun for students and teachers. Cons are that it requires a projector and is limited to 30 seconds per question. The author recommends Kahoot! as it is free to use and highly engaging for students.
Heuristic Analysis of Oregon Unemployment Application - ScholarStudio
The document analyzes the user experience of Oregon's unemployment application system and identifies several areas for improvement. It finds that the system provides little guidance to users, allows for easy data loss, lacks consistency and clear error messages. This creates undue stress and risk of users not being able to complete the important process of applying for unemployment benefits. The analysis recommends frontloading key information, adding the ability to save progress, using consistent design, providing in-context help and fixing issues that cause data loss and errors.
The principal plays a key role in facilitating school improvement and professional learning for teachers. As an agent of change, the principal must intentionally address barriers to teacher learning, such as focusing too much on confirming existing ideas rather than challenging them. Some strategies for interrupting barriers include using protocols to structure discussion, making preconceptions explicit, and viewing mistakes as learning opportunities. The principal also ensures school goals are aligned to student needs based on data and provides resources to support teachers in achieving goals.
This document provides an overview of usability testing and highlights from its history. It discusses why usability testing is important and how even simple, qualitative testing can identify major usability issues. Examples of usability metrics such as effectiveness, efficiency, and satisfaction are given. The document then describes how to plan and conduct DIY usability tests with only a few participants by defining goals and tasks, recruiting participants, running the tests, and debriefing. It also discusses testing accessibility, mobile usability, and using tools like prototyping and A/B testing.
This document provides guidance for conducting usability testing and quality assurance. It discusses identifying websites to test and key tasks for each site. It encourages identifying sites that may be new to classmates and, ideally, sites that those conducting the test work on. For each chosen site, it recommends identifying 3-5 key tasks that a user should be able to accomplish on the site.
Usability refers to how easy user interfaces are to use. It is measured based on six factors: effectiveness, learnability, efficiency, memorability, error prevention, and satisfaction. Usability testing should start early in the design process and continue through iterations to refine the design. Implementing usability principles leads to products that are intuitive and enjoyable to use, improving user experience and business outcomes.
The document discusses different methods for evaluating user interface designs, including expert evaluation techniques like heuristic evaluation and cognitive walkthroughs. It also covers user testing, which is considered more reliable than expert evaluation alone. Formative evaluation involves testing prototypes during development to identify issues, while summative evaluation assesses the final product. Both qualitative and quantitative methods are important to identify usability problems from the user's perspective.
Webinar: How to Conduct Unmoderated Remote Usability Testing - UserZoom
The webinar covered how to conduct unmoderated remote usability testing in three parts: an introduction with a case study, then planning, designing, and recruiting for a remote unmoderated study, and finally analyzing the results. It discussed choosing goals and metrics, creating study scripts with tasks and questions, recruiting participants, and analyzing results, including task success rates, efficiency metrics, satisfaction scores, and behavioral data. The presentation provided examples and tips for each part of the process.
This document discusses usability testing. It defines usability testing as a method of directly observing users to evaluate the ease of use of a system. Usability is defined by factors like learnability, efficiency, and satisfaction. Usability testing should be done early and often using representative tasks and users. During testing, users are asked to think aloud to reveal their thought processes while using the interface. The results provide insights into how users interpret the UI that can be used to improve usability.
My presentation at 24 hours of UX
Links and special mentions
Medium posts:
https://uxdesign.cc/measuring-the-perceived-usability-of-a-system-using-the-system-usability-scale-3418971dd7a3 - Katerina Maniataki
https://uxdesign.cc/measuring-and-quantifying-user-experience-8f555f07363d - Matej Latin
Books:
Designing with Data - Rochelle King, Elizabeth F. Churchill, Caitlin Tan
Measuring the User Experience - Tom Tullis, Bill Albert
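The System Usability Scale covered in the first Medium post above has a simple, well-known scoring rule: odd-numbered items (positively worded) contribute response − 1, even-numbered items (negatively worded) contribute 5 − response, and the sum is multiplied by 2.5 to give a 0–100 score. A minimal sketch; the sample responses are invented:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 responses.

    Odd-numbered items (1st, 3rd, ...) are positively worded: response - 1.
    Even-numbered items are negatively worded: 5 - response.
    The summed contributions are scaled by 2.5 to a 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses):
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# Invented example: one participant's responses to items 1-10.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # prints 85.0
```

A score around 68 is commonly cited as the average, so 85 would indicate above-average perceived usability.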
This document discusses user experience (UX), user interface (UI), and usability. It defines each term and explains that UX encompasses the entire experience a user has with a product, including visual design, while UI refers specifically to the interface that allows user interaction. Usability considers how easily users can accomplish tasks. The document provides examples of UX disciplines and emphasizes the importance of user research to understand needs. It also outlines techniques for testing like A/B testing, card sorting, and usability testing. Best practices for testing forms are presented, focusing on labels, fields, grouping, actions, and scrolling.
This document discusses methods for evaluating eLearning programs, including formative evaluation during development and summative evaluation after completion. It describes Kirkpatrick's model of evaluation, including levels measuring reaction, learning, behavior change, and results/ROI. Formative methods covered include expert reviews of interfaces and content, and user reviews through observations and testing. Summative usability testing methods are also outlined, such as heuristic evaluation involving experts and user testing involving representative tasks. The document recommends involving multiple evaluators and 5 users to reliably find a high percentage of usability problems.
Simple Ways of Planning, Designing and Testing Usability of a Software Product - Karolina Zmitrowicz
Originally presented at QS-Tag 2016
https://www.qs-tag.de/en/abstracts/tag-1/simple-ways-of-planning-designing-and-testing-usability-of-a-software-product/
Heuristic evaluation is a usability inspection method where 3-5 evaluators examine a user interface and judge its compliance with recognized usability principles called "heuristics." Each evaluator independently explores the interface twice and notes any violations of heuristics, such as consistency, visibility of system status, or flexibility of use. Evaluators then aggregate their findings and rate the severity of identified usability problems to prioritize fixes. With 3-5 evaluators, heuristic evaluation typically identifies around 75% of usability issues in a cost-effective manner.
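The "around 75% with 3-5 evaluators" figure follows from Nielsen's aggregation model, ProblemsFound(n) = N(1 − (1 − λ)^n), where λ is the share of problems a single evaluator finds (commonly cited as roughly 0.3; the exact value varies by study). A sketch under that assumption:

```python
def proportion_found(evaluators, single_rate=0.3):
    """Nielsen's model: share of all usability problems found by n
    evaluators, assuming each independently finds `single_rate` of them.
    The 0.3 default is a commonly cited approximation, not a constant."""
    return 1 - (1 - single_rate) ** evaluators

# Diminishing returns as evaluators are added.
for n in range(1, 6):
    print(n, round(proportion_found(n), 2))
```

With λ = 0.3 this gives roughly 66% at three evaluators and 83% at five, bracketing the ~75% claim and showing why adding evaluators beyond five pays off less and less.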
The document discusses various methods for evaluating user experience design when users are located in different countries, including heuristic evaluation, usability testing, and GOMS analysis. Heuristic evaluation involves having 3-5 evaluators examine a user interface and note where it violates established usability heuristics. Usability testing involves testing an interface with real users performing representative tasks and collecting both quantitative and qualitative data. GOMS analysis estimates the time required to complete tasks based on the number and types of user actions involved.
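To make the GOMS idea concrete, its simplest variant, the Keystroke-Level Model (KLM), assigns an average time to each low-level operator and sums them over a task. A minimal sketch using the classic Card, Moran, and Newell operator times; the task breakdown at the end is an invented example, not from the source:

```python
# Classic KLM operator times in seconds (Card, Moran & Newell):
#   K = keystroke, P = point with mouse, B = mouse button press,
#   H = home hands between keyboard and mouse, M = mental preparation
OPERATOR_TIMES = {"K": 0.20, "P": 1.10, "B": 0.10, "H": 0.40, "M": 1.35}

def klm_estimate(operators: str) -> float:
    """Estimate task time as the sum of KLM operator times."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Hypothetical task: think, reach for mouse, point at a field, click,
# return hands to keyboard, type a five-character word.
fill_login_field = "MHPBH" + "K" * 5
print(f"Estimated time: {klm_estimate(fill_login_field):.2f} s")
```

Because KLM needs no test participants, it is a cheap way to compare two candidate workflows before building either, though it only models expert, error-free performance.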
22. Observing users
Body
• Fidgeting
• Rubbing head, eyes, or neck
• Leaning in
Eyes
• Skimming
• Squinting
• Looking around the page, confused
Hands
• Hovering to see if an element is interactive
• Hesitating before clicking
• Clicking randomly
• Struggling with scrolling or dragging