The document summarizes the findings of a survey study on how usability practitioners analyze usability evaluation data in practice. The survey found that: (1) The median effort for an entire evaluation is 48 working hours for usability testing and 24 for usability inspections; (2) Practitioners rely on heuristics, guidelines, and participant opinions to identify problems, with few using research tools; (3) Deliverables include problem lists and visually presented redesign suggestions, generated throughout analysis. The study suggests researchers should learn from, rather than assume, practitioners' processes to better support analysis in practice.
The document justifies the approaches used for developing an e-menu prototype for a Thai restaurant. It uses a case study methodology to understand business requirements in-depth. Rapid Application Development and iterative prototyping are used to develop the software in an agile manner to accommodate changing requirements. Direct observation and interviews gather requirements and evaluate users' perceptions of the prototype.
The document discusses various qualitative user research methods that can be used at different stages of the design process. It provides descriptions of methods such as participant observation, cultural probes, scenarios of use, and focus groups. These methods are used to understand user behaviors, needs, and experiences in order to inform the design process from early concept development through testing and evaluation. The document also notes that qualitative research does not provide statistically significant data about large populations.
Systematic Literature Reviews and Systematic Mapping Studies, by alessio_ferrari
Lecture slides on Systematic Literature Reviews and Systematic Mapping Studies in software engineering. The slides describe the steps of each method, discuss the differences between the two, and give guidelines on how to conduct these types of studies.
This document describes a case study research approach for evaluating a requirements defect detection tool in a software engineering company. The following key points are discussed:
1. The study will evaluate the accuracy, usability, and areas for improvement of the tool using both quantitative and qualitative data collection methods.
2. Context details about the subject company and study participants are important to characterize. Quantitative data such as precision/recall scores and usability questionnaires will be collected. Qualitative data such as sources of inaccuracies and improvement feedback will be analyzed.
3. Validity will be addressed through triangulation of multiple data sources and manual classification of defects. The research questions aim to evaluate the tool's accuracy, sources of errors, us
Relation between the Quality of the Review Process of Papers and the Impact o..., by Jorge Alarcon
This document proposes researching the relationship between the quality of a journal's review process and its impact factor for papers in the field of artificial intelligence. It hypothesizes that journals with higher quality reviews will tend to have higher impact factors. A theoretical model is presented involving surveys of authors and reviewers to measure review quality factors. Journals will then be evaluated and any anomalies analyzed to refine the model. The research aims to determine if impact is a good indicator of review quality for AI publications.
Empirical Software Engineering for Software Environments - University of Cali..., by Marco Aurelio Gerosa
Second class of the Software Environment course. In this class, we discuss how to use Empirical Software Engineering techniques to support the construction and evaluation of software tools.
Qualitative Studies in Software Engineering - Interviews, Observation, Ground..., by alessio_ferrari
Lecture about qualitative data collection methods and qualitative data analysis in software engineering. Topics covered are:
1. Sampling
2. Interviews
3. Observation and Participant Observation
4. Archival Data Collection
5. Grounded theory, Coding, Thematic Analysis
6. Threats to validity in qualitative studies
Find the videos at: https://www.youtube.com/playlist?list=PLSKM4VZcJjV-P3fFJYMu2OhlTjEr9Bjl0
Usability evaluation is aimed at finding usability problems in user interfaces to provide qualitative and quantitative data about user behavior. There are two main types of evaluations: formative evaluations conducted during design to gather feedback, and summative evaluations conducted after completion to identify problems. Common methods include heuristic evaluation where experts judge compliance with usability principles, cognitive walkthroughs to simulate user behavior, and usability testing where representative users perform tasks. Usability evaluations provide benefits for both technology improvements and business outcomes.
Usability evaluation methods (part 2) and performance metrics, by Andres Baravalle
This document provides an overview of usability evaluation methods and performance metrics. It covers the three broad families of methods: usability testing, usability inspection, and usability inquiry, along with specific techniques like heuristic evaluations, cognitive walkthroughs, surveys, and contextual inquiry. The document then discusses different types of performance metrics that can be used to measure the user experience, including task success rates, levels of success, errors, efficiency, and learnability.
This document summarizes research conducted on an ISM app to support outfit choices. Methods included questionnaires, interviews, and usability testing with 40 respondents and 7 participants. Most participants were female ages 18-29 and considered fashion very or extremely important. The research found ISM fails to support outfit choices because it lacks condition-based searching to find outfits fitting specific weather or occasions. Suggested improvements included adding filters for weather and occasion, improving the search functionality to find options from a user's wardrobe more quickly, and fixing some technical issues like crashing.
Double Map is an application designed to track shuttle services within a campus, which can be a university, corporate company, hospital, or airport.
We conducted a usability evaluation of DoubleMap with the live version of the web and mobile application. For the evaluation, we followed various methods, such as cognitive walkthrough, contextual inquiry, interviews, and heuristic evaluation, to collect data. While applying these methods we divided the roles of data logging, interviewing, and note taking among ourselves.
This project aimed to design a way for elderly residents of Cambridge to access historical information captured by the Mill Road history team. Research included interviews with the team and elderly locals. Ideas were generated and narrowed to focus on interactions at bus stops. Digital prototypes were created with physical movements and readability for elderly users in mind. An expert evaluation provided positive feedback but testing with real elderly users was recommended to further validate the design.
This document provides an overview of usability evaluation techniques for formative testing. It defines usability and discusses the purpose of usability evaluation to identify problems, inform requirements, and optimize design early. A variety of formative techniques are described, including thinking aloud, heuristic evaluation, and paper prototyping. The document emphasizes that usability evaluation should have specific, measurable goals and provide both qualitative and quantitative data to analyze and interpret results to improve the design.
This document discusses usability and user experience design best practices for virtual reality. It begins with an overview of VR and its current state. It then outlines some key ergonomic requirements like minimizing latency and motion to prevent cyber sickness. Several interactive design patterns are presented, such as the reticle for object selection, fuse buttons, ground menus, dashboards for information display, and text display techniques. The document concludes by discussing usability testing methods in VR like those offered by the company Fishbowl.
This document discusses guerrilla usability testing techniques. It explains that guerrilla testing is an informal method that needs little more than a computer, a moderator, and a video recorder to capture user interactions. Benefits of guerrilla testing are that it is low cost, provides qualitative insights, and can be done continuously throughout the development process. Examples highlighted include Microsoft's extensive usability testing for Halo 3, which helped identify and address user frustrations.
Talk for the Vancouver User Experience group on October 16, 2007 about the user experience of usability projects and how we've re-designed our process.
This lecture covers requirements specification and conceptual design techniques for human-computer interaction courses. It focuses on specification techniques for requirements and conceptual UI design. The lecture discusses analyzing user profiles, contexts, and tasks which includes identifying user characteristics, goals, tasks and actions. It provides examples of constructing detailed user profiles including personas, identifying task scenarios and use cases, and analyzing workflows.
The document summarizes Lecture 3 of the Human-Computer Interaction Course 2014 given by Lora Aroyo. It discusses interaction design concepts like design principles, affordances, constraints, mappings, feedback and visibility. It also outlines four psychological principles of user interaction and how they can be applied in design. Specific concepts like consistency, affordances, mappings, feedback and cultural associations are explained in detail along with examples. Design guidelines, standards and principles for optimizing the user experience are also presented.
This lecture covers various methods for prototyping and testing user interfaces, including paper prototyping, wireframing, and usability testing techniques like heuristic evaluation and cognitive walkthrough. Low-fidelity prototyping allows for early user feedback, while high-fidelity prototyping tests detailed tasks and processes. The lecture also discusses iterative design, with prototypes refined based on user testing to develop the final design.
Formative Usability Testing in Agile: Piloting New Techniques at Autodesk, by UserZoom
UX experts from Autodesk discuss new techniques of formative usability testing piloted by the AutoCAD UX group in their agile user-centered design process.
You can view the entire webinar here: http://goo.gl/C4uT9
Elizabeth Snowdon is a senior business/web analyst consultant with over 10 years of experience conducting usability testing. The document discusses what usability is, why it matters, the types of usability studies, and how to plan and conduct a usability test. Key points covered include identifying target users, developing tasks for testing, observing and collecting feedback from users, and analyzing findings to identify problems and improve designs through an iterative process.
Visualizing the Problem Domain for Spreadsheet Users: A Mental Model Perspective, by Bennett Kankuzi
In this paper presentation, we introduce a new spreadsheet visualization tool as well as an empirical evaluation of its usability and of its effects on users' mental models. The tool translates traditional spreadsheet formulas into problem domain narratives and highlights referenced cells. The tool was found to be easy to learn and helped the participants to locate more errors in spreadsheets. Furthermore, the tool increased the use of the domain mental model in error descriptions and appeared to improve the mapping between the spreadsheet model and the domain model. The full paper can be downloaded at http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6883040 The paper was presented at the 2014 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), held from 28 July to 1 August 2014 in Melbourne, Australia.
Human Computer Interaction - Heuristic Evaluation, by emmadmd
The document summarizes a report on a heuristic evaluation of the website Pinterest.com conducted by four students. It provides background on heuristic evaluation, discusses literature on the topic, describes the system specifications of Pinterest, and presents the results of evaluating the site against usability heuristics. Limitations noted include the difficulty of studying collaborative use and the high cost of heuristic evaluation.
The document discusses methods for evaluating ontologies. It proposes developing objective metrics to evaluate ontologies based on three criteria: correctness, completeness, and utility. Correctness evaluates how well an ontology expresses its design objectives. Completeness evaluates how fully an ontology captures required semantic components. Utility combines correctness and completeness and evaluates an ontology's usefulness for its intended use case. Examples are provided to illustrate evaluating ontologies based on the proposed metrics. The goal is to develop standardized evaluation methods to facilitate ontology development and reuse across different domains.
A lecture on evaluating AR interfaces, from the graduate course on Augmented Reality, taught by Mark Billinghurst from the HIT Lab NZ at the University of Canterbury.
Increasing the rigor and efficiency of research through the use of qualitati..., by Merlien Institute
This document summarizes a paper presented at a conference on computer-aided qualitative research. The paper discusses how qualitative data analysis software, specifically NVivo, was used to analyze data from a study examining the creativity of instructional leaders in generating innovative classroom environments. The study collected data through observations, interviews, and student journals. NVivo helped manage, analyze, and code data from various sources. Using the software increased the rigor and efficiency of the research by systematically organizing transcripts, field notes, and other data sources.
Scalable Exploration of Relevance Prospects to Support Decision Making, by Katrien Verbert
Presented at IntRS 2016 - Interfaces and Human Decision Making for Recommender Systems, workshop at RecSys 2016
Citation: Verbert, K., Seipp, K., He, C., Parra, D., Wongchokprasitti, C., & Brusilovsky, P. (2016). Scalable Exploration of Relevance Prospects to Support Decision Making. Proceedings of the Joint Workshop on Interfaces and Human Decision Making for Recommender Systems co-located with ACM Conference on Recommender Systems (RecSys 2016), Boston, MA, USA, September 16, 2016.
Experiments on Pattern-based Ontology Design, by evabl444
The document describes an experiment on using ontology design patterns (ODPs) to construct ontologies from textual requirements. It found that:
- Most participants perceived ODPs as useful for building better quality ontologies, though it did not necessarily speed up development.
- Ontologies constructed using ODPs showed improved coverage of tasks and modeling quality compared to those built without patterns.
- However, more support is still needed for easily finding, selecting, and relating relevant patterns during ontology engineering.
Chapter 9: Evaluation techniques, from Dix, Finlay, Abowd and Beale (2004). Human-Computer Interaction, third edition. Prentice Hall. ISBN 0-13-239864-8. http://www.hcibook.com/e3/
Practice research is a paradigm that studies practices as meaningful arrangements of actors, actions, documents and artifacts. It aims to generate knowledge that improves practices. Practice research is based on ontological assumptions that see practices as the primary units of study, and epistemological foundations of pragmatic, provisional knowledge co-created through action and dialogue between researchers and practitioners. It uses situational inquiries, action research, and design research to study local practices and generate abstract knowledge in the form of practical theories, models and methods that can be applied in general practices and contribute to the scientific body of knowledge.
Assessing Problem Solving In Expert Systems Using Human Benchmarking, by Claire Webber
This document summarizes a study that assessed problem solving ability in an expert scheduling system called GATES by comparing its performance to that of humans on scheduling tasks. The study administered easy and difficult scheduling tasks to undergraduate students, graduate students, and air traffic controllers to represent different levels of problem solving ability. Process measures in the form of metacognitive questionnaires and think-aloud protocols were also used. The goal was to develop a "human benchmarking" scale to evaluate how the problem solving ability of GATES compared to humans. Results from previous studies found GATES outperformed community college students but a wider range of human participants was needed to better define the scale.
BELIV'10 Keynote: Conceptual and Practical Challenges in InfoViz Evaluations, by BELIV Workshop
This document discusses conceptual and practical challenges in evaluating information visualization (InfoViz) systems. It notes concerns that many InfoViz evaluation studies use simple outcome measures like task completion time without clarity on what constructs are being evaluated or strong theoretical motivation. As an example, it analyzes how the concept of "overview" is used inconsistently in InfoViz papers. It also examines how studies of fisheye interfaces have had mixed results and questions whether some components of how they work are as useful as thought. Overall, it argues InfoViz evaluation needs more theoretically grounded experiments that build on past work to better understand how systems support users.
This paper provides a short analytical critique of the white paper "An Examination of Software Engineering Work Practices" by Singer, Lethbridge, Vinson, and Anquetil. The critique argues that the methodology used in the study has biases and limitations. Specifically, it critiques the small sample size of studying one employee's activities over short sessions, and argues computer-based studies could provide more accurate data on software engineers' work practices. However, it acknowledges the value of the authors' contributions to research in this area. Ultimately, the critique concludes the arguments for dismissing other research methods and claims of success in developing a tool are debatable given weaknesses in the methodology.
Research method critique on problem solving, by Dana Dannawi
This document summarizes two research articles. For Article 1, it provides a brief overview of the purpose, methodology, results and lack of recommendations. For Article 2, it notes that while the purpose and introduction were clearly explained, some elements like the literature review and methodology were not fully described or defined. Both articles are critiqued on elements like validity, statistical analysis, discussion of findings and recommendations.
This document proposes evaluating the usability of VALET, a visual analytics software developed by Purdue University's VACCINE lab for law enforcement officials. The project aims to collect user data through surveys, interviews, and recording user actions during goal-directed tasks. This will provide insights into difficulties with the current interface to propose design improvements around information scent, cognitive task analysis, and GOMS modeling. Future work would compare a new interface design to the current one through additional user studies and expert reviews.
Towards the next generation of interactive and adaptive explanation methods, by Katrien Verbert
This document summarizes a presentation given by Katrien Verbert on explainable artificial intelligence and interactive explanation methods. It discusses Verbert's research group at KU Leuven which focuses on areas like recommender systems, visualization, and intelligent user interfaces. The presentation provides an overview of explainable AI, discussing objectives like explaining model outcomes to increase trust and allowing user interaction with explanations. It describes various recommendation techniques and presents examples of explainable recommendation systems. The presentation discusses how personal user characteristics can impact the effects of explanations and outlines related user studies. Finally, it summarizes several of Verbert's application areas for explainable AI like education, analytics, agriculture, and healthcare, touching on methodologies and results.
Presentation held at the 10th Scandinavian Workshop on E-Government, Oslo, February 5-6, 2013.
The presentation was based on the discussion paper Social media in public sector innovation, available here: http://www.academia.edu/2496809/Social_media_in_public_sector_innovation
The document discusses outliers or "single-user problems" found during usability testing with only one participant. It notes that such problems are common, accounting for around 25-58% of all usability problems found. However, guidance on how to handle them is limited. The document reports on a survey of 89 usability practitioners that found varied practices for classifying or rejecting single-user problems. It provides recommendations for practitioners to establish procedures for evaluating single-user problems, consider sample size, check against guidelines, seek advice from others, and check if issues are artifacts of testing.
Sosiale medier og innovasjon i offentlig sektor web (Social media and innovation in the public sector), by Asbjørn Følstad
Presentation at the course on social media in the public sector, NTNU Master of Management, Oslo, September 27, 2012. See the course page here:
http://tinyurl.com/czruxzs
Presentation at a seminar on social media at Stord/Haugesund University College, September 21, 2011.
The presentation is based on a corresponding presentation at the Del & delta conference, May 25, 2011.
Usability evaluation in exclusive domains (presentation), by Asbjørn Følstad
The document discusses methods for usability evaluation when domain knowledge is needed but unavailable to evaluators. It reviews studies comparing evaluations done by domain experts versus usability experts. Domain experts identified more domain-specific issues and their findings were given higher priority by clients. Methods like cooperative usability testing that include dialogue with users allow access to users' domain knowledge and identify a broader range of usability issues compared to observation alone. Accessing user domain knowledge through evaluation methods pays off the most for usability evaluations in highly specialized and exclusive domains where the knowledge is otherwise unavailable.
Paper on the usefulness of accessing users' domain knowledge in domains characterized by high levels of specialization.
Presented at The first European workshop HCI evaluation and design, Limassol, Cyprus, April 9, 2011.
1. The document discusses the growing number of proposed user experience (UX) components and measures and whether this could lead to a lack of standardization as seen with satisfaction measures.
2. It suggests that UX research could study and use simple UX measures, such as single rating scales combined with free-text explanations, to collect user feedback, and analyze this feedback using ad-hoc models focused on key identified aspects.
3. An example is given of collecting user ratings and comments on conceptual mobile phone designs to identify potential UX issues to address, such as privacy, reliability, and utility.
Chi2012 analysis in practical usability evaluation web
1. Analysis in Practical Usability Evaluation: A Survey Study
Asbjørn Følstad, SINTEF
Effie Lai-Chong Law, University of Leicester
Kasper Hornbæk, University of Copenhagen
CHI 2012
2. What is analysis?
3. Analysis superficially treated in textbooks
7.5%: the share of pages in Dumas and Redish's A Practical Guide to Usability Testing devoted to "Tabulating and analyzing data" plus "Recommending changes". In contrast, preparations for a usability test are covered in 46% of the same book.
4. Few, if any, studies on how practitioners do analysis
However, the research community has developed:
- Analysis frameworks and processes: the User Action Framework (Andre, Hartson, Williges, 2001), Instant Data Analysis (Kjeldskov, Skov, Stage, 2004), and SUPEX, structured usability problem extraction (Cockton, Lavery, 1999)
- Templates and guidelines: problem description formats (Lavery, Cockton, Atkinson, 1997; Capra, 2006) and the Usability Problem Inspector (Andre, Hartson, Williges, 2003)
- Analysis tools: a Morae plugin for problem description and grouping (Howarth, Smith-Jackson, Hartson, 2009)
5. Few, if any, studies on how practitioners do analysis
… and the research and standards communities have discussed the relation between evaluation and design.
[Diagram: the human-centred design cycle linking context analysis, user requirements, design, and evaluation]
ISO 9241-210 – Human-centred design for interactive systems
6. A researcher perspective on analysis
The main output of an analysis in (formative) usability evaluation is a usability problem list to inform later design work.
Research-based processes, methods and tools are needed to support rigorous analysis.
7. But is the research user-centred?
Do we really know the practitioners? Do we know how they do usability evaluation in general – and analysis of evaluation data in particular?
8. Survey to explore analysis practices
- 155 participants, mainly recruited through SIGCHI and UPA chapters
- Experienced participants: median usability work experience of 5 years
- Questions concerned the participants' latest usability evaluation
- Latest evaluation within the last 6 months; 62% within the last 2 months
- Both usability testing and inspection targeted, with questionnaires adapted accordingly: 112 usability testing, 43 usability inspection
9. Research questions
1. How is analysis supported?
2. How are usability problems identified?
3. How do usability practitioners collaborate in analysis?
4. How is redesign integrated into the evaluation process?
10. Findings
Research question 1, how analysis is supported, was examined through: working hours for the entire evaluation, analysis resources, tools used for analysis, and structured formats for problem description.
11. Findings
Working hours for the entire evaluation:
- Usability testing: 48 (median)
- Usability inspection: 24 (median)
12. Findings
Analysis resources (UT = usability testing, UI = usability inspection):
- Heuristics / guidelines: 60% (UT), 76% (UI)
- Design patterns: 41% (UT), 54% (UI)
- Test participant opinions as a source of usability problems: 64% (UT)
- Test participant opinions as a source of redesign suggestions: 48% (UT)
13. Findings
Tools used during analysis (free-text responses):
- Screen recording and analysis software, e.g. Morae (11, all UT)
- Drawing and prototyping tools, e.g. Balsamiq, Axure (8, UT and UI)
- Plain screen recording, e.g. Camtasia, SnagIt (5, all UT)
- Web analytics, e.g. Google Analytics, Seevolution (5, all UI)
(Where are the tools from the last 20 years of research?)
14. Findings
Structured formats for problem description:
- 55%: problems described according to our own format
- 41%: no format, just plain prose
- 4%: formats from standards or literature
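To make concrete what a structured problem-description format might capture, here is a minimal sketch in Python. It is illustrative only: the record fields are assumptions for this example, not the survey's findings, a standard, or any format from the literature.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UsabilityProblem:
    """One entry in a usability problem list (hypothetical format)."""
    problem_id: str
    description: str                 # what went wrong, in plain prose
    severity: int                    # e.g. 1 (cosmetic) to 4 (catastrophic)
    affected_task: str               # task during which the problem surfaced
    evidence: List[str] = field(default_factory=list)  # e.g. video timestamps, quotes
    heuristic: Optional[str] = None  # violated heuristic or guideline, if any
    redesign_suggestion: Optional[str] = None  # proposed fix, if one was made

# Example record (all values hypothetical):
problem = UsabilityProblem(
    problem_id="UP-03",
    description="Users did not notice the 'Save' button below the fold.",
    severity=3,
    affected_task="Complete and save the registration form",
    evidence=["P2, 12:40", "P5, 08:15"],
    heuristic="Visibility of system status",
    redesign_suggestion="Keep the primary action visible in a sticky footer.",
)
```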
15. Findings
Research question 4, how redesign is integrated into the evaluation process, was examined through: evaluation deliverables containing redesign suggestions, the time of making redesign suggestions, the sources of redesign suggestions, and the means of redesign presentation.
16. Findings
The deliverable was characterized as …
- A set of redesign suggestions motivated from usability problems: 51% (UT), 53% (UI)
- A set of usability problems with some redesign suggestions: 46% (UT), 43% (UI)
- A set of usability problems with no redesign suggestions: 4% (UT), 5% (UI)
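As an illustration of the most common deliverable shape reported above, the sketch below (reusing the hypothetical UsabilityProblem record) groups each redesign suggestion with the problem that motivated it, and lists problems without suggestions plainly. The layout is an assumption, not the respondents' actual reporting format.

```python
def render_deliverable(problems):
    """Render a deliverable as redesign suggestions motivated by usability
    problems, with problems lacking a suggestion listed plainly
    (hypothetical layout)."""
    lines = []
    for p in sorted(problems, key=lambda p: -p.severity):
        if p.redesign_suggestion:
            lines.append(f"Suggestion: {p.redesign_suggestion}")
            lines.append(f"  Motivated by {p.problem_id} "
                         f"(severity {p.severity}): {p.description}")
        else:
            lines.append(f"Open problem {p.problem_id} "
                         f"(severity {p.severity}): {p.description}")
    return "\n".join(lines)

print(render_deliverable([problem]))
```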
17. Findings
When are redesign suggestions made?
- 49%: first all usability problems were identified, then redesign suggestions were made
- 46%: some (or all) redesign suggestions were made immediately upon identifying a usability problem
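These two reported orderings can be sketched as two analysis loops that differ only in when suggestions are made. The helper functions identify_problems and suggest_redesign are hypothetical stand-ins for the practitioner's judgment, not anything the survey describes.

```python
def batch_workflow(sessions, identify_problems, suggest_redesign):
    # 49% pattern: first identify all usability problems,
    # then make redesign suggestions in a separate pass.
    problems = [p for s in sessions for p in identify_problems(s)]
    for p in problems:
        p.redesign_suggestion = suggest_redesign(p)
    return problems

def interleaved_workflow(sessions, identify_problems, suggest_redesign):
    # 46% pattern: make a suggestion immediately upon identifying a problem.
    problems = []
    for s in sessions:
        for p in identify_problems(s):
            p.redesign_suggestion = suggest_redesign(p)
            problems.append(p)
    return problems
```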
18. Findings
Sources of redesign suggestions:
- In response to usability problems: 94% (UT), 74% (UI)
- A non-optimal solution, even though no usability problem was observed: 38% (UT), 47% (UI)
19. Findings
Means of redesign presentation:
- Textual descriptions: 68% (UT), 71% (UI)
- Annotated screenshots: 50% (UT), 55% (UI)
- UI digital mock-ups: 32% (UT), 47% (UI)
- Sketching: 29% (UT), 21% (UI)
20. A researcher perspective on analysis – revisited
The main output of an analysis in (formative) usability evaluation is a usability problem list to inform later design work.
Research-based processes, methods and tools are needed to support rigorous analysis.
21. A researcher perspective on analysis – revisited
Researchers need to learn from practitioners how evaluation and design are related – not vice versa.
The main output of an analysis in (formative) usability evaluation is a usability problem list and a set of redesign suggestions – the latter often visually presented.
Research-based processes, methods and tools are needed to support rigorous analysis.
22. A researcher perspective on analysis – revisited
Should we rather support home-grown analysis support, and align with commercial tools?
The main output of an analysis in (formative) usability evaluation is a usability problem list and a set of redesign suggestions – the latter often visually presented.
Research-based processes, methods and tools need to be developed in response to practitioners' needs – as seen from the practitioner perspective.
23. Conclusion
A fast-paced presentation of a selection of the survey findings. If what you heard interests you: there is more to be found in the paper :-)
Thank you for your attention!