Contextual Inquiry and Modeling Notes, Baobab Trust, March 2014, by Harry Hochheiser
The document discusses various techniques for collecting and analyzing qualitative user data, including contextual interviews, ethnography, and rapid ethnography. It provides details on how to conduct contextual interviews, capture interview data, and analyze the data through coding and modeling techniques. The models include flow models, sequence models, physical models, and cultural models for understanding workflows, steps, environments, and contexts of use. The document emphasizes iterative interpretation and validation with stakeholders to design systems that meet user needs.
Instrument development and psychometric validation 030222, by Roger Watson
This document discusses instrument development and validation. It covers questionnaire design, including response formats and standardizing questions. It also discusses establishing validity through content validity, including item-content validity index and scale-content validity index. Other topics covered include reliability, criterion validity, construct validity, and factorial validity. The document also discusses screening questionnaires, sensitivity and specificity, receiver operating characteristic curves, and how they are used to optimize diagnostic accuracy when developing screening tools.
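The sensitivity, specificity, and ROC ideas summarized above can be made concrete with a short sketch. The screening scores and cut-offs below are invented for illustration: each candidate cut-off classifies a score at or above it as screen-positive, and sweeping the cut-off traces out ROC points (1 - specificity, sensitivity).

```python
# Hypothetical screening scores for cases and non-cases (invented data).
cases = [7, 8, 9, 6, 8, 9, 5, 7]        # people with the condition
controls = [2, 4, 3, 5, 1, 4, 6, 3]     # people without it

def sens_spec(cutoff):
    """Classify score >= cutoff as screen-positive; return (sensitivity, specificity)."""
    tp = sum(s >= cutoff for s in cases)      # true positives
    fn = len(cases) - tp                      # false negatives
    tn = sum(s < cutoff for s in controls)    # true negatives
    fp = len(controls) - tn                   # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Sweep cut-offs to trace the ROC curve: each point is (1 - specificity, sensitivity).
for cutoff in range(1, 10):
    sensitivity, specificity = sens_spec(cutoff)
    print(cutoff, round(sensitivity, 2), round(specificity, 2))
```

A screening tool's operating cut-off is then chosen from this trade-off, for example the point closest to perfect sensitivity and specificity.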
The document outlines the key steps in the research process, including developing the research project, reviewing the literature, creating a research proposal, conducting a pilot study, and project management. It discusses developing a research question and aims, reviewing background literature to identify gaps, designing the methodology, and planning the analysis, reporting, and write-up of results. Methods for searching the literature systematically using databases and evaluating sources are also covered. The importance of a pilot study and of a clear timetable for the research proposal and project is emphasized.
This document discusses key considerations for clinical research design such as having a clear research question, selecting an appropriate design that best answers the question, considering feasibility factors, ensuring the work is interesting, relevant, novel, and ethical. It provides examples of common research designs like randomized controlled trials, surveys, qualitative research, and systematic reviews. It highlights common mistakes like having an overly ambitious project or deciding on methods before the research question. The conclusion emphasizes having a clear research question to guide design, methods, and getting necessary support and approvals.
Webinar on editorial policies (14 Sept 2021) by Professor Aboul Ella Hassanien
The document discusses various topics relating to authorship policies for scientific journals, including definitions of authorship, standards for listing authors, requirements for author contribution statements, and roles of corresponding authors and co-authors. It provides guidance on issues like handling deceased or incapacitated authors, changes to authorship after submission, standards for listing co-first authors, and policies regarding predatory journals. The webinar aims to outline best practices and ethical guidelines for determining authorship to ensure proper attribution and accountability in scientific publishing.
This document provides an overview of library resources and services available to NHS staff to help them practice evidence-based medicine. It discusses the library collections, databases, and training programs available. Key services include access to books, journals, and databases through OpenAthens, reference management support, and a four-part information skills training program covering induction, searching, current awareness, and critical appraisal. The training program teaches skills for finding and evaluating evidence using a systematic approach to answer clinical questions and apply results to practice.
This document provides guidance on writing research articles, protocols, dissertations, and theses. It discusses publishing research findings from a thesis to build an academic career. Key steps include selecting an appropriate journal based on impact factor and author guidelines, writing an abstract and cover letter, submitting the manuscript, and responding to peer reviews. The document also discusses developing a research question and conducting a literature review to focus the research and justify results.
This document outlines the content of a research methods course taught by Dr. Noora Al-Malki in the spring of 2014/2015. The course is 6 credit hours per week, covers both qualitative and quantitative research methods, and aims to help students write the methods section of their research proposals. Key topics covered include the components of research proposals and papers, literature reviews, questionnaire design, and qualitative methods like interviews. Quantitative methods like experiments and surveys are also discussed.
This document provides an overview of qualitative research and its applications in continuing education for healthcare professionals (CEHP). It discusses the qualitative approach, data collection methods like interviews, analysis techniques including coding, and reporting results. Qualitative research explores experiences and perceptions through open-ended questions to provide deep insights. It is well-suited for needs assessments, intervention development, and evaluation across CEHP phases. The document reviews online data collection tools, question types, interviewer behavior, and software to assist with coding, organization, and visualization of results.
Critically Analyzing Research Resources, by Oxfordlibrary
This lecture discusses the importance of evidence-based practice for dental hygienists and outlines challenges to adopting research. It notes that few dental hygienists conduct research or regularly read studies. The lecture describes the typical structure of a research article and important aspects to consider when evaluating sources. Key challenges to using research included lack of education, difficulty accessing materials, and preference to defer to dentists. Adopting evidence-based decision making could help dental hygienists provide better care by learning from scientific literature and avoiding outdated practices.
Publishing Scientific Research and How to Write High-Impact Research Papers, by jjuhlrich
The document is a presentation about publishing scientific research and writing high-impact papers. It discusses John Uhlrich's background and role as an editor at Wiley-VCH. It provides tips for selecting journals, writing cover letters, responding to referee reports, and promoting published work. The presentation emphasizes communicating the importance and implications of research, comparing results to related work, and optimizing content for discovery online.
The document discusses case study methods in research. It defines a case study as a detailed analysis of a person, group or situation that is studied holistically using one or more methods. The document outlines the advantages of case studies in improving decision making and the disadvantages of lack of generalization and being time-consuming. It also discusses explanatory, exploratory and descriptive case study designs and provides steps for conducting a case study analysis, including thoroughly reading the case, defining the central issue, identifying constraints and alternatives, and developing an implementation plan.
Webinar on Dealing With Rejection and Publication Etiquette, by Professor Aboul Ella Hassanien
This document discusses dealing with rejection of scientific papers and publication etiquette. It provides advice on how to respond productively to a rejected paper, including taking time to carefully review rejection feedback, revising the paper to address the issues raised, and potentially resubmitting to the same journal or another journal. The document emphasizes maintaining a professional demeanor and using rejection as an opportunity to improve one's work.
Taylor & Francis: Author and Researcher Workshop, by SIBiUSP
Workshop for Authors and Researchers 2015
Date: 8 October 2015
Time: 10:30 - 14:30
Venue: Auditorium of INRAD - Instituto de Radiologia do Hospital das Clínicas da Faculdade de Medicina da USP - Av. Dr. Enéas de Carvalho Aguiar, s/nº – Rua 1 – Cerqueira César – São Paulo, SP.
This document provides an overview of quality improvement (QI) concepts and tools. It discusses the key dimensions of healthcare quality and defines QI. The QI journey is summarized as building willingness for change, understanding the current system, developing aims and change ideas, testing changes using the PDSA cycle, implementing successful changes, and spreading changes. Popular QI tools introduced include driver diagrams, process mapping, the Model for Improvement, statistical process control charts, and Plan-Do-Study-Act cycles. Tips for successful QI projects emphasize clear aims, manageable scope, leadership, engagement, data, measures, and sharing learning.
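To give a flavor of one of the tools named above, a statistical process control chart is just a run of measurements plotted against a centre line and control limits, conventionally the mean plus or minus three standard deviations. The sketch below, using invented weekly counts, flags points outside those limits; it is a minimal illustration rather than a full SPC implementation, which would choose limits suited to the data type (C-charts, P-charts, etc.).

```python
import statistics

# Invented weekly counts of, say, medication errors on a ward.
counts = [12, 9, 11, 10, 13, 8, 12, 11, 25, 10, 9, 12]

mean = statistics.mean(counts)
sd = statistics.pstdev(counts)       # population SD over the baseline period
ucl = mean + 3 * sd                  # upper control limit
lcl = max(mean - 3 * sd, 0)          # lower control limit (counts cannot be negative)

print(f"centre line {mean:.1f}, limits [{lcl:.1f}, {ucl:.1f}]")
for week, value in enumerate(counts, start=1):
    flag = "SPECIAL CAUSE" if value > ucl or value < lcl else ""
    print(f"week {week:2d}: {value:3d} {flag}")
```

Points inside the limits are treated as common-cause variation; a flagged point (here the spike in week 9) prompts investigation rather than routine reaction.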
This document provides guidance on conducting patient and staff surveys at The Christie hospital. It covers the approval process, survey design, questionnaire design, information governance, and support available from the Quality Improvement, Clinical Audit and Effectiveness (QICA) team. Key points include:
- All surveys must be registered and approved by submitting a proposal form to the QICA team.
- Surveys should be necessary and avoid over-surveying patients who are under stress.
- Guidance is provided on patient selection, data collection methods, questionnaire design best practices, and information governance considerations like consent and anonymization.
- The QICA team can provide support and advice on all aspects of the survey process.
The document provides guidance on conducting patient and staff surveys at The Christie, outlining the approval process, survey design considerations, questionnaire design tips, information governance requirements, and support available from the Quality Improvement and Clinical Audit (QICA) team. Proper planning and following best practices are emphasized to ensure surveys are necessary, designed well, protect patient privacy, and can inform quality improvement efforts.
I find rejection — and even negative review comments associated with major revisions — very difficult. I have put months or years of my best work into a project, spent days or weeks writing it up, submitted it to a journal, pinned hopes on it, and waited for months for a response. And then they say it is not good enough! I can totally understand why you are feeling so unhappy. The aim of the seminar is to discuss how to deal with a journal rejection and how to write a professional rebuttal letter.
This document outlines the process of implementing goals of care conversations in long term care settings using implementation frameworks. It discusses assessing barriers and facilitators to implementation using the TICD framework through interviews with stakeholders. Key determinants identified include individual health professionals' lack of knowledge about current practice and lack of organizational monitoring and feedback. Designing implementation interventions involves matching these determinants to strategies like audit and feedback to address gaps and promote adoption of goals of care conversations for patients. The document provides an example of using frameworks in a step-by-step process to guide successful implementation of an evidence-based practice.
The document discusses cognitive walkthroughs, which evaluate software usability for learning through exploration. A cognitive walkthrough examines each user action needed to complete a task and asks questions to determine if the interface will lead the user to take the right actions and understand their progress. The document provides details on conducting cognitive walkthroughs, including defining users, tasks, and interface details, as well as questions to ask and information to capture at each step. An example walkthrough of forwarding phone calls is included to illustrate the process.
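The per-step questioning described above can be organized as a simple checklist structure. The sketch below encodes one invented action sequence for the call-forwarding example (the action names and question wording paraphrase the standard walkthrough form) and prints a record sheet for the evaluator to fill in; it is scaffolding for the method, not the method itself.

```python
# The four standard cognitive-walkthrough questions asked at every step.
QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice the correct action is available?",
    "Will the user associate the action with the intended effect?",
    "After the action, will the user see that progress was made?",
]

# Invented action sequence for a call-forwarding task.
steps = [
    "Press the 'Forward' soft key",
    "Enter the destination extension",
    "Press 'Confirm' to activate forwarding",
]

def record_sheet(task, actions):
    """Build one block per action with the four questions to be answered."""
    lines = [f"Task: {task}"]
    for i, action in enumerate(actions, start=1):
        lines.append(f"Step {i}: {action}")
        for q in QUESTIONS:
            lines.append(f"  [ ] {q}")
    return "\n".join(lines)

print(record_sheet("Forward incoming calls", steps))
```

Each unchecked box where the answer is "no" becomes a usability problem to capture, with a note on why the user would go wrong.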
This document discusses user experience design for healthcare systems. It emphasizes starting with clear goals and stakeholder input to guide design. Storyboards and paper prototypes are presented as effective early-stage tools to explore design ideas and get feedback. User testing of prototypes is an iterative process that informs refinement of goals and design. Design principles like consistency, error prevention and reducing memory load are advised to promote usability.
Using Surveys to Improve Your Library: Part 1 (Sept. 2018), by ALATechSource
This document provides an overview of using surveys to improve libraries. It discusses the assessment lifecycle of planning, implementing, analyzing, and reacting to assessment data. Key aspects of surveys covered include when they are best used, sampling populations, survey planning considerations like timing and incentives, and validating and piloting surveys. The goal is to provide libraries with best practices for conducting effective surveys to gather meaningful feedback and drive continuous improvement.
The document describes various methods for modeling user data and experiences to inform design, including:
- Conducting interviews and creating models of key flows and sequences to identify opportunities for improvement
- Developing consolidated diagrams and models to eliminate redundancy and improve roles, tasks, and communication
- Using scenario-based design and creating hypothetical scenarios to refine designs
- Analyzing scenarios to identify pros and cons of different design features
- Creating prototypes in various forms, such as paper or video, to validate designs with users through an iterative process
- The doctor was asked which drug class they prefer to use in patients who fail on oral monotherapy: SGLT2 inhibitors or DPP4 inhibitors
- They explained that they typically prefer SGLT2 inhibitors due to their beneficial effects on cardiovascular and renal outcomes as well as weight loss
- However, they noted that DPP4 inhibitors are better tolerated, with fewer side effects such as genital infections
- In the end, they said the decision depends on the individual patient's profile, comorbidities, and preferences
Burger_SSIB_Open_Sci_NutriXiv_7_2019_draft, by Kyle S. Burger
This document provides an overview of open science practices to increase rigor and reproducibility in research. It begins with a discussion of current challenges to rigor, including an overemphasis on metrics, not publishing null findings, and a lack of replication. It then outlines several open science practices like pre-registration, open materials and methods, transparent statistics and data visualization, and open data. Benefits of these practices include reducing biases like p-hacking and increasing transparency, replication, and collaboration. Concerns include increased workload and losing proprietary advantages. Overall the presentation aims to promote discussion of adopting open science practices to strengthen the quality of research.
Translational Data Sharing: Informatics Challenges and Opportunities, by Harry Hochheiser
This document discusses challenges and opportunities in sharing translational research data. It describes several case studies of data sharing efforts, including FaceBase, GRADS, and the Monarch Initiative. The key challenges highlighted are a lack of standardized metadata, differences in how phenotypes are described across species and datasets, and the need for better tools to facilitate metadata creation and data integration across diverse sources.
This document provides guidance on analyzing qualitative data collected from interviews. It discusses challenges in interpreting large amounts of interview data and extracting useful insights. It recommends approaches like coding concepts, extracting themes, developing models to graphically depict relationships, and scenario development. Specific techniques covered include open coding, axial coding, grounded theory, and affinity diagrams. Checklists are provided for coding observations. Interpretation sessions with a team and storytelling using coded results and models are also recommended for analyzing qualitative inquiry data.
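At its simplest, the open-coding step described above amounts to tagging excerpts with codes and then grouping excerpts by code so themes can be read across interviews. The sketch below does this bookkeeping with invented interview fragments and invented code names; real coding is iterative and usually done in dedicated QDA software, so this only illustrates the mechanics.

```python
from collections import defaultdict

# Invented interview excerpts, each tagged with one or more open codes.
excerpts = [
    ("P1", "I re-enter the same data in two systems", ["duplication", "workload"]),
    ("P2", "I never know if my entry actually saved", ["feedback"]),
    ("P3", "By Friday I'm too tired to double-check", ["workload"]),
    ("P1", "The confirmation screen is easy to miss", ["feedback"]),
]

# Affinity-style grouping: collect excerpts under each code.
themes = defaultdict(list)
for participant, text, codes in excerpts:
    for code in codes:
        themes[code].append((participant, text))

for code, quotes in sorted(themes.items()):
    print(f"{code} ({len(quotes)} excerpts)")
    for participant, text in quotes:
        print(f"  {participant}: {text}")
```

The grouped output is the raw material for an interpretation session: codes with many excerpts across participants suggest candidate themes, which the team can then refine into models or scenarios.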
“Focus group interviews typically have five characteristics or features: (a) people, who (b) possess certain characteristics, (c) provide data (d) of a qualitative nature (e) in a focused discussion.”
-Focus Groups: A Practical Guide for Applied Research (Krueger)
This document discusses different types of research according to purpose:
1. Descriptive research aims to accurately describe a population, situation, or phenomenon. It answers what, where, when and how questions but not why questions. Survey, observation, and case studies are common descriptive research methods.
2. Exploratory research is used to investigate problems that are not clearly defined. It helps identify issues that can be the focus of future research. Primary methods like surveys, interviews, and focus groups are used.
3. Hypothesis testing involves stating a null and alternative hypothesis, collecting data to test the hypotheses, performing a statistical test, and deciding whether to reject or fail to reject the null hypothesis based on the results.
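The hypothesis-testing steps in point 3 can be made concrete with a small sketch. Using invented scores for two groups, it states H0 (equal means), computes a two-sample test statistic, and converts it to a p-value with a normal approximation; a proper t distribution would be used for small samples, but the standard library has no t CDF, so this is illustrative only.

```python
import statistics
from statistics import NormalDist

# Invented exam scores for two teaching methods.
group_a = [72, 75, 78, 71, 74, 77, 73, 76, 75, 74]
group_b = [70, 69, 72, 68, 71, 70, 73, 69, 70, 71]

# H0: the two group means are equal; H1: they differ (two-sided test).
mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
se = (var_a / len(group_a) + var_b / len(group_b)) ** 0.5
z = (mean_a - mean_b) / se

# Two-sided p-value under a normal approximation to the test statistic.
p = 2 * (1 - NormalDist().cdf(abs(z)))
alpha = 0.05
decision = "reject H0" if p < alpha else "fail to reject H0"
print(f"z = {z:.2f}, p = {p:.4f}: {decision}")
```

The final line is the decision step: if p falls below the chosen significance level, the null hypothesis is rejected; otherwise the test fails to reject it (which is not the same as accepting it).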
This document provides guidance on how to effectively ask questions to gather user feedback. It discusses identifying goals and assumptions, engaging the right participants, formulating good open-ended questions, using follow-up questions and considering question format. Effective listening is also covered, including remaining neutral, engaging with participants and allowing silence. The overall aim is to facilitate discussions that prepare teams for gathering insightful client and user feedback.
This document discusses qualitative and quantitative research methods for understanding user needs in human-computer interaction design. It explains that qualitative research, such as interviews and observations, are especially important early in the design process to understand user behaviors, needs, and contexts. Quantitative research like surveys can miss important details for design. The document provides guidance on conducting effective qualitative user interviews, including asking open-ended questions, following up, and getting a range of participant viewpoints.
Phase II Overview of Information Literacy Assessment Project, by shannonstaley70
The document summarizes improvements made to a standardized assessment tool based on feedback from students, consultants, and library faculty. Key changes included:
1) Improving usability based on usability testing feedback, such as adding a clear menu and making it easier to select questions.
2) Conducting more cognitive interviews with students to improve question wording and clarity.
3) Adding new statistical analysis features like T-tests and correlation templates to provide more meaningful analysis of assessment data.
4) Hosting a focus group to discuss how to best use assessment data and get broader support for the tool.
Analyzing Qualitative Data for_ ResearchNirmalPoudel4
This document provides guidance on analyzing qualitative data collected through evaluations. It discusses that qualitative analysis involves identifying themes and patterns in non-numerical data sources like interviews and documents. The analysis can help understand how an intervention was implemented and its unexpected impacts. It emphasizes accurately capturing qualitative information, identifying common themes across data sources, and controlling for bias by having multiple analysts review the data.
Similar to Introduction to usability studies, presented to Baobab Health Trust (20)
This document discusses user stories and how they can be used for requirements gathering and product development. It defines user stories as short descriptions that capture a user's goal from their perspective. The document provides examples of user stories for a laboratory information management system (LIMS) and describes how to map user stories to visualize workflows and determine development priorities. It suggests using the mapped stories to create prototypes and designs for a new product.
Presentation on cognitive issues and usability, presented to Baobab Health Trust, 2015. Topics include usability measures, perception, cognition, mental models, etc.
Baobab spring 2015 usability and contextual inquiryHarry Hochheiser
The document discusses stakeholder analysis and user research methods for understanding users' needs and contexts. It provides an example of analyzing the needs of a translational scientist stakeholder working with cancer data. Key user research methods discussed include stakeholder analysis, observations, contextual interviews, participatory design, ethnography, and scenario-based design. The goal is to understand stakeholders, their tasks, tools, data needs, and current challenges to inform the design of new informatics systems and tools.
Notes on redesign of Baobab Health Trust Prescribing InterfaceHarry Hochheiser
Introductory sides for exercise in redesign of Baobab Health Trust's prescription interfaces for EMR modules. Presented to Baobab Health Trust, Lilongwe, Malawi, March 2014.
Modeling and Design Notes for HIV Testing and Counseling, Baobab HealthHarry Hochheiser
Notes on modeling and design based on interviews and observations for HIV Testing and Counseling. Presented to Baobab Health Trust, Lilongwe, Malawi, March 2014.
- The document discusses Harry Hochheiser's research in translational bioinformatics and challenges in data sharing.
- It describes the FaceBase project, which aims to compile biological data related to craniofacial development across multiple organisms and datasets.
- Effective data sharing is challenging due to the diversity of data types and projects involved; metadata and ontologies could help but have not been fully leveraged.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Communications Mining Series - Zero to Hero - Session 1
Introduction to usability studies, presented to Baobab Health Trust
1. Baobab Health, March 2014. Harry Hochheiser, harryh@pitt.edu
Usability Studies and Empirical Studies
Harry Hochheiser
University of Pittsburgh
Department of Biomedical Informatics
harryh@pitt.edu
+1 412 648 9300
Attribution-ShareAlike
CC BY-SA
3. Beyond Inspections
Inspections won't tell you which problems users will face in action
Might not identify mental models and confusions
…finding out where things go wrong.
4. No bright dividing line in process
[Diagram: design stages – paper prototype, fully-functional prototype, release – matched with evaluation methods: usability inspections, then usability studies, then empirical user studies, case studies, longitudinal studies, and acceptance tests. Low cost and low validity at the early end; higher cost and validity later.]
5. Formative Usability Studies: Goals
• Generally, to understand if the proposed design supports completion of intended tasks
• Be specific -
• Tasks and users
• Define success
• User Satisfaction?
• Do users like the tool?
• What are the important metrics?
6. Formative Usability Studies: Tasks
• Representative and specific
• What would users do?
• Realistic – given available time and resources
• Appropriate for assessment of goals
• Possibly some user-defined/suggested
• Particularly if participants were informants in earlier requirements-gathering
7. Formative Usability Studies: Which Tasks?
Bad: Give this a try?
Better: Try to send an email, find a contact, and file a response
Still better: a detailed scenario with multiple actions requiring coordinated use of diverse components of an application's functionality
8. Formative Usability Studies: Conditions
• Usability Lab
• Two-way mirrors/separate rooms
• Workspace
• Online?
• Often video and/or audio-recorded
• Screen-capture
• Logs and instrumented software
• Goal: Ecological Validity
9. Formative Usability Studies: Measures
• Key question to answer: “can users complete tasks”?
• Generally, lists of usability problems
• Description of difficulty
• Severity
• Task completion times – depending on methods
• Error rates?
• User Satisfaction
• Quantitative results for measuring success
• Not comparative
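Measures like these can be tallied mechanically from session logs. A minimal sketch in Python (the deck's named tool is R, but the idea is the same); the record layout, task names, and timing values are all invented for illustration:

```python
from statistics import mean

# Hypothetical raw results, one row per task attempt:
# (participant, task, completed?, time in seconds)
results = [
    ("P1", "send email",   True,   74.0),
    ("P1", "find contact", False, 180.0),
    ("P2", "send email",   True,   91.0),
    ("P2", "find contact", True,  142.0),
]

def completion_rate(rows):
    """Fraction of attempts that ended in task success."""
    return sum(1 for r in rows if r[2]) / len(rows)

def mean_time_successful(rows):
    """Mean task time over successful attempts only."""
    return mean(r[3] for r in rows if r[2])

print(completion_rate(results))       # 0.75
print(mean_time_successful(results))  # mean of 74.0, 91.0, 142.0
```

Severity ratings and satisfaction scores would be tallied the same way; the point is only that each measure should be defined before the sessions begin.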
10. Formative Usability Studies: Methodology
• Define Scope
• Users complete tasks
• Researchers observe process
• What happens?
• What goes right? What goes wrong?
• Note difficulties, confusions?
• Record – audio/video, screen capture
11. Formative Usability Studies: Participants
• Somewhat representative of likely users
• Willing guinea-pigs
• Need folks who are patient, willing to deal with problems
• Well-motivated
• Compensated
• Eager to use the tool
• Small numbers – repeat until diminishing returns
• How many?
12. Only 5 users – or maybe not
Nielsen – why you only need to test with 5 users: http://www.useit.com/alertbox/20000319.html
Hwang & Salvendy (2010) – maybe need 10 +/- 2
13. Two approaches
• Observation
  • Subject performs tasks, researchers observe
  • Ecological validity, but no insight into users
• “Think aloud”
  • User describes mental state and goals
14. Think-Aloud Protocols
• User describes what they are doing and why as they try to complete a task
• Describe both goals and steps taken to achieve those goals.
• Observe
• Confusions – when steps taken don't lead to expected results
• Misinterpretations – when choices don't lead to expected outcomes
• Goal: identify both micro- and macro-level usability concerns
• Strong similarities with contextual inquiry, but..
• Focus specifically on tool
• Participant encouraged to narrate
• Evaluator generally doesn’t ask questions
15. Caveats
• Think-aloud is harder than it might sound
• What is the role of the investigator?
• How much feedback to provide?
• Very Little
• What (if anything) do you say when the user runs into problems?
• Not much
• What if it's a system that you built?
• How to identify/describe a usability problem?
16. Think-Aloud Protocols: “A Comparison of Three Think-Aloud Protocols for use in Testing Data-Dissemination Web Sites for Usability” (Olmsted-Hawala et al., 2010)
"... it is recommended that rather than writing a vague statement such as 'we had participants think aloud,' practitioners need to document their type of TA protocol more completely, including the kind and frequency of probing.”
17. Reporting Usability Problems (adapted from Mack & Montaniz, 1994)
• Breakdowns in goal-directed behavior
  • Correct action, noticeable effort – to find, to execute (Gulf of Execution)
  • Confused by consequence – correct action, confusing outcome (Gulf of Evaluation)
  • Incorrect action requires recovery
  • Problem tangles
• Qualitative analysis by interface interactions
  • Objects and actions
  • Higher-level categorization of interface interactions
18. Reporting Usability Problems (adapted from Mack & Montaniz, 1994)
• Inferring possible causes of problems
• Problem reports
• Design-relevant descriptions
• Quantitative analysis of problems by severity
19. Formative Usability Studies: Analysis
• Challenge – identify problems at the right level of granularity
  • When does a series of related difficulties lead to a need for redesign?
  • What if these difficulties come from different tasks?
• When appropriate, relate usability observations back to contextual inquiry or other earlier investigations
  • Does the implementation fail to line up with the needs?
  • Perhaps in some unforeseen manner?
20. Formative Usability Studies: Analysis
• Multiple observers
• Calculate agreement metrics?
• Use audio, video, transcripts to illustrate difficulties
• Particularly useful for demonstrating problems to implementation folks
• Rate problem severity
• Which are show-stoppers and which are nuisances?
• Which require redesign vs. small changes?
• Must prioritize...
21. Completion – Summative User Studies
• Demonstrate successful execution of system
• With respect to
• Alternative system – even if straw man
• Stated performance goals – Acceptance Tests
• Generally empirical
22. Completion – Summative Studies of Systems in Use
• Case studies
• Descriptions of individual deployments
• Qualitative
• Longitudinal study of ongoing use
• Collect data regarding impact
• Similar to case studies, but potentially more quantitative.
• Use observations and interviews to see what works?
23. Summative Tests
• After system is complete
• More realistic conditions?
• Acceptance tests
  • Usability tests aimed at measuring success
  • Does the tool do what the client wants?
  • 95% task completion rate within 3 minutes, etc.?
  • Client has a clearer idea – not just “user friendly”
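An acceptance criterion like "95% of tasks completed within 3 minutes" can be checked automatically once task attempts are logged. A minimal Python sketch; the threshold, target rate, and attempt data are assumptions for illustration:

```python
# Acceptance check for "95% of tasks completed within 3 minutes".
TIME_LIMIT_S = 180.0  # 3 minutes
TARGET_RATE = 0.95

# (completed?, elapsed seconds) per attempt; illustrative data.
attempts = [(True, 95.0), (True, 122.0), (True, 171.0), (True, 60.0),
            (True, 140.0), (True, 88.0), (True, 133.0), (True, 162.0),
            (True, 45.0), (False, 180.0)]

passed = sum(1 for ok, t in attempts if ok and t <= TIME_LIMIT_S)
rate = passed / len(attempts)
print(f"completion rate: {rate:.0%}, target met: {rate >= TARGET_RATE}")
# With this data: 90%, so the acceptance test fails.
```

The value of stating the criterion numerically is exactly this: the client and the team agree in advance on what "user friendly enough" means.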
24. What: Empirical Studies
• Quantitative measure of some aspect of successful system use
• Task completion time (faster is better)
• Error rate
• Learnability
• Retention
• User satisfaction...
• Quality of output?
25. Tension in empirical studies
• Metrics that are easy to measure may not be most interesting
• Task completion time
• Error rate
• Great for repetitive data entry tasks, less so for complex tasks
• Analytics, writing...
26. Empirical User Studies: Goals
• I have two interfaces – A and B.
• Which is better? and how much better?
• Want to determine if there is a measurable, consistent difference in
• Task completion times
• Error rates
• Learnability
• Memorability
• Satisfaction
27. Running Example: Menu Structures
• Hierarchical Menu structures
• Multiple possibilities for any number of leaf nodes
• Broad/Shallow vs. Narrow/Deep
• which is faster?
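One way to form an expectation before measuring is a back-of-the-envelope model. The sketch below applies Hick's law (choice time grows with log2(n + 1) alternatives) to two hierarchies over 64 leaf items; the coefficients, and whether Hick's law fits visual menu scanning at all, are assumptions for illustration, not findings from these slides:

```python
import math

# Hick's law for one choice among n alternatives: T = a + b * log2(n + 1).
A, B = 0.2, 0.15  # illustrative coefficients, in seconds

def selection_time(branching, depth):
    """Total decision time to reach a leaf: one choice per menu level."""
    per_level = A + B * math.log2(branching + 1)
    return depth * per_level

# 64 leaf items organised two ways:
broad_shallow = selection_time(branching=8, depth=2)  # 8 x 8
narrow_deep   = selection_time(branching=2, depth=6)  # 2^6

print(f"broad/shallow: {broad_shallow:.2f}s, narrow/deep: {narrow_deep:.2f}s")
```

Under this model the broad/shallow tree wins; the empirical study then tests whether real users actually behave that way.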
28. Hypothesis
• Testable Theory about the world
• Galileo: The rate at which falling items fall is independent of their weight
• Menus
  • Users will be able to find items more quickly with broad/shallow trees than with narrow/deep trees
• Often stated as a “null hypothesis” that you expect will be disproven:
  • There will be no difference in task performance time between broad/shallow trees and narrow/deep trees
29. Background/Context
• Controlled experiments from cognitive psychology
• State a testable/falsifiable hypothesis
• Identify a small number of independent variables to manipulate
• hold all else constant
• choose dependent variables
• assign users to groups
• collect data
• statistically analyze & model
30. Other goals
• Strive for
• removal of bias
• replicable results
• Generalizable theory that can inform future work
• or, demonstrable evidence of preference for one design over another.
31. Empirical User Studies: Tasks
• Use variants of the design to complete some meaningful operation
• Usually relatively close-ended, well-defined
• Relatively clear success/failure
32. Empirical User Studies: Conditions
• Lab-like?
• Simulated realistic conditions?
33. Independent Variables
• What are you going to test?
• Condition that is “independent” of results
• independent of user's behaviors
• independent of what you're measuring.
• one of 2 (or 3 or 4) things you're comparing.
• can arise from subjects being classified into groups
• Examples
• Galileo: dropping a feather vs. bowling ball
• Menu structures – broad/shallow vs. narrow/deep
34. Dependent Variables
• The values your hypothesis makes predictions about
  • falling time
  • task performance time, etc.
• May have more than one
• Goal: show that changes in independent variables lead to measurable, reliable changes in dependent variables
• With multiple independent variables, look for interactions
  • Differences between interfaces increase with differences in task complexity
35. Controls
• In order to reliably say that independent variables are responsible for changes in dependent variables, we must control for possible confounds
• Control – keep other possible factors constant for each condition/value of independent variables
  • types of users, contexts, network speeds, computing environments
• Confound – an uncontrolled factor that could lead to an alternate explanation for the results
• What happens if you don't control as much as possible?
  • Confounds, not independent variables, may be the cause of changes in dependent variables
36. Examples of Controls
• Galileo:
• windy day vs. not windy?
• Menus
• network speed/delays? (do everything on one machine)
• skills of users? (more on participant selection later)
• font size, display information, etc.?
37. Bias
• Related to controls
• Experimenter can introduce biases that might influence outcomes
  • Instructions?
  • Choice of participants? (more on this in a moment)
  • Protocols – prepare scripts ahead of time
  • Learning effects?
[Figure omitted; thanks to Jinjuan Feng for the figure]
38. Between-Groups vs. Within-Groups Design
• How do you assign participants to conditions?
• All people do all tasks/cells?
• Within-groups – compare within groups of individuals.
• one group of test participants
• Certain people for certain cells?
• between groups – compare between groups of individuals
• 2 or more groups
• Mixed models
39. Between Groups
• Pros
• Simpler design
• Avoid learning effect
• Don't have to worry about ordering
• Cons
• may need more participants
• to get enough data for statistical tests
• to avoid influence of some individuals.
40. Within-Groups
• Pros:
• Can be more powerful statistically
• same person uses each of multiple interfaces
• Fewer Participants
• Cons
  • Learning effects require appropriate randomization of tasks/interfaces
  • Fatigue is possible
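The ordering problem above is commonly handled by counterbalancing: rotate the interface order across participants so that each interface appears in each serial position equally often. A minimal Latin-square sketch in Python (the interface names are illustrative):

```python
# Counterbalance interface order across participants with a cyclic
# Latin square: each interface appears once in each position.
interfaces = ["broad/shallow", "narrow/deep", "hybrid"]

def latin_square_orders(conditions):
    """One presentation order per row of the square."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)]
            for row in range(n)]

orders = latin_square_orders(interfaces)
for participant, order in enumerate(orders, start=1):
    print(f"P{participant}: {order}")
# Recruit participants in multiples of len(interfaces) and cycle
# through the rows so positions stay balanced.
```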
41. Mixed Models
• Elements of both
• Example: 3 different interfaces
• Want to compare performance of different groups
• Doctors vs. nurses?
• Each interface comparison is a within-subjects experiment
• The comparison across professions is between-subjects.
Other Challenges
• Ordering tasks?
• How many?
• Want to avoid fatigue, boredom, and expense of long sessions
• How many users?
• 20 or more?
• Variability among subjects
• May be unforeseen.
• Bimodal distribution of education or computer experience?
• Training materials
• Run a pilot
Procedure
• Users conduct tasks
• Measure
• record task completion times
• errors
• etc.
• Now what?
• Analyze the data to see if there is support for the hypothesis
• or whether observed differences are just chance
Hypothesis Testing
• Not about proof or disproof
• Instead, examine the data
• Find the likelihood that the data would occur by chance if the null hypothesis were true
• If this likelihood is small, say that we have support for the hypothesis
Data, Stats, and R
• Need to talk about
• data distributions
• statistical analyses
• to do hypothesis testing
• Tools:
• R - r-project.org
• R-Studio - rstudio.org
Sampling
• Data sets come from some ideal universe
• all possible task performance times for a given menu selection
task
• Compare two samples with given means and deviations
• Are they really different? Or do they just appear different by
chance?
• Statistical testing gives us a p-value
• the probability of seeing differences this large by random chance alone
• low values indicate statistical significance
The key question
• Given two sets of measurements (samples), did they come from the same underlying source or distribution?
• x = [29 33 89 56 86 85 7 84 67 78 59 28 10 76 11 12 97 61
66 9 40 95 90 4 31 18 24 48 45 82]
• y = [51 3 10 11 5 90 87 13 64 86 67 98 12 55 56 80 59 63 94
93 25 4 79 52 36 73 99 22 62 2]
• mean(x) = 50.67, sd(x)=31.01
• mean(y) = 51.7, sd(y) = 33.26
• are they from the same distribution?
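In R this comparison is a one-line t.test(x, y). The same Welch statistic can also be computed by hand, which makes the formula visible; Python is used here purely for illustration:

```python
import math
from statistics import mean, stdev

x = [29, 33, 89, 56, 86, 85, 7, 84, 67, 78, 59, 28, 10, 76, 11, 12,
     97, 61, 66, 9, 40, 95, 90, 4, 31, 18, 24, 48, 45, 82]
y = [51, 3, 10, 11, 5, 90, 87, 13, 64, 86, 67, 98, 12, 55, 56, 80,
     59, 63, 94, 93, 25, 4, 79, 52, 36, 73, 99, 22, 62, 2]

nx, ny = len(x), len(y)
vx, vy = stdev(x) ** 2, stdev(y) ** 2   # sample variances

# Welch's t statistic: does not assume equal variances.
se = math.sqrt(vx / nx + vy / ny)
t = (mean(x) - mean(y)) / se

# Welch-Satterthwaite approximation for the degrees of freedom.
df = (vx / nx + vy / ny) ** 2 / (
    (vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1)
)

print(round(t, 4), round(df, 2))  # -0.1245 57.72
```

With |t| this small, the difference in means is entirely consistent with both samples coming from the same distribution.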
Boxplot
• Show quartiles
• Are they the same?
“Normal” distributions
• Characterized by mean and standard deviation (a measure of variation)
• 95% of the area under the curve lies within 2 standard deviations of the mean
• If you take many samples from a population
• the distribution of the sample averages approaches a normal distribution (the central limit theorem)
• Statistical testing -> comparison of distributions.
Histograms
• Sample a subset of a population 1000 times
• take the average of each subset
• the averages form a normal distribution
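The simulation the slide describes is easy to reproduce. A sketch (the uniform population and the seed are arbitrary choices):

```python
import random
from statistics import mean, stdev

rng = random.Random(1)
population = [rng.uniform(0, 100) for _ in range(10_000)]  # flat, not normal

# Draw a subset 1000 times and record the average of each subset.
sample_means = [mean(rng.sample(population, 30)) for _ in range(1000)]

# The averages cluster around the population mean and look normal
# (central limit theorem); their spread shrinks roughly as sd / sqrt(n).
print(round(mean(sample_means), 1), round(stdev(sample_means), 1))
```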
Hypothesis testing
• Test probability that there is no difference between two
distributions
• Possible errors
• Type 1 Error (α): reject the null hypothesis when it is true
• believe there is a difference when there is none
• False positive
• Type 2 Error (β): accept the null hypothesis when it is false
• believe there is no difference when there is one
• False negative
Significance Levels and Errors
• Highly significant (p < 0.001)
• Don't believe there is a difference unless it's really clear
• Low chance of a false positive – Type 1
• Greater chance of a false negative – Type 2
• Less significant (p < 0.05)
• More ready to believe there is a difference
• More false positives / Type 1 errors
• Fewer Type 2 errors
• Usually use p = 0.05 as the cut-off.
Type 1 and Type 2 errors
Type 1 error: reject the null hypothesis when it is, in fact, true
Type 2 error: accept the null hypothesis when it is, in fact, false
(Figure: 2×2 table of decision vs. reality)
Results
Welch Two Sample t-test
data: x and y
t = -0.1245, df = 57.72, p-value = 0.9014
alternative hypothesis: true difference in means is not equal
to 0
95 percent confidence interval:
-17.65522 15.58855
sample estimates:
mean of x mean of y
50.66667 51.70000
xkcd on significance testing
http://xkcd.com/882/
Correlation
• Be careful attributing causality
• a correlation does not imply cause and effect
• the cause may be a third "hidden" variable related to both observed variables
• Beware of drawing strong conclusions from small numbers
• correlations are unreliable with small groups
Regression
Calculates a line of “best fit”
Use the value of one variable to predict the value of the other
r² = 0.67, p < 0.01
r = 0.82
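The quantities behind such a fit follow directly from the least-squares definitions. A toy example (made-up data chosen to lie exactly on a line, so the numbers are easy to check):

```python
import math
from statistics import mean

# Hypothetical data, lying exactly on y = 2x + 1.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]

sxx = sum((x - mean(xs)) ** 2 for x in xs)
sxy = sum((x - mean(xs)) * (y - mean(ys)) for x, y in zip(xs, ys))
syy = sum((y - mean(ys)) ** 2 for y in ys)

slope = sxy / sxx                      # b in y = a + b*x
intercept = mean(ys) - slope * mean(xs)
r = sxy / math.sqrt(sxx * syy)         # Pearson correlation

print(slope, intercept, round(r, 2))   # 2.0 1.0 1.0
```

With noisy real data r falls below 1, and r² reports the fraction of variance in y explained by the fit.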
Be careful
http://xkcd.com/552/
User Modeling
Hourcade, et al. 2004
Predict performance characteristics?
Calculate index of difficulty
similar to Fitts' law: MT = a + b log2(A/W + 1)
Linear regression to see how well it fits
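A worked instance of the prediction step. The coefficients a and b below are invented for illustration; in practice they come out of the regression fit:

```python
import math

# Fitts'-law-style prediction: MT = a + b * log2(A/W + 1)
# a (intercept) and b (slope) are hypothetical, in seconds;
# A is the distance to the target, W is the target width.
a, b = 0.2, 0.1

def movement_time(A, W):
    index_of_difficulty = math.log2(A / W + 1)  # in bits
    return a + b * index_of_difficulty

# Larger distance or smaller width raises ID, hence predicted time.
print(round(movement_time(256, 32), 3))  # ID = log2(9) ≈ 3.17 → ≈ 0.517
```

Fitting observed movement times against the index of difficulty with linear regression, as on the slide, shows how well the model describes a given input device or interface.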
Longitudinal use
• Lab studies are artificial
• Many tools used over time.
• use and understanding evolve
• Longitudinal studies look at usage over time
• Expensive, but better data
• Techniques
• Interviews, usability tests with multiple sessions, continuous data logging, instrumented software, diaries
Case Studies
• In-depth work with small number of users
• Multiple sessions
• Describe scenarios
• Illustrate use of tool to accomplish goals
• Good for novel designs, expert users
• Formative evaluation – can be used to gather requirements
• Summative evaluation – can demonstrate the validity of an idea
• Possibly less compelling than controlled usability evaluations.
Informed Consent
• Research must be done in a way that protects participants
• Principles
• Respect for persons
• Beneficence – minimize possible harms, maximize possible benefits
• Justice – costs and benefits should not be limited to certain
populations
• Institutional Review Board (IRB) – approves experiments
and requires signatures on “informed consent” form.
• Crucial for responsible research
Other Metrics
What if task completion time is not the most important
metric?
Insight?
Automated Usability Testing
Possible for defined criteria
Text complexity?
Accessibility
WCAG
Section 508
Example: wave.webaim.org.
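As a toy illustration of what "defined criteria" means (real tools such as WAVE check far more), here is a check for img elements missing alt text, one of the basic WCAG/Section 508 requirements:

```python
from html.parser import HTMLParser

# Flag <img> tags that have no alt attribute.
class AltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing_alt += 1

checker = AltChecker()
checker.feed('<p><img src="a.png"><img src="b.png" alt="chart of results"></p>')
print(checker.missing_alt)  # 1
```

Checks like this are automatable precisely because the criterion is mechanical; judging whether the alt text is actually meaningful still needs a human.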
Log File Analysis
• Use clickstream and usage data to study actual use
• Which parts of the system are people using?
• Which are they not using?
• Are they going in circles?
• Are they having problems?
• Rich data, but hard to interpret
• particularly without observations or interviews to
provide context.
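A sketch of the kind of analysis involved, over invented events (real clickstream formats vary widely):

```python
from collections import Counter

# Hypothetical clickstream: one (user, page) event per log line.
log = [
    ("u1", "home"), ("u1", "search"), ("u1", "results"),
    ("u1", "search"), ("u1", "results"), ("u1", "search"),  # going in circles?
    ("u2", "home"), ("u2", "reports"),
]

# Which parts of the system are people using?
page_counts = Counter(page for _, page in log)
print(page_counts.most_common(2))

# Crude "circling" heuristic: count repeated page-to-page transitions
# within a single user's session.
def transition_counts(events):
    pairs = [(events[i][1], events[i + 1][1])
             for i in range(len(events) - 1)
             if events[i][0] == events[i + 1][0]]
    return Counter(pairs)

pairs = transition_counts(log)
print(pairs[("search", "results")])  # u1 bounced between search and results
```

Even this tiny example shows the interpretation problem: the counts say u1 cycled between search and results, but only an observation or interview can say whether that was a failed search or normal browsing.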
Shortcomings of User Studies
What happens in the lab may not be reflected in real use
Deployment/post-mortem, etc.
Case studies, qualitative work
How can we meaningfully evaluate a system in use
… when deployment presents a significant expense?