The document discusses survey reduction techniques that cut administration time and increase engagement while maintaining data quality. It recommends making surveys feel short by streamlining the entire experience. Specific techniques include reducing instructions, using archival demographics, placing eligibility questions strategically, and applying skip/branch logic. Scales can be shortened through item analysis, factor analysis, and relating items to external criteria. Unobtrusive observation and one-item measures are also discussed as alternatives to surveys. Trade-offs of reduced surveys include potentially lower construct coverage and less information obtained per respondent.
Questionnaire Design Business Research - ssanand_1985
The document discusses designing effective questionnaires for research purposes: determining which questions to ask, how to phrase them clearly and without bias, the best sequence of questions, and the layout and format that best serve the research objectives. It also covers special considerations for designing questionnaires for global markets. Extensive pretesting and revision are important to ensure the questionnaire gathers the intended information without issues.
This document discusses various methods for collecting data during the system analysis stage of a project: interviews, questionnaires, observation, record searching, and document analysis. It provides details on planning and conducting interviews, designing questionnaires, performing structured observation, using record searching to obtain quantitative information, and analyzing documents to understand how information is organized in a system. The goal of collecting data is to understand the current system, identify problems and user needs, and gather facts to help develop solutions.
This document discusses various qualitative research methods used in marketing research, including focus groups, depth interviews, and projective techniques. It provides details on how to plan and conduct focus groups, including ideal group size and composition. Key qualifications for moderators are listed, along with advantages and disadvantages of different qualitative research methods. Variations of focus groups and projective techniques are also outlined.
This document outlines four methods for fact finding: observation, examination of documents, questionnaires, and interviews. It provides details on each method, including advantages and disadvantages. The key methods are observation to see what actually happens, examining documents like forms and reports used in the current system, distributing questionnaires to gather information from many people, and conducting interviews to ask follow-up questions and clarify misunderstandings. A recommended fact-finding strategy is to start by learning from existing materials, then observe the system, distribute questionnaires, conduct interviews, and follow up to verify the collected facts.
This document provides an overview of UX research methods. It begins with an introduction to big thinking in UX and discusses common biases in customer research such as confirmation bias and framing effect. The document then defines terms like market research, user research, and UX research. It provides examples of case studies and describes various methodologies for conducting UX research like contextual inquiry, diary studies, card sorting, and usability testing. Details are given for each methodology including when to use it, how to conduct it, types of data collected, and example tools. The document concludes with a section on innovation game techniques.
eMba ii rm unit-3.2 questionnaire design a - Rai University
1. Dainik Bhaskar conducted extensive market research before launching its Gujarati newspaper Divya Bhaskar. It surveyed almost the entire population of Ahmedabad to understand readers' preferences.
2. The census-level market research uncovered valuable insights about what readers wanted. Divya Bhaskar tailored its offerings like content, size, and pricing based on these findings. As a result, it became the top newspaper on day one, overtaking the existing leader.
3. A census provides complete information about the target market without sampling errors. For a new market entry, Dainik Bhaskar felt a census would be more accurate than a sample survey to design an optimal product and launch strategy.
Survey Methodology and Questionnaire Design Theory Part I - Qualtrics
Do you know what's going on in your respondents' heads as they take your survey? How can you design your questionnaire to collect better data? Understanding the answers to these questions can help you design surveys that collect high quality insights you can depend on.
Dave Vannette, principal research scientist at Qualtrics, shares his best hacks for designing surveys that will help you get quality data. In this presentation, Dave also highlights what your respondents are thinking when they take your surveys, and how your survey design can affect the responses you collect.
The document discusses best practices for conducting surveys and writing questionnaires. It covers different methods of survey administration like face-to-face interviews, telephone interviews, and online questionnaires. It also discusses question formats, including open-ended and closed-ended questions, as well as tips for writing clear, unbiased questions to get accurate responses.
Qualitative research techniques involve collecting unstructured data to understand motivations and perspectives. Common techniques include focus groups, depth interviews, and projective techniques. Focus groups involve moderated group discussions to generate ideas and understand needs, attitudes, and perceptions. They provide synergism, spontaneity, and cost savings but lack representativeness. Qualitative research complements quantitative research by explaining results.
Surveys are an efficient way to gather information from users. They can range from simple to complex and use closed or open-ended questions. Online survey tools make administration and analysis easier. Good survey design involves determining goals, sample size, question wording free of bias, logical flow, and testing. Analysis requires looking at trends in raw data. Reporting should include an executive summary and presentation of data. Examples show poor question wording can introduce bias or be convoluted.
The document discusses case study methods in research. It defines a case study as a detailed analysis of a person, group or situation that is studied holistically using one or more methods. The document outlines the advantages of case studies in improving decision making and the disadvantages of lack of generalization and being time-consuming. It also discusses explanatory, exploratory and descriptive case study designs and provides steps for conducting a case study analysis, including thoroughly reading the case, defining the central issue, identifying constraints and alternatives, and developing an implementation plan.
This document provides guidance on developing and conducting surveys. It discusses when to use surveys and outlines key steps in the survey process, including determining the purpose and intended users, developing survey items and response formats, reviewing items, pilot testing, administration, analysis and communication of results. The goal is to help users obtain useful information through systematic and well-designed surveys. Professional assistance is recommended, as surveys require expertise in areas like sampling and statistical analysis.
This document discusses the design and use of questionnaires for research purposes. It explains that a questionnaire is a set of standardized questions used to collect statistical information from a specific demographic to achieve research objectives. Proper questionnaire design ensures the data is comparable across respondents, while improper design can lead to incomplete or inaccurate data collection. The document outlines different types of questionnaire structures and questions, provides guidance on questionnaire construction and testing, and discusses methods to improve response rates. Key advantages of the questionnaire method include low cost, large sample coverage, and ability to collect repetitive information over time or large areas.
Fact finding involves collecting data and information through various techniques to analyze existing systems and identify requirements, including sampling documentation, research, observation, questionnaires, interviews, prototyping, and joint requirements planning. Some key techniques are interviews to gather facts, opinions, and requirements from users; prototyping to create small working models for gathering early-stage requirements; and joint requirements planning as a structured group meeting to identify problems, objectives, and requirements through participation from various stakeholders.
The document discusses various fact-finding techniques used by systems analysts to identify requirements, including sampling existing documentation, observation, questionnaires, interviews, prototyping, and joint requirements planning. It describes the difference between functional and non-functional requirements, and the importance of properly identifying and managing requirements to avoid cost overruns, delays, user dissatisfaction, and other issues. The fact-finding process involves problem analysis, requirements discovery, documentation, and ongoing management of requirements as needs change over the project lifecycle.
Step Up Your Survey Research - Dawn of the Data Age Lecture Series - Luciano Pesci, PhD
Most surveys are terrible. From poorly designed questions, to incoherent survey flow, to useless results, it’s no wonder data-driven organizations have so little faith in survey research. But this isn’t the fault of the tool; it’s because most surveys are built without following some basic best practices which, once applied, can transform any survey from a zero to a hero. This lecture will show you how to create data-science-quality surveys that provide unique and immediately actionable insight about your customers, competitors, and marketplace.
This Lecture Will:
- Explain the data science approach to survey layout and question design.
- Show how to increase response and completion rates through iterative testing.
- Show how to link survey results to other data sources to enrich your analysis.
You can watch this lecture here: https://youtu.be/WuBenXuVzqc
This document discusses evidence-based decision making in organizations. It begins by defining evidence-based practice as the conscientious, explicit, and judicious use of the best available evidence from multiple sources to increase the likelihood of a favorable outcome. It then addresses some common myths about what evidence-based practice is and isn't. The document examines the classic argument for why organizations need evidence-based decision making, which is that it can help overcome biases, fads, and failures in decision making. However, it also notes potential challenges to this argument, such as whether decision making is truly dysfunctional or whether leaders feel successful without evidence-based practices. It concludes by considering alternative ways to promote evidence-based management beyond the classic "why" argument.
A presentation outlining what primary research is and how to conduct and analyze it. The presentation compares primary and secondary research. It walks the audience through selecting research objectives and methods, how to draft a study, and how to recruit appropriate respondents. It discusses interview techniques and provides some basics on analyzing data and drawing conclusions. This presentation is aimed at start-ups and entrepreneurs looking to conduct their own research on modest budgets and timelines.
Tools campus workshop 17.3.11 bapp wbs3835 qual r - Paula Nottingham
The document provides information on various inquiry tools for collecting data, including observations, surveys, interviews, and analyzing collected data. It discusses how observations can be used to record behaviors and events. Surveys are used to gather data from a wide range of respondents through questionnaires. Interviews allow researchers to ask follow-up questions and probe responses. Proper sampling, consent, and data analysis are important considerations for these tools.
This document outlines various primary research methods that could be used for a project on non-wovens and composite materials for automotive parts. It discusses questionnaires, interviews, focus groups, surveys, observations, prototypes/experiments, and online research. The author indicates they will use questionnaires, focus groups, and prototypes as their main methods through triangulation. Questionnaires will gather technical data from experts, focus groups will provide different perspectives, and prototypes will allow testing ideas and identifying errors. Personal reflection on methods and a pilot study are also recommended to strengthen the research.
Survey Methodology and Questionnaire Design Theory Part II - Qualtrics
This document provides best practices for questionnaire design, including response options, question wording, and question order. Some key points:
- Open-ended questions are preferred when the possible answers are unknown, but they have drawbacks: responses take more time to give and must be coded before analysis.
- Use 7-point scales for bipolar constructs and 5-point scales for unipolar. Include middle alternatives and branching for more nuanced responses.
- Construct-specific scales are better than generic scales. Label all scale points for clarity.
- Carefully word questions to be simple, direct, specific, and avoid bias. Pre-test questions.
- Consider using existing validated questions if applicable, but pre-test them to ensure they perform as intended in the new context.
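The scale guidance above (7-point fully labeled bipolar scales with a middle alternative, 5-point unipolar scales) can be sketched as plain data structures. The question texts and labels are illustrative assumptions, not tied to any particular survey platform's API:

```python
# A fully labeled 7-point bipolar scale with an explicit middle alternative.
satisfaction_item = {
    "text": "How satisfied or dissatisfied are you with our support?",
    "scale": [
        "Extremely dissatisfied",
        "Moderately dissatisfied",
        "Slightly dissatisfied",
        "Neither satisfied nor dissatisfied",  # middle alternative
        "Slightly satisfied",
        "Moderately satisfied",
        "Extremely satisfied",
    ],
}

# A unipolar construct (e.g., importance) gets 5 labeled points.
importance_item = {
    "text": "How important is price when choosing a provider?",
    "scale": [
        "Not at all important",
        "Slightly important",
        "Moderately important",
        "Very important",
        "Extremely important",
    ],
}
```

Note the labels are construct-specific ("satisfied", "important") rather than generic agree/disagree anchors, and every point carries a label, per the guidance above.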
This document discusses various methods for evaluating the usability of systems, including both analytic methods conducted by experts and empirical methods involving observations of and surveys with users. Empirical evaluations aim to draw valid conclusions about real-world usage but can be challenging due to issues with the representativeness of test users, the realism of test contexts and tasks, and whether collected data truly reflects real impacts. Field studies observe users in realistic contexts but are time-consuming, while lab studies allow more control but also reduce realism. Interviews rely on subjective user memory and perspective. Statistics like t-tests and ANOVAs can be used to analyze empirical data and determine statistical significance.
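As a minimal sketch of the statistical analysis mentioned, here is Welch's t statistic computed by hand for two hypothetical samples of task-completion times; the data are made up for illustration:

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical task-completion times (seconds) from a lab study.
old_ui = [48, 52, 60, 55, 49, 58, 62, 51]
new_ui = [41, 44, 39, 47, 42, 45, 40, 43]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(old_ui, new_ui)  # a large |t| suggests a real difference
```

In practice a library routine (e.g., SciPy's independent-samples t-test with unequal variances) would also give degrees of freedom and a p-value for the significance judgment.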
1) Managers are criticized for incompetence and mismanagement, leading to calls for more accountability and transparency in decision-making.
2) Only about half of what is learned is still considered valid after 7 years, so reliance on past experience can spread outdated or incorrect information.
3) Research shows that managers are often incorrect in their beliefs about effective practices, scoring on average only 35-57% correct on true/false questions.
4) The main reason evidence-based management is needed is because of "bounded rationality" - managers rely on intuitive thinking which is prone to biases that can lead to flawed decisions without objective evidence to counteract them.
The document discusses the problem solving or engineering design process. It begins with defining the problem clearly and gathering background information. Then it describes developing alternative solutions, selecting the best solution, creating a prototype, evaluating the solution, and communicating the results. The process is iterative and may involve redesigning and improving the solution. Finally, it notes similarities between the scientific method and engineering design process.
The document discusses techniques for assessing nonresponse bias in surveys, called N-BIAS methods. It describes 8 techniques: archival analysis, follow-up survey of non-respondents, wave analysis, passive non-response analysis, active non-respondent pre-study analysis, interest level analysis, worst case resistance analysis, and benchmarking. The techniques are used to detect, estimate, and potentially compensate for nonresponse bias in order to provide a clearer picture of its extent, though options for eliminating it are limited.
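Wave analysis, one of the N-BIAS techniques listed, can be sketched by comparing early and late respondents, with late respondents serving as a rough proxy for nonrespondents. The records and the 7-day cutoff are illustrative assumptions:

```python
from statistics import mean

# Hypothetical records: (days_until_response, satisfaction_score on 1-5).
records = [
    (1, 4.2), (2, 4.0), (2, 4.4), (3, 4.1),      # early wave
    (12, 3.1), (13, 3.2), (14, 3.4), (15, 2.9),  # late wave, after a reminder
]

def wave_analysis(records, cutoff_days=7):
    """Compare early vs. late waves; a large gap hints at nonresponse bias."""
    early = [score for days, score in records if days <= cutoff_days]
    late = [score for days, score in records if days > cutoff_days]
    return mean(early), mean(late)

early_mean, late_mean = wave_analysis(records)
gap = early_mean - late_mean  # here about 1 point on a 5-point scale
```

A gap this large would suggest that the full-sample estimate is biased upward, since people who never responded likely resemble the late wave more than the early one.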
The document summarizes strategies for encouraging responses in online surveys. It discusses the tailored design method proposed by Dillman and the leverage-saliency theory of Groves. Key response enhancing techniques include pre-notification, incentives, personalization, question order, survey length, and reminders. Combining multiple techniques can incrementally increase response rates by a few percentage points each. The effectiveness of techniques depends on respondent characteristics and survey context.
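The incremental effect of combining techniques can be illustrated with a simple additive sketch. The baseline rate and per-technique uplifts below are assumptions for the example, and real uplifts are not strictly additive; they depend on respondent characteristics and survey context, as noted above:

```python
# Assumed baseline response rate and per-technique uplifts (illustrative).
baseline = 0.20
uplifts = {
    "pre_notification": 0.03,
    "incentive": 0.05,
    "personalization": 0.02,
    "reminder": 0.04,
}

# Naive additive model: each technique contributes a few percentage points.
expected_rate = baseline + sum(uplifts.values())
```

Under these assumed numbers the combined rate rises from 20% to roughly 34%, matching the "few percentage points each" pattern described above.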
This document discusses several promising techniques for future data collection, including visual surveys, audio/video interviewing, virtual worlds, web scraping, social network mapping, embedded polls on sites like Facebook, and collecting data from mobile devices. It provides examples of platforms that can be used to implement each technique and highlights advantages like reduced costs and wider reach, as well as disadvantages like technical requirements.
This document discusses the promise and challenges of developing a unitary doctoral curriculum across information schools. While a unitary curriculum could promote coherence, reduce chaos, and establish a common identity, there are also concerns. Specifically, the interdisciplinarity of information fields makes unification difficult, and individual schools may prioritize differentiation over coherence. The document also examines approaches to defining goals for doctoral programs, including constructing a prototype graduate and focusing on scientific research that improves life. A scientist-practitioner model is proposed, emphasizing evidence-based practices, application of information phenomena, and professional skills.
This document discusses moving data between Excel and R. It explains that R maintains a current working directory to simplify reading and saving files. It also discusses using the clipboard to copy small chunks of data between Excel and R, including variable names. The best option is to put the clipboard data into a dataframe. Dataframes are explained as lists of vectors that can hold different data types. The document demonstrates importing and exporting CSV files between Excel and R and using the data interchange format to exchange files between the two programs. It suggests tasks for demonstrating mastery of these skills, such as importing/exporting CSV files and using dataframes.
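The CSV round trip described for Excel and R has a direct analogue in any language. Here is a minimal Python sketch (standard library only, with invented column names) of exporting a small dataframe-like structure with a header row of variable names and reading it back.

```python
import csv
import io

# A dataframe is essentially a set of equal-length columns that may hold
# different types; here we mimic a tiny one with a dict of lists.
frame = {"name": ["north", "south"], "sales": [120, 95]}

# Export to CSV (Excel opens this directly), keeping variable names as a header row
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(frame.keys())
writer.writerows(zip(*frame.values()))

# Re-import: the header row becomes the variable names again
buf.seek(0)
reader = csv.reader(buf)
header = next(reader)
rows = list(reader)  # note: values come back as strings until converted
```

The same pattern works against a file on disk; the in-memory buffer just keeps the sketch self-contained.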
The document discusses key principles for effective survey design according to Dillman's Tailored Design Method. It emphasizes increasing the benefits of participation through rewards, validation, and interest while decreasing barriers through clear, concise, and respectful language and questions. Fostering trust is also important through tokens of appreciation, legitimacy of the study and sponsor, and ensuring the task feels important and worthwhile.
Carma internet research module: getting started with QuestionPro (Syracuse University)
This document provides instructions for getting started with the QuestionPro survey platform. It outlines the sign up and survey creation process, including how to [1] create an account, [2] navigate to the home screen and click "Create Survey", [3] follow the wizard to name and design the survey, and [4] add and format questions and response options. It also notes that the free version is limited but universities can get site licenses, and it describes how to preview, distribute and share surveys with others for feedback.
The document demonstrates how to analyze movie box office data using R. Key steps include:
1. Loading the data and checking its structure and variables.
2. Creating a histogram of the DAY_NUM variable to visualize its distribution.
3. Converting factors to numbers and aggregating the daily box office amounts by movie.
4. Creating a bar plot of the total box office amounts by movie to identify the highest-grossing films. Issues encountered during the process are also discussed.
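The aggregation step is the heart of that analysis. A language-neutral sketch of the same idea in Python (invented movie names and figures; the original uses R's aggregate()):

```python
from collections import defaultdict

# Hypothetical daily box-office records: (movie, day_num, gross in $M)
daily = [
    ("Movie A", 1, 50.2), ("Movie A", 2, 31.7), ("Movie A", 3, 24.9),
    ("Movie B", 1, 72.5), ("Movie B", 2, 45.3), ("Movie B", 3, 33.1),
    ("Movie C", 1, 18.4), ("Movie C", 2, 12.0),
]

# Sum the daily amounts by movie
totals = defaultdict(float)
for movie, day, gross in daily:
    totals[movie] += gross

# Identify the highest-grossing film
top = max(totals, key=totals.get)
```

The per-movie totals are then what a bar plot would display.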
Jeff Stanton discusses discovery informatics for analyzing large datasets. He notes that the amount of data is growing rapidly but IT spending is not keeping pace. Traditional data exploration methods are insufficient for "big data." Emerging alternatives are needed for creating and analyzing large datasets. Stanton provides examples of big data in retail and jobs in the field. He also discusses costs of data reformatting and search failures. The document explores using crowdsourcing, natural language processing, and visualization tools to translate and validate psychological assessment items across languages and cultures.
The document provides guidance on designing effective questionnaires. It emphasizes that questionnaires must have well-defined objectives in order to ask relevant questions and draw meaningful conclusions from the responses. Questions should follow logically from clear objectives. It also stresses that both open-ended and closed-format questions each have advantages, and the type of questions used should depend on the specific information needed. Demographic questions can help analyze response patterns among different groups. Overall, carefully considering objectives, question types, and question wording is essential for creating a questionnaire that efficiently gathers high-quality data.
Questionnaires: 6 steps for research method (Namo Kim)
The document summarizes the six key steps to developing and administering an effective questionnaire: 1) Determine your questions, 2) Draft questionnaire items, 3) Sequence the items, 4) Design the questionnaire, 5) Pilot-test the questionnaire, and 6) Develop a strategy for data collection and analysis. It provides details on each step, including how to write different types of questions, organize sections, and test and distribute the questionnaire. The overall aim is to systematically gather accurate information from respondents through a standardized self-reporting tool.
Edu 702 group presentation (questionnaire) 2 (Dhiya Lara)
The document provides information on preparing and administering a questionnaire for research. It discusses considerations for instrument selection including validity, reliability, and usability. It defines what a questionnaire is and provides tips for getting started, introduction, formatting questions, and common question types like Likert scales, ratings, rankings, and open-ended. It also covers piloting the questionnaire, considerations, advantages, disadvantages, and preparing the collected data for analysis.
Chapter 10: Pandemic Plan (bartholomeocoombs)
Chapter 10: Pandemic Plan
- Falls under the business continuity plan
- Appears gradually
- Runs for several months

What is a pandemic?
- An infectious disease
- A disease that can affect business: loss of customers, reduction in travel

Writing a plan
- Round up a team
- Tie the plan to the BIA
- Review contractual obligations

Pandemic risk assessment
- Employee-to-employee contact
- Employee-to-customer contact
- Contact with infected items
- Contact from travel
- Impact on raw materials
- Impact on customer demand

Pandemic techniques
- Social distancing
- Sanitation
- Communications
- Timing

The plan
- Triggering the plan
- Finding the latest information

Communicating the plan
- Before it strikes: meetings, updates, reports
- During the pandemic: local sites
- After the pandemic

Role of the manager
- Review policies: attendance policy, trained substitutes, company travel
- Consider working alternate shifts

Technology can help
- Using VPN
- Virtual meetings

Test your plan
- Test via table-top exercise
- Implement exercises/drills to test the plan

Summary
- A pandemic is like an ocean wave
- Social distancing is important
- Use technology
- Establish good communication plans
BUS334: Data Collection Assignment Marking Guide (35%)
Grade bands: Fail (below 50%), Pass (50-59%), Credit (60-69%), Distinction (70-79%), High Distinction (80-100%)

Summary [5 marks]
- Fail: No summary is included.
- Pass: The summary is unclear, poorly written and does not provide the full picture of the student's research.
- Credit: The summary is somewhat unclear. It does not provide the full picture of the student's research.
- Distinction: The summary is clear, provides a comprehensive picture of the student's research and is well-written.
- High Distinction: The summary is clear. It provides a comprehensive picture of the student's research and is of an excellent standard.

Result section [5 marks]
- Fail: Results are not presented.
- Pass: Results are presented to a minimum level which does not provide any insight into the data.
- Credit: Results are presented to a good level, but could be improved. Overall, the result section provides some insight into the data.
- Distinction: Results are presented to a very good level. Overall, the result section provides good and clear insight into the data.
- High Distinction: Results are presented to an excellent level. Overall, the result section provides full, clear and detailed insight into the data.

Conclusion/Recommendation section [5 marks]
- Fail: No recommendations are provided.
- Pass: Recommendations are not well considered, or lack detail and justification.
- Credit: Recommendations are provided, but not all are well justified.
- Distinction: Recommendations are mostly well considered; they are derived from the data and well justified/explained.
- High Distinction: Recommendations are well considered; they are all derived from the data and are explained/justified in a clear and compelling way.

Data & Structure [5 marks]
- Fail: Parts of the report are missing.
There are two main approaches to user evaluation - qualitative and quantitative. Qualitative involves observing users and asking open-ended questions to understand their perspectives, while quantitative focuses on collecting numeric data through closed questions and analysis. Both approaches have benefits and challenges. Effective evaluation combines multiple techniques like playtesting, think-aloud protocols, and card sorting to iteratively improve the user experience based on direct feedback.
Lesson 5a_Surveys and Measurement 2023.pptx (GowshikaSekar)
This document provides information on surveys and measurement. It discusses different modes of survey administration including personally administered surveys which have a high response rate but are confined to a local area, and mailed/internet surveys which have no geographic boundaries but lower response rates. The document also covers practical issues in survey design such as question order and wording. It describes commonly used item formats like Likert scales and open-ended questions. Issues with surveys like social desirability bias and low response rates are also addressed. The document concludes with discussing how to establish the reliability and validity of measures through techniques like test-retest, parallel forms, split-half, and assessing content, convergent, and discriminant validity.
The document outlines the objectives and content of a survey design workshop. It discusses key topics like questionnaire design, levels of measurement, sampling, and implementation issues. The workshop aims to help participants understand rigorous survey planning, common survey methods, questionnaire design best practices, and critically reviewing example surveys.
This document provides guidance on developing a questionnaire for research. It discusses important considerations in instrument design such as validity, reliability, and usability. Common question formats like Likert scales, rankings, and open-ended questions are described along with examples. The importance of pilot testing the questionnaire is emphasized to identify issues before full distribution. Overall guidelines are provided such as keeping the questionnaire short, using clear language, and leaving space for comments.
Edu 702 group presentation (questionnaire) (Azura Zaki)
This document provides guidance on developing a questionnaire for research. It discusses important considerations in instrument design such as validity, reliability, and usability. Common question formats like Likert scales, rankings, and open-ended questions are described along with examples. The importance of pilot testing the questionnaire and revising based on feedback is emphasized. Overall guidelines are provided such as keeping the questionnaire short, using clear language, and leaving space for comments.
This document discusses conducting needs assessments for interactive learning systems. It provides objectives for understanding needs assessment methods, key issues addressed, and effective presentation of results. Needs assessments identify important goals and target audiences for a proposed product. Traditional needs assessment approaches are outlined, including determining purposes and identifying sources to understand what is happening versus what should be happening. Effective needs assessments for interactive learning systems focus on key questions and use rapid prototyping to refine product requirements based on user testing.
Part 4 of the inquiry plan involves developing inquiry questions to explore the expertise of others in the field of interest. Part 5 involves planning how to gain ethics approval and people's consent to be interviewed. The tools in Part 6 will be used to gather data for the inquiry in Module 3.
The proposal/plan for Module 2 involves outlining the context, questions, aims, literature review, tools, consent processes, analysis approach, schedule and conclusion. It is submitted along with completing the Employer Support, Ethics Release and Award Title forms, as well as a critical reflection on what was learned.
SURVEY USAGE AND FINDING CORRELATIONS: Survey and Correlat.docx (mabelf3)
SURVEY USAGE AND
FINDING CORRELATIONS
Survey and Correlation Research Designs
https://my.visme.co/render/1454630688/www.erau.edu
Slide 1 Transcript
Surveys and correlations are usually found in quantitative approaches, although some specific forms may appear in qualitative designs, as well. In this module, types of surveys,
development of items used, and overall survey design and administration will be the subject of interest. The second component of the module is about correlation analysis. Included
in both descriptive and inferential statistics, correlations describe the relationship between two paired variables. Surveys and correlations are regarded as nonexperimental research
and will serve to cover that category among the three research designs introduced earlier in the course.
Survey Designs
- Determine incidence, frequency, distribution of characteristics
- Opinions, attitudes, previous experience
- Self report
- Sample taken from a larger population
- Census taken from a smaller population
- Summarized as percentages, frequencies, indices
- A particular point in time, like a snapshot

Categories of surveys
- Instrumentation (interviews, questionnaires)
- Span of time: cross-sectional, or longitudinal (trend, cohort, panel, follow-up)
Slide 3 Transcript
A survey is a study designed to determine the incidence, frequency, and distribution of certain characteristics in a population. Some of the characteristics might include opinions,
attitudes, or previous experiences using self-report responses. Surveys are either sample surveys or census surveys. Usually, a sample is taken from a population, often called a
descriptive or normative survey. Census surveys are usually conducted when a population is relatively small and readily accessible. Once the responses are tabulated they are
summarized in percentages, frequencies, or indices. Surveys often are depicted as snapshots, or moments captured in a particular frame of time. There are other types of surveys,
e.g., geological or inventory types, but here we are focusing on those done with humans. Surveys can be categorized by instrumentation or by span of time involved. Instrumentation
designs include interviews or questionnaires either of which might be conducted orally or in writing. The span of time needed to complete a survey might be cross-sectional for
information at a single period in time to compare variables, or longitudinal with multiple data collection points over an extended period to examine changes. A key aspect, then, is the
number of times the survey is administered. Cross-sectional surveys have the advantage of providing data relatively quickly. However, cross-sectional surveys do not provide a broad
perspective to inform decisions about reliably changing systems, or to understand trends over time.
Longitudinal surveys, though, require an extended period of time to study an issue. Longitudinal surveys are categorized into four types – trend surveys address a .
HEALTHCARE RESEARCH METHODS: Primary Studies: Developing a Questionnaire - Su... (Dr. Khaled OUANES)
This document provides an overview of developing and designing questionnaires for primary healthcare research studies. It discusses determining questionnaire content and categories of questions, choosing between open-ended and close-ended question types, examples of question formats, wording questions clearly, ordering questions appropriately, laying out and formatting the questionnaire, translating and validating the questionnaire through pilot testing, and training interviewers to administer the questionnaire consistently. The goal is to systematically develop a valid tool to gather accurate information from study participants.
The document discusses key aspects of data collection and analysis for monitoring and evaluation projects. It covers topics such as qualities of good data, data collection methods including questionnaires, sampling methods, and data analysis techniques. Specifically, it emphasizes that collecting adequate, timely and relevant data is essential for evaluation and that questionnaires must be designed carefully to obtain accurate information and address all relevant variables. It also highlights the importance of representative sampling to make reliable estimates about target populations.
The document discusses the creation and administration of surveys. It notes that surveys are used to collect standardized information from participants for research purposes. There are four key parts to a survey: the invitation, introduction, question types, and close. Surveys allow researchers to obtain a large amount of data quickly and inexpensively, but poor construction can impact results and responses may not be accurate. The document provides tips on survey length, pre-testing, and consistent scales, and examples of possible survey topics before guiding the reader through creating a survey using Google Forms.
Usability Primer - for Alberta Municipal Webmasters Working Group (NormanMendoza)
Presentation provided on December 1, 2006. References:
“A Practical Guide to Usability Testing” by Joseph S. Dumas and Janice C. Redish
The Elements of User Experience, diagram by Jesse James Garrett
Experience Research Best Practices - UX Meet Up Boston 2013 - Dan Berlin (Mad*Pow)
The document provides guidance on best practices for experience research. It discusses understanding research goals, choosing appropriate research methods, gathering qualitative data through tasks, moderator guides, note taking, and organizing findings. The key points are: understand business goals and user needs to define research goals; use a methods chart to evaluate options based on goals, timeline, budget and other constraints; and properly document studies through moderator guides, notes grids, and findings sheets to facilitate analysis.
Research is an important step in preparing an advocacy campaign. Careful, objective research educates supporters about causes and effects of problems. The document discusses various research methods like surveys, interviews, focus groups, and secondary data collection. It also covers topics like sampling, designing survey questions, analyzing qualitative and quantitative data, and presenting research findings to different audiences.
Similar to Carma internet research module: Survey reduction (20)
Why R? A Brief Introduction to the Open Source Statistics Platform (Syracuse University)
This document discusses the statistical programming language R. It describes R as an open source platform for statistics, data management, and graphics. It notes that R comprises a core program plus thousands of add-in packages. It then compares R to other popular statistical software packages and notes that R is more popular and used by more analysts. Finally, it highlights some advantages of R, including its emphasis on reproducibility through coding data transformations.
This document provides instructions for installing and using R-Studio. It describes R-Studio as an integrated development environment for R with four panes - code, console, workspace, and file/plots. It outlines downloading and installing R-Studio after first installing R. It then demonstrates creating a simple MyMode function to calculate the mode, and improving it through multiple iterations to properly handle duplicate values and return the correct mode. The document encourages testing the function on sample data and trying to "break" it to find flaws.
A companion slide deck for this chapter:
Stanton, J. M. (2013). Data Mining: A Practical Introduction for Organizational Researchers. In Cortina, J. M., & Landis, R. S., Modern Research Methods for the Study of Behavior in Organizations. New York: Routledge Academic.
External pressures like changing demographics and increasing student debt have created challenges for universities. Effective strategic processes require clear priorities aligned with stakeholders' values. Strategy lies at the leadership core by balancing constituencies' conflicting demands. Strategic planning models include linear, adaptive, and interpretive approaches. The linear method scans environments and pursues objectives. Adaptive strategy continuously adapts through experimentation. Interpretive strategy aligns mission and goals through symbols. Universities that strategically communicated culture changes through symbols were more resilient during financial difficulties.
The document outlines the steps for developing a valid scale for use in web surveys, including defining the construct, generating items, pilot testing, refining the scale, and validating it with other measures. Key aspects include using subject matter experts, evaluating items statistically and conceptually, demonstrating the scale's nomological network, and publishing validation evidence. The goal is to create a concise yet reliable and valid scale for measuring constructs online.
The document discusses visual design considerations for survey design, including:
1) Goals of maximizing reward for participants while minimizing costs, reducing errors and biases.
2) Cognitive and visual processing steps respondents go through.
3) Ensuring clarity through proper use of formatting, fonts, colors, spacing and screen sizing for different devices.
4) Considering audiences that may have limited bandwidth.
This document discusses three topics related to cutting edge research that industrial-organizational psychologists could be doing more of. First, there is relevant scholarship published in conference proceedings that could also be published in journals. Second, alternative employment arrangements like temporary work and contracting are a fast growing area. Third, social media provides new opportunities for data collection that can complement surveys.
This document compares two integrated development environments (IDEs) for the R programming language: R-Studio and Rcmdr. R-Studio is a more powerful and flexible IDE that provides direct access to R code and facilitates interactions with R through its graphical interface. Rcmdr is simpler and more user-friendly, focusing on statistical analysis through buttons and menus. Both allow viewing data, but neither support data editing. The document provides guidelines for choosing between them and notes additional R IDEs under development.
This document provides an introduction to advanced data analytics using R. It outlines the key steps in an analytics process: [1] understanding the domain; [2] obtaining and cleaning data; [3] reducing, transforming, and visualizing the data; [4] choosing analytical approaches; and [5] communicating results. As a first example, it analyzes a public dataset on ice cream consumption using R commands to summarize, visualize with histograms and boxplots, and explore relationships between variables like income, temperature, and consumption over time. The document demonstrates how to interpret these analyses and leverage additional tools in R to further understand the data.
This document provides an introduction to advanced data analytics. It discusses [1] how organizations lose millions annually due to inefficient use of data, [2] the sources and types of big data being generated, and [3] the multi-disciplinary nature of data analytics, drawing on fields like database technology, statistics, machine learning, and visualization. The key steps of analytics projects are outlined, including understanding the domain, preprocessing data, reducing and transforming it, selecting analytical approaches, communicating results, and deploying and evaluating new systems.
This document provides instructions for installing R and R-Studio, two programs for performing advanced data analytics. It explains that R can be downloaded from its website for Linux, MacOS, and Windows, while R-Studio can also be downloaded from its website for those operating systems. The document then demonstrates how to use R-Studio, which displays the workspace, console, and ability to show graphics and other information across multiple tabs. It includes example R commands to help orient users and demonstrates assigning a value to a variable to show mastery of the basics.
This document discusses using the open-source statistical software R to analyze security-related information from Twitter. It provides instructions on installing relevant R packages for accessing the Twitter API and searching for tweets containing specific keywords or hashtags. As an example, it searches for tweets with the hashtag "#exploit" and visualizes the results in a heatmap to show when during the day the tweets were posted.
This document discusses the role of data scientists in analyzing large and complex datasets to help answer critical questions. It notes that over 95% of digital data is unstructured and organizations lose millions annually due to inefficient use of information. Data scientists can help transform this data into usable knowledge by developing expertise in both data management and specific domains. They work with infrastructure experts and domain experts to analyze "big data" and solve grand challenges across many fields.
Fred Oswald and Jeff Stanton are experts in reducing response burden in surveys through various statistical and technological methods. Their goals are to reduce administration time and costs, increase response rates, and decrease fatigue while maintaining reliability. Their approaches include reducing instructions, removing redundant items, distributing items across subgroups, and using automation. Their research examines determining efficient item assignments, evaluating when precision is lost by reducing content, analyzing item relationships, and stakeholder reactions. Their expertise involves examining tools to inform practical survey development goals.
This document discusses nonresponse bias in surveys and methods for assessing its impact. It begins by explaining why low response rates can undermine survey validity and introduces techniques researchers have used to increase response rates over time. However, it argues that response rate alone is not a good indicator of bias; more important is understanding if and how nonrespondents differ from respondents. The document then presents the Nonresponse Bias Impact Assessment (N-BIAS) framework, which involves using multiple techniques like archival analysis, follow-ups, wave analysis, and others to evaluate nonresponse bias in a given study.
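Wave analysis, one of the N-BIAS techniques, compares early respondents with those who answered only after reminders, on the logic that late responders resemble nonrespondents. A toy sketch with invented satisfaction scores (the 0.5 threshold is an illustrative choice, not part of the framework):

```python
import statistics as st

# Hypothetical mean-satisfaction responses by wave: wave 1 answered the
# first invitation; wave 2 answered only after a reminder
wave1 = [4.2, 3.9, 4.5, 4.1, 4.4, 4.0]
wave2 = [3.1, 3.4, 2.9, 3.3, 3.0, 3.2]

gap = st.mean(wave1) - st.mean(wave2)
# A substantial early-vs-late gap suggests that people who never
# responded at all may differ too, i.e. possible nonresponse bias
suspect_bias = abs(gap) > 0.5
```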
This document discusses internet data collection methods and sampling techniques used for internet-based research. It covers topics like defining the universe and population, developing samples, probability and non-probability sampling, sources of internet samples, criticisms of internet samples, and preferred mixed mode sampling strategies. The document consists of lecture notes from a course on internet data collection methods, providing information on key concepts and considerations for sampling in internet research.
1) The document describes various internet research methods including surveys, experiments, interviews, focus groups, document analysis, and mixed methods designs.
2) Specific examples include a single-wave survey, two-wave panel design, quasi-experiment comparing two conditions, time series design with repeated measures, and a linked dyads design.
3) Considerations for internet research methods include timing, identifiers to link data sources, and challenges of experimental control.
- The document discusses the role and education needs of eScience professionals, who work with scientists and engineers to manage large datasets and facilitate collaboration using new technologies.
- Interviews and analysis found that eScience professionals need skills in areas like data collection, management and analysis, IT implementation, and facilitating communication and collaboration across disciplines. They also need knowledge of science/engineering domains and IT/informatics.
- The document recommends that eScience professionals obtain at least a bachelor's degree in a science or engineering field, plus additional education in information management or an MS in information science/management.
This document discusses methods for detecting bad or fraudulent data in online studies. It identifies sources of data problems such as technical errors, missing data, and response fraud. Specific detection techniques are presented, including duplicate detection, univariate and multivariate outlier analysis, and autocorrelation analysis to identify unusual response patterns. Common missing data mitigation strategies like imputation are also covered. Examples of Excel functions for analyzing and working with data are provided.
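Duplicate detection and a univariate outlier screen, two of the techniques named above, can be sketched in a few lines. The respondent records are invented, and the 1.5-SD cutoff is deliberately loose for this tiny sample; larger studies typically use 2.5-3 SDs.

```python
import statistics as st
from collections import Counter

# Hypothetical submissions: (respondent_id, completion_seconds)
records = [
    ("r01", 412), ("r02", 389), ("r03", 405), ("r04", 12),   # 12s: too fast
    ("r05", 398), ("r02", 389),                              # r02 appears twice
]

# Duplicate detection: the same id submitting more than once
ids = Counter(rid for rid, _ in records)
duplicates = sorted(rid for rid, n in ids.items() if n > 1)

# Univariate outlier screen: flag completion times far from the typical value
times = [t for _, t in records]
mu, sigma = st.mean(times), st.stdev(times)
outliers = [rid for rid, t in records if abs(t - mu) / sigma > 1.5]
```

An implausibly fast completion time is a classic careless-responding signal; multivariate and autocorrelation checks extend the same idea across several variables at once.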
2. Primary Goal: Reduce Administration Time
Secondary goals:
- Reduce perceived administration time
- Increase respondent engagement with the experience of completing the instrument; lock in interest and excitement from the start
- Reduce missing and erroneous data caused by carelessness, rushing, hard-to-use test forms, etc.
- Increase respondents' ease of experience (maybe even enjoyment!) so that they will persist to the end AND respond again next year (or whenever the next survey comes out)
Conclusions?
- Make the survey SEEM as short and compact as possible
- Streamline the WHOLE EXPERIENCE, from the first call for participation all the way to the end of the final page of the instrument
- Focus test-reduction efforts on the easy stuff before diving into the nitty-gritty statistical stuff
3. Please choose the option that most closely fits how you describe yourself. Please select only one of the two options: Female [ ] Male [ ]
4. Instruction Reduction
- Fewer than 4% of respondents make use of printed instructions: Novick and Ward (2006, ACM SIGDOC)
- Comprehension of instructions only influences novice performance on surveys: Catrambone (1990; HCI)
- Instructions on average are written five grade levels above the average grade level of the respondent; 23% of respondents failed to understand at least one element of the instructions: Spandorfer et al. (1995; Annals of EM)
Unless you are working with a special/unusual population, you can assume that respondents know how to complete Likert scales and other common response formats without instructions. Most people don't read instructions anyway, and when they do, the instructions often don't help them respond any better!
If your response format is so novel that people require instructions, then you bear a substantial pilot-testing burden to ensure that people comprehend the instructions and respond appropriately. Otherwise, do not take the risk!
8. Respondents should feel that the demographic items are not being used to identify them in their survey responses.
9. You could offer respondents two choices:
- Match (or automatically fill in) some/all demographic data using the code number provided in your invitation email (or on a paper letter); or
- They fill in the demographic data themselves (on web-based surveys, a reveal can branch respondents to the demographics page)
10. Eligibility, Skip Logic, and Branching
Eligibility: If a survey has eligibility requirements, the screening questions should be placed at the earliest possible point in the survey. (Eligibility requirements can appear in instructions, but this should not be the sole method of screening out ineligible respondents.)
Skip Logic: Skip logic actually shortens the survey by setting aside questions for which the respondent is ineligible.
Branching: Branching may not shorten the survey, but it can improve the user experience by offering questions specifically focused on the respondent's demographic or reported experience.
Illustration credit: Vovici.com
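To make the mechanics concrete, skip logic can be modeled as a rule attached to each question that maps the respondent's answer to the id of the next question, so ineligible respondents never see set-aside items. A minimal sketch; the question ids and wording below are hypothetical, not from any particular survey tool.

```python
# Minimal sketch of skip logic: each question carries a "next" rule that
# maps the respondent's answer to the following question id.
# All question ids and wording are hypothetical.
SURVEY = {
    "q_stayed": {
        "text": "Did you stay in a hotel in the last year?",
        "next": lambda ans: "q_chain" if ans == "yes" else "q_done",
    },
    "q_chain": {
        "text": "Which chain did you stay with most often?",
        "next": lambda ans: "q_done",
    },
    "q_done": {"text": "Thanks -- that's all!", "next": None},
}

def path(answers, start="q_stayed"):
    """Return the sequence of questions a respondent actually sees,
    given a dict mapping question ids to their answers."""
    qid, seen = start, []
    while qid is not None:
        seen.append(qid)
        rule = SURVEY[qid]["next"]
        qid = rule(answers.get(qid)) if rule else None
    return seen
```

A "no" respondent skips straight to the end (`path({"q_stayed": "no"})` yields a two-question path), which is exactly how skip logic shortens the instrument for ineligible respondents.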
11. Implications: Eligibility, Skip Logic, and Branching
Ever answered a survey where you knew that your answer would predict how many questions you would have to answer afterward? E.g., "How many hotel chains have you been to in the last year?"
If users can predict that their eligibility, the survey's skip logic, or its branching will lead to longer, more complex, or more difficult or tedious responses, they may:
- Abandon the survey
- Back up and change their answer to the conditional that requires less work (if the interface permits it)
Branch design should try not to imply what the user would have experienced in another branch. Paths through the survey should avoid causing considerably more work for some respondents than for others.
12. Panel Designs and/or Multiple Administration
Panel designs measure the same respondents on multiple occasions. Typically, either predictors are gathered at an early point in time and outcomes at a later point, or both predictors and outcomes are measured at every time point. (There are variations on these two themes.)
Panel designs suit maturation and/or intervention processes that require the passage of time. Examples: career aspirations over time, person-organization fit over time, training before/after.
Minimally, panel designs can help mitigate (though not solve) the problem of common method bias; e.g., when responding to a criterion at time 2, respondents tend to forget how they responded at time 1.
13. Panel Designs and/or Multiple Administration
Survey designers can apply the logic of panel designs to their own surveys: sometimes you have to collect a large number of variables (no measure shortening possible), and it is impractical to do so in a single administration.
Generally speaking, it is better to have many short, pleasant survey administrations with a cumulative "work time lost" of an hour than one long, grinding hour-long survey. The former can get you happier, less fatigued respondents and better data, hopefully.
In the limit, consider the implications of a "Today's Poll" approach to measuring climate, stress, satisfaction, or other attitudinal variables: one question per day, every day....
14. Unobtrusive Behavioral Observation
Surveys appear convenient and relatively inexpensive in and of themselves; however, the cumulative work time lost across all respondents may be quite large.
Methods that assess social variables through observations of overt behavior rather than self-report can provide indications of stress, satisfaction, organizational citizenship, intent to quit, and other psychologically and organizationally relevant variables.
Examples:
- Cigarette breaks over time (frequency, # of incumbents per day)
- Garbage (weight of trash before/after a recycling program)
- Social media usage (tweets, blog posts, Facebook)
- Wear of floor tiles
- Absenteeism or tardiness records
- Incumbent, team, and department production quality and quantity measures
15. Unobtrusive Behavioral Observation
Most unobtrusive observations must be conducted over time: establish a baseline for the behavior, then examine subsequent time periods for changes/trends.
Data collection is generally much more labor intensive than for surveys, and results should be cross-validated with other types of evidence.
16. Scale Reduction and One-item Measures
Standard scale construction calls for "sampling the construct domain" with items that tap into different aspects of the construct and refer to various content areas. Scales with more items can include a larger sample of the behaviors or topics relevant to the construct.
[Diagram: overlapping circles for Construct Domain and Item Content. Their overlap is RELEVANT (measuring what you want to measure); item content outside the construct domain is CONTAMINATED (measuring what you don't want to measure); construct domain not covered by item content is DEFICIENT (not measuring what you want to measure).]
17. Scale Reduction and One-item Measures
When fewer items are used, by necessity they must be either:
- more general in wording, to obtain full coverage (hopefully); or
- more narrow, to focus on a subset of behaviors/topics.
Internal consistency reliability reinforces this trade-off: as the number of items gets smaller, inter-item correlation must rise to maintain a given level of internal consistency. However, scales with fewer than 3-5 items rarely achieve acceptable internal consistency without simply becoming alternative wordings of the same question.
Discussion: How many of you have taken a measure where you were asked the same question again and again? Your reactions? Why was this done?
The one-item solution: a one-item measure usually "covers" a construct only if it is highly non-specific. A one-item measure has a measurable reliability (see Wanous & Hudy; ORM, 2001), but the concept of internal consistency is meaningless for it.
Discuss: a one-item knowledge measure vs. a one-item job satisfaction measure.
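The trade-off can be quantified with the standardized-alpha (Spearman-Brown style) formula, alpha = k*r / (1 + (k-1)*r) for k items with mean inter-item correlation r. A minimal sketch, solving for the mean inter-item correlation needed to hold alpha at .80 as the scale shrinks:

```python
def standardized_alpha(k, mean_r):
    """Standardized coefficient alpha for k items whose mean
    inter-item correlation is mean_r."""
    return k * mean_r / (1 + (k - 1) * mean_r)

def required_mean_r(k, alpha):
    """Mean inter-item correlation needed for k items to reach a target
    alpha (algebraic inverse of standardized_alpha, solved for mean_r)."""
    return alpha / (k + alpha - k * alpha)

for k in (10, 5, 3, 2):
    print(k, round(required_mean_r(k, 0.80), 2))
# The required mean inter-item correlation climbs from about .29 at
# 10 items to about .67 at 2 items: shorter scales demand near-redundant items.
```

This is why very short scales tend to become "alternative wordings of the same question": at 2-3 items, only nearly interchangeable items can keep alpha at conventional levels.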
18. One-item Measure Literature
- Nagy (2002): single-item measures of each of the five JDI job satisfaction facets correlated between .60 and .72 with the full-length versions of the JDI scales
- Patrician (2004): review of single-item graphical representation scales, the so-called "faces" scales
- Shamir & Kark (2004): single-item graphic scale for organizational identification
- Oshagbemi (1999): single-item job satisfaction scales systematically overestimate workers' job satisfaction
- Loo (2002): single-item measures work best on "homogeneous" constructs
19. Scale Reduction: Technical Considerations
Items can be struck from a scale based on three different sets of qualities:
1. Internal item qualities: properties of items assessed in reference to other items on the scale or to the scale's summated scores.
2. External item qualities: connections between the scale (or its individual items) and other constructs or indicators.
3. Judgmental item qualities: issues that require subjective judgment and/or are difficult to assess in isolation from the context in which the scale is administered.
A literature review suggests that the most widely used method of item selection in scale reduction is some form of internal consistency maximization:
- Corrected item-total correlations provide diagnostic information about internal consistency; in scale reduction efforts, they have been employed as a basis for retaining items for a shortened scale version.
- Factor analysis is another technique that, when used for scale reduction, can lead to increased internal consistency, assuming one chooses items that load strongly on a dominant factor.
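As an illustration of the internal-quality diagnostics above, a corrected item-total correlation correlates each item with the sum of the remaining items, so the item does not inflate its own total. A minimal sketch with simulated data (numpy assumed):

```python
import numpy as np

def corrected_item_total(X):
    """Corrected item-total correlation for each column (item) of an
    n_respondents x n_items score matrix: each item is correlated with
    the sum of the OTHER items, so it does not inflate its own total."""
    X = np.asarray(X, dtype=float)
    total = X.sum(axis=1)
    return np.array([np.corrcoef(X[:, j], total - X[:, j])[0, 1]
                     for j in range(X.shape[1])])

# Simulated demo: four items driven by a common trait, plus one
# pure-noise item that does not belong on the scale.
rng = np.random.default_rng(0)
trait = rng.normal(size=(500, 1))
X = np.hstack([trait + rng.normal(scale=0.5, size=(500, 4)),
               rng.normal(size=(500, 1))])
r = corrected_item_total(X)
# The noise item (last column) shows by far the lowest corrected
# item-total correlation, flagging it as a removal candidate.
```

Dropping the lowest-correlating items is exactly the internal-consistency-maximizing retention strategy the slide describes, with the limitations discussed on the next slides.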
20. Scale Reduction II
Despite their prevalence, scale reduction techniques that maximize internal consistency have important limitations:
- Choosing items to maximize internal consistency leads to item sets highly redundant in appearance, narrow in content, and potentially low in validity.
- High internal consistency often signifies a failure to adequately sample content from all parts of the construct domain.
- To obtain high values of coefficient alpha, a scale developer need only write a set of items that paraphrase each other or are antonyms of one another. One can expect an equivalent result (i.e., high redundancy) from the analogous approach in scale reduction, that is, excluding all items but those highly similar in content.
21. Scale Reduction III
Item response theory (IRT) provides an alternative strategy for scale reduction that does not focus on maximizing internal consistency:
- Retain items that are highly discriminating (i.e., moderate to large values of a), and attempt to include items with a range of item thresholds (i.e., b) that adequately cover the expected range of the trait in the measured individuals.
- IRT analysis for scale reduction can be complex and does not provide a definitive answer to which items to retain; rather, it provides evidence for which items might work well together to cover the trait range.
Relating items to external criteria provides a viable alternative to internal consistency and other internal qualities:
- Because correlations vary across samples, instruments, and administration contexts, an item that best predicts an external criterion in one sample may not do so in another.
- Choosing items to maximize a relation with an external criterion risks a decrease in discriminant validity between the measures of the two constructs.
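The IRT retention rules above follow from the item information function: under a 2PL model, an item's Fisher information is I(theta) = a^2 * P(theta) * (1 - P(theta)), which peaks at the threshold b and grows with the square of the discrimination a. A minimal sketch with hypothetical item parameters (not from any published scale):

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of endorsement at trait level theta,
    for discrimination a and threshold b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P).
    Peaks where theta == b and scales with the square of a."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# Hypothetical reduced scale: discriminating items (large a) whose
# thresholds b are spread across the expected trait range...
retained = [(1.8, -1.5), (1.6, 0.0), (1.7, 1.5)]
# ...versus a weakly discriminating item that adds little information
# anywhere on the trait continuum.
weak = (0.4, 0.0)
```

Spreading thresholds keeps total test information reasonably flat across the trait range, which is the "cover the expected range" advice in numerical form.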
22. Scale Reduction IV
The overarching goal of any scale reduction project should be to closely replicate the pattern of relations established within the construct's nomological network. In evaluating any given item's relations with external criteria, one should seek moderate correlations with a variety of related scales (i.e., convergent validity) and low correlations with a variety of unrelated measures.
Researchers may also need to examine criteria beyond statistical relations to determine which items should remain in an abbreviated scale: clarity of expression, relevance to a particular respondent population, semantic redundancy of an item's content with other items, the perceived invasiveness of an item, and an item's "face" validity. Items lacking apparent relevance, or that are highly redundant with other items on the scale, may be viewed negatively by respondents. To the extent that judgmental qualities can be used to select items with face validity, both the reactions of constituencies and the motivation of respondents may be enhanced.
A simple strategy for retention that does not require IRT analysis: stepwise regression. Rank-ordered item inclusion yields an "optimal" reduced-length scale that accounts for a nearly maximal proportion of variance in its own full-length summated scale score, and the order of entry into the stepwise regression is a rank-order proxy for item goodness. Empirical results show that this method performs as well as a brute-force combinatorial scan of item combinations; the method can also be combined with human judgment to pick items from among the top-ranked items (but not in strict ranking order).
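The stepwise strategy on this slide can be approximated with greedy forward selection: at each step, add the item that most increases the R-squared of an OLS regression predicting the full-length summated score. A sketch under that assumption (numpy assumed; this is an illustration, not the exact published procedure):

```python
import numpy as np

def greedy_item_order(X):
    """Greedy forward selection over the columns (items) of an
    n_respondents x n_items score matrix: repeatedly add the item that
    maximizes R^2 of an OLS regression predicting the full-length
    summated scale score. Order of entry is a rank-order proxy for
    item goodness."""
    X = np.asarray(X, dtype=float)
    y = X.sum(axis=1)  # full-length summated scale score
    remaining, order = list(range(X.shape[1])), []
    while remaining:
        def r2(j):
            # R^2 of regressing y on the items chosen so far plus item j
            A = np.column_stack([np.ones(len(y))] +
                                [X[:, c] for c in order + [j]])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            return 1 - np.var(y - A @ coef) / np.var(y)
        best = max(remaining, key=r2)
        order.append(best)
        remaining.remove(best)
    return order
```

Truncating the returned order at the desired scale length gives the reduced item set; as the slide suggests, the ranking can then be tempered with the judgmental criteria above rather than applied mechanically.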
25. A higher chance that one or more constructs will perform poorly if the measures are not well established/developed
26. Less information might be obtained about each respondent and their score on a given construct
27. You have to sell its meaningfulness to the decision makers who will act on the results
28. Bibliography
Binning, J. F., & Barrett, G. V. (1989). Validity of personnel decisions: A conceptual analysis of the inferential and evidential bases. Journal of Applied Psychology, 74, 478-494.
Catrambone, R. (1990). Specific versus general procedures in instructions. Human-Computer Interaction, 5, 49-93.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2008). Internet, mail, and mixed-mode surveys: The tailored design method. Hoboken, NJ: Wiley.
Donnellan, M. B., Oswald, F. L., Baird, B. M., & Lucas, R. E. (2006). The Mini-IPIP scales: Tiny-yet-effective measures of the Big Five factors of personality. Psychological Assessment, 18, 192-203.
Emons, W. H. M., Sijtsma, K., & Meijer, R. R. (2007). On the consistency of classification using short scales. Psychological Methods, 12, 105-12.
Girard, T. A., & Christiansen, B. K. (2008). Clarifying problems and offering solutions for correlated error when assessing the validity of selected-subtest short forms. Psychological Assessment, 20, 76-8.
Hinkin, T. R. (1995). A review of scale development practices in the study of organizations. Journal of Management, 21, 967-988.
Levy, P. (1968). Short-form tests: A methodological review. Psychological Bulletin, 69, 410-416.
Loo, R. (2002). A caveat on using single-item versus multiple-item scales. Journal of Managerial Psychology, 17, 68-75.
Lord, F. M. (1965). A strong true-score theory, with applications. Psychometrika, 30, 239-270.
Nagy, M. S. (2002). Using a single item approach to measure facet job satisfaction. Journal of Occupational and Organizational Psychology, 75, 77-86.
Novick, D. G., & Ward, K. (2006). Why don't people read the manual? Paper presented at SIGDOC '06: Proceedings of the 24th Annual ACM International Conference on Design of Communication.
Oshagbemi, T. (1999). Overall job satisfaction: How good are single versus multiple-item measures? Journal of Managerial Psychology, 14, 388-403.
Patrician, P. A. (2004). Single-item graphic representational scales. Nursing Research, 53, 347-352.
Shamir, B., & Kark, R. (2004). A single item graphic scale for the measurement of organizational identification. Journal of Occupational and Organizational Psychology, 77, 115-123.
29. Bibliography (Continued)
Smith, G. T., McCarthy, D. M., & Anderson, K. G. (2000). On the sins of short form development. Psychological Assessment, 12, 102-111.
Spandorfer, J. M., Karras, D. J., Hughes, L. A., & Caputo, C. (1995). Comprehension of discharge instructions by patients in an urban emergency department. Annals of Emergency Medicine, 25, 71-74.
Stanton, J. M., Sinar, E., Balzer, W. K., & Smith, P. C. (2002). Issues and strategies for reducing the length of self-report scales. Personnel Psychology, 55, 167-194.
Wanous, J. P., & Hudy, M. J. (2001). Single-item reliability: A replication and extension. Organizational Research Methods, 4, 361-375.
Widaman, K. F., Little, T. D., Preacher, K. J., & Sawalani, G. M. (2011). On creating and using short forms of scales in secondary research. In K. H. Trzesniewski, M. B. Donnellan, & R. E. Lucas (Eds.), Secondary data analysis: An introduction for psychologists (pp. 39-61). Washington, DC: American Psychological Association.