This document provides guidance on conducting qualitative research. It discusses key aspects of the research process such as developing a conceptual framework, determining what and who to study, collecting data through methods like interviews and observation, and analyzing the data through techniques such as coding and creating displays. The document emphasizes generating conclusions that consider alternative explanations and testing findings for reliability and generalizability.
2. SUMMARIZING SO FAR
2
Project is hard work
But you can show skills and have fun
With enough effort you will pass!
Take responsibility
Prepares for bachelor assignment
in ‘safe’ environment (group/time limit)
Also please help us
Email first to student assistant
Initial communication in English
How’s it going?
What did you experience as most difficult?
“No man left behind”
4. More than just ‘problem definition’
A recognition that
Problems are not just 'found' during analysis; they're also designed, by you!
Your ability to solve a problem is directly affected by how well you design it in the first place
If you can't solve the problem, then change the problem you're solving
8-7-2014
4
PROBLEM FRAMING
5. Designed to be actionable in the first place
Suggest what steps/tools/approach might be required to address the problems
Use whatever language or jargon (formal or informal) makes most sense for those involved
Exciting, compellingly worded
Upon reading it, it should be a problem you want to know the answer to
Relevance at least as important as rigor
I.e., 'usefulness' as important as 'precision'
5
GOOD PROBLEM DEFINITIONS
6. Statements you make raise questions in the listener's mind
Fail to answer those questions, and the presentation is perceived as incomplete
Answer questions that were not asked, and the presentation is perceived as redundant
Achieve a balance, and credibility and impact rise dramatically
Ask only the questions you can answer, and
Answer only the questions you ask
6
KEY IS BALANCE BETWEEN
QUESTIONS ASKED AND ANSWERED
7.
PROBLEM FRAMING SETS STARTING POINT
AND DETERMINES SCOPE
8.
KEY CUSTOMER PROBLEM
EXAMPLES
9. You know what you want: answer to Key Customer Problem
What did others find? Theory!
But most problems cannot be answered by use of theory alone, e.g.:
No literature available
Literature too broad
Different context (industry, country, time, etc.)
Solution: make your own ‘theory’ by empirical study
9
EMPIRICAL RESEARCH DESIGN
HOW TO ANSWER KEY CUSTOMER PROBLEM?
10. Quantitative: emphasis on statistical testing of assumptions
Qualitative: emphasis on analyzing behaviors, events and artefacts
Design research: emphasis on developing a useful artefact
Mixed methods: combinations of the above
10
DIFFERENT DESIGNS
12. How would you measure customer demand?
Last year students did field experiment
Sell with different stories
Positive frame: prevention
Negative frame: danger
Control group: neutral
Which one sold more?
12
EXAMPLE: DEMAND FOR FIRE EXTINGUISHERS
13. Methods | Benefits | Possible drawbacks
Quantitative | Clear-cut testing/analyses; hypothesis testing (confirm/reject) | Design needs to be perfect up front; sample size; self-report; causality; oversimplification (proxies / forced answers)
Qualitative | Aim to understand; open-minded | Analyses complicated; matter of plausibility: always multiple interpretations possible
Design | You 'deliver' something | Full cycle difficult, often only prototype testing
Mixed methods | Best of both qual and quant | Almost double the work
13
MAIN BENEFITS/ DRAWBACKS
15. Selection: which population, which respondents?
Sample: how many respondents necessary? What type of sampling?
Measurement: constructing the survey instrument; use validated scales, databases? Self-report data / common method bias?
Collection: post or online? Dillman method?
Analysis: what type of statistics?
T-test, ANOVA, exploratory factor analysis, regression, etc.
NOIR: nominal, ordinal, interval, ratio (levels of measurement)
Highly recommended reading
Andy Field – Discovering Statistics using SPSS (Sage)
15
KEY CHOICES
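The slides list t-tests among the analysis options but give no worked example. As a minimal stdlib-only sketch (the scores are invented; a real study would use SPSS or a statistics package), a two-sample Welch t-test could look like:

```python
# Hedged sketch: two-sample Welch t-test on invented survey scores,
# illustrating the "T-test" analysis option listed on the slide.
from statistics import mean, variance  # variance = sample variance (n-1)

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

group_a = [3, 4, 2, 5, 3, 4]  # hypothetical treatment-group scores
group_b = [2, 2, 3, 1, 2, 3]  # hypothetical control-group scores
print(round(welch_t(group_a, group_b), 2))  # prints 2.53
```

The statistic would still need to be compared against a t distribution with Welch-adjusted degrees of freedom to obtain a p-value.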
17. WHY QUALITATIVE RESEARCH?
Much more work than quantitative research if done well
Costs more time
Costs more effort: a messier process
Why bother?
17
18. Qualitative Research can provide meaningful findings.
Features
Intense contact with the field
Holistic view of the context
Gather data from the inside
Isolation of certain themes
Understand, account for, and act on people's behavior
Many interpretations possible
Few standardized instruments
Mostly in words
18
SCIENTIFIC REASONS (MILES & HUBERMAN, 1994)
19. CASSELL & SYMON (2004)
Non-exhaustive list of 30 (!) different methods
Interviews, electronic interviews, life histories, critical incident technique,
repertory grids, cognitive mapping, twenty statements test, research
diaries, stories, pictorial representation, group methods, participant
observation, analytic induction, critical research, hermeneutic
understanding, discourse analysis, talk-in-action/conversation analysis,
attributional coding, grounded theory, template text analysis, data
matrices, preserving/sharing/reusing, documents, ethnography, case
study, soft systems, action research, co-research, future conference
19
20. OWN QUALITATIVE EXPERIENCE
Semi-structured interview
(Informal) unstructured interviews
Structured interviews
Diaries
Documentary data
Group interview
(Non)participant observation
Action research
20
21. Selection & sample: who do you study and why?
Measurement: which questions do you ask?
Data collection: how do you ask?
Data analysis: how do you analyze?
21
RESEARCH PROCESS
22. SELECTION & SAMPLE
Who do you talk to? What about? Where to go? What do you look at?
Maximum variation or similar cases?
Selection on dependent variable
Events with system disturbing potential (Barley & Tolbert, 1997)
Multiple cases: replication and extension?
Gaining access
Snowballing
22
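Snowballing, mentioned above, can be pictured as a traversal of a referral network: early respondents point you to further ones, wave by wave. A toy sketch (all names and the network invented for illustration):

```python
# Hedged sketch (invented names/network): snowball sampling as a
# wave-limited traversal of a "who referred whom" network.
from collections import deque

referrals = {
    "gatekeeper": ["alice", "bob"],
    "alice": ["carol"],
    "bob": ["carol", "dave"],
    "dave": ["erin"],
}

def snowball(start, max_waves):
    """Collect respondents reachable from `start` within `max_waves` referral steps."""
    sampled = {start}
    frontier = deque([(start, 0)])
    while frontier:
        person, wave = frontier.popleft()
        if wave == max_waves:
            continue  # stop expanding once the wave limit is reached
        for referred in referrals.get(person, []):
            if referred not in sampled:
                sampled.add(referred)
                frontier.append((referred, wave + 1))
    return sampled

print(sorted(snowball("gatekeeper", 2)))  # erin is three waves out, so not reached
```

Capping the number of waves is one way to keep a snowball sample from drifting too far from the population you set out to study.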
24. MEASUREMENT
How do you ‘measure’?
Interviews (open, semi-structured, structured)
Focus groups (=small group interview)
Open survey (!)
Field study ((non-)participant observation, action research)
Documents (minutes, annual reports, manuals, protocols, etc.)
Diaries
Etc.
24
25. SEMI-STRUCTURED INTERVIEWS
Most common form of qualitative research
Get a snapshot, could repeat over multiple waves
People are not familiar with you: risk of socially desirable answers
Audiotape!
25
30. Contact summary sheet
• One-page document to summarize a field contact
Case analysis meetings
• Meeting with peers to discuss your research progress
Interim case summaries
• Summarizing your research progress
30
SOME STRUCTURE BEFORE START
31. • A good way to analyze data (not the only way)
• Start from the data: difficult to see the larger picture
• Start from theory: difficult to find new things
• In practice, always somewhere in the middle
• Another way is coding for recurring important themes over the entire text
• You determine what is 'important'
31
TABLES
32. Data coding
• Assigning tags / codes to pieces of your data.
• Makes analysis easier / faster
Vignette
• Example of your research
• Narrative structure
• Exemplifies typical series of steps
Pre-structured Case
• Structure your research beforehand
32
EARLY ANALYSIS STEPS
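As an illustration of the coding step above (respondents, fragments, and codes all invented), tagging pieces of data and retrieving them by code might look like:

```python
# Hedged sketch (invented data): data coding assigns tags/codes to
# fragments of qualitative data so they can be retrieved and counted.
from collections import defaultdict

fragments = [
    ("R1", "We never had time to document anything.", ["time_pressure"]),
    ("R2", "Management moved the deadline forward twice.", ["time_pressure", "management"]),
    ("R2", "The handbook was outdated anyway.", ["documentation"]),
]

by_code = defaultdict(list)  # code -> list of (respondent, fragment)
for respondent, text, codes in fragments:
    for code in codes:
        by_code[code].append((respondent, text))

for code in sorted(by_code):
    print(f"{code}: {len(by_code[code])} fragment(s)")
```

In practice this is what qualitative analysis tools do at scale; the point is that codes make the later analysis faster and more systematic.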
34. Clear, concise display of data is crucial for drawing conclusions in qualitative research.
Building a display format is relatively
easy
Matrix displays vs. Networks displays
• Matrix works best when focussing on
variables
• Networks show the process better
34
WITHIN CASE DISPLAYS
SHOWING ONE CASE
Clear data display → better conclusions
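A matrix display of the kind described above can be sketched with toy data (cases, codes, and counts invented): rows are cases, columns are coded themes, and each cell counts coded fragments.

```python
# Hedged sketch (toy data): a conceptually ordered matrix display,
# cases as rows, coded themes as columns, cell = fragment count.
cases = ["Case A", "Case B"]
codes = ["time_pressure", "management", "documentation"]
counts = {("Case A", "time_pressure"): 4, ("Case A", "documentation"): 1,
          ("Case B", "time_pressure"): 2, ("Case B", "management"): 3}

# Build the matrix, defaulting missing cells to zero.
rows = [[counts.get((case, code), 0) for code in codes] for case in cases]

print(f"{'':10}" + "".join(f"{c:>15}" for c in codes))
for case, row in zip(cases, rows):
    print(f"{case:10}" + "".join(f"{n:>15}" for n in row))
```

A network display would instead draw the cases and codes as nodes with links between them, which shows process better, as the slide notes.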
35. Main general structures
1. Partially ordered displays: not too much ordering
2. Time-ordered displays: ordering on time
3. Role-ordered displays: ordering on people's (in)formal roles
4. Conceptually ordered displays: ordering on concepts / variables
Structures for explaining causality
1. Case dynamics matrix: displaying a set of forces for change
2. Causal network: representation of important (in)dependent variables
But: You have to test the “causal predictions” of these causal structures!
35
WITHIN CASE DISPLAYS
SHOWING ONE CASE
38. Previous slides were about one case / research
We can also use this for multiple cases
• It makes our research more generalizable
• It deepens our understanding
More complex than a single case
Same categories:
1. Partially ordered displays
2. Time-Ordered display
3. Role-Ordered displays
4. Conceptually ordered displays
38
CROSS-CASE DISPLAYS
SHOWING SEVERAL CASES TOGETHER
39. When explaining causality
Understand all cases
Do not plainly aggregate your data
Avoid 'throwing away' data
Use both variable- and process-oriented structures
Cluster cases in "explanatory families"
39
CROSS-CASE DISPLAYS
SHOWING SEVERAL CASES TOGETHER
41. Miles and Huberman (1994) give a lot to take into account:
41
GENERATING CONCLUSIONS AND MEANING
A LOT OF REQUIREMENTS!
Note patterns / themes
See plausibility
Use categories / clusters
Use metaphors
Use quantitative data (numbers)
Make contrasts / comparisons
Subdivide your variables
Create general categories
Use factoring
Note relations between variables
Find intervening variables
Build logical reasoning
Make conceptual / theoretical coherence
Miles & Huberman (1994), pp. 245-261
42. And for testing / confirming your findings
42
GENERATING MEANINGFUL CONCLUSIONS
CORE ISSUES
Check for representativeness
Check researcher bias
Use triangulation
Weigh bits of evidence
Check meaning of outlying data
Use extreme cases
Build on surprises
Look for negative evidence
Make if-then tests
Be careful with causal relations (they could be spurious)
Replicate a finding
Check alternative explanations
Get feedback on conclusions
Miles & Huberman (1994), pp. 262-275
44. The way of organizing data can improve your research / report
Miles and Huberman state there are 4 categories here:
1. Is your research objective and confirmable?
2. Is your research methodology reliable / stable?
3. Do your findings make sense for this particular problem?
4. Are your findings generalizable to a larger set of problems?
Key here is to carefully document your progress and methods, so
that you connect with your ‘audiences’.
44
WRAPPING UP
WHAT DID WE SEE?
46. 1. Build a conceptual framework
A graphical display of your research
2. Formulating research questions
3. Determine your case
Determine your unit of analysis
4. Sampling
Determine what you study and when
46
SUMMARY RESEARCH PROCESS (1/2)
REDUCING DATA IN ADVANCE
47. 5. Instrumentation
How will you get information for
answering your questions?
6. Linking qualitative and quantitative
data (=mixed methods)
This enables:
a) Deeper analysis
b) Confirmation of qualitative data
c) New lines of thinking
47
SUMMARY RESEARCH PROCESS (2/2)
REDUCING DATA IN ADVANCE