Grounded theory and machine learning methods have more similarities than initially expected. Both approaches involve modeling theories or descriptions up from the data through an iterative process of constant comparison between the emerging theory/description and the data. They also both involve modeling down from a priori premises by applying theorized categories or relationships to the data and refining them based on how well they fit the data. A key difference is that grounded theory aims to develop theory without prematurely imposing categories, while machine learning often involves applying theorized categories or relationships to data from the beginning.
This presentation includes academic material on what constitutes a contribution in academic research. It is the result of inputs from several researchers - see presentation sources for more details and follow-up reading.
Creswell (2014) noted that qualitative research is an approach for exploring and understanding the meaning individuals or groups ascribe to a social or human problem. The article offers a critical analysis of chapters one to twelve of Stake (2010). Chapter one, "Qualitative Research: How Things Work," frames qualitative research as driven by a comprehensive aim: answering the questions why and how by analyzing actions and interactions while taking into account the intentions of the actors. An analytic perspective on the interpretation of the person as an instrument is the thrust of chapter two. Chapter three examines experiential understanding: most qualitative study is experiential, and in this chapter Stake (2010) discusses two common research approaches, qualitative and quantitative methods. Chapter four states the problem: questioning how this thing works. Chapter five deals with methods of gathering data, while chapter six illuminates the review of literature: zooming to see the problem. Chapter seven explores the evidence: bolstering judgment and reconnoitering. Chapter eight presents analysis and synthesis: how things work. Chapter nine acts as a mirror that invites researchers to examine their action research and self-evaluation: finding out how our own place works. Finally, chapters ten to twelve address storytelling (illustrating how things work), writing the final report (an iterative convergence), and advocacy and ethics (making things work better). This work is expected to guide future researchers in developing qualitative research.
An Architecture for the Automated Detection of Textual Indicators of Reflection, by Thomas Ullmann
Presented at the 1st European Workshop on Awareness and Reflection in Learning Networks. In conjunction with the EC-TEL 2011 conference, Palermo, Italy.
Proceedings online at: http://ceur-ws.org/Vol-790/
Comparing Automatically Detected Reflective Texts with Human Judgements, by Thomas Ullmann
Slides from my presentation at the Awareness and Reflection in Technology-Enhanced Learning Workshop at the EC-TEL 2012 Conference. For more information about the workshop and the presentation please visit http://teleurope.eu/artel12.
Research Logics: A pictorial overview of two perspectives, by Annette Markham
Not all qualitative research comes from the same paradigm. Here, we lay out two different perspectives. The more traditional positivist approach to qualitative research and the more interpretive emergent approach. Pictorial images don't provide a complete picture, but these images should be provocative.
Adapting Test Teams to Organizational Power Structures, by TechWell
Scapegoats, spin-doctors, white knights, and sycophants—have you found your test team playing these roles? Organizations, both large and small, often have distinct cultures and power structures with significant but insidious impact on how individual testers and teams are expected to operate. Sometimes the difference between doing what sponsors and stakeholders request and doing what is really needed becomes blurred. John Hazel helps you learn how to recognize the cultural characteristics of different types of software development teams, and how they drive expectations for the test team. Understand the decision-making dynamics and the perceived value of information across different organizational power structures, and the pitfalls that await unwary test teams. Develop strategies to adapt your team’s approach away from compliance and execution and toward discovery and dialogue. John shares his experiences across a spectrum of power structures, field-tested methods, and tools to help your team tailor their testing practice to add value while maintaining objectivity and impact.
Coding qualitative data for non-researchers, by Kelley Howell
We were pleasantly surprised by the success of a Net Promoter Survey. The resulting good problem to have: far more qualitative data to sift through than we expected. Our contingency plan was to gather product managers, interns, and analysts and teach them how to code (label) qualitative data. We did this by running two "war room" sessions: we grabbed our laptops and tackled the coding together in two day-long sustained sessions.
Research methodology (Philosophies and paradigms) in Arabic, by Amgad Badewi
An explanation of research philosophies and paradigms, covering the ontology and epistemology of different research paradigms, how to innovate in research using a pragmatic approach, and, finally, Grounded Theory.
Epistemic networks for Epistemic Commitments, by Simon Knight
The ways in which people seek and process information are fundamentally epistemic in nature. Existing epistemic cognition research has tended towards characterizing this fundamental relationship as cognitive or belief-based in nature. This paper builds on recent calls for a shift towards activity-oriented perspectives on epistemic cognition and proposes a new theory of ‘epistemic commitments’. An additional contribution of this paper comes from an analytic approach to this recast construct of epistemic commitments through the use of Epistemic Network Analysis (ENA) to explore connections between particular modes of epistemic commitment. Illustrative examples are drawn from existing research data on children’s epistemic talk when engaged in collaborative information seeking tasks. A brief description of earlier analysis of this data is given alongside a newly conducted ENA to demonstrate the potential for such an approach.
Paper at: http://oro.open.ac.uk/39254/
Dr Calzada delivered a lecture on Mixed Methods and Triangulation as a complex way in which research combines qualitative and quantitative approaches, either sequentially or concurrently.
Making Sense of It All: Analyzing Qualitative Data, by George Hayhoe
Qualitative methodologies are becoming increasingly important in our discipline. Because they are based on techniques that technical communicators commonly use, everyone in the profession finds these methods familiar and understandable.
This workshop will draw on that familiarity and comprehension to show practitioners how to analyze and interpret the data collected from interviews, focus groups, open-ended questionnaires, and communication artifacts. The workshop is based on simple, proven methods that produce meaningful results that can be used to inform decisions about product design and delivery.
First, the moderators will review examples of qualitative methods and data. Then, the moderators will explain how to organize data for analysis. Finally, the moderators will describe Content Analysis, a technique for analyzing and interpreting the data.
With this background, participants will work in teams to analyze and interpret data using Content Analysis. Then, the teams will report the results of their analysis and interpretation.
Co-Proposers in crowdfunding (Muller et al., 2016), by Michael Muller
Social Ties in Organizational Crowdfunding: Benefits of Team-Authored Proposals
Michael Muller, Mary Keough, John Wafer, Werner Geyer,
Alberto Alvarez Saez, David Leip, and Cara Viktorov
Social ties have been hypothesized to help people to gain support in achieving collaborative goals. We test this hypothesis in a study of organizational crowdfunding (or "crowdfunding behind the firewall"). 201 projects were proposed for peer-crowdfunding in a large international corporation. The crowdfunding website allowed people to join a project as Co-Proposers. We analyzed the funding success of 114 projects as a function of the number of (Co-)Proposers. Projects that had more co-proposers were more likely to reach their funding targets. Using data from an organizational social-networking service, we show how employees' social ties were associated with these success patterns. Our results have implications for theories of collaboration in social networks, and the design of crowdfunding websites.
CSCW 2016 Conference
Lurking as trait or situational disposition: Lurking and contributing in ente..., by Michael Muller
This CSCW 2012 short paper tests hypotheses from three theories to account for the behaviors of 200,000+ people in 8600+ online enterprise communities in IBM. We find little support for theories based on binary traits (either a lurker OR a contributor) or for social learning (legitimate peripheral participation). We propose a theory of (a) a general disposition to engage (through either or both of lurking and contributing) and (b) a personal decision regarding the method of engagement, depending on factors such as job role, topic interest, or social commitment to other participants.
Usage of Enterprise File Sharing Service (Muller, CHI 2010), by Michael Muller
We conducted a principal components analysis of users' actions in an enterprise file-sharing service. We describe four factors, their attributes with respect to social action and awareness, and their implications for design.
Return On Contribution (ROC), ECSCW 2009 (Muller et al.), by Michael Muller
We describe Return On Contribution (ROC), a social metric for social software. ROC can be used to characterize social software at the level of (a) an application, (b) types of contributions, (c) particular contributions, and (d) particular contributors (where permitted by privacy rules). Our work also highlights the importance of "lurkers" or "non-public participants" in social software. ROC can be applied across diverse types of social software and forms of participation.
Information Curators in an Enterprise File-Sharing Service, by Michael Muller
We describe an emergent role in an enterprise social file-sharing service, in which users create collections of files for use by themselves or other users. We call these users "information curators."
Explore our comprehensive data analysis project presentation on predicting product ad campaign performance. Learn how data-driven insights can optimize your marketing strategies and enhance campaign effectiveness. Perfect for professionals and students looking to understand the power of data analysis in advertising. For more details, visit: https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
1. Developing Data-Driven Theories via Grounded
Theory Method and via Machine Learning
1
Michael Muller, Shion Guha*,
Matthew Davis, Werner Geyer,
Sadat Shami
IBM Research and IBM
* Returning to Cornell University at the end of the summer
2. Working with Theory
• Approaches to the use of theory in HCI and CSCW
Approach | Characterization | Validation and Next steps
– Hypothesis testing | Top-down evaluation | Generalization
– Induction | Bottom-up rich description | Comparison
– Abduction | Develop new theory | Cycles of description, analysis, modification
• This paper is not about a theory
– Grounded theory is not a theory
• It is a collection of methods for developing a theory
– Machine learning is not a theory
• It is a collection of methods for developing a theory or a description or a
prediction
• What is surprising: the Conundrum
– Grounded theory methods and machine-learning methods
seem to have much more in common than expected
2
3. Outline
• Introduction
– Conundrum: Convergence of Grounded Theory and Machine Learning?
– Sketch of Grounded Theory (GT)
– Sketch of Machine Learning (ML)
– What this talk is not about
• Conundrum
– Examples
• Two Similarities and One Dissimilarity
– Modeling "up" from the data
– Modeling "down" from a priori premises
– Rigor
• Restating the Conundrum
– A call to question
– A call to action
3
4. Sketch of Grounded Theory
• Combination of an open mind with rigor
• One way to approach a new domain
– … or a domain without a dominant organizing theory
• Intermeshing of data collection, theorizing, evaluating, reflecting, iterating
– Collect some data
– Make a preliminary theory before data collection is complete
– Critique the developing theory, test it, change it, improve it
– Using methods that have proven heuristically useful over time
• Guided, in part, by abductive reasoning
[Diagram: a cycle of constant comparison between "Theory about data" and "Data about theory"]
4
5. "Grounded theory methods consist of simultaneous data collection and analysis, with each informing and focusing the other throughout the research process. As grounded theorists, we begin our analysis early to help us focus further data collection. In turn, we use these focused data to refine our emerging analyses. Grounded theory entails developing increasingly abstract ideas about research participants' meanings, actions, and worlds and seeking specific data to fill out, refine, and check the emerging conceptual categories..." (Charmaz, 2006)
5
6. "Machine learning is the construction and study of algorithms that can learn from and make predictions on data … such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions rather than following strictly static program instructions." - (Bishop, 2006)
6
7. Sketch of Machine Learning
• Unsupervised learning
– Often exploratory and less "rigorous"
– Often no pre-determined hypothesis but want to play with data
– Often no ideas about relationships between variables
– Examples: clustering
• Supervised learning
– We have some ideas about dependent and independent variables
– We often have some ideas about possible hypotheses
– We want to predict or ascertain causal relationships between variables
– Examples: classification and regression
7
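The contrast sketched on this slide can be shown in a few lines of plain Python (the 1-D data and the crude algorithms are purely illustrative assumptions, not examples from the talk):

```python
# Unsupervised learning (clustering): no labels and no hypothesis --
# we just look for structure. A k-means-style loop with k=2 centroids:
data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
c1, c2 = min(data), max(data)              # crude initial centroids
for _ in range(10):                        # iterative refinement
    g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
    g2 = [x for x in data if abs(x - c1) > abs(x - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
print(sorted(g1), sorted(g2))  # two discovered clusters

# Supervised learning (classification): we start from labeled examples
# (ideas about dependent and independent variables) and predict labels.
labeled = [(1.0, "low"), (1.2, "low"), (9.0, "high"), (9.5, "high")]
def classify(x):
    # nearest-labeled-neighbour prediction
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]
print(classify(2.0))   # "low"
print(classify(8.0))   # "high"
```

The design difference is exactly the one on the slide: the clustering code receives no labels and discovers groups; the classifier receives labels and applies them to new data.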
9. Surprising Convergences in Ways of Thinking and Knowing
Bottom-Up Inquiry
• Grounded Theory Method
– Initially unorganized data
– Constant comparison of theory and data
– Descriptive theory is built from data up into theory
• Machine Learning
– Initially unorganized data
– Iterative development of classifications or relations
– Descriptive classifications are built from data up into theory
Top-Down Inquiry
• Grounded Theory Method
– Apply coding families to make theoretical sense of data
– Constant comparison of theory and data
• Machine Learning
– Apply theorized categories and test for fit of data
– Iterative refinement of classifications or relations
9
10. Example A: Machine Learning about Persons
(Michelle Zhou)
10
http://www.slideshare.net/MichelleZhou1/system-u-computational-discovery-of-personality-traits-from-social-media-for-individualized-experience
11. Example B: Grounded Theory about Persons
11
Clarke, Adele & Star, Susan Leigh (2008). The social worlds framework: A theory/methods package. In Edward Hackett, Olga Amsterdamska, Michael Lynch & Judy Wajcman (Eds.), The handbook of science and technology studies (pp. 113-139). Cambridge, Massachusetts: The MIT Press.
Mathar, Tom (2008). Making a mess with situational analysis? Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 9(2), Art. 4.
12. Example B: Grounded Theory: Codes to Classify People
"[W]ith the inclusion of theoretical concepts of the primary study such as typologies it is even possible to use an inductive procedure. For example, provided that category schemas have the same heuristic function as a huge "filing box" with broad, and not "a priori" theory-loaded categories, then their use for secondary analysis does not have to conflict with open coding in the process of the development of in-vivo categories." (Medjedović and Witzel, 2006)
12
13. More Detailed Examination of Methods
• We've seen a few examples. Is there more to this convergence than those examples?
Grounded Theory vs. Machine Learning:
– Deriving categories from data: Discovery of Codes and Categories (GT) | Labeling and Exploring (ML)
– Applying a priori categories to data: Applying Codes to Data (GT) | Training and Testing (ML)
– Rigor: Abductive Logic (GT) | Validating and Predicting (ML)
13
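The machine-learning side of the "applying a priori categories" row can be sketched as follows (the data, category names, and toy prototype classifier are illustrative assumptions, not a method from the paper): categories are applied to data by fitting on one portion and testing fit on a held-out portion.

```python
# "Training and Testing": fit a priori categories on part of the data,
# then check how well they fit unseen data. Everything is hypothetical.

labeled = [(1.0, "A"), (1.5, "A"), (2.0, "A"), (8.0, "B"), (8.5, "B"), (9.0, "B")]
train, test = labeled[::2], labeled[1::2]     # simple holdout split

# "Training": one prototype value per category, from the training half.
groups = {}
for x, cat in train:
    groups.setdefault(cat, []).append(x)
protos = {cat: sum(xs) / len(xs) for cat, xs in groups.items()}

# "Testing": measure the fit of the trained categories on held-out data.
def predict(x):
    return min(protos, key=lambda cat: abs(protos[cat] - x))

accuracy = sum(predict(x) == cat for x, cat in test) / len(test)
print(accuracy)  # 1.0
```

The held-out test set plays the role the table assigns to "testing": an independent check that the applied categories actually fit the data.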
15. How to Use the Affect of Surprise in Data and Theory
Muller, M. (2014). Curiosity, creativity, and surprise as analytic tools: Grounded theory method. In J. Olson and W.A. Kellogg (Eds.), Ways of knowing in HCI. Springer.
15
16. An Imagined Inquiry into Organizational Work Practices
• A new(ish) domain – how to start?
– Choose a "site" == a person or persons in a role? a job title? Not sure yet
– Open codes – individual, group, team
– Open codes – time-pressured, quality-focused
• Begin to integrate our tentative knowledge
– Axial code – Collaboration-preference
– Axial code – Value-priority
• But we've also heard about
– Communities of practice
– Centers of excellence
– Networks (?)
– Councils (?)
If these are collections of employees, how do they map onto groups, teams?
– We're still being surprised. Let's find out more!
– Talk with people in these new-to-me collaborative configurations
16
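One way to picture the codebook emerging on this slide is as a simple data structure. The axial and open code names come from the slide; the dictionary structure, the excerpts, and the helper function are purely illustrative assumptions:

```python
# A toy representation of the emerging codebook: axial codes map to the
# open codes they integrate. The code names are from the slide; the
# structure and data are only an illustration.

codebook = {
    "Collaboration-preference": ["individual", "group", "team"],
    "Value-priority": ["time-pressured", "quality-focused"],
}

# Open codes applied to (hypothetical) interview excerpts:
coded_excerpts = [
    {"excerpt": "I mostly work on my own...", "codes": ["individual", "time-pressured"]},
    {"excerpt": "Our team reviews everything twice...", "codes": ["team", "quality-focused"]},
]

def axial_codes_for(excerpt):
    """Map an excerpt's open codes up to their axial categories."""
    return sorted({axial for axial, opens in codebook.items()
                   for code in excerpt["codes"] if code in opens})

print(axial_codes_for(coded_excerpts[0]))
# ['Collaboration-preference', 'Value-priority']
```

The sketch also shows why the later slides get interesting: new configurations (communities, councils, networks) would not fit these lists, which is exactly the kind of surprise that forces the codebook to change.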
23. An Imagined Inquiry into Organizational Work Practices
• A new(ish) domain – how to start?
– Choose a "site" == a person or persons in a role? a job title? Not sure yet
– Open codes – individual, group, team
– Open codes – time-pressured, quality-focused, client-driven
• Begin to integrate our tentative knowledge
– Axial code – Collaboration-preference
– Axial code – Value-priority
[Diagram: axial-code boxes – Collaboration-preference: Individual, Group, Team, (other collaborations)?; (Required structures)?; Value-priority: Time-pressured, Quality-focused, …]
• But we've also heard about
– Communities of practice
– Centers of excellence
– Networks (?)
– Councils (?)
If these are collections of employees, how do they map onto groups, teams?
– We're still being surprised. Let's find out more!
– Talk with people in these new-to-us collaborative configurations
23
24. Problem for "Preference": Individuals in multiple roles
• More interviewing…
– Each employee can be in multiple collaborations
– … and can have a different role in each
– It's not a matter of "collaboration-preference"
[Diagram: Collaboration-preference (Individual, Group, Team, (other collaborations)?), (Required structures)?, Value-priority (Time-pressured, Quality-focused, …) – annotated "Collaboration style?"]
24
26. Problem for "Preference": Individuals in multiple roles
• More interviewing…
– Each employee can be in multiple collaborations
– … and can have a different role in each
– It's not a matter of "collaboration-preference"
• Are there different types of collaborations, each of which has its own distinct relationship?
– Re-read our interview transcripts
– Re-visit our memos
– Collect more interview data (or other types of data?)
• Teams and groups appear to be in different genres
– Return to our earlier observation that there are also communities, centers, councils, networks…
– And each genre seems to entail a different set of relationships
[Diagram: Collaboration role within Collaboration configurations? – Individual, Group, Team, Community of practice, Center of excellence, Council, Network, …; (Required structures)?; Value-priority: Time-pressured, Quality-focused, …]
26
27. Discovering Codes Summary
• We started with an unexamined, quasi-essentialist notion that
individuals had preferred ways of collaborating
• We then discovered that at least some people had multiple
collaborative relations, with different structures
• We eventually understood that the manner of collaborating
was more a matter of the collaboration structures, which
required (?) or offered (?) different collaboration rolesrequired (?) or offered (?) different collaboration roles
• Additional questions, if we decide that we want our grounded
theory analysis to go in these directions
– Are structures and their roles required? offered?
– Do the attributes of individual employees matter? Do people have
preferred collaboration roles? Do their preferences influence what
types of collaboration structures they join?
– What other types of collaboration structures are there?
– …
33. “The Abstraction of the New”
Starr (2007): “Codes allow us to know about the field we study,
and yet carry the abstraction of the new… When this process is
repeated, and constantly compared across spaces and across
data… this is known as theoretical sampling… Theoretical
sampling stretches the codes, forcing other sorts of knowledge
of the object… taking a code and moving it through the data…
fractur[ing] both code and data.”
34. “The Abstraction of the New”
Hernandez (2009): “ ‘Substantive codes conceptualize the
empirical substance of the area of research. Theoretical codes
conceptualize how the substantive codes may relate to each
other as hypotheses to be integrated into the theory’ (Glaser,
1978). Substantive codes break down (fracture the data) while
theoretical codes ‘weave the fractured story back together
again’” (Glaser, 1978, p. 72)...
39. Glaser’s Approach to Coding and Theory
“Over the past three decades, Glaser has identified many
theoretical codes and theoretical coding families that can
emerge in grounded theory: 18 in Theoretical Sensitivity (Glaser,
1978), 9 in Doing Grounded Theory (Glaser, 1998), and 23 in
Theoretical Coding (Glaser, 2005).
…. When more than one theoretical code can fit the data, then
the researcher must make a choice but this decision will be
‘grounded in one of the many useful fits’ (Glaser, 1978). ”
(Hernandez, 2009)
40. Glaser’s Approach to Coding and Theory
“Glaser… provides… 40 theoretical coding families (Glaser 1978;
1998; 2005), and he admits that the list is far from exhaustive…
[A] selection of recommended theoretical texts for the
identification of the widest possible range of theoretical codes
would be helpful for users of Glaser’s GT.” (Christiansen, 2008)
41. Coding Structures Summary
• The foundational text (Discovery, Glaser and Strauss, 1967)
contains the seeds of two distinct a priori ways of structuring
an inquiry:
– General theory of action (The Paradigm) (Strauss and Corbin, 1990)
– Coding families (Glaser, 1978, 1998, 2005)
• Not all of the coding families or phases of action will apply in
every case. Analysis finds which ones provide good
descriptive fit.
• For our purposes, coding families appear to be similar to
potential predictor dimensions or dummy variables in a
supervised machine learning paradigm, which must also be
tested for fit.
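The analogy in the last bullet can be sketched concretely: treat a coding family as a candidate dummy (one-hot) dimension and check whether conditioning on it reduces prediction error. This is a minimal illustration in plain Python; the outcomes, group names, and the sum-of-squared-errors fit measure are invented assumptions, not part of the presentation.

```python
# Hedged sketch: a "coding family" as a candidate dummy dimension.
# We test whether grouping observations by that dimension reduces
# prediction error, i.e., whether the family "fits" the data.

def sse(values, prediction):
    """Sum of squared errors against a single predicted value."""
    return sum((v - prediction) ** 2 for v in values)

def fit_gain(outcomes, groups):
    """Error without the dummy dimension minus error with it."""
    baseline = sse(outcomes, sum(outcomes) / len(outcomes))
    by_group = {}
    for y, g in zip(outcomes, groups):
        by_group.setdefault(g, []).append(y)
    grouped = sum(sse(ys, sum(ys) / len(ys)) for ys in by_group.values())
    return baseline - grouped

outcomes = [2.0, 2.1, 1.9, 5.0, 5.2, 4.8]              # observed outcome
team_vs_network = ["team", "team", "team", "net", "net", "net"]
shirt_color = ["red", "blue", "red", "blue", "red", "blue"]

print(fit_gain(outcomes, team_vs_network))  # large gain: good descriptive fit
print(fit_gain(outcomes, shirt_color))      # small gain: poor fit
```

A coding family whose gain is near zero, like the irrelevant `shirt_color` dimension here, would be set aside; one with a large gain is a candidate for further comparison against new data.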
43. Philosophy of Machine Learning
• Unsupervised learning – There is a set of inputs that needs to
be divided into groups in some meaningful way. We don’t
know anything about these groups a priori but want some
sense of grouping based on the inputs’ attributes.
• Supervised learning – We have a set of inputs and know their
level of measurement (nominal, ordinal, interval, or ratio). We
want to fit a model that maps other, unseen inputs to an output
appropriate to that level of measurement (classification for
nominal or ordinal variables, regression for interval or ratio
variables). This is often considered prediction.
• Both approaches help us build theoretical knowledge from a
set of data.
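The two bullets above can be contrasted in a few lines of code. This is a minimal sketch in plain Python, assuming toy 1-D data, a naive k-means for the unsupervised case, and a nearest-centroid rule for the supervised case; none of the names or numbers come from the presentation.

```python
# Unsupervised vs. supervised learning on toy 1-D data.

def kmeans_1d(points, k=2, iters=10):
    """Unsupervised: discover k groups with no a priori labels."""
    lo, hi = min(points), max(points)
    # Spread the initial centroids evenly across the data range.
    centroids = [lo + i * (hi - lo) / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def nearest_centroid_classify(x, labeled):
    """Supervised: predict a nominal label for an unseen input."""
    centroids = {lab: sum(vs) / len(vs) for lab, vs in labeled.items()}
    return min(centroids, key=lambda lab: abs(x - centroids[lab]))

points = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
print([round(c, 2) for c in sorted(kmeans_1d(points))])  # -> [1.03, 8.07]

labeled = {"short": [1.0, 1.2, 0.9], "long": [8.0, 8.3, 7.9]}
print(nearest_centroid_classify(2.0, labeled))           # -> short
```

The first function only discovers structure; the second needs labeled categories up front, mirroring the distinction between discovering codes and applying theorized categories.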
53. What is Rigor in Grounded Theory Method?
• Constant comparison of theory and data, of data and data
• Abductive logic
– How could my nascent theory be wrong? (consider multiple, competing
informal hypotheses)
– What is the strongest test that could disconfirm what I think is going on?
– Go back to the data I already have
– Choose the next “site” to test for disconfirmation
• What is a “site”?
– Person with theoretically-relevant attributes
– Team in the appropriate department or geography
or discipline
– Community that differs from previously-studied
communities in a theoretically-important way
– Organization or enterprise with significant
contrasts to those that I have already studied
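One reading of “choose the next site” is to pick the candidate whose theoretically relevant attributes contrast most with the sites already studied. The sketch below is an illustrative assumption, not a method from the presentation: the attribute names, the candidate sites, and the simple unseen-value contrast score are all invented.

```python
# Hedged sketch of theoretical sampling: pick the next "site" that
# differs most from sites already studied, to seek disconfirmation.

studied = [
    {"department": "engineering", "geography": "US", "discipline": "software"},
    {"department": "engineering", "geography": "EU", "discipline": "software"},
]

candidates = [
    {"department": "engineering", "geography": "US", "discipline": "software"},
    {"department": "sales", "geography": "APAC", "discipline": "marketing"},
]

def contrast(site, studied_sites):
    """Count attribute values not yet seen in any studied site."""
    return sum(
        all(s[attr] != value for s in studied_sites)
        for attr, value in site.items()
    )

next_site = max(candidates, key=lambda c: contrast(c, studied))
print(next_site["department"])  # -> sales
```

A site identical to those already studied scores zero; the maximally contrasting site is the strongest available test of the nascent theory.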
54. Constant Comparison, Constant Questioning
“Consistent with the logic of grounded theory, theoretical
sampling is emergent. Your developing ideas shape what you do
and the questions you pose while theoretical sampling.”
(Charmaz, 2006)
56. Modeling up from the Data
• Often considered “data-driven” or inductive modeling
• We have a giant dataset – we scour it with GT or ML
and produce results
• Often these results are considered and iterated together
to develop novel theory
• The process is similar: iteration and re-iteration
• E.g.,
– ML: Topic Modeling
– GT: Deriving descriptive codes, leading to theoretical codes, from data
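The parallel in the last bullet can be illustrated in miniature: surfacing candidate descriptive codes from raw text by term frequency, the crudest cousin of topic modeling. The interview excerpts and stopword list below are invented for illustration.

```python
# Hedged sketch of "modeling up": let candidate descriptive codes
# emerge from the data itself, with no categories imposed up front.

from collections import Counter

excerpts = [
    "our team shares the work and the credit",
    "the council reviews the work of each team",
    "my network helps when the team is stuck",
]
stopwords = {"our", "the", "and", "of", "my", "when", "is", "each"}

counts = Counter(
    word for text in excerpts for word in text.split()
    if word not in stopwords
)
candidate_codes = [w for w, n in counts.most_common(3)]
print(candidate_codes)  # frequent terms become first-pass codes
```

In both GT and ML the first pass is only a starting point: the candidate codes (or topics) are compared back against the data and revised, iteration after iteration.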
57. Modeling Down from a priori Premises
• We start with a well-defined hypothesis
• We collect data
• We apply a GT coding family or ML predictor (e.g., a
classification) on this data
• We accept or reject our description or prediction to make an
inference
• This inference is the backbone of developing novel theory
• Again, the process is similar. Code and confirm.
• E.g.,
– ML: Regression/classification with hypothesis; test for fit
– GT: Apply coding families; test for fit
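The deductive steps above can be sketched end to end: state a hypothesis, apply a simple classification rule, and accept or reject on a fit threshold. The data, the size-based rule, and the 0.8 threshold are illustrative assumptions only.

```python
# Hedged sketch of "modeling down": code and confirm.
# Hypothesis: collaborations larger than 8 people behave as "networks".

observations = [(3, "team"), (5, "team"), (12, "network"),
                (20, "network"), (7, "team"), (15, "network")]

def predict(size):
    """Apply the hypothesized classification rule."""
    return "network" if size > 8 else "team"

hits = sum(predict(size) == label for size, label in observations)
accuracy = hits / len(observations)
print(accuracy)                                  # -> 1.0
print("accept" if accuracy >= 0.8 else "accept the null: reject")
```

Whether the rule is a GT coding family or an ML classifier, the shared move is the same: the a priori structure is kept only if it demonstrably fits the data.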
58. Learning from the Conundrum?
• Despite differences in
– Basic premises
– Methods of inquiry and inference
– Figures of merit
– Criteria for rigor
– Claims of distinctiveness
– ...
• We see many overlaps between ML and GT
– Are we describing basic human ways of knowing and of inferring?
• There are a number of proposals for methodological dialogues
between “big data” and “small data”, or between
“computation” and “inference”
– Does this presentation suggest, not a dialogue, but a fusion?