This document discusses qualitative analysis approaches. It begins by outlining inductive analysis, where evaluators interpret raw data to discover concepts and themes from the bottom up. Deductive analysis is also covered, where data is analyzed according to prior assumptions from the top down. The document then provides steps for basic inductive and deductive text analysis. Examples of applying each approach are given. Issues in qualitative analysis, such as subjectivity and generalizability, are also mentioned. Finally, references are provided to support the information discussed.
This document describes an analytics exemplar, which provides a framework for developing analytics projects. It lists 7 elements that should be addressed: 1) the research questions, 2) relevant theories/methodologies, 3) required data, 4) analysis tools, 5) the workflow to analyze data, 6) available visualizations, and 7) potential insights. It then provides examples for each element, such as analyzing social networks to understand mentor roles, using NodeXL to process social network data from forums, and visualizing insights on network graphs. Other example workflows examine reflections in forum posts and analyzing academic writing.
Preliminary Findings: A Comparative Study of User- and Cataloger-Assigned Sub... by Hannah Marshall
This study compared subject terms assigned by catalogers to subject terms assigned by users to images in an online collection. The study found:
1. There was little correspondence between existing cataloger terms and user terms, with only 8.5% being literal matches.
2. Users assigned more primary terms describing literal content (74%) while catalogers used more secondary (3%) and tertiary (16%) interpretive terms.
3. Providing users a framework for analysis did not significantly change their assigned terms compared to a control group.
4. Users assigned more non-subject terms (37%) than catalogers (5%), often describing physical attributes of 3D works.
The conclusions were that
Capturing the Ineffable: Collecting, Analysing, and Automating Web Document ... by Davide Ceolin
This document discusses a study aimed at capturing and automating assessments of web document quality. The study collected quality assessments from experts on documents about vaccinations. Assessors rated dimensions like accuracy, completeness, and neutrality. The assessments showed consistency among assessors, and machine learning models predicted them with up to 89% accuracy. Key findings were that assessment consistency depended more on the assessment task than on subjectivity, and that document features alone did not predict quality well. Future work involves expanding the dataset and analyses to further develop automated quality assessment.
- how good-quality qualitative data analysis (QDA) can help you identify the impacts of your programs to better meet your objectives and the needs of the community
- the steps involved in undertaking basic QDA, including repeated reading, analysis, and interpretation
- the value of involving others in the QDA process
- the difference between description and interpretation
- the value of seeking feedback on your analysis and using triangulation to increase the trustworthiness of findings
This document provides an overview of the key concepts and history of phenomenology. It discusses:
- Edmund Husserl originally developed phenomenology in the early 1900s to investigate structures of consciousness and essences.
- Phenomenology aims to describe phenomena as directly experienced before turning to analysis, theories or explanations.
- Major thinkers discussed include Husserl, Heidegger, the influence on Russian formalism, and criticisms from Terry Eagleton.
- Phenomenology influenced fields like sociology, literary theory, and examines concepts like the natural attitude vs phenomenological reduction.
During this webinar, Dr. Lani will discuss qualitative analyses for dissertation Chapter 4. Special emphasis will be given to phenomenological, case study, and grounded theory approaches.
This document provides guidance on qualitative data analysis methods, including:
- The process of immersion in qualitative data through repeated reading/listening to become familiar with the content.
- Coding qualitative data by applying abstract representations or labels to segments of data that are relevant to the research question.
- Developing codes that are data-derived (based on the explicit content) or researcher-derived (conceptual interpretations).
- Using analytical memos and diaries to document the analysis process, including emerging codes, themes, and interpretations.
- Identifying themes by examining codes for patterns and relationships that answer the research question. Themes capture broader meanings than codes.
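The coding-to-themes workflow described above can be sketched in a few lines of Python. The segments, codes, and theme groupings below are invented for illustration; in practice they would come from the analyst's own reading of the data:

```python
from collections import Counter

# Hypothetical coded interview segments: each transcript segment
# carries one or more analyst-assigned codes.
coded_segments = [
    {"text": "I felt more confident after the workshop", "codes": ["confidence"]},
    {"text": "The library staff helped me find sources", "codes": ["staff support"]},
    {"text": "I still struggle with citing properly", "codes": ["citation difficulty"]},
    {"text": "Staff showed me the databases", "codes": ["staff support"]},
]

# Tally how often each code appears across segments.
code_counts = Counter(code for seg in coded_segments for code in seg["codes"])

# Group related codes under broader, researcher-derived themes
# (themes capture wider meanings than individual codes).
themes = {
    "skill development": ["confidence", "citation difficulty"],
    "institutional support": ["staff support"],
}
theme_counts = {theme: sum(code_counts[c] for c in codes)
                for theme, codes in themes.items()}

print(code_counts)
print(theme_counts)
```

The counts only summarize where codes cluster; the interpretive work of naming and justifying themes remains with the researcher.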
Near Real-Time Data Analysis With FlyData, by FlyData Inc.
This document describes FlyData's products. FlyData makes it easy to load data automatically and continuously into Amazon Redshift. You can also refer to our homepage (http://flydata.com/) for more information.
This presentation summarizes how the presenter would analyze and present findings from 5 chat interviews conducted with a student regarding a university module's support in developing research skills. The presenter would collate the data by checking reliability, removing personal details, and transferring the data to a usable format. They would analyze the data by categorizing comments, carefully coding them, and potentially mapping relationships. Findings would be presented by establishing the research's validity and reliability, directly answering the research question, and extracting quotes to support conclusions. Both strengths like providing depth and weaknesses like potential bias are acknowledged.
1. Qualitative data analysis involves coding texts to identify patterns, which turns qualitative data into quantitative codes. The purpose is to produce findings by analyzing data, interpreting patterns, and presenting conclusions.
2. Analyzing qualitative data is challenging due to the massive amounts of information collected. The process involves reducing the volume of data, identifying significant patterns, and developing a framework to communicate what the data reveals.
3. Rigorous analysis depends on gathering high-quality data, the credibility of the researcher, and a philosophical belief in qualitative inquiry. Common stages of analysis include familiarization, coding, identifying themes, re-coding, developing categories, exploring relationships, and reporting findings.
The document discusses qualitative research methods and analysis. It describes common qualitative data collection techniques like interviews and observation. It explains that qualitative research aims to understand meaning, context, processes, and reasoning from the participant's perspective. The document contrasts qualitative and quantitative approaches, noting that qualitative research relies on words rather than numbers and uses inductive rather than deductive reasoning. It also outlines common techniques for analyzing qualitative data, including open coding, systematic coding, and affinity diagramming.
This document provides an overview of qualitative data analysis techniques. It discusses how qualitative analysis differs from quantitative analysis in that the data is textual rather than numerical. Qualitative analysis is inductive and focuses on understanding participants' perspectives through an emic lens. The analysis is iterative and progressive, with the researcher continually refining their focus based on initial interpretations of the data. There is no single correct way to analyze qualitative data, as it involves both science and art. Techniques include coding, categorizing, examining relationships, and using computer assistance programs, while ensuring reflexivity and getting critical feedback.
This document discusses data collection and analysis tasks. It defines these tasks as gathering real-world information to study a problem and come up with a solution. It describes two types of data collection tasks - collecting original data through surveys, interviews, or sensors, and finding existing data sets online or elsewhere. It provides examples of data collection tasks in various subjects and describes tools that can be used, including sensors, survey tools, and instant response systems.
This document provides an overview of qualitative data analysis. It explains that qualitative data analysis involves coding texts, identifying patterns, and reducing qualitative data into quantitative codes. It also outlines several stages of qualitative analysis, including familiarization with data, transcription, organization, coding, identifying themes, recoding, developing categories, exploring relationships between categories, and developing theories. Finally, it discusses challenges of qualitative analysis, including placing raw data into logical categories and communicating interpretations to others.
Qualitative data analysis: many approaches to understand user insights, by Agnieszka Szóstek
The fifth lecture at HITLab, Canterbury University in New Zealand, was about the importance of running a proper analysis of qualitative data. We discussed the value of looking at data from an individual (phenomenological) perspective versus a combined (reductionist) perspective, but agreed that, regardless of the chosen approach, it is crucial to look at the data from more than one perspective to ensure the interpretation is not biased by the researcher's own view of the world.
Qualitative data analysis is often a tough job, and many researchers find it difficult to get a comprehensive presentation on the topic. This seminar is an attempt to fulfil that purpose.
The document discusses the process of collecting qualitative data through various methods such as observations, interviews, documents, and audiovisual materials. It provides details on purposeful sampling strategies, gaining access to research sites and participants, developing data collection forms like interview protocols, and ethical considerations in qualitative data collection. The key steps and advantages and disadvantages of different qualitative data collection methods are also outlined.
Data analysis – qualitative data presentation 2, by Azura Zaki
The document discusses qualitative data analysis techniques such as coding, developing themes from qualitative data, and conducting content analysis. It provides examples of coding processes like developing initial codes and focused coding, as well as summarizing data and identifying themes and relationships across data sources. Qualitative data collection techniques mentioned include observation, interviews, and analyzing documents.
The document outlines 8 steps for qualitative data analysis: 1) transcribe all data, 2) organize the data, 3) code the first set of field notes, 4) note personal reflections, 5) sort and sift through materials to identify patterns, themes, and relationships, 6) identify patterns and processes and test them in further data collection, 7) elaborate a small set of generalizations covering consistencies, 8) examine generalizations in relation to formal theories and constructs.
The document provides information about analyzing and interpreting data through various graphs and calculations. It defines terms like mean, median, mode, and range. It explains how to calculate the mean, median, mode, and range of a data set. It also defines and compares different types of graphs like bar graphs, circle graphs, line graphs, line plot graphs, pictographs, and Venn diagrams. Finally, it provides some practice websites for interpreting data.
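The summary statistics defined above can be computed directly with Python's standard `statistics` module; the data set here is made up for illustration:

```python
import statistics

data = [4, 8, 6, 5, 3, 8, 9]

mean = statistics.mean(data)        # sum of values divided by their count
median = statistics.median(data)    # middle value when the data are sorted
mode = statistics.mode(data)        # most frequently occurring value
data_range = max(data) - min(data)  # spread between largest and smallest

print(mean, median, mode, data_range)  # → 6.142857142857143 6 8 6
```

Sorted, the data are 3, 4, 5, 6, 8, 8, 9, so the median is 6, the mode is 8 (it appears twice), and the range is 9 − 3 = 6.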
Sampling Methods in Qualitative and Quantitative Research, by Sam Ladner
This document discusses different types of sampling methods used in qualitative and quantitative research. It outlines the different assumptions researchers make regarding sampling in qualitative versus quantitative studies. A variety of sampling techniques are described for different research contexts such as ethnographic fieldwork, interviews, and content analysis.
The role of new information and communication technologies in information and... by Christina Pikas
This document summarizes Christina Pikas' dissertation defense on her study of how new information and communication technologies, such as blogs and Twitter, support scientific communication. Her conceptual framework examines communication in science across four elements: communication partners, purposes, message content, and communication channel. Her empirical study analyzed geoscience researchers' blog posts and tweets to understand how the technologies support functions like dissemination, discourse, and learning. She found blogs facilitate in-depth writing while Twitter enables faster interactions. The framework and study contribute to understanding how new technologies fit within scientific work and communication.
Digging into assessment data: Tips, tricks, and tools of the trade, by Lynn Connaway
Hofschire, L., & Connaway, L. S. (2018). Digging into assessment data: Tips, tricks, and tools of the trade. Part 2 in 3-part webinar series, Evaluating and sharing your library's impact, presented by OCLC Research WebJunction, August 14, 2018.
The Analysis and Interpretation of Qualitative Data Analysis.pdf, by sdfghj21
The document provides guidance for conducting qualitative interviews and analyzing the data for a research assignment on social change. It discusses best practices for interviews, including selecting an appropriate location and asking open-ended questions. It also outlines the assignment, which involves interviewing a peer, coding and analyzing the interview alongside other sources, and summarizing the meaning of social change based on the findings. Trustworthiness of the research process is also emphasized.
Research seminar lecture_10_analysing_qualitative_data, by Daria Bogdanova
This document provides an overview of qualitative data analysis. It notes that qualitative data include non-numeric texts, documents, and visual and verbal data. Qualitative data collection methods include interviews, questionnaires, focus groups, and observations. The analysis involves coding and categorizing the data to identify patterns and develop theories. The iterative process includes reading, memoing, describing, coding, categorizing, and interpreting the data. Software can help organize the data during analysis. The goal is to gain understanding and meaning from the data.
OBJECTIVES:
To understand the importance of publication and its challenges.
To increase the visibility and accessibility of published papers.
To increase the chance of getting publications cited.
To disseminate the publication by using “Research Tools” effectively.
To increase the chance of research collaboration.
Using Qualitative Methods for Library Evaluation: An Interactive Workshop, by OCLC
Connaway, Lynn Silipigni, and Marie L. Radford. 2016. "Using Qualitative Methods for Library Evaluation: An Interactive Workshop." Presented at the Libraries in the Digital Age (LIDA) Conference, Zadar, Croatia, June 14.
Cutting the Commute: Assess Authentically and Still Arrive on Time, by Toni Carter
While the importance of assessment for student learning is widely recognized, instructors are often reluctant to sacrifice valuable class time for this activity. Learn how one university’s library instruction program is using in-class student worksheets and other hands-on activities to integrate authentic assessment into classroom instruction. By applying rubrics to active learning exercises that are already part of the curriculum, instructors gather valuable data about student progress in attaining key information literacy skills. Classroom learning activities and tasks include identifying keywords and developing synonyms for database searches, articulating differences between popular and scholarly sources, and differentiating and locating cited sources.
Big Qualitative Data, Big Team, Little Time - A Path to Publication, by QSR International
This webinar will describe the project and research question, as well as the design, data management, and analysis process of using NVivo to analyze data with a large coding team with no NVivo experience.
UCISA Learning Analytics Pre-Conference Workshop, by Mike Moore
UCISA Learning Analytics Pre-Conference Workshop
Mike Moore - Sr. Advisory Consultant - Analytics
Desire2Learn, Inc.
UCISA Conference 2014, Brighton, UK
Presented Mar 26, 2014
R.M Evaluation Program complete research.pptx, by talhaaziz78
The document outlines 12 steps for conducting a qualitative program evaluation: 1) define purpose and scope, 2) review program goals, 3) identify stakeholders, 4) identify available time and resources, 5) revisit the evaluation purpose, 6) decide whether the evaluation will be in-house or contracted, 7) specify the evaluation design, 8) create a data collection plan, 9) determine sampling and recruitment strategies, 10) summarize and analyze data, 11) disseminate information, and 12) provide feedback for program improvement. Qualitative evaluation is used to understand why a program works or not, and its unintended consequences, by considering small, purposefully selected samples. Ensuring credibility, transferability, dependability, and confirmability strengthens the evaluation's trustworthiness.
This document provides information about an assessment workshop being held at Towson University. The workshop will cover using rubrics for information literacy assessment. Participants will learn about rubric design, norming rubrics by practicing scoring sample student work, and evaluating rubrics. The workshop aims to help librarians at Towson University develop rubrics to assess student learning and use assessment data to improve instruction practices. Sample rubrics from the RAILS project will be used during the norming exercise to help participants learn how to reliably score student work using rubrics.
Dr. John A. Hoehn gave a presentation on February 24, 2014 about completing a dissertation. The presentation covered the dissertation timeline and process, resources for each phase, and future topics like web surveys and netnography. It provided an overview of the learning goals, which included understanding the dissertation timeline and process, developing a master plan based on research questions, and learning digital tools for each phase. The presentation utilized an audience response system and branching presentation.
This chapter discusses analyzing qualitative data for a doctoral dissertation or thesis. It identifies common challenges like not fully analyzing data and being biased. It recommends first mapping interview questions to research questions, then open coding themes and selectively coding across multiple readings. Researchers should reflect on how their background influences interpretations and distinguish results from findings. The chapter provides tips on clearly presenting findings and considering validity, and suggests resources for further guidance on qualitative analysis.
Qualitative research methodology and an introduction to NLP. There is also an example of how to use a pre-trained model to perform sentiment analysis on user feedback. A Google Colab Notebook is provided in the slides.
Dr Bardini and Cassandra Jessee from YouthPower hosted a workshop on Measuring Positive Youth Development (PYD) at the 8th AfrEA International Conference in Kampala, Ghana.
This document provides an overview of a presentation on developing a successful dissertation. It includes an agenda with topics, activities, and times. The learning goals are to understand the dissertation timeline and process, develop a master plan based on research questions, and learn digital tools for each phase. Teaching methods include an audience response system and branching presentation. The document then discusses transforming the typically linear dissertation process into an iterative, multimedia, cloud-based process. It provides details on the major phases of developing a dissertation, including expected activities, decisions, deliverables, and tips.
Similar to Who's Afraid of Qualitative Analysis? (20)
STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...sameer shah
"Join us for STATATHON, a dynamic 2-day event dedicated to exploring statistical knowledge and its real-world applications. From theory to practice, participants engage in intensive learning sessions, workshops, and challenges, fostering a deeper understanding of statistical methodologies and their significance in various fields."
The Ipsos - AI - Monitor 2024 Report.pdfSocial Samosa
According to Ipsos AI Monitor's 2024 report, 65% Indians said that products and services using AI have profoundly changed their daily life in the past 3-5 years.
Orchestrating the Future: Navigating Today's Data Workflow Challenges with Ai...Kaxil Naik
Navigating today's data landscape isn't just about managing workflows; it's about strategically propelling your business forward. Apache Airflow has stood out as the benchmark in this arena, driving data orchestration forward since its early days. As we dive into the complexities of our current data-rich environment, where the sheer volume of information and its timely, accurate processing are crucial for AI and ML applications, the role of Airflow has never been more critical.
In my journey as the Senior Engineering Director and a pivotal member of Apache Airflow's Project Management Committee (PMC), I've witnessed Airflow transform data handling, making agility and insight the norm in an ever-evolving digital space. At Astronomer, our collaboration with leading AI & ML teams worldwide has not only tested but also proven Airflow's mettle in delivering data reliably and efficiently—data that now powers not just insights but core business functions.
This session is a deep dive into the essence of Airflow's success. We'll trace its evolution from a budding project to the backbone of data orchestration it is today, constantly adapting to meet the next wave of data challenges, including those brought on by Generative AI. It's this forward-thinking adaptability that keeps Airflow at the forefront of innovation, ready for whatever comes next.
The ever-growing demands of AI and ML applications have ushered in an era where sophisticated data management isn't a luxury—it's a necessity. Airflow's innate flexibility and scalability are what makes it indispensable in managing the intricate workflows of today, especially those involving Large Language Models (LLMs).
This talk isn't just a rundown of Airflow's features; it's about harnessing these capabilities to turn your data workflows into a strategic asset. Together, we'll explore how Airflow remains at the cutting edge of data orchestration, ensuring your organization is not just keeping pace but setting the pace in a data-driven future.
Session in https://budapestdata.hu/2024/04/kaxil-naik-astronomer-io/ | https://dataml24.sessionize.com/session/667627
Codeless Generative AI Pipelines
(GenAI with Milvus)
https://ml.dssconf.pl/user.html#!/lecture/DSSML24-041a/rate
Discover the potential of real-time streaming in the context of GenAI as we delve into the intricacies of Apache NiFi and its capabilities. Learn how this tool can significantly simplify the data engineering workflow for GenAI applications, allowing you to focus on the creative aspects rather than the technical complexities. I will guide you through practical examples and use cases, showing the impact of automation on prompt building. From data ingestion to transformation and delivery, witness how Apache NiFi streamlines the entire pipeline, ensuring a smooth and hassle-free experience.
Timothy Spann
https://www.youtube.com/@FLaNK-Stack
https://medium.com/@tspann
https://www.datainmotion.dev/
milvus, unstructured data, vector database, zilliz, cloud, vectors, python, deep learning, generative ai, genai, nifi, kafka, flink, streaming, iot, edge
ViewShift: Hassle-free Dynamic Policy Enforcement for Every Data LakeWalaa Eldin Moustafa
Dynamic policy enforcement is becoming an increasingly important topic in today’s world where data privacy and compliance is a top priority for companies, individuals, and regulators alike. In these slides, we discuss how LinkedIn implements a powerful dynamic policy enforcement engine, called ViewShift, and integrates it within its data lake. We show the query engine architecture and how catalog implementations can automatically route table resolutions to compliance-enforcing SQL views. Such views have a set of very interesting properties: (1) They are auto-generated from declarative data annotations. (2) They respect user-level consent and preferences (3) They are context-aware, encoding a different set of transformations for different use cases (4) They are portable; while the SQL logic is only implemented in one SQL dialect, it is accessible in all engines.
#SQL #Views #Privacy #Compliance #DataLake
13. Photo by Matt. Create. (Creative Commons Attribution-NonCommercial-ShareAlike License, https://www.flickr.com/photos/76583692@N00). Created with Haiku Deck.
14-17. Doing Qualitative
Basic Text Analysis: Inductive
Use data to discover concepts, themes, or models
Evaluator as interpreter; highly involved
Emergent, “bottom up”
Qualitative outcome: key themes or categories relevant to evaluation/research questions
26-28. Doing Qualitative
Step 1. Collect and organize your raw data
Considerations:
• Number of collection points
• Transcription
• Audit trail
• Research journal
• Participant key/aliases/anonymity
29. Doing Qualitative
Step 1. Collect and organize your raw data
End results:
• Clean, anonymized data files
• Transcription files
• Audit trail
• Participant key
• Research journal (including protocols for all of the above)
36-39. Doing Qualitative
Basic Text Analysis: Deductive
Data is analyzed according to prior assumptions
Evaluator is “independent” from data
A priori; “top down”
Quantitative outcome: metrics relevant to evaluation/research objectives
40. Doing Qualitative
Application: Deductive Analysis
• Category comparison, comparison over time
• Answers to survey questions across participants
• Answers to interview questions across participants
• Analyzing webinar chat pods
• Social media: hashtag use on Twitter, Facebook/LinkedIn audience engagement
41. Doing Qualitative
Basic Deductive Analysis: 5 Steps
1. Develop data categories.
2. Clearly define those categories.
3. Read through all raw data and apply categories.
4. Count.
5. Narrative and visual analysis.
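The five steps above can be sketched in a few lines of code. The categories and keyword rules below are invented for illustration only; in real deductive work, step 3 means reading the raw data yourself and applying clearly defined categories by hand, not trusting keyword matching.

```python
# A minimal sketch of the five deductive steps; categories and markers
# are hypothetical, not an actual MFLN coding scheme.
from collections import Counter

# Steps 1-2: develop data categories and define them clearly.
CATEGORIES = {
    "question": ["?"],
    "resource_share": ["http", "www."],
}

def apply_categories(chunks):
    """Steps 3-4: apply categories to each chunk of raw text, then count."""
    counts = Counter()
    for chunk in chunks:
        text = chunk.lower()
        for category, markers in CATEGORIES.items():
            if any(marker in text for marker in markers):
                counts[category] += 1
    return counts

chat_chunks = [
    "Does anyone have a link to that study?",
    "Here it is: http://example.org/study.pdf",
    "Thanks, this was really helpful!",
]
print(apply_categories(chat_chunks))
# Step 5, the narrative and visual analysis, happens outside the code.
```

Note that a chunk can land in more than one category, which is why step 2 (clear definitions) matters so much.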
42. Doing Qualitative
Chat Pod Engagement Metrics
[Bar chart, scale 0-25, showing counts for: unique chat pod participants, resources shared by participants, resources shared by MFLN, participant questions, and unique participant-to-participant exchanges]
43. The fine print…
Only DCO viewers can participate in the chat pod; the percentage of chat pod participants is based on the total number of DCO viewers and the total number of unique participants.
Resources shared by participants include shared links, authors, studies, books, etc.; this demonstrates high-level engagement because participants are contributing to the co-construction of knowledge during the webinar.
Resources shared by MFLN include links, peer-reviewed studies, books, etc., from both MFLN and non-MFLN authors; this demonstrates direct CA engagement with participants by further supporting and contextualizing knowledge construction and by situating the webinar presentation within the larger disciplinary area.
Participant questions are those listed in the chat pod; they demonstrate intent to pursue two-way engagement in the webinar and therefore high-level engagement.
Unique participant-to-participant exchanges are those in which chat pod participants respond directly to one another's comments; they demonstrate high-level engagement through realized reactive (two-way) and interactive (dependent) discourse patterns.
Chat pod text related to webinar content is not captured as an engagement measure due to its discursive category as declarative (one-way) communication. (Declarative text is still understood to indicate webinar engagement, however, and MFLN encourages and values such participation.)
Chat pod text related to technical issues and/or CEUs is not included in MFLN evaluation.
48. References
Davies, C. A. (2008). Reflexive ethnography: A guide to researching selves and others (2nd ed.). New York and London: Routledge.
Denzin, N. K., and Lincoln, Y. S. (2011). The Sage handbook of qualitative research (4th ed.). Thousand Oaks, Calif.: Sage.
Patton, M. Q. (2014). Qualitative research & evaluation methods (4th ed.). Thousand Oaks, Calif.: Sage.
Richardson, L., and St. Pierre, E. A. (2005). Writing: A method of inquiry. In N. K. Denzin and Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (3rd ed., pp. 959–97). Thousand Oaks, Calif.: Sage.
49. Photographs by Haiku Deck: http://www.haikudeck.com. Haiku Deck is licensed by Creative Commons 3.0.
Icons made by Freepik: http://www.flaticon.com. Flaticon is licensed by Creative Commons 3.0.
Situated: this is work that locates the researcher/evaluator in the world.
Interpretative practices: the goal is to make sense of people and phenomena and experiences, and the meaning people bring to them. It’s a meaning-making practice.
MQP, in his book Qualitative Evaluation and Research Methods, lists seven contributions of qualitative inquiry. This is certainly not an exhaustive list, but here you go:
Illuminating meanings
Understanding how things work
Capturing stories to understand individual’s perspectives and experiences
Understanding how systems function and their consequences for people’s lives
Understanding context
Identifying unanticipated consequences
In qualitative program evaluation, you are telling the story of the program by telling the stories of the program participants.
Ok, so where are we headed today?
First I’m going to talk about a few considerations to make as you approach qualitative work, including theoretical frameworks and what it means to be situated within a qualitative field of inquiry.
Then I’m going to go into some very basic strategies for actually doing qualitative analysis work. The idea here is that they are practical, ready-to-use strategies for you and your agents.
And we’ll end by addressing issues in qualitative work. I’ll address things like credibility in qualitative work; ethics; and how to use your qualitative data.
Skills:
I think this is where many people get tripped up when thinking about doing qualitative work: they feel they don’t have the training, they’re not “certified,” they don’t have the right degree, etc. This is not true.
There are two soft skills you need to do qualitative analysis: these are skills that are not easy to teach, and they are somewhat subjective. But in my opinion, you need:
Pattern-recognition skills
Organizational skills
If you’re a human being, you are already pretty good at pattern recognition. Just by being alive and interacting in a social world, we are connecting patterns, and therefore meaning, to things, to places, to others, to emotions, etc. We can differentiate and pull apart a complex and changing world. This is pattern recognition and meaning making at its most basic.
Organization is big. In fact, of the two, this might be the more daunting skill set.
Your theoretical framework is going to inform what you see and how you see it.
In the context of program evaluation, we’re talking about your logic model, your program map, your theory of change, or however you articulate what you believe your program will accomplish given any number of variables. This is your set of beliefs about your program, and how you believe your program is going to impact those who participate in it. It might be based on existing scholarly knowledge, it might be based on programmatic experience, or it might be based on principles, or on hopeful, intended outcomes, as when doing an intervention. Regardless, your framework is likely going to be in place before you begin qualitative evaluation and analysis.
This is necessary, because it grounds, it guides your work when you get to the analysis stage.
However, be sure that your framework doesn’t blind you to other things that might emerge from analysis. So your framework is necessary to get started, it guides and informs your analysis, but it should also not limit your analysis. So it’s important to keep that in mind.
The next thing to keep in mind is the situated quality of doing this type of work.
Aside from your theoretical or conceptual framework, your own beliefs, worldviews, and knowledge are going to inform your data analysis. This is part and parcel of doing qualitative work. This is where you might get some criticism of qualitative work being “soft” and “subjective.” I’ll address those issues later. But the point I want to make now is that the qualitative evaluator is not objectively separate from the analysis, and it’s important to acknowledge this up front, during, and after analysis.
You often hear social scientists talking about the “lens” through which they approach their work. This is part of that conversation. And the way you address those critiques of subjectivity is that you acknowledge your subjectivity, openly. It becomes a part of how your program evaluation is conceptualized, what methods you use, how you analyze your data, and even how you write it up and present it. So this is subjective work that is guided both by frames of knowledge, or logic (or principles), as well as by your own lived human experience, your beliefs, and the like.
Acknowledging this personal piece is called reflexivity. Charlotte Aull Davies, an ethnographer, describes reflexivity as a process of self reference, of turning back on oneself through all stages of qualitative work. I’ll talk more about what reflexivity looks like during the analysis process shortly.
Inductive analysis has a few key features……
With inductive analysis, you are letting the data lead the way. Analysis is one of discovery.
Your role as the evaluator is to interpret. As such, you’re going to be highly involved. And as I’ve mentioned previously, your interpretation is going to be subjective. And again, I’ll talk later about how to add credibility to subjectivity in the context of qualitative analysis.
Inductive analysis means that concepts, themes, and models are emerging. You are not proceeding with data analysis in order to test a pre-existing theory or framework; rather, you’re allowing for the formation of a theory, for an evaluative look at the success of your program.
And the end result of this process is typically thoroughly qualitative. It’s a narrative analysis that explains what you as the interpreter have discovered through the inductive process.
Don’t wait to organize your data until it’s all collected if you have multiple collection points.
If you're doing focus groups or interviews, organizing while collecting means transcribing right away. As far as transcribing goes, if you are working in a group, you'll have to decide if you want one person to transcribe or if you'd rather distribute that work. Transcription can be very time consuming, so this is another great reason to get working on it right away if you have multiple collection points in your evaluation.
There are obvious advantages to spreading some of the transcription work around, but there are also sound reasons to leave this as the task of one person.
First, you want to be sure all your raw data has the same transcription approach: things like non-verbal cues if you’ve videotaped, verbal pauses, and non-word noises should all be addressed in the same way. Having one person do the transcribing is advantageous in this way, although you could decide ahead of time on a protocol as well.
Another advantage with one person doing the transcription is that you have at least one person in the group who is intimately familiar with all of the data. If you’ve ever done transcription work, you know that you’ll have people’s voices and memorable phrases emblazoned on your brain for a while. But this is an advantage in analysis: you have one person who has spent that much more time with the whole body of primary data, rather than with just parts of it, or as just a reader of the clean, secondary data file (which is what a transcription is).
If you’re working in a group, it’s important to discuss these things before data collection begins. Obviously as a group you can and should share often and widely, particularly as data is getting organized. Share everything: audio files, ongoing transcriptions, and first impressions, etc.
When you do your transcribing, I highly recommend you use the line numbering function in Word. This can be found under the “Format” tab, then “Layout.” You want to check “add line numbering,” and then make sure to also check “Continuous.”
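If your transcripts end up in plain-text files rather than Word, the same continuous line numbering can be added with a short script. This is an optional alternative to the Word feature; the sample transcript lines below are made up.

```python
# A sketch of continuous line numbering for a plain-text transcript,
# mirroring Word's "Continuous" line-numbering option.
def number_lines(text):
    """Prefix every transcript line with a continuous line number."""
    return "\n".join(
        f"{i:4d}  {line}" for i, line in enumerate(text.splitlines(), start=1)
    )

print(number_lines("Moderator: Welcome, everyone.\nP1: Thanks for having us."))
```

Continuous numbering matters because your codes and audit trail will reference line ranges, and those references break if numbering restarts per page.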
Immediately begin your audit trail, which is important for organization but also a very important aspect of establishing credibility with qualitative work.
Your audit trail is simply a documentary record of all the choices/changes you make as you do your analysis. It really begins as you are designing your study: why you are doing a focus group instead of interviews, etc. It continues during data collection as you make transcription choices, allocate tasks, etc.
The audit trail is a product of your research processes. I like to think of it as an administrative record, as good housekeeping. It’s a documentary record. But it’s different than a research journal, which also needs to get established during the collection and organization process.
Unlike the audit trail, the research journal is not a product of your data collection; rather, it’s a PART of your data. The research journal is a place to begin recording initial reactions. This starts, for example, DURING the focus group or interviews, and it continues from there throughout the analysis and even the writing process. These are your thoughts about and reactions to the data. These thoughts may end up becoming codes, they may change your meaning-making process, and I guarantee this information will be extremely informative as your analysis progresses.
If you are working in a group, you’ll want to store this journal in a communal, editable platform like Google Docs so that everyone can access it.
Set an organization format that makes sense, perhaps by events or phases rather than by dates: you want to establish a macro-level structure that makes it easy for multiple people to comment on the same topics, so a “calendar” of sorts doesn’t make sense. All entries grouped under events or analysis phases should be dated, however, so that you have a nice, linear progression of thought. If you’re working with a group, it’s also good to be sure that each entry is identified by its author; whether everyone enters their initials, or uses the same color, or whatever, that’s important to establish right away.
And of course, if you’re working alone, a dated, calendar entry format can work just fine.
Before formal analysis gets under way, you also need to establish a participant key.
Whether you’ve given participants a chance to choose their own aliases, or if you’re using first names only, or assigning participant numbers, you need a master document to track this. I use Excel for this, creating a simple spreadsheet of participant names and aliases/numbers assigned. As you’re transcribing and taking research notes, you’ll want to be sure that you are referring to participant aliases from the very beginning. So any transcriptions need to have participant names replaced by aliases, numbers, etc., to make sure anonymity is established at the very beginning of analysis.
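The keying and anonymizing step can also be sketched programmatically. The names, aliases, and regex approach below are illustrative assumptions; the presenter tracks the real key in an Excel spreadsheet.

```python
# A sketch of a participant key plus transcript anonymization.
# Names and aliases here are invented examples.
import re

# Master key: keep this in ONE restricted file, separate from the data.
participant_key = {"Maria Lopez": "P1", "James Chen": "P2"}

def anonymize(transcript):
    """Replace full names and first names with their assigned aliases."""
    for name, alias in participant_key.items():
        first_name = name.split()[0]
        transcript = re.sub(rf"\b{re.escape(name)}\b", alias, transcript)
        transcript = re.sub(rf"\b{re.escape(first_name)}\b", alias, transcript)
    return transcript

print(anonymize("Maria Lopez: I agree with James on that point."))
# → P1: I agree with P2 on that point.
```

Doing the replacement before any coding begins means aliases, not names, appear in every downstream file, which is exactly the point made above.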
In your organization of your raw data, you’re going to end up with several files:
Your cleaned, anonymized data files
Your audit trail
Your research journal
And your participant key
If you’re working with open-ended surveys you still want to clean them up: remove any identifying information, be sure aliases are assigned, and get them into a common format in Word.
Reading through your data is the very first official level of analysis.
Get familiar with your data, gain an overall sense of what’s happening, content being discussed, things like tone, resonance, disagreement, etc.
Be sure to take notes in your research journal to establish a record of your thought processes as you read. Ultimately this will be helpful as you discuss your findings, and it provides evidence that you have not simply read the data, written down your reaction, and called it “analysis.” This is a part of inductive analysis, which is step-wise and systematic.
You are also going to want to start thinking about how you are going to track and organize your codes. I’ll show you one of my typical Excel strategies in the next slide, but you can also use the comment feature in Word, color coding in Word, and if you’re more of a visual/tactile thinker, use sticky notes, crayons, etc. Whatever works for you.
You’re going to take several runs at coding during inductive analysis. The idea is that you’re not coding just to get it over with. You’re coding to get maximum understanding. And because you’re doing emergent, bottom-up work here, you want to keep returning to your codes and the data to see if your perspectives change, if you get new insights, etc.
If you are working with larger data sets, the idea is that you begin coding as the data is collected. So if you’re doing one interview per month for 3 months, for example, you’re going to begin coding the first interview ASAP, do the second interview, start coding it, and then go back and review your codes from the first interview to see if you have new insights given new data, and you’ll continue in this fashion until you’re done. Even if you have a static data set that is collected once, you want to frequently go back and review what you coded at the outset in light of what you’re currently coding. This yields a very rich analysis.
I’m a big fan of constant-comparative analytical methods (Glaser and Strauss) because I think they ensure a certain level of rigor and credibility: you’re not coding once, but several times, adjusting and taking notes in your research journal as you go. And you’ll want to track in your audit trail when you’ve gone back and recoded, as a record of this type of analysis. So in your spreadsheet, you might start with one code (or several) for one chunk of text, but then change it two days later. I like to use the comment function in Excel for a changed code. You’ll also note this in your research journal, because the change will be data in and of itself. This is the spirit of constant-comparative analysis.
Higher-level, initial codes: these are often related to the evaluation aims for your programming, and this makes sense. As the evaluator, you have your logic model or your theory of change on your mind; you know your program goals and benchmarks, so these will inform your codes. This is great. But at some point, things are going to get a little richer, a little deeper, and start reflecting the actual content and dynamics of the discussion.
Lower-level codes: after multiple readings of data, may end up being related to actual words or phrases in the text.
Remember that not all text will be coded.
So you’re done coding, you’re sick of reading, you have lots of notes. The next step is to go back through your codes and check for overlap. This happens all the time when you’re doing open coding like this. So if you have several “technology” codes, take a look at them. The goal here is not necessarily to simplify, although a bit of simplification is a natural outcome of this step. This is another step in rigor: e.g., are there differences in your technology codes? If so, what are they? How does this change your coding?
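The overlap check just described can be partly mechanized: group open codes that share a word so near-duplicates surface for review. The code list below is a made-up example, not data from the webinar, and the analyst still makes the merge-or-keep decision.

```python
# A sketch of surfacing potentially overlapping open codes by shared word.
from collections import defaultdict

codes = [
    "technology barriers",
    "technology confidence",
    "classroom technology",
    "peer support",
]

def overlap_report(code_list):
    """Map each word to the codes containing it; keep words shared by 2+ codes."""
    groups = defaultdict(list)
    for code in code_list:
        for word in code.split():
            groups[word].append(code)
    return {word: cs for word, cs in groups.items() if len(cs) > 1}

for word, overlapping in overlap_report(codes).items():
    print(f"'{word}' appears in: {overlapping}")
# The analyst then decides whether these codes are truly distinct,
# need sharper definitions, or should be merged.
```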
Capture key aspects of data that have emerged (in your view) as most important/informative. Go back and read your notes! Make connections across codes. What themes are emerging? How do you know? This is where it all comes together. Take your time.
Writing is a level of analysis!
So you’ve gone through and read, coded, recoded, taken notes, created categories. Don’t go and put those categories into an Excel graph! WRITE about them! I am willing to bet that it is in this writing process that you gain your deepest insights. This is where your themes are going to emerge. Take your time here. And don’t forget to put a little bit of yourself in your analysis: that is, use “I,” briefly explain your process, tell your story as the interpreter of the data.
You are checking to see if the raw data you collected fits into a pre-established framework
Your role is simply to compare and contrast: Does the data here fit within this prior framework? How?
In deductive analysis you’re going to start with your categories. Notice the huge difference here vs. starting with your data and coding it. You can develop your categories without even collecting data when you do deductive analysis. Your categories should be related to your programming goals.
Then you need to clearly define those categories. Very clearly, such that nothing is left up for interpretation.
Then you apply categories to your data.
Then you count, and then you write.
Again: Stay organized, and keep notes
Not all text may fall into categories.
You may choose to use inductive analysis for uncoded text.
Here’s an example of a deductive analysis of an MFLN webinar chat pod. In your evaluation report you will also provide a narrative analysis, along with definitions of your codes. It’s also really nice to provide examples of data chunks with your definitions.
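The counting step behind a chart like the one on slide 42 can be sketched in a few lines. The coded messages and category names below are invented examples, not MFLN data; the coding itself (assigning each message a category) is done by hand per the defined categories.

```python
# A sketch of tallying hand-coded chat-pod messages into engagement counts,
# printed as a rough text bar chart. Messages and categories are hypothetical.
from collections import Counter

coded_messages = [
    ("Great point, Sam -- we saw the same thing", "participant_exchange"),
    ("How do you handle no-shows?", "participant_question"),
    ("Here's the chapter I mentioned earlier", "mfln_resource"),
    ("Our agency uses a similar toolkit", "participant_resource"),
    ("Is there a recording of this?", "participant_question"),
]

counts = Counter(category for _, category in coded_messages)
for category, n in counts.most_common():
    print(f"{category:22s} {'#' * n} ({n})")
```

In the actual report these counts would sit beside the narrative analysis, code definitions, and example data chunks, as the note above recommends.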
Ethics is a foundational part of all human-subject research. So I touched on some ethical issues earlier but I wanted to revisit again briefly.
Informed consent is obviously necessary in all of your evaluation work. As you conceptualize your qualitative evaluation plan, which might include methods like action research or things like storytelling, it’s important to be very up front with all your participants on the type of work being done. From a practical standpoint, it may also help ensure you get the type of data you need for the chosen methodology. For example, if you’re doing a focus group and you’re hoping to utilize storytelling as a part of your analysis, it’s important your participants know that up front in part because you will be sharing larger chunks of their text, but it also lets them know that you are hoping to host a storytelling-friendly focus group.
We talked also about anonymity and confidentiality, but if you’re working in a group to complete analysis, another level of protection you can offer your participants is that only one evaluator has access to the participant key.
When you’re working with qualitative data, your participants might reveal quite a bit of information, some of which you may not have asked for. You need to deal with this delicately. Use only data that is pertinent to your evaluation aims. And of course, if participants reveal information that could pose danger for themselves or for others, you need to handle this appropriately as well.
I also want to address credibility in qualitative analysis, which is a nested process.
As I mentioned before, you’ll be keeping an audit trail throughout. This provides overall transparency: everything you’ve done, when, how, methodological choices you’ve made, tasks assigned, etc., all establish a sense of credibility, a sense that you have done systematic, step-wise work in your analysis.
Your research journal also supports credibility. Even though it becomes a part of your data, it is still a documentary record of coding iterations, analytical memos, etc. Any time you forget why you made a choice, you should be able to go back to your research journal and get an answer.
Another aspect of credibility is triangulation. Often triangulation is addressed when there is a group working on analysis together, because consensus has to be reached. However, if you’re working alone, you have to triangulate in other ways. You can triangulate by utilizing multiple methods, like a follow-up survey or participant observation along with interviews.
If you’re working in a group, it’s essential to have open, free-flowing discussion as often as possible during analysis. Open codes need to come from and be agreed upon by different analysts; code definitions need to be agreed upon prior to coding if you are doing deductive work; the same processes should be in place when identifying categories. And finally, writing should be a group effort.
Another step in credibility in qualitative work is to include yourself in your work, that is, to be reflexive. It’s very common in social sciences to include a short autobiographical section that is a reflexive statement: who you are, what your experience is, and how that experience informs your work.
And now I want to talk briefly about representing your qualitative work.
We know the value of qualitative data and the richness it can bring to reports. So make sure you’re using it effectively. Your final report is going to be the big one that really includes all aspects of your analysis, but you will likely have plenty of reports along the way to highlight your qualitative program evaluations. So use it well. Tell stories about the data; mix inductive data chunks with deductive data visualizations, or any other facts and figures.
Highlight analyzed data chunks on your Web site or in newsletters, provided doing so is allowable under the signed consent form (and/or get permission again, especially if leveraging the data for storytelling).
And finally, use your analyzed data in monthly and annual reports. You've done all this work, and while it will hopefully result in an excellent qualitative report of its own, you can still use that data in the other reports you do.
We’ve been talking about some qualitative strategies to use in your social media evaluation. We’re applying basic concepts from qualitative methodologies and leveraging them in the context of social media. I do want to be very clear: if you are doing a qualitative research study, or are conducting interviews, focus groups, participant observations, etc., as part of your evaluations, there are definitely layers of complexity and depth that will need to be added to your methods. I am NOT suggesting with this webinar that qualitative work across the board is simple, quick, and easy, or that it can or should be done lightly. However, I am of the position that context matters. And basic qualitative evaluation methods for social media can be simpler and quicker than many might assume. So my goal today has been to get you thinking about adopting some of these strategies. But before I wrap up, I want to return to a few larger issues I mentioned at the start. These larger issues are a part of all qualitative work, and they still need to be considered with the strategies I’ve discussed today.
A few things to keep in mind:
--Context: I’ll be talking about a few qualitative methodologies, but I’m presenting them in a very basic way specifically for the social media context. If you are looking to do qualitative evaluations utilizing some of the more traditional methods, such as interviews, focus groups, open-ended surveys, participant observation, these methods need to become more involved. And I’ll address that toward the end of my presentation.
--Rapid feedback: I’ll be talking about strategies that are very basic not only because I’m operating under the assumption that you’ll be using them on small chunks of social media data (e.g., text comments from a Facebook post), but also in such a way that you can get quick insights on your social media strategy and social media impact. We all know that qualitative work can often be very time consuming. And while the strategies I’ll talk about will take a little longer than running Facebook Insights or exporting a report from Sprout Social, they will not represent a huge time investment. So I hope you give them a try.
--Caveat: While I’m talking about some quick and dirty strategies here, I recognize that "quick and dirty" is a loaded phrase. So while I’m giving you some tips here, I don’t mean for the tips to undermine qualitative research as a complex, transformative field of inquiry.
--Strategy: Don’t forget about the importance of a social media strategy. Using qualitative analysis in social media presumes that you have an established presence in a particular place. If you don’t have one, or need to develop one, you might not quite be ready for qualitative social media evaluation. Since we’re trying to capture experience, impact, reaction, and the like, you need to first create an environment on social media where your target audiences are coming to discuss, respond, and engage. MFLN has been up and running for 4 years now, and we’ve used that time to develop a presence on social media. But we are still working on making our sites interactive. So we’re changing our social media strategy to begin really focusing on that interaction piece and to embark on some more qualitative evaluation work within our social media sites.