Ethnographic research is a qualitative method where researchers observe and interact with study participants in their natural environment. The goal is to describe, analyze, and interpret the culture of a group over time in terms of their shared beliefs, behaviors, and language. Key characteristics include observing subjects in natural settings rather than labs, close interaction between researchers and participants, collecting unstructured data through methods like interviews and observations to understand perspectives from the group's point of view, and analyzing data within the socio-political and historical context of the culture. The process involves identifying a research problem, determining a location, collecting and analyzing data, and presenting findings.
This document provides an overview of case study research methods. It defines case study research as an in-depth exploration of a bounded system or case over time through detailed data collection from multiple sources. The document outlines the purposes, characteristics, types, advantages, and criticisms of case study research. It also discusses data collection techniques and implications for teaching.
This document discusses research ethics from an Islamic perspective. It begins by defining ethics and exploring ethics in Islam's history. It then discusses ethics in different aspects of research, including objectives of research ethics, ethics that should be followed at different research stages, and ethical issues like informed consent, privacy, and deception. The document also examines sources of tension in research ethics between principles like beneficence and human dignity. It outlines researchers' responsibilities to participants and the research community, such as protecting safety, reputation and enabling further research. The conclusion emphasizes the importance of awareness and understanding of ethical issues in research.
Distinguish between Parametric vs Nonparametric Tests (Sai Prakash)
This document summarizes parametric and nonparametric tests. Parametric tests assume the data come from a population whose distribution is described by known parameters, while nonparametric tests make no such distributional assumptions. Some examples of parametric tests provided are the t-test, F-test, z-test, and ANOVA, while examples of nonparametric tests include the Mann-Whitney U test, rank sum test, and Kruskal-Wallis test. The key differences are that parametric tests are based on population parameters and distributions while nonparametric tests are not, and that parametric tests can be applied only to variable (measurement) data while nonparametric tests can be used for variable or attribute data.
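To make the contrast concrete, here is a minimal pure-Python sketch of the Mann-Whitney U statistic, one of the nonparametric tests named above. The sample data are invented for illustration; the point is that the statistic depends only on pairwise order, not on any distributional assumption.

```python
def mann_whitney_u(a, b):
    """U statistic for sample a: the number of (a_i, b_j) pairs
    with a_i > b_j, counting ties as 0.5 each."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical measurements from two groups; ordinal data works too,
# since only comparisons (not magnitudes) enter the statistic.
group_a = [12, 15, 18, 21]
group_b = [10, 11, 14, 16]
u_a = mann_whitney_u(group_a, group_b)
u_b = mann_whitney_u(group_b, group_a)
print(u_a, u_b)  # the two statistics always sum to len(a) * len(b)
```

In practice the resulting U value would be compared against tabulated critical values (or a normal approximation for larger samples) to decide significance.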
This document discusses various methods of data collection. It begins by defining key terms like data, types of data, and sources of data. The main methods of data collection discussed are observation, questionnaires, and interviews. Observation can be structured, unstructured, or participatory. Questionnaires contain open-ended, closed-ended, rating, and ranking questions. Interviews are conducted either in-person or over the phone. The document outlines advantages and disadvantages of each method.
This document discusses primary and secondary data. It defines primary data as data collected directly by the researcher through methods like observation, interviews, questionnaires, and surveys. Secondary data is data that has already been published through sources like books, journals, websites, and government records. The document outlines the merits and limitations of both primary and secondary data. It emphasizes the importance of evaluating secondary data for availability, relevance, accuracy, and sufficiency before using it in research.
The document defines a hypothesis as a conjectural statement or tentative explanation about the relationship between two or more variables that can be tested. Several authors contribute definitions stating that a hypothesis makes a specific, testable prediction and must be falsifiable. Key aspects of a hypothesis include identifying variables, having explanatory power, and being testable, quantifiable, and generalizable. The document also distinguishes between statistical hypotheses about population parameters, null hypotheses being tested, and critical regions for rejecting null hypotheses based on sample data.
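As a worked illustration of testing a null hypothesis against a critical region (all numbers below are hypothetical, not taken from the document), a two-tailed z-test rejects the null hypothesis when the standardized statistic falls beyond the critical value:

```python
import math

def z_statistic(sample_mean, mu0, sigma, n):
    """Standardized distance of the sample mean from the null value mu0,
    given known population standard deviation sigma and sample size n."""
    return (sample_mean - mu0) / (sigma / math.sqrt(n))

# Hypothetical sample: mean 52 from n = 64 observations,
# testing H0: mu = 50 with sigma = 8.
z = z_statistic(52.0, 50.0, 8.0, 64)
critical = 1.96  # two-tailed critical value at alpha = 0.05
print(z)                  # 2.0
print(abs(z) > critical)  # True -> z lies in the critical region; reject H0
```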
Here are the key points about informed consent:
- It is a process, not just a form. Researchers must ensure participants understand what participation involves through clear verbal and written explanations.
- Consent forms should be written in plain, easy-to-understand language appropriate for the population.
- Participants must be able to refuse or withdraw from the study without penalty.
- Risks and limitations of confidentiality should be clearly explained.
- Participants should have the opportunity to ask questions to fully comprehend what they are consenting to.
- Informed consent is an ongoing process, not a single event, with the option for participants to withdraw later.
The goal is to respect participants' autonomy by ensuring that their agreement to take part is informed, voluntary, and revocable at any time.
This document discusses various statistical techniques used for inferential statistics, including parametric and non-parametric techniques. Parametric techniques make assumptions about the population and can determine relationships, while non-parametric techniques make few assumptions and are useful for nominal and ordinal data. Commonly used parametric tests are t-tests, ANOVA, MANOVA, and correlation analysis. Non-parametric tests mentioned include Chi-square, Wilcoxon, and Friedman tests. Examples are provided to illustrate the appropriate uses of each technique.
The chi-square test is used to determine if an observed distribution of data differs from the theoretical distribution. It compares observed frequencies to expected frequencies based on a hypothesis. The chi-square value is calculated by summing the squared differences between observed and expected frequencies divided by the expected frequency. The chi-square value is then compared to a critical value from the chi-square distribution table based on the degrees of freedom. If the chi-square value is greater than the critical value, the null hypothesis that the distributions are the same can be rejected.
Statistical tests can be used to analyze data in two main ways: descriptive statistics provide an overview of data attributes, while inferential statistics assess how well data support hypotheses and generalizability. There are different types of tests for comparing means and distributions between groups, determining if differences or relationships exist in parametric or non-parametric data. The appropriate test depends on the question being asked, number of groups, and properties of the data.
This document discusses non-probability sampling techniques. It defines key terms like population, sample, probability sampling, and non-probability sampling. It then describes several common non-probability sampling methods: convenience sampling, which uses readily available participants; snowball sampling, which uses referrals from initial participants to recruit more; purposive sampling, which selects participants based on predefined criteria; and quota sampling, which sets quotas for participant demographics to be filled. The document notes advantages and disadvantages of these non-probability methods.
TSLB3143 Topic 1e Ethnography Research (Yee Bee Choo)
Ethnography is a qualitative research method that involves observing and describing a culture-sharing group. The researcher spends extensive time with the group in their natural setting to understand their shared behaviors, beliefs, languages, and other cultural elements. There are different types of ethnography, including realist ethnography which provides an objective account of the group, and critical ethnography which aims to advocate for marginalized groups and address inequities. Key aspects of ethnography include identifying a cultural theme, studying a culture-sharing group over time, and analyzing their shared patterns through fieldwork using techniques like interviews and document collection.
This document provides an introduction to statistical theory. It discusses why statistics are studied and defines key statistical concepts such as populations, samples, parameters, statistics, descriptive statistics, inferential statistics, and the different types of data and variables. It also covers experimental design, methods for collecting data such as surveys and sampling, and different sampling methods like random, stratified, cluster, and systematic sampling.
Dr. Neha Deo discusses several key ethical considerations for educational research. Ethics and laws both address proper conduct. Researchers must respect participants, ensuring voluntary and informed consent. They should follow institutional review board procedures and ethical standards from professional associations. Ethics should be considered throughout the entire research process, from design to dissemination of results. When collecting and reporting data, researchers must respect participants and sites, obtain necessary permissions, and report findings honestly without altering results.
This document discusses the history and importance of research ethics. It describes several infamous human experiments that violated ethics standards, including the Tuskegee Syphilis Study and Nazi experiments. Milestones in developing ethics guidelines are reviewed, such as the Nuremberg Code, Belmont Report, Declaration of Helsinki, and ICMR guidelines in India. The document emphasizes enforcing standards to protect human subjects and gain informed consent in research.
The document discusses validity and reliability in research. It defines reliability as the consistency of scores from one administration of an instrument to another, and validity as the appropriateness of inferences made from research findings. The document outlines different types of validity evidence including content, criterion, and construct validity. It also discusses threats to internal validity such as subject characteristics, loss of subjects, and location that could influence research outcomes. Methods for achieving validity and reliability are presented, including minimizing threats in experimental research designs.
This presentation outlines the key aspects of case study research design. It discusses the importance of context, the use of theory for building or testing explanations, and research methodology. Primary and secondary data sources are examined, including interviews, documents, observations and questionnaires. The composition of case studies is explained as focusing on research questions, theoretical propositions, units of analysis, logic and evaluation. Multiple methods and longitudinal data collection can be used. Case studies are useful for exploratory, descriptive and explanatory research aimed at understanding how and why phenomena occur.
Study Designs - Case control design by Dr Amita Kashyap
This document discusses various study designs used in medical research, including epidemiological and experimental studies. It provides details on descriptive, observational, and analytical study designs such as ecological, cross-sectional, case-control, and cohort studies. Case-control studies are described in more depth, including how they are analyzed and their advantages and disadvantages. Case-control studies allow investigation of multiple risk factors for diseases and provide evidence for causal relationships, though they do not prove causality. They are efficient for studying rare diseases.
This document discusses non-parametric tests, which are statistical tests that make fewer assumptions about the population distribution compared to parametric tests. Some key points:
1) Non-parametric tests like the chi-square test, sign test, Wilcoxon signed-rank test, Mann-Whitney U-test, and Kruskal-Wallis test are used when the population is not normally distributed or sample sizes are small.
2) They are applied in situations where data is on an ordinal scale rather than a continuous scale, the population is not well defined, or the distribution is unknown.
3) Advantages are that they are easier to compute and make fewer assumptions than parametric tests.
This document discusses using one-way ANOVA in SPSS to compare mean salaries among employee age groups. It finds a significant difference in monthly salaries between the three age groups. Post hoc tests show that all three group means are significantly different from one another. Two other examples are presented: the first finds no significant difference in the importance of growth and development between age groups, while the second does find a significant difference in the importance of a safe work environment between the youngest and oldest age groups specifically.
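Outside SPSS, the F statistic behind a one-way ANOVA can be computed directly. This pure-Python sketch uses made-up group data rather than the salary data summarized above:

```python
def one_way_anova_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [sum(g) / len(g) for g in groups]
    # Variation of group means around the grand mean, weighted by group size.
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    # Variation of observations around their own group mean.
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Three hypothetical groups; the third has a clearly different mean.
f = one_way_anova_f([[1, 2, 3], [2, 3, 4], [8, 9, 10]])
print(f)  # 43.0 -> compare against the F distribution with (2, 6) df
```

A large F relative to the critical F value for the given degrees of freedom indicates at least one group mean differs, after which post hoc tests (as in the SPSS example above) identify which pairs differ.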
This document discusses experimental and quasi-experimental designs. It outlines the key components of classical experimental designs, including independent and dependent variables, experimental and control groups, pretesting and posttesting. It also discusses threats to internal and external validity and variations like quasi-experimental designs that use nonequivalent groups or time series when randomization is not possible. Quasi-experiments aim to make groups as comparable as possible through matching or using natural cohorts.
The document discusses various sampling techniques used in research including probability and non-probability sampling. It explains key concepts like population, sample, sampling frame, sampling error, systematic error. It describes different probability sampling designs such as simple random sampling, stratified sampling, cluster sampling and multistage sampling. It also discusses non-probability sampling techniques like convenience sampling and quota sampling. The document provides advantages and limitations of different sampling methods and guidelines for selecting an appropriate sampling design.
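As an illustration of one of the probability designs mentioned, stratified sampling with proportional allocation can be sketched as follows; the strata, sampling fraction, and seed are arbitrary choices for the example:

```python
import random

def stratified_sample(population, stratum_of, fraction, seed=42):
    """Draw the same fraction from every stratum (proportional allocation)."""
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(stratum_of(unit), []).append(unit)
    sample = []
    for members in strata.values():
        k = max(1, round(fraction * len(members)))  # at least one per stratum
        sample.extend(rng.sample(members, k))
    return sample

# 30 hypothetical units in two strata ("A" and "B"), sampled at 20%.
population = [("A", i) for i in range(10)] + [("B", i) for i in range(20)]
sample = stratified_sample(population, stratum_of=lambda u: u[0], fraction=0.2)
print(len(sample))  # 2 from stratum A + 4 from stratum B = 6
```

Proportional allocation keeps each stratum represented in the sample in the same ratio as in the population, which is the usual motivation for stratifying in the first place.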
Field research involves direct observation and asking questions to obtain qualitative and quantitative data in a natural setting. It is appropriate for topics best understood in their natural context. There are various roles for observers, from complete participant to complete observer. Field interviews are less structured than surveys. Access to subjects may involve finding sponsors or informants. Purposive sampling is common due to the challenges of probability sampling. Observations are recorded through notes, recordings, and photos. Field research can be combined with other data sources like surveys. Examples include studying speeding through photos and radar, police traffic stops through ride-along interviews, and violence in bars through participant observation. Strengths are depth of understanding and flexibility, while weaknesses include lower reliability and generalizability.
Field research involves direct observation and asking questions to obtain qualitative and quantitative data in a natural setting. It allows researchers to get a comprehensive perspective on phenomena like drug dealers or bar behavior. Researchers can take on different levels of participation from complete participant to complete observer. Sampling is usually purposive rather than probability-based. Data collection methods include note taking, recordings, photographs, and structured observations. Field research provides great depth of understanding but has lower reliability and generalizability than other methods due to its personal nature.
This document outlines different methods for collecting primary data, including observation, interviews, and questionnaires. It discusses the types, advantages, and disadvantages of each method. Observation can be participative or non-participative. Interviews can be structured or unstructured. Questionnaires can be administered in different ways such as mailing, collective administration, or in public places. Both interviews and questionnaires have advantages like flexibility but also disadvantages such as potential bias or low response rates.
This document discusses field observation as a method of data collection in criminal justice research. It involves directly observing phenomena in their natural settings to obtain qualitative and/or quantitative data. Key points covered include defining field observation, its use for understanding settings, behavior and events, different roles for observers, purposive sampling techniques used, methods for recording observations, linking observations to other data sources, examples of shoplifting and seatbelt use studies, and strengths and weaknesses of the method.
This document discusses various information gathering tools for system analysis including review of literature, on-site observation, interviews, and questionnaires. It provides details on each tool such as reviewing procedures manuals and forms to understand current processes, observing users on-site to understand real systems, conducting interviews to understand perceptions and feelings, and distributing questionnaires to gather information from many people simultaneously. The key is to use these tools accurately and methodically to acquire information with minimal disruption to users.
1. Qualitative interviews involve interactions between an interviewer and respondent to explore topics in an unstructured or semi-structured format. This allows researchers to understand human perspectives and lived experiences.
2. Qualitative interviews are used in criminal justice research to understand subjects' perspectives and gather first-hand accounts. They can also explore how people feel about their roles and identities.
3. There are different types of interview structures from fully structured to unstructured, with semi-structured in between, allowing some flexibility to explore emerging themes. The structure influences how in-depth the interviews can be.
This document discusses qualitative research techniques used in marketing research. It covers observation methods, focus groups, and other techniques like in-depth interviews and projective techniques. Specifically, it defines focus groups as small groups guided by a moderator through discussion to gain relevant information. It also describes how focus groups and online focus groups work, their advantages and disadvantages, and when they should and should not be used.
Data Analysis for Marketing - Observation techniquesJodie Caston
Â
This document discusses various observation techniques used in research including structured vs unstructured, disguised vs undisguised, natural vs contrived, personal observation, electronic observation, audit analysis, content analysis, and trace analysis. It provides details on how each technique is implemented and compares their degrees of structure, disguise, natural setting, observation bias, and analysis bias. Advantages of observation techniques include measuring actual behavior rather than intended behavior. Disadvantages include difficulties establishing underlying motives and potential ethical issues with techniques like hidden cameras.
What is qualitative research? Discuss the methods of qualitative research.pdfMd. Sajjat Hossain
Â
Qualitative research involves collecting and analyzing non-numerical data to understand meanings, experiences, and perspectives of research subjects. It uses methods like field observations, focus groups, and case studies. Field observations involve observing research subjects in natural settings either covertly or overtly. Focus groups involve interviewing 6-12 people in a group setting led by a moderator. Qualitative research provides descriptive insights but results are not generalizable.
This document discusses various methods of collecting primary data through observation for research purposes. It outlines two main types of observation: naturalistic observation where the researcher passively observes subjects in natural settings without influencing them, and laboratory observation where settings are controlled. It also describes participant observation where the researcher joins the group being studied and non-participant observation where the group is unaware of observation. Both qualitative and quantitative approaches to observation are covered.
This document discusses qualitative research techniques used in marketing research, including observation, focus groups, interviews, and other methods. Specifically, it covers how to conduct observations and focus groups, the advantages and disadvantages of each, and when each method is best applied. It also discusses other qualitative techniques like in-depth interviews, projective techniques, and physiological measurements. The overall purpose is to introduce various tools that researchers can use to understand consumers by analyzing what people say and do rather than just collecting numeric data.
Survey research is a commonly used method in sociology, political science, and criminal justice research. Surveys can ask people about their victimization experiences, criminal behavior, attitudes and perceptions. Different survey methods include in-person interviews, mail/online surveys, and telephone surveys. Care must be taken in designing survey questions to avoid biases and get accurate responses. While surveys can efficiently gather information from large populations, they only provide superficial coverage of complex topics.
This document discusses methods for collecting qualitative data, including observations, interviews, documents, and audiovisual materials. It describes the process of conducting observations at a research site, including selecting a site, easing into the site, determining what to observe and for how long, and recording descriptive and reflective field notes. The document also discusses interviews, noting the advantages of permitting detailed descriptions but the disadvantages of responses being filtered or deceptive. It outlines types of interviews and conducting them ethically. Finally, it addresses collecting and analyzing documents located at research sites.
CH-4 Constructing an Instrument for Data Collection.pptxjemalmohamed4
Â
This chapter discusses ethical considerations and methods for collecting data. It covers issues related to participants, researchers, and sponsoring organizations. The two major approaches to gathering information are through primary and secondary sources. Primary data is collected directly for the research purpose while secondary data comes from existing sources. Common primary collection methods include observation, interviews, and questionnaires. Observation can be participant or non-participant. Interviews are structured or unstructured. Questionnaires are administered via mail, in groups, or in public places. Secondary sources include government publications, organizations, earlier research, and media.
This document discusses various data collection methods used in research, including observation, interviews, focus groups, and projective techniques. It provides details on each method, such as how to conduct interviews and focus groups, the advantages and disadvantages of each approach, and examples of projective techniques like word association tests. The document aims to explain the appropriate use of both qualitative and quantitative data collection methods in social science research.
This document discusses various data collection methods used in research, including observation, interviews, focus groups, and projective techniques. It provides details on each method, such as how to conduct interviews and focus groups, and the advantages and disadvantages of each approach. The document also differentiates between primary and secondary data and discusses validity and reliability in data collection.
This document introduces different types of research techniques used in media, including audience research, market research, and production research. It distinguishes between primary research methods like observations, interviews, questionnaires; and secondary research methods like referring to published materials, online sources, and audio/visual formats. Reliability and validity are important factors to consider when choosing research methods to ensure the information can be trusted and is relevant to the investigation. Proper record keeping and storage of research materials is also emphasized.
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
Â
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Â
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Â
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
Â
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
Â
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Â
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
leewayhertz.com-AI in predictive maintenance Use cases technologies benefits ...alexjohnson7307
Â
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Â
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Â
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Â
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
Â
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin...Tatiana Kojar
Â
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Â
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Â
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
Â
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
2. OUTLINE
• Introduction
• Topics Appropriate to Field Research
• The Various Roles of the Observer
• Asking Questions
• Gaining Access to Subjects
• Recording Observations
• Linking Field Observation and Other Data
• Illustrations of Field Research
• Strengths and Weaknesses of Field Research
3. Introduction
• Field research encompasses two different methods of obtaining data: direct observation and asking questions
• May yield qualitative and quantitative data
• Often no precisely defined hypotheses to be tested
• Used to make sense out of an ongoing process
4. Topics Appropriate to Field Research
• Gives a comprehensive perspective, which enhances validity
• Go directly to the phenomenon and observe it as completely as possible
• Especially appropriate for topics best understood in their natural setting, e.g., observing street-level drug dealers to see how they distinguish customers
• Ethnography: focuses on detailed and accurate description rather than explanation
5. The Various Roles of the Observer
• Complete participant: participates fully; true identity and purpose are not known to subjects
• Participant-as-observer: make known your position as researcher and participate with the group
• Observer-as-participant: make known your position as a researcher; do not actually participate
• Complete observer: observes without becoming a participant
6. Asking Questions
• Qualitative interview: based on a set of topics to be discussed in depth rather than on standardized questions
• Field research is often a matter of going where the action is and simply watching and listening
• Also a matter of asking questions and recording answers
• Field research interviews are much less structured than survey interviews
• Ideally set up and conducted just like a normal, casual conversation
7. Gaining Access to Subjects
• Begins with initial contact: sponsor, letter, phone call, meeting
• Access to formal organizations: find a sponsor, write a letter to the executive director, arrange a phone call, arrange a meeting
• Access to subcultures: find an informant (usually a person who works with criminals) and use that person as your “in”
• Snowball sampling is useful: the informant identifies others, who identify others, and so on
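The snowball procedure can be sketched in code. This is a minimal illustration, not part of the original deck; the referral network, the names, and the `max_size` cap are all hypothetical:

```python
from collections import deque

def snowball_sample(referrals, seed_informants, max_size=50):
    """Breadth-first snowball sample: start from seed informants and
    follow each subject's referrals until max_size subjects are reached.
    `referrals` maps each subject to the people they identify."""
    sampled = []
    queue = deque(seed_informants)
    seen = set(seed_informants)
    while queue and len(sampled) < max_size:
        person = queue.popleft()
        sampled.append(person)
        for contact in referrals.get(person, []):
            if contact not in seen:
                seen.add(contact)
                queue.append(contact)
    return sampled

# Hypothetical referral network: informant "A" identifies B and C, and so on.
network = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"]}
print(snowball_sample(network, ["A"]))  # ['A', 'B', 'C', 'D', 'E']
```

Because each wave of referrals comes from earlier subjects, the resulting sample is purposive rather than random.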
8. Sampling in Field Research
• Controlled probability sampling is used rarely; purposive sampling is common
• Bear in mind two stages of sampling:
• To what extent are the situations available for observation representative of the general phenomena you wish to describe and explain?
• Are your actual observations within those total situations representative of all observations?
9. Recording Observations
• Note taking, tape recording when interviewing and when making observations (dictation)
• Videotaping or photographs can make records of “before” and “after” some physical design change
• Field notes: observations are recorded as written notes, often in a field journal; first take sketchy notes, then rewrite them in detail
• Structured observations: observers mark closed-ended forms, which produce numeric measures
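A structured-observation form can be mimicked with a small tally routine. This is an illustrative sketch only; the codes on the form are hypothetical, not from any actual study instrument:

```python
from collections import Counter

# Hypothetical closed-ended form: every observation interval gets exactly
# one mark from a fixed set of codes, yielding numeric measures directly.
FORM_CODES = {"no_incident", "verbal_dispute", "physical_fight"}

def tally_marks(marks):
    """Count form marks, rejecting anything not on the form."""
    counts = Counter()
    for mark in marks:
        if mark not in FORM_CODES:
            raise ValueError(f"not a code on the form: {mark!r}")
        counts[mark] += 1
    return counts

session = ["no_incident", "no_incident", "verbal_dispute"]
print(tally_marks(session))
```

Restricting marks to a fixed code set is what turns field observation into numeric measures that can be compared across observers and sites.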
10. Linking Field Observation and Other Data
• Useful to combine field research with surveys or data from official records
• Baltimore study of the effects of neighborhood physical characteristics on residents’ perceptions of crime problems (Taylor, Shumaker, & Gottfredson, 1985)
• Perceptions: surveys
• Physical problems: observations; actual population and crime information from census data and crime reports from police records
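Linking the sources amounts to joining records on a shared key. A minimal sketch, with entirely hypothetical neighborhood names and figures rather than the actual Baltimore data:

```python
# Hypothetical merge of three data sources keyed by neighborhood, mirroring
# the Baltimore design: survey perceptions, field observations, and official
# records. All names and values below are invented for illustration.
surveys = {"Northside": {"perceived_crime": 3.2}}
observations = {"Northside": {"physical_problems": 5}}
police_records = {"Northside": {"reported_crimes": 41}}

def link(*sources):
    """Combine records that share a key across every source."""
    keys = set(sources[0])
    for source in sources[1:]:
        keys &= set(source)
    return {k: {field: value for source in sources
                for field, value in source[k].items()}
            for k in keys}

print(link(surveys, observations, police_records))
```

Once the sources share a record per neighborhood, perceptions, observed conditions, and official counts can be analyzed together.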
11. Illustration: Observing Shoplifting
• Counted only when the offense is seen; takes place only in certain locations; a crime of stealth, not confrontation
• Prevalence defined as the ratio of shoplifters to shoppers
• Subjects selected by systematic sampling, e.g., every 20th shopper was followed by a field observer
• Other research staff were employed as shoplifters to measure the reliability of observers’ detections
• Prevalence rate could then be adjusted with the reliability figures
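The reliability adjustment can be written out explicitly. Assuming observers detect only a known fraction of the staged thefts, the observed prevalence is scaled up by that fraction; the counts and detection rate below are hypothetical, not figures from the study:

```python
def adjusted_prevalence(shoplifters_seen, shoppers_followed, detection_rate):
    """Adjust observed shoplifting prevalence for imperfect detection:
    if observers catch only `detection_rate` of staged thefts, the true
    prevalence is roughly the observed rate divided by that fraction."""
    if not 0 < detection_rate <= 1:
        raise ValueError("detection rate must be in (0, 1]")
    observed = shoplifters_seen / shoppers_followed
    return observed / detection_rate

# Hypothetical figures: 12 shoplifters seen among 600 followed shoppers,
# with observers detecting 80% of staged thefts.
print(adjusted_prevalence(12, 600, 0.8))
```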
12. Illustration: Seat Belt Use
• Rate of use: ratio of people observed wearing seat belts to cars observed
• Stationary observers at roadsides rather than mobile observers
• Placed at controlled intersections
• Sampled cars on three dimensions: time of day, roadway type, observation site; stratified sites by density of auto ownership (correlated with population)
• Emphasized marking “U” when uncertain
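The rate computation can be sketched as follows. Treating “U” (uncertain) marks as missing data and dropping them from the denominator is an assumption made for illustration, and the tally values are hypothetical:

```python
def belt_use_rate(marks):
    """Compute seat-belt use as (# marked wearing) / (# cars with a
    definite mark), treating 'U' (uncertain) marks as missing data."""
    definite = [mark for mark in marks if mark != "U"]
    if not definite:
        return None  # no usable observations
    return definite.count("Y") / len(definite)

# Hypothetical roadside tally: Y = belt worn, N = not worn, U = uncertain.
print(belt_use_rate(["Y", "Y", "N", "U", "Y"]))  # 0.75
```

The emphasis on marking “U” keeps ambiguous cases from biasing the rate in either direction.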
13. Illustration: Violence in Bars
• Alcohol has a disinhibiting effect which can lead to aggression and subsequent violence
• Researcher set out to learn how situational factors promote or inhibit violence in Australian bars/nightclubs
• Observers in pairs stayed 2–6 hours multiple times at 23 sites as “complete participants”; narratives were written up later
• Correlates: violence in bars frequented by working-class males; discomfort and boredom, drinking patterns, management issues (cover charges, food availability, bouncers)
14. Strengths and Weaknesses of Field Research
• Provides great depth of understanding
• Flexibility (no need to prepare much in advance)
• More appropriate than surveys for measuring behavior
• High validity; quantitative measures often give an incomplete picture by comparison
• Low reliability: observations are often very personal
• Generalizability: the personal nature may produce findings that cannot be replicated by another researcher
• Precise probability samples normally cannot be drawn