This presentation discusses various sampling methods that can be used in research in the social and natural sciences. It introduces key concepts in sampling such as the population, the sampling frame, and sample size determination. It covers probability sampling methods (simple random sampling, systematic sampling, stratified sampling, cluster sampling) and non-probability sampling methods (convenience sampling, purposive sampling and quota sampling). Examples of how these methods are applied in biological and sociological data collection are provided.
Concepts covered: the concept of sample and sampling, the sampling process and its problems, types of samples (probability and non-probability sampling), determination of sample size, and sampling and non-sampling errors.
Methods of Data Collection in Quantitative Research (Biostatistik), by Kak Long
DEFINITION: Quantitative research is defined as the systematic investigation of phenomena by gathering quantifiable data and performing statistical, mathematical or computational techniques.
Quantitative research gathers information from existing and potential customers using sampling methods and by sending out online surveys, online polls, questionnaires etc., the results of which can be depicted in the form of numbers.
These numbers can then be analyzed to predict the future of a product or service and to make changes accordingly.
Data collection is described as the process of gathering and measuring information on variables of interest, in an established systematic fashion that enables one to answer research questions, test hypotheses and evaluate outcomes.
Importance of data collection:
Helps us search for answers and resolutions
Facilitates and improves decision-making processes and the quality of the decisions made.
Types of quantitative research:
1. Survey research
The collection of data obtained by asking individuals questions either in person, on paper, by phone or online.
2. Correlational research
Measures two variables and assesses the statistical relationship between them, with no influence from any extraneous variable.
3. Causal-comparative research
Finds relationships between independent and dependent variables after an action or event has already occurred.
4. Experimental research
The researcher manipulates one variable and controls/randomizes the rest of the variables.
Data collection - Statistical data are a numerical statement of aggregates. Data, generally, are obtained through properly organized statistical inquiries conducted by the investigators. Data can either be from primary or secondary sources.
This presentation was prepared by our group for our research methods class. It will be useful for PhD and master's students using quantitative and qualitative methods. It covers the definition of a sample, the purpose of sampling, stages in the selection of a sample, types of sampling in quantitative research, types of sampling in qualitative research, and ethical considerations in data collection.
Sampling is concerned with the selection of a subset of individuals from within a statistical population to estimate characteristics of the whole population. In statistics, quality assurance, and survey methodology, statisticians attempt to choose samples that represent the population in question; the different methods of sampling are explained with diagrams.
2. Whether it is in Natural Science or Social Science, most students will have to do a project or assignment using some kind of research.
In the research process, sampling and data collection is one of the vital components.
This presentation will provide an introduction to various sampling methods that one could adopt in research in Social as well as Natural Sciences.
3. Process
The sampling process comprises several stages:
Defining the population of concern
Specifying a sampling frame, a set of items or events possible to measure
Specifying a sampling method for selecting items or events from the frame
Determining the sample size
Implementing the sampling plan
Sampling and data collecting
Reviewing the sampling process
4. Population
All subjects (items/people) having the characteristic the researcher wishes to understand.
As time and resources are too limited to get information from all of them, it is required to identify a subset or a representative sample of that population.
5. Sampling Frame
The set of units from which the sample is drawn: one which we believe contains the elements/properties we are looking for and is representative of the population.
6. Sampling
A sample is a smaller but representative collection of units from a population, used to determine truths about that population.
Why sample?
As time, resources and work are limited, we need to work on something manageable but representative.
7. Methods of data collection
i. Measurements
ii. Observations (non-interviews)
iii. Personal interviews
iv. Type: structured or unstructured
v. Approach: direct or indirect
vi. Telephone interviews
vii. Mailing questionnaires
8. Types of Sampling
Quantitative sampling
Sampling of biological material
Plots, transects, quadrats etc.
Qualitative sampling
Surveys, questionnaires, discussions, observations etc.
11. Some methods used in sociological data collection
Surveys
Key informant interviews
12. Preparation of a questionnaire with different categories
i. Quantity or information
In which year did you receive the membership of the Knuckles Environment Society?
ii. Category
Have you ever been, or are you now, involved in conservation activities for nature?
1. Yes (currently) 2. Yes (in the past) 3. Never
iii. List or multiple choices
Do you think the time spent on nature protection programs is any of the following?
1. A must 2. A necessity 3. A right 4. An investment 5. A waste of time 6. None of these
13. iv. Scale
How would you describe your parents' attitude to nature protection programs?
1. Very positive 2. Positive 3. Mixed/neutral 4. Negative 5. Very negative 6. Not sure
v. Ranking
What do you see as the main purpose of your nature protection activities? Please rank all those relevant in order from 1.
Personal development / career development / subject interest / recreation / fulfil ambition / keeping stimulated / other
14. vi. Complex grid/table
How would you rank the benefits of your study for each of the following? Please rank each item.

                 Very positive | Positive | Neutral | Negative | Very negative | Not sure
For you
Your family
Your employer
Country
Community
15. vii. Open ended
We would like to hear from you if you have any further comments.
16. Ethical issues in data collection
Ethical issues concerning the participants:
i. Collecting information (time wasting)
ii. Seeking consent
iii. Providing incentives
iv. Seeking sensitive information
v. Possibility of causing harm to the participants
vi. Maintaining confidentiality
17. Ethical issues in data collection
Ethical issues relating to the researcher:
i. Avoiding bias
ii. Provision or deprivation of a treatment
iii. Using inappropriate research methodology
iv. Incorrect reporting
v. Inappropriate use of information
18. What is your population of interest?
To whom do you want to generalize your results?
All doctors
School children
All Canadians
All women aged 15-45 years
Other
21. Types of Sampling
Probability Sampling
Every unit in the population has a chance of being selected in the sample.
All sample units are given the same weight.
Also known as equal-probability selection.
Non-Probability Sampling
Some elements of the population have no chance of selection; hence it is non-random sampling.
Sampling is done based on predetermined criteria.
23. Example
We visit every household in a given street and interview the first person to answer the door. In any household with more than one occupant, this is a nonprobability sample, because some people are more likely to answer the door (e.g. an unemployed person who spends most of their time at home is more likely to answer than an employed housemate who might be at work when the interviewer calls), and it is not practical to calculate these probabilities.
24. Types of Samples
Probability (Random) Samples
Simple random sample
Systematic random sample
Stratified random sample
Multistage sample
Multiphase sample
Cluster sample
Non-Probability Samples
Convenience sample
Purposive sample
Quota sample
25. SIMPLE RANDOM SAMPLING
• Applicable when the population is small, homogeneous and readily available.
• All subsets of the frame are given an equal probability; each element of the frame thus has an equal probability of selection.
• It provides for the greatest number of possible samples. This is done by assigning a number to each unit in the sampling frame.
• A table of random numbers or a lottery system is used to determine which units are to be selected.
• Estimates are easy to calculate.
26. SIMPLE RANDOM SAMPLING contd.
Disadvantages
If the sampling frame is large, this method is impracticable.
Minority subgroups of interest in the population may not be present in the sample in sufficient numbers for study.
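The selection step described above can be sketched in code. This is a minimal Python sketch, assuming a hypothetical frame of 100 numbered units; `random.sample` stands in for the random-number table or lottery system:

```python
import random

# Hypothetical sampling frame: assign a number to each of 100 units.
frame = list(range(1, 101))

# Draw 10 units without replacement; every size-10 subset of the frame
# is equally likely, so each unit has the same probability of selection.
sample = random.sample(frame, k=10)
print(sorted(sample))
```

In practice the frame would be a list of real unit identifiers (people, households, plots) rather than bare numbers.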
27. SYSTEMATIC SAMPLING
Systematic sampling relies on arranging the target population according to some ordering scheme and then selecting elements at regular intervals through that ordered list.
Systematic sampling involves a random start and then proceeds with the selection of every kth element from then onwards. In this case, k = (population size / sample size).
A simple example would be to select every 10th name from the telephone directory (an 'every 10th' sample, also referred to as 'sampling with a skip of 10').
29. SYSTEMATIC SAMPLING contd.
ADVANTAGES:
Sample easy to select.
A suitable sampling frame can be identified easily.
Sample evenly spread over the entire reference population.
DISADVANTAGES:
Sample may be biased if a hidden periodicity in the population coincides with that of the selection.
Difficult to assess the precision of the estimate from one survey.
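The random start followed by every kth element can be sketched as follows. This is a minimal Python sketch; the 100-name directory is hypothetical:

```python
import random

def systematic_sample(frame, n):
    """Pick a random start, then take every k-th element, with k = N // n."""
    k = len(frame) // n            # the sampling interval (the "skip")
    start = random.randrange(k)    # random start within the first interval
    return frame[start::k][:n]

# An 'every 10th' sample from a hypothetical directory of 100 names:
directory = [f"name_{i:03d}" for i in range(100)]
sample = systematic_sample(directory, 10)
print(sample)
```

Note that if the directory happened to be ordered with a period of 10 (the hidden-periodicity disadvantage above), every draw would land on the same kind of entry.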
30. Stratified Sampling
The sampling frame is organised into predetermined strata.
Sampling is done within each stratum as an independent sub-population.
Individual elements are randomly selected within each stratum.
As each stratum is treated as an independent population, different sampling approaches can be applied to different strata.
31. Advantages
Ensures proportionate representation in the sample; e.g. minority subgroups can be adequately represented this way.
Drawbacks
When there are many strata to be used, the sample size per group may be larger than with other methods.
The stratifying variable may be related to some variables of interest but not to others, which may lead to complications.
If an equal number of samples is taken from all the stratified groups, less representative ones could be oversampled if one is not careful.
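Sampling each stratum as an independent sub-population can be sketched as follows. This is a minimal Python sketch with hypothetical "urban"/"rural" strata and proportionate 10% allocation:

```python
import random

# Hypothetical frame, pre-organised into strata.
strata = {
    "urban": [f"u{i}" for i in range(60)],
    "rural": [f"r{i}" for i in range(40)],
}

# Proportionate allocation: sample each stratum independently,
# in proportion to its share of the population (10% overall here),
# so both subgroups are represented in the final sample.
fraction = 0.10
sample = {
    name: random.sample(units, k=round(len(units) * fraction))
    for name, units in strata.items()
}
print({name: len(chosen) for name, chosen in sample.items()})  # {'urban': 6, 'rural': 4}
```

Because each stratum is sampled independently, a different method (or a larger fraction for a small minority stratum) could be substituted per stratum, as the slide notes.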
33. CLUSTER SAMPLING
Cluster sampling is an example of 'two-stage sampling':
First stage: a sample of areas is chosen.
Second stage: a sample of respondents within those areas is selected.
The population is divided into clusters of homogeneous units, usually based on geographical contiguity.
Sampling units are groups rather than individuals.
A sample of such clusters is then selected.
All units from the selected clusters are studied.
Cuts down on the cost of travel and other administrative costs.
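The two stages above can be sketched as follows. This is a minimal Python sketch; the 12 "village" clusters of 20 households each are hypothetical:

```python
import random

# Population divided into geographically contiguous clusters (e.g. villages).
clusters = {f"village_{v}": [f"v{v}_hh{h}" for h in range(20)] for v in range(12)}

# Stage 1: randomly select a sample of clusters (areas).
chosen = random.sample(list(clusters), k=3)

# Stage 2: study ALL units in each selected cluster, which keeps
# fieldwork within a few areas and cuts travel and administrative costs.
sample = [unit for name in chosen for unit in clusters[name]]
print(len(sample))  # 60 households from the 3 selected villages
```

A full two-stage design would subsample respondents within each chosen cluster instead of taking all of them.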
34. Difference Between Strata and Clusters
Although strata and clusters are both non-overlapping subsets of the population, they differ in several ways.
All strata are represented in the sample, but only a subset of clusters is in the sample.
With stratified sampling, the best survey results occur when elements within strata are internally homogeneous. With cluster sampling, however, the best results occur when elements within clusters are internally heterogeneous.
35. Activity
In estimating immunization coverage in a province, data on seven children aged 12-23 months in each of 30 clusters are used to determine the proportion of fully immunized children in the province.
Give reasons why cluster sampling is used in this survey.
37. CONVENIENCE SAMPLING
Sometimes known as grab, opportunity, accidental or haphazard sampling.
A type of nonprobability sampling in which the sample is drawn from that part of the population which is close to hand; that is, readily available and convenient.
The researcher using such a sample cannot scientifically make generalizations about the total population from this sample because it would not be representative enough.
For example, if an interviewer were to conduct a survey at a shopping center early in the morning on a given day, the people he/she could interview would be limited to those present there at that given time, and they would not represent the views of other members of society in such an area as they would if the survey were conducted at different times of day and several times per week.
This type of sampling is most useful for pilot testing.
In social science research, snowball sampling is a similar technique, where existing study subjects are used to recruit more subjects into the sample.
39. QUOTA SAMPLING
The population is first segmented into mutually exclusive sub-groups, just as in stratified sampling.
Then judgment is used to select subjects or units from each segment based on a specified proportion. For example, an interviewer may be told to sample 200 females and 300 males between the ages of 45 and 60.
It is this second step which makes the technique one of non-probability sampling.
In quota sampling the selection of the sample is non-random. For example, interviewers might be tempted to interview those who look most helpful. The problem is that these samples may be biased because not everyone gets a chance of selection. This non-random element is its greatest weakness, and quota versus probability sampling has been a matter of controversy for many years.
40. Judgmental sampling or Purposive sampling
The researcher chooses the sample based on who they think would be appropriate for the study. This is used primarily when there is a limited number of people who have expertise in the area being researched.
41. PANEL SAMPLING (Time Series)
A method of first selecting a group of participants through a random sampling method and then asking that group for the same information again several times over a period of time.
Therefore, each participant is given the same survey or interview at two or more time points; each period of data collection is called a "wave".
This sampling methodology is often chosen for large-scale or nation-wide studies in order to gauge changes in the population with regard to any number of variables, from chronic illness to job stress to weekly food expenditures.
Panel sampling can also be used to inform researchers about within-person health changes due to age, or to help explain changes in continuous dependent variables such as spousal interaction.
There have been several proposed methods of analyzing panel sample data, including growth curves.
42. Selecting sample sizes
Selecting sample size is a function of
Study goals
Degree of precision required
Design type
Budget
Other (ethical etc.)
43. Selecting the sample size
A simple formula for this is as follows:
n = N / (1 + N × e²)
where
n = sample size
N = population size
e = the margin of error implied by the confidence level we would like to work with (e.g. for 95% confidence the error is 5%, so e = 0.05; for 99% it is 1%, so e = 0.01).
44. The larger the population variability, the larger the sample size needed to get an accurate reading.
If the population is mostly homogeneous, the sample size can be small.
45. Example:
It is required to identify the presence of a disease in a population. The population from which we need to get information numbers 2500, and we would like the confidence level to be 95%. Then the sample size would be
n = 2500 / (1 + 2500 × (0.05)²) ≈ 345
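The formula and the worked example can be sketched in code. This is a minimal Python sketch of n = N / (1 + N·e²); rounding the fractional result of 344.8 up gives 345 subjects:

```python
import math

def sample_size(population, margin_of_error):
    """n = N / (1 + N * e^2), rounded up to a whole number of subjects."""
    return math.ceil(population / (1 + population * margin_of_error ** 2))

# Worked example: N = 2500 at a 95% confidence level (e = 0.05).
print(sample_size(2500, 0.05))  # 345
```

Tightening the margin of error to e = 0.01 (99% confidence) pushes the required sample for the same population up to 2000, illustrating how quickly precision drives sample size.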
46. E.g. Investigating the level of biodiversity in a natural forest
Using either plots or transects, the sampling needs to be increased until the number of plant species recorded levels off and no new species appear.
47. Describe physical/biological and sociological experiments separately, taking some examples.
For example:
Biological experiments: one can show how to use plots/transects and give reasons for using them; this is for non-moving objects such as plants.
For moving objects: circular plots with time-series observations.
For social experiments: other methodologies can be used, such as interviews, observations, key informant surveys, focus group discussions etc.
Two general approaches to sampling are used in social science research. With probability sampling, all elements (e.g., persons, households) in the population have some opportunity of being included in the sample, and the mathematical probability that any one of them will be selected can be calculated. With nonprobability sampling, in contrast, population elements are selected on the basis of their availability (e.g., because they volunteered) or because of the researcher's personal judgment that they are representative. The consequence is that an unknown portion of the population is excluded (e.g., those who did not volunteer). One of the most common types of nonprobability sample is called a convenience sample – not because such samples are necessarily easy to recruit, but because the researcher uses whatever individuals are available rather than selecting from the entire population.
Because some members of the population have no chance of being sampled, the extent to which a convenience sample – regardless of its size – actually represents the entire population cannot be known.