Application Of Sampling Methods For The Research Design
Archives of Business Research – Vol. 8, No. 11
Publication Date: November 25, 2020
DOI: 10.14738/abr.811.9042.
Mweshi, G. K., & Sakyi, K. (2020). Application of sampling methods for the research design. Archives of Business Research, 8(11), 180–193.
Dr. Geoffrey Kapasa Mweshi
Dean- School of Social Sciences
ZCAS University – Zambia
Kwesi Sakyi
Head of Research
ZCAS University – Zambia
ABSTRACT
The objective of this paper is to discuss the application of the sampling
framework in research with a view to understanding what it is, and
examining the application of the concept to the analysis of sampling as
one procedure that makes research manageable. When investigators
choose a sample they select a relatively small but representative
number of cases from the population of interest or universe of
discourse for enumeration or observation. A sample chosen in an
unbiased or scientific way is likely to yield results which are closer to
the population parameters. The discussion in this paper addresses the
issues and decisions that are considered before determining the sampling
framework in research, so that the phenomenon being researched is clearly
identified, creating room for rigorous analysis. The discussion then
compares and contrasts the qualitative research approaches.
The paper further explored some philosophical underpinnings of
research in order to understand and appreciate some of the individual
organizational problems. The paper relied mainly on secondary
research by drawing insights from publications and books that had
contributed to the revelations about the nature and issues that may be
important in both qualitative and quantitative research. The
literature review was therefore mainly focused on the nature of
analysis of quantitative and qualitative data collected empirically
through cross-sectional and time-series studies using
various data collection instruments. The paper also examined in detail
data presentation methods and their implications for analysis.
Keywords: Research, Collaborative and participatory methods,
Inductive and deductive approaches, Epistemological, Ethnography,
Grounded theory, Qualitative analysis, Qualitative research, Methodologies,
Sample size, Sampling methods.
INTRODUCTION
Sampling is the process by which a researcher carefully selects through probabilistic and non-
probabilistic methods a number of individual items from a larger population of interest for closer
study. Sampling strategies should, whenever possible, identify inclusion and exclusion criteria to
set boundaries on what item is selected and what is not selected from a given population of study.
To be fair and unbiased, and to give each item in a given population an equal chance of being
selected, most researchers opt for the scientific method of random sampling, which for practical
purposes is sometimes idealistic and far-fetched. Next the population is identified as a group of elements about
which claims can be made after investigation through research. Once identified, the total elements
in the population of interest need to be ascertained so as to decide on the proportion of that
population to select as a representative sample of that population. (This is sometimes called the
sampling frame).
The sample size is carefully chosen as the number of individual cases for the study to yield
information that will represent the entire population. If the sample is well chosen, then the sample
statistics or characteristics will come closer to the entire population parameters or characteristics.
This is based on what is termed the Central Limit Theorem. We use sample statistics to describe all
the measures of central tendency and measures of dispersion (mean, mode, median; range,
variance, standard deviation, and inter-quartile range) pertaining to a sample while we use the
term population parameters when the mean, mode, median, variance, standard deviation, and the
inter-quartile range are derived from examining the entire population. If our sample is a good
representation of the actual population, then our sample statistics and population parameters
should tend to be the same with small margins of error.
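To make the distinction concrete, the following sketch contrasts population parameters with the statistics of a random sample drawn from it. The population here is simulated and purely illustrative, not data from this paper:

```python
import random
import statistics

# Hypothetical simulated population of 10,000 income values (illustrative only).
random.seed(42)
population = [random.gauss(mu=50_000, sigma=8_000) for _ in range(10_000)]

# Population parameters: computed from every element.
pop_mean = statistics.mean(population)
pop_sd = statistics.pstdev(population)

# Sample statistics: computed from an unbiased random sample.
sample = random.sample(population, k=400)
sample_mean = statistics.mean(sample)
sample_sd = statistics.stdev(sample)

# A well-chosen sample's statistics lie close to the population parameters.
print(f"mean: population {pop_mean:,.0f} vs sample {sample_mean:,.0f}")
print(f"SD:   population {pop_sd:,.0f} vs sample {sample_sd:,.0f}")
```

With a sample of 400, the standard error of the mean is roughly 8,000/√400 = 400, so the sample mean typically falls within about ±800 of the population mean.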
SAMPLE SIZE
The number of individuals in the sample depends on the size of the population and on how precisely
the results are required to represent the population as a whole. Two decisions follow: how the
sample is to be determined, and how many individual cases may be needed. Sample
sizes do vary dramatically from studies involving a single case to those involving thousands, and
could be guided based on the questions to determine what size is appropriate. The sample size
chosen depends greatly on how the population is structured or composed into segments, layers,
clusters and groups; meaning the population is heterogeneous. If the population is heterogeneous,
we need to do multi-stage sampling or to give some weights to the groupings to show their
importance in the study. It is straight-forward if the population is homogenous.
Quantitative research favours larger sample sizes. Generally, it is wise to select a sample of at least
100 elements of a given population. In statistics, a small sample is anything 30 or below. The
implication is that if the population under investigation is small (say, less than 150), elaborate
sampling procedures are probably inappropriate (Gray et al., 2007). For example, in survey
research, accuracy increases with larger samples. However, you must also consider the additional
costs and time often associated with dealing with larger samples. There are sample size
calculators available online that can be used to determine the ideal sample size for a particular
study or population (see http://surveysystem.com, http://fluidsurveys.com, or just Google sample
size calculator for options).
Qualitative studies tend to rely heavily on people who are articulate and introspective enough to
provide rich descriptions of their experiences. Interviews that produce sketchy answers from
disinterested respondents are poor for content analysis. Qualitative methods favour naturalistic
observation and interviewing, and smaller sample sizes. The size of a sample is an important
element in determining the statistical precision with which population values can be estimated. In
general, increased sample size is associated with decreased sampling error. The larger the sample
size, the more likely the sample statistics or results are closer to the population parameters.
However, the relationship between sampling error and sample size is not simple or proportional.
There are diminishing returns associated with adding elements to a sample. Of course, the ultimate
decision about sampling should be driven by the study's research questions and goals. However,
the increase in accuracy with increased sample size does reach a point of diminishing returns (Gray
et al., 2007); beyond that point, much more than an increase in sample size is needed to reduce the margin of error.
There are no hard-and-fast rules for sample sizes. It is a question of how much data is needed to
address the research questions, and how rigorous the results should be. In the natural sciences,
there is a lot of rigour but in the social sciences, we are relaxed because of the nature of handling
human issues which are unpredictable. Researchers need to provide a rationale or justification of
the sample as sufficient to meet the research purpose. In some projects, a single case may be all
that is needed (for example, in some oral history or auto-ethnography projects), whereas in other
cases there may be the need for about 20 or more participants (for example, in some focus group
projects). Although researchers have proposed some very loose guidelines, Svend Brinkmann
(2013) suggests that qualitative interview studies typically have no more than 15 participants.
These guidelines are somewhat erroneous, as each study differs in context. It is also erroneous for
some academics from science backgrounds to suggest that if a research paper or project does not
incorporate quantitative methods of data collection and analysis, then that research paper or
project is sub-standard. The Quantitative approach of research is not superior to the Qualitative
approach as each is used in a different context and both approaches used together reinforce each
other. The Qualitative approach is common in the Social Sciences and Humanities while the
Quantitative approach is common in the Natural Sciences, but there is no hard and fast rule to stop
researchers in these fields from using any of these approaches, or from using mixed approaches, which
is termed triangulation. In fact, the results of a Qualitative research can be analyzed by using
Quantitative methods of inferential statistics if need be. In Qualitative research, we can carry out
content analysis of interviews and excerpts by using the ATLAS.ti software. For Quantitative
research, we can use software such as SPSS, PSPP, NVivo, Google Plus, and Excel spreadsheets,
among others.
Roller and Lavrakas noted that sample size should be considered during two phases of the research
process in interview studies: research design and data collection (their suggestions can be applied
to other forms of qualitative research, including ethnography and content analysis). During
research design, Roller and Lavrakas (2015:73) suggest considering four factors: 1. The breadth,
depth, and nature of the research topic or issue. 2. The heterogeneity or homogeneity of the
population of interest. 3. The level of analysis and interpretation required to meet research
objectives. 4. Practical parameters such as the availability of and access to interviewees, budget for
financial resources, time constraints, as well as travel and other logistics associated with
conducting face-to-face interviews.
Research Design is the discipline of how to plan and conduct empirical research, including the use
of both Quantitative and Qualitative methods in cross-sectional and time-series data collection over
time and space. Important decisions are made on the results of such studies. However, when an
effect has a corresponding confidence interval that is wide, then decisions based on such effect
need to be made with caution. It is entirely possible for a point estimate to be impressive according
to some standard, only for the confidence limits to illustrate that the estimate is not very accurate.
As part of the Research Design, a Conceptual Framework has to be drawn through producing mind-
maps to be able to have a clear picture of what the dependent and independent variables are with
regard to cause and effect. The Conceptual Framework is derived from self-brainstorming sessions
or group brainstorming sessions, and it can be illustrated either graphically or mathematically in
an equation.
It is from there that the researcher can know the variables to collect information on either through
primary firsthand field research or through secondary research based on already existing
published data. It is at this stage that you begin to think about the universe of discourse or
population of interest and the sampling methods and sample sizes to use for your investigation.
During research design, four factors should be considered: the breadth, depth, and nature of the
research topic or issue; the heterogeneity or homogeneity of the population of interest; the level
of analysis and interpretation required to meet research objectives; and practical parameters such
as the availability of and access to interviewees, the budget for financial resources, time
constraints, as well as travel and other logistics associated with conducting face-to-face
interviews.
First, there is a need to identify the target population of the research: the entire group about
which conclusions are to be drawn. The population should then be
defined in terms of geographical location, age, income, and many other characteristics. It goes
without saying that it is important to carefully define the target population according to the
purpose and practicalities of the project.
If the population is very large, demographically mixed, and geographically dispersed, it might be
difficult to gain access to a representative sample. The sampling frame is the actual list of
individuals from which a sample is to be drawn. Ideally, it should include the entire target
population. The sample is the specific group of individuals that you will collect data from as a good
representative group of the entire population. In this exercise, we may here mention inferential
statistics as that branch of statistics that makes deductions and inductions about the characteristics
of the population from the sample using probabilistic methods of estimation. The standardized
discriminant coefficient and the structure coefficient can be unreliable with a small sample size.
For example, a commonly used set of guidelines for the standardized mean difference in the
behavioral, educational, and social sciences is that population standardized effect sizes of 0.2, 0.5,
and 0.8 are regarded as small, medium, and large effects, respectively, following conventions
established by Jacob Cohen beginning in the 1960s. Suppose that the population standardized
mean difference is thought to be medium (i.e., 0.50), based on an existing theory and a review of
the relevant literature. Furthermore, suppose that a researcher planned the sample size so that
there would be a statistical power of .80 when the Type I error rate is set to .05, which yields a
necessary sample size of 64 participants per group (128 total).
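The 64-per-group figure can be approximated with a short calculation. The sketch below uses the normal approximation to the power analysis; the exact t-based computation used in standard power tables gives 64, and the approximation lands one short:

```python
import math
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size per group for a two-sided,
    two-sample test of a standardized mean difference d."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for power = .80
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print(n_per_group(0.5))  # 63; the exact t-based value is 64
```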
DEGREES OF FREEDOM
In statistics, the degrees of freedom is a measure of the level of precision required to estimate a
parameter (i.e., a quantity representing some aspect of the population). It expresses the number
of independent factors on which the parameter estimation is based and is often a function of
sample size. In general, the number of degrees of freedom increases with increasing sample size
and with decreasing number of estimated parameters. The quantity is commonly abbreviated df.
For a set of observations, the degrees of freedom is the minimum number of independent values
required to resolve the entire data set. It is equal to the number of independent observations being
used to determine the estimate (n) minus the number of parameters being estimated in the
approximation of the parameter itself, as determined by the statistical procedure under
consideration.
The concept of degrees of freedom is fundamental to understanding the estimation of population
parameters (e.g., mean) based on information obtained from a sample. The amount of information
used to make a population estimate can vary considerably as a function of sample size. For instance,
the standard deviation (a measure of variability) of a population estimated on a sample size of 100
is based on 10 times more information than is a sample size of 10. The use of large amounts of
independent information (i.e., a large sample size) to make an estimate of the population usually
means that the likelihood that the sample estimates are truly representative of the entire
population is greater (Salkind et al, 2010). This is the meaning behind the number of degrees of
freedom. The larger the degrees of freedom, the greater the confidence the researcher can have
that the statistics gained from the sample accurately describe the population.
To demonstrate this concept, consider a sample data set of the following observations (n = 5): 1,
2, 3, 4, and 5. The sample mean (the sum of the observations divided by the number of
observations) equals 3, and the deviations about the mean are −2, −1, 0, +1, and +2, respectively.
In such a situation, supposing that the observed standardized mean difference was in fact exactly
0.50, the 95% confidence interval has lower and upper limits of .147 and .851, respectively. Thus,
the lower confidence limit is smaller than "small" and the upper confidence limit is larger than
"large." Although there was enough statistical power (recall that the sample size was planned so
that power = .80, and indeed the null hypothesis of no group mean difference was rejected,
p = .005), the sample size was not sufficient from an accuracy perspective, as illustrated by the
wide confidence interval.
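The quoted interval can be roughly reproduced with the common large-sample approximation for the variance of a standardized mean difference (the exact limits come from the noncentral t distribution, so the approximation differs slightly in the third decimal):

```python
import math
from statistics import NormalDist

def smd_confint(d, n1, n2, level=0.95):
    """Approximate confidence interval for a standardized mean difference d,
    using the common large-sample variance formula."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)
    return d - z * se, d + z * se

lo, hi = smd_confint(0.50, 64, 64)
print(f"95% CI: ({lo:.3f}, {hi:.3f})")  # close to (.147, .851)
```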
The argument for planning sample size from an accuracy in parameter estimation (AIPE) perspective
is based on the desire to report point estimates and confidence intervals instead of, or in
addition to, the results of null hypothesis
significance tests. This paradigmatic shift has led to AIPE approaches to sample size planning
becoming more useful than was previously the case, given the emphasis now placed on confidence
intervals instead of a narrow focus on the results of null hypothesis significance tests.
The approach to sample size planning is able to simultaneously consider the direction of an effect
(which is what the null hypothesis significance test provides), its magnitude (best- and worst-case
scenarios based on the values of the confidence limits), and the accuracy with which the population
parameter was estimated (via the width of the confidence interval). The sample size is critical in
inferential statistics. The N comprises part of the formula for estimates of sample variances and the
standard error. The standard error forms the denominator for statistics such as t tests. The N is
also used to calculate degrees of freedom for many statistics, such as F tests in analysis of variance
or multiple regression, and it influences the size of Chi square.
Human participants in studies generally represent a subset of the entire population of people
whom the researcher wishes to understand; this subset of the entire population is known as the
study sample. Unless a study sample is chosen using some form of random sampling in which every
member of the population has a known, non-zero chance of being chosen to participate in the study,
it is likely that some form of sampling bias exists. Even for surveys that attempt to use random
sampling of a population via random-digit dialing, the sample necessarily excludes people who do
not have phone service.
HOMOGENEITY OF VARIANCEāCOVARIANCE MATRICES
A further assumption made in discriminant analysis is that the population varianceācovariance
matrices are equal across groups. This assumption is called homogeneity of varianceācovariance
matrices (Salkind et al., 2020). When sample sizes are large or equal across groups, the significance
test of discriminant function is usually robust with respect to the violation of the homogeneity
assumption. However, the classification is not so robust in that cases tend to be overclassified into
groups with greater variability. When sample sizes are small and unequal, the failure to meet the
homogeneity assumption often causes misleading results of both significance tests and
classifications. Therefore, prior to performing discriminant analysis, the tenability of the
assumption of homogeneity of varianceācovariance matrices must be tested.
In the context of research, participants are individuals who are selected to participate in a research
study or who have volunteered to participate in a research study. They are one of the major units
of analysis in both qualitative and quantitative studies and are selected using either probability or
non-probability sampling techniques. Participants make major contributions to research in many
fields. Participants can be identified or selected using two methods: probability sampling or non-
probability sampling. The method used for sample selection is dictated by the research questions.
In probability sampling methodologies (for example, simple random, systematic, stratified
random, cluster), a random selection procedure is used to ensure that no systematic bias occurs in
the selection process. This contrasts with non-probability sampling methodologies (for example,
convenience, purposive, snowball, quota), where random selection is not used.
Once potential participants are identified, the next task involves approaching the individuals to
elicit their cooperation. Depending on the purpose of the research, the unit of analysis may be
either individuals or groups.
PROBABILITY SAMPLING
Probability sampling relies on probability theory and involves the use of any strategy in which
samples are selected in a way that gives every element in the population a known, non-zero chance
of being selected. This means that the chance that each element in the population will be included
in the sample can be statistically determined, and the chance of inclusion, no matter how small,
will be a number above zero. In the simplest designs, each element has an equal chance of inclusion. Probability
sampling is based on the notion that the people or events chosen are selected because they are
representative of the entire population. Probability sampling allows one to have confidence that
the results are accurate and unbiased, and it allows one to estimate how precise the data is likely
to be. The data from a properly drawn sample is superior to data drawn from an individual.
Probability sampling strategies are typically used in quantitative research, and may also be used in
the quantitative phase of mixed methods research or what is referred to as triangulation. These
samples are useful when researchers want to generalize their findings to a larger population. The
results of studies that rely on probability sampling are typically statistical in nature. The following
subsections describe the main types of probability sampling strategies.
The data from a properly drawn sample is also superior to data drawn from individuals who just
pop up or show up at a meeting, or who perhaps speak the loudest and convey their personal
thoughts and sentiments.
The sampling frame (the set of people that have a chance of being selected, and how well it
corresponds to the population studied), the size of the sample, and the details of the sample
design, including selection procedures, together influence the precision of sample estimates,
that is, how likely the sample is to approximate population characteristics (Salkind et al., 2010).
In probability sampling, the selection probabilities of individual population elements and the
algorithm with which these are randomly selected are specified by a sampling design. In turn, to
apply a sampling design requires a device or frame that delineates the extent of the population of
interest. A population often can be sampled directly using a list frame that identifies all the
elements in that population, as when the names of all students in a district are registered on school
records. If a list frame is available, a probability sample can be formed as each of a string of random
numbers generated in accordance with a design is matched to an element or cluster of elements in
the population.
Regardless of how a population is framed, however, the frame must be complete in the sense that
it includes the entirety of the population. Inasmuch as any fraction of the population omitted from
the frame will have zero probability of being selected, the frame ultimately fixes the population to
which probability sampling inferences apply.
Probability sampling admits the selection of any one of a typically large number of possible
samples. Indeed, an advantage of probability sampling is that the character and degree of
variability in an estimator's randomization distribution often can be derived from the sampling
design and the estimator's statistical form. Probability sampling is formulated to objectify
the selection process so as to permit valid assessments of the distribution of sample-based
estimates.
One area of active research in probability sampling is the incorporation of statistical models into
sample selection and estimation strategies. In many cases, this offers the potential to improve
accuracy without sacrificing the objectivity of the probability sampling design as the basis for
inference. Of course, in many applications, statistical models can be used to great effect as a basis
for inference, but the validity of inferences so drawn then rest on the veracity of the presumed
model rather than on the sample selection process itself. There are many formulas for helping to
choose the sample size, n, from a given population, N. Slovin's (Slonim's) formula, also known as
Yamane's formula, published in 1960, is given as:
n = N / (1 + N·e²)
where n is the sample size to be determined,
N is the actual population size, and
e is the required margin of error, such as 1% or 5%.
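As a sketch, the formula can be applied directly. The population figure below is illustrative, not taken from the paper:

```python
import math

def slovin(N: int, e: float) -> int:
    """Slovin's (Yamane's) sample size: n = N / (1 + N * e**2)."""
    return math.ceil(N / (1 + N * e ** 2))

# Illustrative: a population of 10,000 with a 5% margin of error.
print(slovin(10_000, 0.05))  # 385
```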
The formula which can be used to calculate the minimum required sample size is adapted from
Cochran (1977) where the confidence level is the level of certainty that the characteristics of the
data collected will represent the characteristics of the total population. A higher confidence level
indicates a larger sample size. The margin of error refers to the accuracy required for any estimates
made from the selected sample. A smaller margin of error means a larger sample size. The z-score
represents the number of standard deviations a given proportion is away from the mean. The
required sample size is:

n = [z² · p(1 − p) / e²] / [1 + z² · p(1 − p) / (e² · N)]

Source: (Cochran, 1977)

Where:
§ N = Lusaka population size = 2,774,000
§ p = percentage/proportion picking a choice = 0.5
§ e = margin of error (percentage in decimal form) = 6% = 0.06
§ z = z-score for a 95% confidence level = 1.96

Sample size = [1.96² × 0.5 × (1 − 0.5) / 0.06²] / [1 + 1.96² × 0.5 × (1 − 0.5) / (0.06² × 2,774,000)]

Sample size = 267
Therefore, a minimum sample of 267 respondents will be required for this quantitative study.
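The worked example above can be sketched in code using Cochran's formula with the finite-population correction:

```python
import math

def cochran(N: int, p: float, e: float, z: float = 1.96) -> int:
    """Cochran (1977) sample size with finite-population correction."""
    n0 = z ** 2 * p * (1 - p) / e ** 2   # infinite-population sample size
    return math.ceil(n0 / (1 + n0 / N))  # corrected for population size N

# Values from the example: N = 2,774,000, p = 0.5, e = 0.06, z = 1.96.
print(cochran(N=2_774_000, p=0.5, e=0.06))  # 267
```

Because N is so large here, the finite-population correction barely changes the uncorrected value of about 266.8.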
Simple random sampling
In a simple random sample, every member of the population has an equal chance of being selected.
Your sampling frame should include the whole population. To conduct this type of sampling, you
can use tools like random number generators or other techniques that are based entirely on
chance. Example: You want to select a simple random sample of 100 employees of Company X. You
assign a number to every employee in the company database from 1 to 1000, and use a random
number generator to select 100 numbers from the lot.
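The Company X example might look like this as a sketch (the ID range is taken from the example above):

```python
import random

# Employee IDs 1..1000, as in the Company X example.
employee_ids = list(range(1, 1001))

random.seed(7)  # fixed seed for reproducibility only
sample = random.sample(employee_ids, k=100)  # every ID equally likely

print(len(sample), len(set(sample)))  # 100 unique IDs, without replacement
```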
Systematic sampling
Systematic sampling provides a direct means of drawing dispersed sets of elements. With a
systematic probability sampling design, one element is randomly selected from the frame, and then
all elements that are separated from this initial selection by a fixed sampling interval are added to
the sample. Systematic sampling generally ensures that all elements have equal inclusion
probabilities but, at the same time, renders observable only those combinations of elements that
are congruent with the sampling interval. Systematic designs are particularly efficient when the
sampling interval separates selected elements along a population gradient, be it a natural gradient
or one artificially imposed by ordering the frame.
Systematic sampling is similar to simple random sampling, but it is usually slightly easier to
conduct. Every member of the population is listed with a number, but instead of randomly
generating numbers, individuals are chosen at regular intervals.
For example: all employees of the organization considered for the study are listed alphabetically.
A starting point is selected at random from the first 10 numbers on the list: if the number 5 is
drawn, then every 10th person from that point onwards is picked (5, 15, 25, 35, 45, 55, and so on)
until the sample is complete. If you use this technique, it is important to make sure
that there is no hidden pattern in the list that might skew the sample. For example, if the HR
database groups employees by teams, and team members are listed in order of seniority, there is a
risk that your interval might skip over people in junior roles, resulting in a sample that is skewed
towards senior employees.
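A sketch of the interval-based selection described above (the employee names are hypothetical placeholders):

```python
import random

# Hypothetical alphabetized list of 100 employees.
employees = [f"employee_{i:03d}" for i in range(1, 101)]

k = 10                       # sampling interval (population size / sample size)
random.seed(3)
start = random.randrange(k)  # random start within the first interval
sample = employees[start::k]  # then every k-th person on the list

print(len(sample))  # 10
```

Shuffling the list first (when no meaningful ordering is needed) is one way to guard against the hidden-pattern risk noted above.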
Stratified sampling
Stratified sampling designs often vary the inclusion probabilities across strata in order to sample
larger, more variable, or more important strata with higher intensity. This concept is carried
further by many unequal probability sampling designs, such as Poisson sampling and list sampling.
These designs are most effective when the inclusion probability of each element can be made
approximately proportional to the magnitude of the attribute of interest. Often this is achieved by
making the inclusion probabilities proportional to a readily available auxiliary variable that is in
turn positively correlated with the attribute of interest.
The stratified sampling method is appropriate when the population has mixed characteristics and
the need is to ensure that every characteristic is proportionally represented in the sample. The
population is divided into subgroups (called strata) based on the relevant characteristic (e.g.
gender, age range, income bracket, job role). Based on the overall proportions of the population,
the number of people to be sampled from each subgroup is calculated. Then random or systematic
sampling is used to select a sample from each subgroup.
For example, a company has 800 female employees and 200 male employees. You want to ensure
that the sample reflects the gender balance of the company, so you sort the population into two
strata based on gender. Then you use random sampling on each group, selecting 80 women and 20
men, which gives you a representative sample of 100 people.
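The 80/20 allocation in the example can be sketched directly; the names below are placeholders:

```python
import random

# Hypothetical workforce matching the example: 800 women, 200 men.
women = [f"W{i}" for i in range(800)]
men = [f"M{i}" for i in range(200)]

sample_size = 100
total = len(women) + len(men)

random.seed(11)
# Proportional allocation: sample each stratum in proportion to its size.
women_sample = random.sample(women, k=sample_size * len(women) // total)  # 80
men_sample = random.sample(men, k=sample_size * len(men) // total)        # 20
sample = women_sample + men_sample

print(len(women_sample), len(men_sample), len(sample))  # 80 20 100
```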
Cluster sampling
This is a multi-stage sampling strategy. First, pre-existing clusters are randomly selected from a
population. Next, elements in each cluster are selected randomly. Cluster sampling also involves
dividing the population into sub-groups, but each sub-group should have similar characteristics to
the whole population. Instead of sampling individuals from each sub-group, you randomly select entire
sub-groups. If it is practically possible, you might include every individual from each sampled
cluster. If the clusters themselves are large, you can also sample individuals from within each
cluster using one of the techniques above.
This method is good for dealing with large and dispersed populations, but there is more risk of error in the sample, as there could be substantial differences between clusters. It is difficult to guarantee that the sampled clusters are really representative of the whole population. Example: The company has offices in 10 cities across the country (all with roughly the same number of employees in similar roles). You do not have the capacity to travel to every office to collect your data, so you use random sampling to select 3 offices; these are your clusters.
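The two-stage procedure can be sketched in Python; the office rosters, and the choice of 10 employees per sampled office, are hypothetical illustrations.

```python
import random

# Hypothetical frame: 10 offices (clusters), each listing its employees.
offices = {f"city_{c}": [f"city_{c}_emp_{i}" for i in range(50)]
           for c in range(10)}

def cluster_sample(clusters, n_clusters, per_cluster=None):
    """Stage 1: randomly select whole clusters.
    Stage 2: take every member, or subsample within each cluster."""
    chosen = random.sample(list(clusters), n_clusters)
    sample = []
    for name in chosen:
        members = clusters[name]
        if per_cluster is None:
            sample.extend(members)  # one-stage: include everyone
        else:
            sample.extend(random.sample(members, per_cluster))  # two-stage
    return chosen, sample

random.seed(7)
chosen, sample = cluster_sample(offices, 3, per_cluster=10)
print(len(chosen), len(sample))  # 3 30
```

Passing `per_cluster=None` instead would enumerate every employee in the three sampled offices.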
Non-probability sampling
Non-probability sampling refers to procedures in which researchers select their sample elements not based on a pre-determined probability. This section examines the application, limitations, and utility of non-probability sampling procedures. Non-probability sampling is conducted without knowledge of whether those chosen for the sample are representative of the entire population.
In some instances, the researcher does not have sufficient information about the population to
undertake probability sampling. The researcher might not even know who or how many people or
events make up the population.
In other instances, non-probability sampling is based on a specific research purpose, the availability of subjects, or a variety of other non-statistical criteria. Applied social and behavioral researchers often face challenges and dilemmas in using a random sample, because such samples in real-world research are "hard to reach" or not readily available. Even if researchers have contact with hard-to-reach samples, they might be unable to obtain a complete sampling frame because of peculiarities of the study phenomenon, and time and financial constraints.
In a non-probability sample, individuals are selected based on non-random criteria, and not every
individual has a chance of being included. This type of sample is easier and cheaper to access, but
you cannot use it to make valid statistical inferences about the whole population. Non-probability
sampling techniques are often appropriate for exploratory and qualitative research. In these types
of research, the aim is not to test a hypothesis about a broad population, but to develop an initial
understanding of a small or under-researched population.
Non-probability studies may be used where data is gathered through a sampling procedure with hidden selection bias and/or small sample sizes. They include several versions of survey sampling that are often expedient to implement but do not allow calculation of the probability that a given sample is selected from among the possible alternatives. It is difficult to offer precise remedial
measures to correct the most commonly encountered problems associated with the use of non-
probability samples because such measures vary by the nature of research questions and type of
data researchers employ in their studies. Instead of offering specific measures, the following
strategies are offered to address the conceptual and empirical dilemmas in using non-probability
samples. In theory, all research should use probabilistic sampling methodology, but in practice this is difficult, especially for hard-to-reach, hidden, or stigmatized populations. Nevertheless, it is important to stress that the results of such studies are meaningful only if they are interpreted appropriately and used in conjunction with statistical theory. Theory, design, analysis, and interpretation are all closely connected.
As a researcher, it is important to study compelling populations and compelling questions. This
often involves purposive samples in which the research population has some special significance.
Most commonly used samples, particularly in applied research, are purposive. Purposive sampling
is more applicable in exploratory studies and studies that contribute new knowledge. Therefore,
it is imperative for researchers to conduct a thorough literature review to understand the "edge of the field" and whether the study population or question is a new or significant contribution.
Purposive samples are selected based on a predetermined criterion related to the research.
Research that is field-oriented and not concerned with statistical generalizability often uses non-
probabilistic samples. This is especially true in qualitative research studies. Qualitative researchers
are more apt to use some form of purposive sampling. They might seek out people, cases, events,
or communities because they are extreme, critical, typical, or atypical. Adequate sample size
typically relies on the notion of "saturation", the point at which no new information or themes are obtained from the data, as in interviews or focus group discussions. In qualitative research practice, this can be a challenging point to determine. According to Hussey and Guo (2004), a homogeneous sample produced by non-probability sampling predicts better than a less homogeneous sample produced by probability sampling. Therefore, if the intention is not to infer statistics from sample to population, using a non-probability sample is a better strategy than using a probability sample.
Non-probability sampling remains an easy way to obtain feedback and collect information. It is
convenient, verifiable, and low cost, particularly when compared with face-to-face paper-and-pencil
questionnaires.
Convenience sampling
A convenience sample simply includes the individuals who happen to be most accessible to the
researcher and who may be in a position to give the required information that is sought by the
researcher. This is an easy and inexpensive way to gather initial data, but there is no way to tell if
the sample is representative of the population, so it cannot produce generalizable results. Example:
You are researching opinions about student support services in your university, so after each of
your classes, you ask your fellow students to complete a survey on the topic. This is a convenient
way to gather data, but as you only surveyed students taking the same classes as you at the same
level, the sample is not representative of all the students at your university.
Purposive sampling
Purposeful sampling (also called purposive or judgment sampling) is based on the premise that
seeking out the best cases for the study produces the best data, and research results are a direct
result of the cases sampled. This is a strategic approach to sampling in which āinformation-rich
casesā are sought out in order to best address the research purpose and questions. Sampling is a
central feature of research design when purposeful strategies are used because the better the
participants are positioned in relation to the topic, the richer the data will be.
Purposive sampling implies that a researcher interested in how cancer patients cope with pain will
seek out respondents who have pain rather than randomly sample from an oncologistās patient
roster. As such, it should not be confused with convenience sampling, or selecting respondents
based solely on their availability. Although convenience sampling is commonly used in clinical research (where ready access to specific types of patients overshadows concerns about non-representativeness), it is generally antithetical to the aims of qualitative methods (Padgett, 2017). Convenience
sampling may lead a researcher to a particular site (e.g., a domestic violence shelter where he or
she has volunteered in the past), but this should be done only if that site is most appropriate for
the study. Even when an appropriate site is available, the method for recruiting and selecting study
participants should be purposive and not one of convenience.
Purposeful sampling strategies are typically used in qualitative research, and involve the use of the
researcherās knowledge of the population in terms of research goals. Elements are selected based
on the researcherās judgment that they will provide access to the desired information. For example,
sometimes purposive sampling is used to select typical cases, and sometimes it is used to select
atypical cases. Purposive sampling also can be used to select participants based on their willingness
to be studied or on their knowledge of a particular topic.
This type of sampling involves the researcher using their judgement to select a sample that is most
useful for the purposes of the research. It is often used in qualitative research where the researcher
wants to gain detailed knowledge about a specific phenomenon rather than make statistical
inferences.
An effective purposive sample must have clear criteria and rationale for inclusion. Example: You
want to know more about the opinions and experiences of disabled students at your university, so
you purposefully select a number of students with different support needs in order to gather a
varied range of data on their experiences with student services.
Snowball sampling
Snowball sampling is used with isolated or hidden populations whose members are unlikely to be found without the cooperation of a known individual who, having developed trust in the researcher over time, gives referrals to others in their network. Examples include gang members, IV drug users,
or members of a religious sect (Padgett, 2017). A quantitative variant of snowball sampling,
respondent-driven sampling (RDS), was developed to assist researchers in gaining access to hard-to-reach populations for the study of AIDS transmission, IV drug use, and other hidden behaviors. Using chain referrals and "steering incentives" for recruitment, RDS has been shown to yield larger and more representative samples despite having its origins in snowball or convenience samples (Salganik & Heckathorn, 2004).
If the population is hard to access, snowball sampling can be used to recruit participants via other
participants. The number of people you have access to "snowballs" as you get in contact with more people. Snowball sampling is sampling from a known network; it is used to
identify participants when appropriate candidates for study are difficult to locate. For example, if
locating an adequate number of profoundly deaf people is difficult, a profoundly deaf person who
participates in a local support group could be recruited to assist in locating other profoundly deaf
people willing to participate in a study. In other words, it is possible to have known members of a
population help identify other members of their population. Example: You are researching
experiences of homelessness in your city. Since there is no list of all homeless people in the city,
probability sampling is not possible. You meet one person who agrees to participate in the
research, and she puts you in contact with other homeless people that she knows in the area. The
same could apply to a study of sex workers in a city.
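The referral process can be sketched in Python as a breadth-first traversal of a hypothetical contact network; the network and seed participant below are illustrative assumptions, and in practice each referral depends on trust rather than on a known network structure.

```python
from collections import deque

# Hypothetical referral network: each participant names others they know.
referrals = {
    "seed": ["p1", "p2"],
    "p1": ["p3"],
    "p2": ["p3", "p4"],
    "p3": [],
    "p4": ["p5"],
    "p5": [],
}

def snowball_sample(network, seed, max_size):
    """Recruit the seed, then follow referrals breadth-first until the
    target sample size is reached or referrals are exhausted."""
    recruited = [seed]
    queue = deque(network.get(seed, []))
    while queue and len(recruited) < max_size:
        person = queue.popleft()
        if person not in recruited:
            recruited.append(person)
            queue.extend(network.get(person, []))
    return recruited

print(snowball_sample(referrals, "seed", 4))  # ['seed', 'p1', 'p2', 'p3']
```

The sketch makes visible why snowball samples inherit the structure of the seed's network: participants unreachable from the seed can never enter the sample.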
Convenience Sampling
Few terms or concepts in the study of research design are as self-explanatory as convenience
sampling. Convenience sampling (sometimes called accidental sampling) is the selection of a
sample of participants from a population based on how convenient and readily available that group
of participants is. It is a type of non-probability sampling that focuses on a sample that is easy to
access and readily available. For example, if one were interested in knowing the attitudes of a group of sophomore college students toward binge drinking, a convenience sample would be those students enrolled in an introductory biology class. The advantages of convenience sampling are
clear. Such samples are easy to obtain, and the cost of obtaining them is relatively low. The
disadvantages of convenience sampling should be equally clear. Results from studies using
convenience sampling are not very generalizable to other settings, given the narrow focus of the
technique.
References
Bhattacherjee, A. (2012). Social Science Research-Principles, Methods, and Practices [Online] Retrieved from
http://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=1002&context=oa_textbooks
Blair, E. & Blair, J. (2015) Applied Survey Sampling London: Sage Publications
Brinkmann, S. (2012) Qualitative Inquiry in Everyday Life: Working with Everyday Life Materials London: SAGE.
Bryman, A. & Bell, E. [Online] Retrieved from
https://books.google.co.zm/books?id=YnCcAQAAQBAJ&printsec=frontcover&rediresc=y#v=onepage&q&f=false
Cochran, W. (1977) Sampling Techniques (3rd ed.) New York, N.Y.: John Wiley & Sons
Cohen, Louis. Manion, Lawrence & Morrison, Keith (2007) Research Methods in Education [Online] Retrieved from
https://islmblogblog.files.wordpress.com/2016/05/rme-edu-helpline-blogspot-com.pdf
Dawson, C. (2002) Practical Research Methods [Online] Retrieved
from https://www.google.com/search?q=practical+research+methods%2Fcatherine+dawson&ie=utf-8&oe=utf-
8&client=firefox-b-ab
Francis, A. (2015) Business Mathematics and Statistics (6th ed.) London: Cengage Learning
Gray, S.P, Williamson, J.B, Karp, D.A, and Dalphin J.R. (2007), The Research Imagination: An Introduction to
Qualitative and Quantitative Methods Cambridge University Press
Hussey, D. & Guo, S., (2004) Non-probability sampling in social work research Journal of Social Service Research, 30,
1ā18.
Kumar, R. (2011) Research Methodology London: Sage Publications
Lucey, T. (2002) Quantitative Techniques (6th ed.) London: Cengage
Roller, M. R., & Lavrakas, P. J. (2015) Applied Qualitative Research Design: A Total Quality Framework Approach New
York: Guilford Press
Salganik, M. J., & Heckathorn, D. D. (2004) Sampling and Estimation in Hidden Populations using Respondent-driven
Sampling Sociological Methodology, 34, 193ā239
Salkind N.J., Frey B.B. (2010). Encyclopedia of Research Design, SAGE Publications, Inc. Volume 1 United States of
America
Saunders, M., Lewis, P., & Thornhill, A. (2009) Research Methods for Business Students [Online] Retrieved from
https://is.vsfs.cz/el/6410/leto2015/BA_BSeBM/um/um/Research_Methods_for_Business_Students__5th_Edition.pdf
Stafford, L.W.T. (1978) Business Mathematics for Economists Norwich, U.K.: M & E Handbooks
Yeoman, K.A. (1968) Introductory Statistics-Statistics for the Social Scientist London: Penguin Books
Greener, S. (2008) Business Research Methods [Online] Retrieved from
http://web.ftvs.cuni.cz/hendl/metodologie/introduction-to-research-methods.pdf
Khan Academy (The) Scientific Method [Online] Retrieved from
https://www.khanacademy.org/science/biology/intro-to-biology/science-of-biology/a/the-science-of-biology
Kothari, C. (2004) Research Methodology-Methods and Techniques [Online] Retrieved from
http://www.modares.ac.ir/uploads/Agr.Oth.Lib.17.pdf
Kumar, R. (2011). Research Methodology-A Step by Step Guide for Beginners [Online] Chapter 1 pp. 1 to 35 Retrieved
from http://www.sociology.kpi.ua/wp-content/uploads/2014/06/Ranjit_Kumar-Research_Methodology_A_Step-by-
Step_G.pdf
Logic and Venn Diagrams [Online] Retrieved from http://www.cimt.org.uk/mepjamaica/unit10/TeachingNotes.pdf
Macdonald, S. & Headlam, N. (2011) Research Methods Handbook [Online] Retrieved from
http://www.cles.org.uk/wpcontent/uploads/2011/01/Research-Methods-Handbook.pdf
Moroney, M.J. (1962) Facts from Figures London: Penguin Books
Sevilla, C. G. et al. (2007) Research Methods Quezon City, Philippines: Rex Printing Company
Stanford Encyclopaedia of Philosophy-Thomas Kuhn [Online] Retrieved from
https://plato.stanford.edu/entries/thomas-kuhn/
Steps of the Scientific method [Online] Retrieved from http://www.sciencebuddies.org/science-fair-
projects/project_scientific_method.shtml
Thompson, P. & Walker, M. (2010) (The) Routledge Doctoral Students Companion- Getting to Grips with Research
[Online] Retrieved from https://www.routledge.com/The-Routledge-Doctoral-Students-Companion-Getting-to
Grips-with Research/ThomsonWalker/p/book/9780415484121
Walliman, N. (2011) Research Methods-The Basics [Online] Routledge Retrieved from
https://edisciplinas.usp.br/pluginfile.php/2317618/mod_resource/content/1/BLOCO%
202_Research%20Methods%20The%20Basics.pdf
Note
This paper was inspired by our supervision of both undergraduate and postgraduate students who have to write their dissertations after taking courses in Research Methods. We realize the difficulty many of them go through in undertaking the seemingly humongous task of research in their final years. We thought we should write something together to share the insights gained from guiding our students at ZCAS University in Lusaka, Zambia with their work. We hope that our students, other students, and researchers at large will find this paper valuable for their work. We would, however, like to note that the cost of publishing this paper was borne by us from our meager personal resources.
Acknowledgement
We would like to thank our university, ZCAS University, for encouraging us to publish rather than perish, now that the new normal is for lecturers to publish as many articles as possible in leading journals with high impact factors. We also thank our supervisors at work for giving us the leeway to carve out some time for research despite our heavy work schedules.