Banzon, Janssen Marton
Engada, Renz Aldaine
Experimentation is conducted to investigate a population that has been deliberately altered for study. The basic idea is to study the effect that a change in one variable (the independent variable) has on another variable (the dependent variable). However, an experimental design cannot be carried out unless one knows how to do sampling: a chemical analysis is meaningless unless it begins with a meaningful sample.
Researchers usually cannot make direct observations of every individual in the population they
are studying. Instead, they collect data from a subset of individuals – a sample – and use those
observations to make inferences about the entire population.
Ideally, the sample corresponds to the larger population on the characteristic(s) of interest. In
that case, the researcher's conclusions from the sample are probably applicable to the entire
population. This type of correspondence between the sample and the larger population is most
important when a researcher wants to know what proportion of the population has a certain characteristic.
The objectives of this report include the following:
1. Define sampling
2. Enumerate the reasons for sampling
3. Describe the two types of samples, and the different sampling methods
4. Discuss the sampling process and sample storage.
TERMINOLOGIES AND DEFINITIONS
Sampling: A process of selecting representative material to analyze. Sampling is the act,
process, or technique of selecting a suitable sample, or a representative part of a population for
the purpose of determining parameters or characteristics of the whole population. Sampling is
one of the most important operations in a chemical analysis. Chemical analyses use only a small
fraction of the available sample. The fractions of the sample that are collected for analysis must
be representative of the bulk material. Knowing how much sample to collect, and how to
further subdivide the collected sample to obtain a laboratory sample, is vital in the analytical process.
Sample: A finite part of a statistical population whose properties are studied to gain
information about the whole (Webster, 1985). When dealing with people, it can be defined as a
set of respondents (people) selected from a larger population for the purpose of a survey. In a
short definition, a sample is a subset of the population.
Population: A group of individuals, objects, or items from which samples are taken for
measurement. It is a set which includes all measurements of interest to the researcher (the
collection of all responses, measurements, or counts that are of interest).
Target Population: The population to be studied, i.e., the population to which the investigator wants to generalize the results.
Sampling Unit: The smallest unit from which a sample can be selected; also called subjects.
Sampling frame: A list of all the sampling units from which the sample is drawn.
Sampling scheme: The method of selecting sampling units from the sampling frame.
Randomization: A sampling method used in scientific experiments. It is commonly used in
randomized controlled trials in experimental research.
REASONS FOR SAMPLING
Sampling is done for any or all of the following reasons:
1. Due to limitations of time, money, or personnel, it is impossible to study every item in
the population (Inaccessibility of the entire population)
2. Examining an item may require that the item/sample be destroyed (Destructive nature
of many observations)
3. Samples, due to their small size, can be thoroughly studied.
4. Fewer errors are encountered in the collection and handling of data (Reliability and accuracy of data)
TYPES OF SAMPLING
A sample is selected from a population according to some rule or plan. There are two
types of samples: the Probability Sample and the Non-probability Sample. When the selection
of items is done according to some chance mechanism in which the elements have an equal
chance of being selected, we have a Probability Sample. On the other hand, items selected by
judgment, where elements do not have an equal chance of being taken, constitute a Non-probability Sample.
In sampling, the population from which the sample must be drawn has to be defined. The
sample from the identified population must be selected by using the appropriate method.
Some guidelines must be followed in drawing the sample. Homogeneity of the population is an
important factor to consider. If the subjects under study are homogenous in their
characteristics that might affect the results, a small sample is sufficient. However, if the
material of interest is variable, it will be best to take a large sample, and this will undergo the
sampling process. Now, why don't we use non-probability sampling schemes? Two reasons:
first, we can't use the mathematics of probability to analyze the results; and second, we can't
count on a non-probability sampling scheme to produce representative samples.
Simple Random Sampling: A simple random sample (SRS) of size n is produced by a scheme
which ensures that each subgroup of the population of size n has an equal probability of being
chosen as the sample. Each unit in the population is identified, and each unit has an equal
chance of being in the sample. The selection of each unit is independent of the selection of
every other unit. Selection of one unit does not affect the chances of any other unit.
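As a hedged sketch of this idea, the snippet below draws an SRS from a made-up population of 100 numbered units; the population, sample size, and seed are illustrative assumptions, not data from this report:

```python
import random

def simple_random_sample(population, n, seed=None):
    """Draw an SRS of size n: every subset of size n is equally likely,
    so each unit has the same chance of inclusion, independently of
    which other units are chosen."""
    rng = random.Random(seed)      # seeded only to make the example reproducible
    return rng.sample(population, n)

units = list(range(1, 101))        # hypothetical population of 100 units
print(simple_random_sample(units, 10, seed=42))
```

Every unit here has the same 10/100 chance of appearing in the sample.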
Stratified Random Sampling: Divide the population into "strata". There can be any number of
these. Then choose a simple random sample from each stratum. Combine those into the overall
sample. That is a stratified random sample. Each unit in the population is identified, and each
unit has a known, non-zero chance of being in the sample. This is used when the researcher
knows that the population has sub-groups (strata) that are of interest.
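The stratified procedure above can be sketched as follows; the strata, the 10% sampling fraction, and the seed are hypothetical choices made up for illustration:

```python
import random

def stratified_random_sample(strata, fraction, seed=None):
    """Take an SRS of the given fraction from each stratum, then
    combine the pieces into the overall sample."""
    rng = random.Random(seed)
    combined = []
    for stratum, units in strata.items():
        k = max(1, round(fraction * len(units)))  # per-stratum sample size
        combined.extend((stratum, u) for u in rng.sample(units, k))
    return combined

# hypothetical strata of unequal sizes, both sampled at the same 10% fraction
strata = {"urban": list(range(600)), "rural": list(range(400))}
sample = stratified_random_sample(strata, 0.10, seed=7)
```

Because both strata use the same fraction, the sample preserves the population's 60/40 split between the two groups.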
Multi-Stage Sampling: Sometimes the population is too large and scattered for it to be practical
to make a list of the entire population from which to draw a SRS. This sampling is a complex
form of cluster sampling. Cluster sampling is a type of sampling which involves dividing the
population into groups (or clusters). Then, one or more clusters are chosen at random and
everyone within the chosen cluster is sampled.
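A minimal sketch of the cluster stage, using invented school rosters; a multi-stage design would go on to draw an SRS within each chosen cluster instead of keeping everyone:

```python
import random

def one_stage_cluster_sample(clusters, n_clusters, seed=None):
    """Choose n_clusters whole clusters at random and keep every unit
    inside each chosen cluster."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(clusters), n_clusters)
    return {c: clusters[c] for c in chosen}

# hypothetical clusters: every pupil of a chosen school is sampled
schools = {"A": ["a1", "a2"], "B": ["b1"], "C": ["c1", "c2", "c3"], "D": ["d1"]}
print(one_stage_cluster_sample(schools, 2, seed=3))
```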
Disproportional sampling: A probability sampling technique used to address the difficulty
researchers encounter with stratified samples of unequal sizes. This sampling method divides
the population into subgroups or strata but employs a sampling fraction that is not similar for
all strata; some strata are oversampled relative to others.
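The unequal sampling fractions can be illustrated as below; the strata sizes and the 50%/5% fractions are assumptions invented for the example:

```python
import random

def disproportional_sample(strata, fractions, seed=None):
    """Stratified sampling with a different fraction per stratum, so a
    small stratum can be oversampled to yield enough cases."""
    rng = random.Random(seed)
    return {name: rng.sample(units, max(1, round(fractions[name] * len(units))))
            for name, units in strata.items()}

# hypothetical: the small stratum is sampled at 50%, the large one at only 5%
strata = {"minority": list(range(40)), "majority": list(range(960))}
sample = disproportional_sample(
    strata, {"minority": 0.50, "majority": 0.05}, seed=11)
```

The small stratum contributes 20 of its 40 units while the large stratum contributes 48 of 960, so both groups end up with enough cases to analyze.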
Convenience sampling: Also called "accidental" or "man-in-the-street" sampling. The
researcher selects units that are convenient, close at hand, easy to reach, etc. Subjects are
selected because of their convenient accessibility and proximity to the researcher.
In all forms of research, it would be ideal to test the entire population, but in most cases the
population is so large that it is impossible to include every individual. This is the reason
why most researchers rely on sampling techniques like convenience sampling, the most
common of all sampling techniques. Many researchers prefer this sampling technique because
it is fast, inexpensive, easy and the subjects are readily available.
Quota Sampling: As with stratified samples, the population is broken down into different
categories. However, the size of the sample of each category does not reflect the population as
a whole. This can be used where an unrepresentative sample is desirable (e.g. you might want
to interview more children than adults for a survey on computer games), or where it would be
too difficult to undertake a stratified sample. Quota sampling is a non-probability sampling
technique wherein the researcher fixes in advance the proportions of individuals in the sample
with respect to known characteristics, traits, or a focused phenomenon. In addition to
this, the researcher must make sure that the composition of the final sample to be used in the
study meets the research's quota criteria.
Purposive Sampling/ Judgmental Sampling: The researcher selects the units with some
purpose in mind, for example, students who live in dorms on campus, or experts on urban
development. A non-probability sampling technique where the researcher selects units to be
sampled based on their knowledge and professional judgment. This type of sampling technique
is also known as purposive sampling and authoritative sampling.
Purposive sampling is used in cases where the specialty of an authority can select a more
representative sample, bringing more accurate results than other sampling techniques would.
The process involves purposely handpicking individuals from the population based on the
authority's or the researcher's knowledge and judgment.
THE SAMPLING PROCESS
We say that a substance is homogeneous if its composition is the same everywhere. By
contrast, a heterogeneous substance has a different composition from one place to another.
If you wanted to know how much aluminum is in ocean water, you could not simply take a
sample from one depth or one location. Even a shallow lake is likely to be heterogeneous, with
the topmost layer in equilibrium with the atmosphere and the bottom in equilibrium with
sediments. Temperature and density gradients in the lake prevent rapid mixing of the layers.
Many analytical problems begin with objects that are not suitable for a laboratory experiment.
The object might be human tissue, a 2,000-year-old urn, a lake full of water, or a trainload of
ore. To perform a meaningful chemical analysis, we must obtain a small homogeneous sample
whose composition is representative of the larger object.
The Steps Involved in Sampling
The steps below summarise going from a real object to individual samples
that can be analysed. A lot is the total material (the tissue, the urn, the lake, etc.) from which
samples are taken. A bulk sample (also called a gross sample) is taken from the lot for analysis
or archiving (storing for future reference). The bulk sample must be representative of the lot,
and the choice of bulk sample is critical to producing a valid analysis. The statistics of the
sampling process are also important.
From the representative bulk sample, a smaller, homogeneous laboratory sample is formed
that must have the same composition as the bulk sample. For example, we might obtain a
laboratory sample by grinding an entire solid bulk sample to a fine powder, mixing thoroughly,
and keeping one bottle of powder for testing. Small test portions (called aliquots) of the
laboratory sample are used for individual analyses. Sample preparation is the series of steps
needed to convert a representative bulk sample into a form suitable for chemical analysis. In
the case of a chocolate bar, for example, we might assume that the bar is homogeneous;
sample preparation would then consist of removing fat and dissolving the desired analytes.
Here, quartering is often used. Quartering is a method of obtaining a representative
sample for analysis or testing of an aggregate: shovelfuls of the material are piled into a heap
or cone, the cone is flattened out, and two opposite quarter parts are rejected. Another cone is
formed from the remainder, which is again quartered, the process being repeated until a sample
of the required size is left.
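The keep-half-per-round logic of quartering can be sketched numerically; the shuffle stands in for re-forming and mixing the cone, and the 1600-unit bulk sample is an invented figure:

```python
import random

def cone_and_quarter(bulk, target_size):
    """Idealised coning and quartering: each round, mix the heap and
    discard two opposite quarters (i.e. keep half) until the sample is
    no larger than target_size."""
    rng = random.Random(0)                    # fixed seed for a reproducible sketch
    sample = list(bulk)
    while len(sample) > target_size:
        rng.shuffle(sample)                   # stands in for re-forming the cone
        sample = sample[: len(sample) // 2]   # keep two opposite quarters
    return sample

lab_sample = cone_and_quarter(range(1600), 100)   # 1600 -> 800 -> 400 -> 200 -> 100
```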
Besides choosing a sample judiciously, we must be careful about storing the sample. The
composition of the sample may change with time after collection because of chemical changes,
reaction with air, or interaction of the sample with its container. Glass is a notorious ion
exchanger that can alter the concentrations of trace ions in a solution. Therefore, plastic
(especially Teflon or Tedlar bags) collection bottles are frequently employed. But even plastic
containers must be washed properly before use (e.g. manganese in blood serum samples
increased by a factor of 7 when stored in unwashed polyethylene containers prior to analysis).
Steel needles are an avoidable source of metal contamination in biochemical analysis.
TYPES OF ERROR
All experimental uncertainty is due to either random errors or systematic errors. Random
errors are statistical fluctuations (in either direction) in the measured data due to the precision
limitations of the measurement device. Random errors usually result from the experimenter's
inability to take the same measurement in exactly the same way and get exactly the same
number. Systematic errors, by contrast, are reproducible inaccuracies that are consistently in
the same direction. Systematic errors are often due to a problem which persists throughout the entire experiment.
Note that systematic and random errors refer to problems associated with making
measurements. Mistakes made in the calculations or in reading the instrument are not
considered in error analysis. It is assumed that the experimenters are careful and competent.
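A small simulation makes the distinction concrete; the true value, noise level, and bias below are invented numbers:

```python
import random
import statistics

def simulate_measurements(true_value, n, noise_sd=0.5, bias=0.0, seed=0):
    """n repeated measurements: random error is zero-mean noise in either
    direction; systematic error is a constant bias in one direction."""
    rng = random.Random(seed)
    return [true_value + bias + rng.gauss(0, noise_sd) for _ in range(n)]

random_only = simulate_measurements(10.0, 10_000)
with_bias = simulate_measurements(10.0, 10_000, bias=0.3)
# Averaging suppresses the random error, but the constant bias survives:
print(statistics.fmean(random_only))   # close to 10.0
print(statistics.fmean(with_bias))     # close to 10.3
```

However many measurements are averaged, the biased series stays about 0.3 above the true value, which is why repetition alone cannot cure a systematic error.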
Sampling Error/Random Error
Sampling error is the deviation of the selected sample from the true characteristics, traits,
behaviors, qualities or figures of the entire population.
Why Does This Error Occur?
Sampling process error occurs because researchers draw different subjects from the same
population but still, the subjects have individual differences. Keep in mind that when you take a
sample, it is only a subset of the entire population; therefore, there may be a difference
between the sample and population.
The most frequent cause of the said error is a biased sampling procedure. Every researcher
must seek to establish a sample that is free from bias and is representative of the entire
population. In this case, the researcher is able to minimize or eliminate sampling error.
Another possible cause of this error is chance. The process of randomization and probability
sampling is done to minimize sampling process error but it is still possible that all the
randomized subjects are not representative of the population.
The most common result of sampling error is systematic error, wherein the results from the
sample differ significantly from the results for the entire population. It follows logically that if the
sample is not representative of the entire population, the results from it will most likely differ
from the results taken from the entire population.
Sample Size and Sampling Error
Given two studies that are exactly the same, with the same sampling methods and the same
population, the study with the larger sample size will have less sampling process error than the
study with the smaller sample size. Keep in mind that as the sample size increases, it approaches the size of the entire
population, therefore, it also approaches all the characteristics of the population, thus,
decreasing sampling process error.
Standard Deviation and Sampling Error
Standard deviation is used to express the variability of the population. More technically, it is a
measure of the average difference of the subjects' actual scores from the mean of all the
scores. Therefore, if the sample has a high standard deviation, it follows that the sample also
has a high sampling process error.
It is easier to understand this if you relate standard deviation to sample size. Keep in
mind that as the sample size increases, the standard deviation of the sample mean (the
standard error) decreases.
Imagine having only 10 subjects: with this very small sample size, their results tend to vary
greatly, so the sample means have a high standard deviation. Then imagine increasing the
sample size to 100: the results tend to cluster, so the sample means have a low standard deviation.
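A sketch of this effect, estimating the spread of sample means at two sample sizes (the score population is invented):

```python
import random
import statistics

def sd_of_sample_mean(population, n, trials=2000, seed=0):
    """Standard deviation of the sample mean across repeated samples of
    size n -- the standard error, which shrinks as n grows."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.sample(population, n)) for _ in range(trials)]
    return statistics.stdev(means)

scores = list(range(100))                # hypothetical test scores
print(sd_of_sample_mean(scores, 10))     # sample means vary widely
print(sd_of_sample_mean(scores, 50))     # sample means cluster tightly
```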
Ways to Eliminate Sampling Error
There is only one way to eliminate this error: abandon sampling altogether and test the entire
population. In most cases this is not possible; consequently, what a researcher must do is
minimize sampling process error. This can be achieved by proper, unbiased probability
sampling and by using a large sample size.
Bias problems/ Systematic Error
Sampling bias is a possible source of sampling error. It leads to sampling errors that tend to be
consistently positive or consistently negative; such errors can be considered systematic
errors. Systematic errors are biases in measurement which lead to a situation where
the mean of many separate measurements differs significantly from the actual value of the
measured attribute. All measurements are prone to systematic errors, often of several different
types. Sources of systematic error include imperfect calibration of measurement instruments
(zero error), changes in the environment that interfere with the measurement process, and
imperfect methods of observation, which can produce either a zero error or a percentage error.
Minimizing Systematic Error
Systematic error can be difficult to identify and correct. Given a particular experimental
procedure and setup, it doesn't matter how many times you repeat and average your
measurements; the error remains unchanged. No statistical analysis of the data set will
eliminate a systematic error, or even alert you to its presence. Systematic error can be located
and minimized with careful analysis and design of the test conditions and procedure; by
comparing your results to other results obtained independently, using different equipment or
techniques; or by trying out an experimental procedure on a known reference value, and
adjusting the procedure until the desired result is obtained (this is called calibration). A few
items to consider:
1. What are the characteristics of your test equipment, and of the item you are testing?
Under what conditions will the instrument distort or change the physical quantity you
are trying to measure? For example, a voltmeter seems straightforward enough. You
hook it up to two points in a circuit and it gives you the voltage between them. Under
conditions of very low current or high voltage, however, the voltmeter itself becomes a
significant part of the circuit, and the measured voltage may be significantly altered.
Similarly, a large temperature probe touched to a small object may significantly affect its
temperature, and distort the reading.
2. It is unusual to make a direct measurement of the quantity you are interested in. Most
often, you will be making measurements of a related physical quantity, often several
times removed, and at each stage some kind of assumption must be made about the
relationship between the data you obtain and the quantity you are actually trying to
measure. Sometimes this is a straightforward conversion process; other cases may be more complicated.
3. Calibration: Sometimes systematic error can be tracked down by comparing the results
of your experiment to someone else's results, or to results from a theoretical model.
However, it may not be clear which of the sets of data is accurate. Calibration, when
feasible, is the most reliable way to reduce systematic errors. To calibrate your
experimental procedure, you perform it upon a reference quantity for which the correct
result is already known. When possible, calibrate the whole apparatus and procedure in
one test, on a known quantity similar in size and type to your unknown quantities.
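As a minimal illustration of this calibration idea (the balance readings and the 10.00 g standard are invented numbers):

```python
import statistics

def make_calibration(readings_of_standard, standard_value):
    """Estimate a constant systematic offset from repeated measurements
    of a known reference, and return a function that corrects readings."""
    offset = statistics.fmean(readings_of_standard) - standard_value
    return lambda reading: reading - offset

# hypothetical: a balance reads a 10.00 g standard about 0.30 g high
correct = make_calibration([10.31, 10.29, 10.30], 10.00)
adjusted = correct(5.80)   # the same 0.30 offset is removed from any reading
```

This sketch assumes the systematic error is a pure constant (zero-type) offset; a percentage error would instead need a multiplicative correction factor.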
Sampling error can be contrasted with non-sampling error. Non-sampling error is a catch-all
term for the deviations from the true value that are not a function of the sample chosen,
including various systematic errors and any random errors that are not due to sampling. Non-
sampling errors are much harder to quantify than sampling error. Non-sampling error is caused
by factors other than those related to sample selection. It refers to the presence of any factor,
whether systemic or random, that results in the data values not accurately reflecting the 'true'
value for the population.