HECKEL PLOTS, SIMILARITY FACTORS F1 AND F2, HIGUCHI
AND KORSMEYER-PEPPAS MODELS, LINEARITY CONCEPT OF
SIGNIFICANCE, STANDARD DEVIATION, CHI-SQUARE TEST,
STUDENT'S T-TEST, ANOVA TEST
PRESENTED BY:
ABDUL NAIM
M PHARM 1ST YEAR
DEPT OF PHARMACEUTICS
NARGUND COLLEGE OF PHARMACY
SIMILARITY FACTORS F1 AND F2:
DIFFERENCE FACTOR (F1):
The difference factor (f1), as defined by the FDA, calculates the % difference between two
curves at each time point and is a measurement of the relative error between the two curves:

f1 = { [ ∑t=1..n |Rt − Tt| ] / [ ∑t=1..n Rt ] } × 100

Where, n = number of time points,
Rt = % dissolved at time t of the reference product (pre-change), and
Tt = % dissolved at time t of the test product (post-change).
SIMILARITY FACTOR (F2):
The similarity factor (f2), as defined by the FDA, is a logarithmic reciprocal square root
transformation of the sum of squared error and is a measurement of the similarity in the
percent (%) dissolution between the two curves:

f2 = 50 × log10 { 100 / √[ 1 + (1/n) ∑t=1..n (Rt − Tt)² ] }
Limits for the difference factor and similarity factor:

Difference factor (f1)    Similarity factor (f2)    Inference
0                         100                       Dissolution profiles are identical
0-15                      50-100                    Similarity or equivalence of the two profiles

Data structure and steps to follow:
• This model-independent method is most suitable for dissolution
profile comparison when three to four or more dissolution time points are
available.
• Determine the dissolution profiles of the two products (12 units each): the
test (post-change) product and the reference (pre-change) product.
• Using the mean dissolution values from both curves at each time
interval, calculate the difference factor (f1) and similarity factor (f2)
using the above equations (a worked sketch follows this list).
• For curves to be considered similar, f1 values should be close to 0,
and f2 values should be close to 100. Generally, f1 values up to 15
(0-15) and f2 values greater than 50 (50-100) ensure sameness or
equivalence of the two curves and, thus, of the performance of the
test (post-change) and reference (pre-change) products.
• In dissolution profile comparisons, especially to assure similarity in
product performance, the regulatory interest is in knowing how
similar the two curves are and in having a measure that is more
sensitive to large differences at any particular time point.
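A minimal computational sketch of both factors, assuming NumPy and hypothetical mean dissolution values (not taken from any real study):

```python
import numpy as np

def f1_f2(reference, test):
    """Difference factor f1 and similarity factor f2 from mean
    % dissolved values at matched time points."""
    R = np.asarray(reference, dtype=float)
    T = np.asarray(test, dtype=float)
    n = len(R)
    # f1: sum of absolute differences relative to the reference curve
    f1 = np.abs(R - T).sum() / R.sum() * 100
    # f2: logarithmic reciprocal square root transformation of the
    # mean squared difference between the two curves
    f2 = 50 * np.log10(100 / np.sqrt(1 + np.square(R - T).sum() / n))
    return f1, f2

# Hypothetical mean % dissolved at 15, 30, 45 and 60 minutes
reference = [35, 58, 79, 92]   # pre-change product
test = [38, 61, 81, 93]        # post-change product
f1, f2 = f1_f2(reference, test)
print(f"f1 = {f1:.1f}, f2 = {f2:.1f}")  # similar if f1 is 0-15 and f2 is 50-100
```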
Some recommendations:
• The dissolution measurements of the test and reference batches
should be made under exactly the same conditions.
• The dissolution time points for both profiles should be the
same (e.g. 15, 30, 45, 60 minutes).
• The reference batch used should be the most recently
manufactured pre-change product.
• Only one measurement should be considered after 85% dissolution
of both the products (when applicable).
• To allow use of mean data, the percent coefficient of variation (%
CV) at the earlier time points (e.g. 15 minutes) should not be more
than 20%, and at other time points should not be more than 10%.
• The mean dissolution values for the reference can be derived either
from the last pre-change batch or from the last two or more consecutively
manufactured pre-change batches.
Applications:
• This method is more appropriate when more than three or four
dissolution time points are available.
• The f2 value may become invariant with respect to location change, and
failure to take into account the shape of the curve and the unequal
spacing between sampling time points can lead to errors.
• Nevertheless, with a slight modification in the statistical analysis, the
similarity factor serves as an efficient tool for reliable
comparison of dissolution profiles.
Advantages:
1. They are easy to compute.
2. They provide a single number to describe the comparison of
dissolution profile data.
Disadvantages:
1. The values of f1 and f2 are sensitive to the number of dissolution
time points used.
2. The basis of the criteria for deciding the difference or similarity
between two dissolution profiles is unclear.
HECKEL PLOT:
• Heckel analysis is the most popular method of describing powder volume
reduction under compression pressure.
• Powder packing with increasing compression load is normally
attributed to particle rearrangement, elastic and plastic deformation, and
particle fragmentation.
• It is analogous to a first-order reaction, where the pores in the mass are
the reactant, that is:

log(1/E) = Ky·P + Kr

Where Ky = a material-dependent constant inversely proportional to its
yield strength S, and
Kr = a constant related to the initial repacking stage, and hence to the
initial porosity E0.
• From the applied compressional force F and the movement of the punches
during the compression cycle, the applied pressure P and the porosity E
can be derived.
• For a cylindrical tablet, P = 4F/(π·D²)

Where D is the tablet diameter; similarly, E can be calculated as

E = 100·[1 − 4w/(ρt·π·D²·H)]
Where w is the weight of the tableting mass,
ρt is its true density, and
H is the thickness of the tablet.
• The Heckel plot is log(1/E), a function of relative density, versus applied pressure.
• It follows first-order kinetics.
• As the compression force increases, the porosity decreases.
• Thus the Heckel plot allows interpretation of the mechanism of bonding.
• Materials that are comparatively soft and readily undergo plastic
deformation retain different degrees of porosity, depending upon the
initial packing in the die.
• Harder materials with higher yield pressure values usually undergo
compression by fragmentation first, to provide a denser packing.
EX: Lactose, sucrose (shown as type (b) in the graph below).
[Figure: Heckel plots of log(1/E) versus compressional force: (a) material undergoing plastic deformation; (b) material undergoing brittle fracture]
APPLICATION OF HECKEL EQUATION:
• Heckel plots can be influenced by the overall time of compression, the degree
of lubrication and even the size of the die, so that the effects of these variables
are also important and should be taken into consideration.
• Larger k values usually indicate harder tablets.
• Such information can be used as a means of binder selection when
designing tablet formulations.
• The crushing strength of tablets can be correlated with the values of k of the
Heckel plot.
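As a rough illustrative sketch (all numbers are assumed, not measured data), the pressure P and porosity E can be computed from the equations above and the Heckel constants estimated by linear regression:

```python
import numpy as np

# Hypothetical data for a cylindrical tablet (assumed values)
D = 1.0        # tablet diameter, cm
w = 0.50       # weight of the tableting mass, g
rho_t = 1.54   # true density, g/cm^3
F = np.array([5000, 10000, 15000, 20000, 25000])   # compressional force, N
H = np.array([0.55, 0.50, 0.47, 0.45, 0.44])       # tablet thickness, cm

P = 4 * F / (np.pi * D**2)                    # applied pressure, P = 4F/(pi*D^2)
E = 1 - 4 * w / (rho_t * np.pi * D**2 * H)    # porosity as a fraction

# Heckel equation: log(1/E) = Ky*P + Kr; the slope Ky is inversely
# proportional to the yield strength, Kr reflects initial repacking.
Ky, Kr = np.polyfit(P, np.log10(1 / E), 1)
print(f"Ky = {Ky:.3e} cm^2/N, Kr = {Kr:.3f}")
```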
HIGUCHI MODEL:
The first example of a mathematical model aimed to describe drug release
from a matrix system was proposed by Higuchi in 1961.
This model is based on the hypothesis that
(i) drug diffusion takes place only in one dimension
(ii) drug particles are much smaller than system thickness
(iii) drug diffusivity is constant
(iv) perfect sink conditions are always attained in the release environment.
Accordingly, the model expression is given by the equation:

ft = Q = A·√[D·(2C − Cs)·Cs·t]

where Q is the amount of drug released in time t per unit area A,
C is the initial drug concentration,
Cs is the drug solubility in the matrix media, and
D is the diffusivity of the drug molecules (diffusion coefficient) in the matrix
substance.
The initial drug concentration in the matrix is much higher than the drug
solubility.
As drug is released, the distance for diffusion progressively increases.
Drug is leached out of the polymer matrix by entry of the surrounding
medium.
In the release environment, a perfect sink is maintained.
The equation of the Higuchi model:

Q = [D·(2A − Cs)·Cs·t]^(1/2)

or, when A >> Cs (so that 2A − Cs ≈ 2A),

Q = (2·A·D·Cs·t)^(1/2)

By differentiating the above equation we get the release rate:

dQ/dt = (A·D·Cs / 2t)^(1/2)
The drug release from a granular matrix is given by:

Q = [ (D·ε/τ)·(2A − ε·Cs)·Cs·t ]^(1/2)

Where,
Q = amount of drug released per unit area in time t
Cs = solubility of the drug in the matrix medium
A = total concentration of drug in the matrix
D = diffusion coefficient
t = time
ε = porosity of the matrix
τ = tortuosity of the matrix
• The data obtained are plotted as cumulative percentage drug release
versus the square root of time.
Application:
This relationship can be used to describe drug dissolution from
several types of modified-release pharmaceutical dosage forms, as in the
case of some transdermal systems and matrix tablets with water-soluble
drugs.
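A minimal fitting sketch under the Higuchi model, assuming NumPy and hypothetical release data: the cumulative % released is regressed against the square root of time, and a near-linear fit supports Higuchi kinetics.

```python
import numpy as np

# Hypothetical in vitro release data (assumed, for illustration only)
t = np.array([0.25, 0.5, 1, 2, 4, 6])     # time, h
Q = np.array([12, 17, 24, 34, 48, 59])    # cumulative % drug released

# Higuchi kinetics: Q = kH * sqrt(t); fit a line through Q vs sqrt(t)
sqrt_t = np.sqrt(t)
kH, intercept = np.polyfit(sqrt_t, Q, 1)
r = np.corrcoef(sqrt_t, Q)[0, 1]
print(f"kH = {kH:.2f} %/h^0.5, intercept = {intercept:.2f}, r^2 = {r**2:.4f}")
```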
KORSMEYER-PEPPAS MODEL:
1. Korsmeyer et al. (1983) derived a simple relationship which
describes drug release from a polymeric system.
2. To find out the mechanism of drug release, the first 60% of the drug
release data are fitted to the Korsmeyer-Peppas model:

F = Mt/M∞ = K·t^n

Where,
Mt/M∞ is the fraction of drug released at time t,
K is the release rate constant, and
n is the release exponent.
The n value is used to characterize different release mechanisms for
cylindrically shaped matrices.
To find the exponent n, only the portion of the release curve
where Mt/M∞ < 0.6 should be used.
• To study the release kinetics, data obtained from in vitro drug release
studies are plotted as log cumulative percentage drug release versus log
time.
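A minimal sketch of this log-log fit, with NumPy and hypothetical fractional-release data restricted to Mt/M∞ < 0.6 as recommended above:

```python
import numpy as np

# Hypothetical fractional release data, all points below Mt/Minf = 0.6
t = np.array([0.5, 1, 2, 3, 4])                    # time, h
frac = np.array([0.15, 0.22, 0.32, 0.40, 0.46])    # Mt/Minf (assumed)

# log(Mt/Minf) = log(K) + n*log(t): slope gives n, intercept gives K
slope, intercept = np.polyfit(np.log10(t), np.log10(frac), 1)
n, K = slope, 10**intercept
print(f"n = {n:.2f}, K = {K:.3f}")
# For a cylinder, n near 0.45 suggests Fickian diffusion, 0.45 < n < 0.89
# anomalous transport, and n near 0.89 Case-II (relaxational) transport.
```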
Linearity concept of significance:
Definition of significance testing:
In statistics, it is important to know whether the result of an
experiment is significant or not. In order to measure significance,
there are some predefined tests which can be applied.
These tests are called tests of significance, or simply significance tests.
• This statistical testing is subject to some degree of error. For some
experiments, the researcher is required to define the probability of
sampling error in advance. In any test which does not consider the entire
population, sampling error exists. The testing of significance is
very important in statistical research.
• The significance level is the threshold at which a given
event is accepted as statistically significant. The probability computed
by the test itself is termed the p-value; a result is significant when the
p-value falls below the significance level.
• It is observed that bigger samples are less prone to chance; thus
the sample size plays a vital role in measuring statistical significance.
• In short, significance is the probability that a relationship exists.
Significance tests tell us the probability that a relationship we
found is due to random chance, and to what level. This indicates
the error that would be made if the found relationship is
assumed to exist.
Objectives of linearity testing :
• The statistical significance refers to the probability of a result of
some statistical test or research occurring by chance.
• The main purpose of performing statistical research is basically to
find the truth.
• In this process, the researcher has to ensure the quality of the
sample, the accuracy, and the goodness of the measures, which requires a
number of steps.
• The researcher has to determine whether the findings of the experiments
have occurred due to a sound study or just by fluke.
Process of Significance Testing:
In the process of testing for statistical significance, there are the
following steps:
1. Stating a Hypothesis for Research
2. Stating a Null Hypothesis
3. Selecting a Probability of Error Level
4. Selecting and Computing a Statistical Significance Test
5. Interpreting the results
The claim tested by a statistical test is called the null hypothesis (H0).
The test is designed to assess the strength of the evidence against the
null hypothesis. Often the null hypothesis is a statement of “no
difference.”
The claim about the population that evidence is being sought for is the
alternative hypothesis (Ha).
➢ When using logical reasoning, it is much easier to demonstrate that a
statement is false than to demonstrate that it is true, because
proving something false only requires one counterexample.
➢ Proving something true, however, requires proving the statement is true
in every possible situation.
➢ For this reason, when conducting a test of significance, a null
hypothesis is used.
➢ The term null is used because this hypothesis assumes that there is no
difference between the two means or that the recorded difference is not
significant.
➢ The notation that is typically used for the null hypothesis is H0.
➢ The opposite of a null hypothesis is called the alternative hypothesis.
➢ The alternative hypothesis is the claim that researchers are actually
trying to prove is true. However, they prove it is true by proving that the
null hypothesis is false.
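As a small illustration of the five steps above, using SciPy and made-up measurements (not from the presentation):

```python
from scipy import stats

# Step 1: research hypothesis - the two formulations dissolve differently.
# Step 2: null hypothesis H0 - there is no difference between the means.
# Step 3: probability of error level (significance level): alpha = 0.05.
group_a = [78, 82, 85, 79, 81, 84]   # % dissolved, formulation A (assumed)
group_b = [74, 77, 80, 75, 78, 76]   # % dissolved, formulation B (assumed)

# Step 4: select and compute a significance test (two-sample t-test here).
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Step 5: interpret - reject H0 only if the p-value falls below alpha.
verdict = "reject H0" if p_value < 0.05 else "fail to reject H0"
print(f"t = {t_stat:.2f}, p = {p_value:.4f} -> {verdict}")
```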
Standard deviation:
Standard deviation is a measure which shows how much variation (spread or
dispersion) from the mean exists.
✓ The standard deviation indicates a "typical" deviation from the mean. It
is a popular measure of variability because it is expressed in the original
units of measure of the data set. As with the variance, if the data points
are close to the mean there is small variation, whereas if the data points
are highly spread out from the mean there is high variance.
✓ Standard deviation, denoted by the symbol σ, is the square
root of the mean of the squared deviations of all the values of a series
from the arithmetic mean; it is also called the root-mean-square
deviation.
✓ 0 is the smallest possible value of the standard deviation, since it
cannot be negative. When the elements in a series are more distant from
the mean, the standard deviation is larger.
Formula for standard deviation:

σ = √[ ∑(x − x̄)² / n ]

Where,
∑ means "sum of",
x is a value in the data set,
x̄ is the mean of the data set, and
n is the number of data points in the population.
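A short sketch of the formula in plain Python, with a hypothetical data set:

```python
import math

def std_dev(data, sample=False):
    """Population standard deviation by default; with sample=True the
    sum of squares is divided by (n - 1) instead of n."""
    n = len(data)
    mean = sum(data) / n
    ss = sum((x - mean) ** 2 for x in data)   # sum of squared deviations
    return math.sqrt(ss / (n - 1 if sample else n))

values = [4, 8, 6, 5, 3, 7]                   # hypothetical data
print(std_dev(values))                        # population sigma
print(std_dev(values, sample=True))           # sample s
```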
Merits of Standard Deviation:
1. It is the most reliable measure of dispersion.
2. It is the most widely used measure of dispersion or variability.
3. Its computation is based on all the observations.
Demerits of Standard Deviation:
1. It is relatively difficult to calculate and understand.
2. It cannot be used for comparing the dispersion of two or
more series given in different units.
3. It is affected very much by extreme values.
Summary:
Standard deviation measures the dispersion of data.
It is the most reliable measure of dispersion.
The greater the value of the standard deviation, the further
the data tend to be dispersed from the mean.
Chi-Square test:
A chi-squared test (symbolically represented as χ²) is
a data analysis based on observations of a random set
of variables. Usually, it is a comparison of two statistical data sets.
• This test was introduced by Karl Pearson in 1900 for categorical data
analysis and distribution.
• Hence it is also referred to as Pearson's chi-squared test.
• A chi-square statistic is one way to show a relationship between two
categorical variables.
• In statistics, there are two types of variables: numerical (countable)
variables and non-numerical (categorical) variables
• The chi-squared statistic is a single number that tells you how much
difference exists between your observed counts and the counts you
would expect if there were no relationship at all in the population
Properties of the chi-squared distribution:
1. Its variance is equal to two times the number of degrees of freedom.
2. Its mean is equal to the number of degrees of freedom.
3. The chi-squared distribution curve approaches the normal
distribution as the number of degrees of freedom increases.
Formula:

χ² = ∑(Oi − Ei)²/Ei

Where,
Oi is the observed value and
Ei is the expected value.
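A minimal sketch with SciPy, using invented observed and expected counts (the two lists must sum to the same total):

```python
from scipy.stats import chisquare

# Hypothetical categorical counts (both lists sum to 100)
observed = [48, 35, 17]
expected = [50, 30, 20]

# chi2 = sum((Oi - Ei)^2 / Ei), with k - 1 = 2 degrees of freedom here
chi2, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```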
Student's t-test:
• The t-distribution, also called Student's t-distribution, is
used when making inferences about a mean when we don't
know the standard deviation.
• In probability and statistics, the normal distribution is a
bell-shaped distribution whose mean is μ and whose standard
deviation is σ.
• There are two Student's t-tests: one evaluates pairs of
results with something in common, known as the dependent
test, tdep. The other compares the averages of independent
results, tind.
T-Distribution Formula:

t = (x̄ − μ) / (s/√n)

In this equation, x̄ is the sample mean,
μ is the population mean,
s is the sample standard deviation, and
n is the number of observations in the sample.
• An example of a dependent design is comparing the results obtained
from the same individuals before and after a treatment.
• An independent design would be, for instance, comparing the
results obtained in groups of healthy men and women.
• Thus, tdep considers the difference between every pair of
values, whereas tind only considers the averages, the standard
deviations and the number of observations in each group. Access to these
summary values is therefore sufficient to compute tind (see the sketch below).
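The sketch below shows both designs with SciPy and hypothetical measurements: ttest_rel for the dependent (paired) case and ttest_ind for the independent case.

```python
from scipy import stats

# Dependent (paired) design: the same subjects before and after treatment
before = [120, 132, 128, 141, 135, 126]   # hypothetical readings
after = [114, 125, 124, 133, 130, 121]
t_dep, p_dep = stats.ttest_rel(before, after)

# Independent design: two unrelated groups, e.g. men and women
men = [72, 75, 71, 78, 74]
women = [68, 70, 73, 66, 69]
t_ind, p_ind = stats.ttest_ind(men, women)

print(f"t_dep = {t_dep:.2f} (p = {p_dep:.4f})")
print(f"t_ind = {t_ind:.2f} (p = {p_ind:.4f})")
```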
Properties of the t-distribution:
1. It ranges from −∞ to +∞.
2. It has a bell-shaped curve and symmetry similar to the
normal distribution.
3. The shape of the t-distribution varies with the change
in degrees of freedom.
4. The variance of the t-distribution is always greater than 1 and is
defined only for 3 or more degrees of freedom. This means the distribution
has a higher dispersion than the standard normal distribution.
ANOVA test:
If a specific quantity of a given sample is measured repeatedly
on several occasions, e.g. using different instruments or on different
days, it may be interesting to compare the averages of the groups or
of the various occasions. The procedure of choice in this case is
ANOVA. ANOVA reduces the risk of overestimating the significance
of differences caused by chance, which can be an effect of repeated
tind tests.
✓ It also gives us a way to make multiple comparisons of several
population means.
✓ The ANOVA test is performed by comparing two types of variation:
the variation between the sample means and the variation
within each of the samples.
Types of ANOVA:
1. One-Way ANOVA:
It is also known as one-factor ANOVA. Here, we use one criterion
variable (called a factor) and analyse the difference between more than two sample groups.
Suppose in the glass industry we want to compare three batches of glass (factor levels) for
their average weight (response).
Example:
From Table 1, 20 patients' DBP (at 30 min) values are given.
A one-way ANOVA test was used to compare the mean
DBP across three age groups (independent variable), which was
found statistically significant (p = 0.002). Levene's test for
homogeneity was insignificant (p = 0.231); as a result, the
Bonferroni test was used for multiple comparisons, which
showed that DBP was significantly different between two
pairs, i.e., the age groups <30 vs 30-50 and <30 vs >50 (P <
0.05), but insignificant between one pair, i.e., 30-50 vs >50
(P > 0.05).
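A minimal one-way ANOVA sketch with SciPy; the DBP values below are invented stand-ins, not the Table 1 data:

```python
from scipy import stats

# Hypothetical DBP readings (mmHg) in three age groups
age_lt30 = [82, 85, 80, 84, 83, 81, 85]
age_30_50 = [88, 90, 87, 91, 89, 92]
age_gt50 = [90, 93, 89, 94, 92, 91]

# One-way ANOVA: variation between group means vs variation within groups
F, p = stats.f_oneway(age_lt30, age_30_50, age_gt50)
print(f"F = {F:.2f}, p = {p:.4f}")   # p < 0.05 -> at least one mean differs
```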
2. Two-Way ANOVA:
Here, we use two independent variables
(factors) and analyse the difference between more than two sample
groups. Similarly, we may want to compare the variation of three batches
of glass in weight and hardness (two factors).
Example:
From Table 1, 20 patients' DBP (at 30 min) values are given.
A two-way ANOVA test was used to compare the mean DBP between
age groups (independent variable 1) and gender (independent
variable 2), which indicated that there was no significant interaction
of DBP with age group and gender (tests of between-subjects effects
for age group*gender; P = 0.626), with an effect size (partial eta
squared) of 0.065.
The results also showed that there was a significant difference in the
estimated marginal means (adjusted means) of DBP between age groups.
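A two-way ANOVA with interaction can be sketched with statsmodels; the long-format data frame below is invented for illustration and does not reproduce Table 1:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format data: DBP with two factors, age group and gender
df = pd.DataFrame({
    "dbp": [82, 85, 88, 90, 91, 93, 80, 84, 87, 92, 89, 95],
    "age": ["<30", "<30", "30-50", "30-50", ">50", ">50"] * 2,
    "sex": ["M"] * 6 + ["F"] * 6,
})

# Fit DBP against age, gender and their interaction, then build the
# ANOVA table (type II sums of squares).
model = smf.ols("dbp ~ C(age) * C(sex)", data=df).fit()
print(anova_lm(model, typ=2))
```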