1. HISTORY OF COMPUTERS IN
PHARMACEUTICAL RESEARCH
AND DEVELOPMENT
Manoj R
M.Pharm II semester
Dept of Pharmaceutics
Nandha College of Pharmacy
2. History: Introduction
Germination: the 1960s
Gaining a foothold: the 1970s
Growth: the 1980s
Fruition: the 1990s
Computers are so important in pharmaceutical research and development that it may be hard to imagine a time when there were no computers to assist the medicinal chemist or biologist.
• Computers began to be deployed at pharmaceutical companies as early as the 1940s.
3. Germination: The 1960s
In 1960, essentially 100% of computational chemists were in academia, not industry.
In 1962, the Quantum Chemistry Program Exchange (QCPE) came into the picture. Competitive scientists were initially slow to give away programs they had worked so hard to write, but gradually the depositions to QCPE increased.
Programs were written in FORTRAN II.
4. Gaining a Foothold: 1970s
Some of the companies that first adopted the software dropped out after a few years (but returned later), either for lack of management support or because the technology was not intellectually satisfying to the scientists involved. Other companies, like Lilly, persisted.
Companies such as Merck and Smith Kline and French (to use the old name) entered the field a few years later.
Two new computer-based resources were launched in the 1970s. One
was the Cambridge Structural Database (CSD), and the other was the
Protein Data Bank (PDB).
5. The Growth: The 1980s
• This was the growth period, a renaissance for computers in the pharmaceutical industry.
• Professor Allinger launched the Journal of Computational Chemistry, which covers quantum chemistry, molecular mechanics, molecular simulation, QSAR and molecular graphics.
• Development of the VAX by Digital Equipment Corporation (DEC), the personal computer (PC) by IBM, and the Apple Macintosh brought interactive computing to a new level.
• A program called MACCS was used to check whether a compound had previously been synthesized; REACCS did the same for chemical reactions.
• Koga of Japan was the first to use QSAR to discover an antibacterial agent, norfloxacin, around 1982.
6. Fruition: The 1990s
• Supercomputers (such as the Cray 2S) helped speed the identification of new drug candidates by enabling longer molecular dynamics simulations and quantum mechanical calculations on large molecules.
• Later, computational techniques such as QSAR and data mining proved more effective at discovering and optimizing new lead compounds than the supercomputer.
7. Recent software in the pharma industry
Oracle JD Edwards – Manufacturing
MasterControl Quality Management System (QMS)
NetSuite
SapphireOne
Fishbowl Manufacturing
Lot Tracking
Interface System and Business-to-Business Trading
SYSPRO
MAXLife365
8. STATISTICAL MODELLING IN PHARMACEUTICAL
RESEARCH AND DEVELOPMENT
DESCRIPTIVE MODELS
They are based on direct observation, measurement and extensive data records.
"Descriptive model" is a generic term for activities that create models by observation and experiment.
A descriptive model operates on a simple logic: the maker observes a close correspondence between the behaviour of the model and that of its referent.
9. MECHANISTIC MODELS
They are based on an understanding of the behaviour of a system's components.
A mechanistic model assumes that a complex system can be understood by examining the workings of its individual parts.
Mechanistic models typically have a tangible, physical aspect, in that the system components are real, solid and visible.
A mechanistic model is one where the basic elements of the model have a direct correspondence to the underlying mechanisms in the system being modelled.
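As an illustration (not from the slides), a one-compartment elimination model is mechanistic in exactly this sense: its single element, a first-order rate constant k, corresponds directly to the physical elimination process. A minimal Python sketch with made-up values:

```python
import numpy as np

# Mechanistic one-compartment model: the concentration declines by
# first-order elimination, C(t) = C0 * exp(-k * t). The parameter k (1/h)
# maps directly onto the underlying elimination mechanism.
def concentration(t, c0=10.0, k=0.2):
    return c0 * np.exp(-k * t)

for t in [0, 2, 4, 8]:
    print(f"t = {t} h: C = {concentration(t):.2f} mg/L")
```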
10. Statistical Parameter Estimation (Sample Statistics)
It is the process by which one makes inferences about a population, based on information obtained from a sample.
Inferential statistics are used to determine the likelihood that a conclusion, based on the analysis of data from a sample, is true and represents the population studied.
The two common forms of statistical inference are:
Estimation
Null hypothesis significance testing (NHST)
11. Estimation
A parameter is a statistical constant that describes a feature of a phenomenon, population, etc.
There are two forms of estimation:
Point estimation (the maximally likely single value for the parameter)
Interval estimation (a confidence interval for the parameter)
12. Point Estimation
Point estimation is an estimate of a population parameter given by a single number.
Point estimates estimate the parameter directly and serve as a "best guess" or "best estimate" of an unknown population parameter.
Example: the sample mean as an estimate of the population mean, or the sample standard deviation as an estimate of the population standard deviation.
13. LIMITATIONS OF POINT ESTIMATION
• Point estimation does not provide information about sample-to-sample variability; point estimates are single numbers used to infer parameters directly.
• For example:
The sample proportion p̂ ("p hat") is the point estimator of p.
The sample mean x̄ ("x bar") is the point estimator of μ.
The sample standard deviation s is the point estimator of σ.
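A minimal Python sketch of these point estimators, using an invented sample:

```python
import numpy as np

sample = np.array([6.1, 7.4, 6.8, 7.9, 6.5, 7.2, 6.9, 7.6])

x_bar = sample.mean()           # point estimator of the population mean mu
s = sample.std(ddof=1)          # point estimator of sigma (n - 1 denominator)
p_hat = (sample > 7.0).mean()   # sample proportion, here of values above 7

print(f"x bar = {x_bar:.3f}, s = {s:.3f}, p hat = {p_hat:.3f}")
```

Each number is a single "best guess"; by itself it says nothing about how much it would vary from sample to sample.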
14. Confidence regions
It is a set of points in an n-dimensional space, often represented as an
ellipsoid around a point which is an estimated solution to a problem,
although other shapes can occur.
Confidence regions are multivariate extensions of univariate
confidence intervals.
15. Interpretation of confidence interval
It provides a range of possible values for the parameter.
It gives information about the closeness of the sample estimate to the unknown population parameter.
It provides a measure of the extent to which a sample estimate is likely to differ from the true population value.
It indicates, with a stated level of certainty, the range of values within which the true population mean is likely to lie.
The width of the CI depends on the level of confidence, as the sketch below shows.
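A minimal sketch (invented data, t-based interval computed with scipy) showing the interval widen as the confidence level rises:

```python
import numpy as np
from scipy import stats

sample = np.array([6.1, 7.4, 6.8, 7.9, 6.5, 7.2, 6.9, 7.6])

n = len(sample)
x_bar = sample.mean()
se = sample.std(ddof=1) / np.sqrt(n)   # standard error of the mean

for level in (0.90, 0.95, 0.99):
    t_crit = stats.t.ppf((1 + level) / 2, df=n - 1)
    lo, hi = x_bar - t_crit * se, x_bar + t_crit * se
    print(f"{level:.0%} CI for mu: ({lo:.2f}, {hi:.2f})")
```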
16. Null hypothesis tests of significance, NHST
It is a method of statistical inference by which an experimental factor is tested
against a hypothesis of no effect or no relationship based on a given
observation.
Here is a simple example: A school principal claims that students in her school
score an average of seven out of 10 in exams. The null hypothesis is that the
population mean is 7.0.
Null Hypothesis
It is a statement about a population parameter, denoted by H0.
It tests the likelihood of the statement being true in order to decide whether to accept or reject the alternative hypothesis.
It must be tested to determine whether it is true.
It includes the signs =, ≤ or ≥.
Example: an accepted theory, e.g. that ethanol boils at 78.4 °C.
17. Test of significance
It is a formal procedure for comparing observed data with a claim (also called a
hypothesis) whose truth we want to assess.
A test of significance is used to test a claim about an unknown population parameter.
A significance test uses data to evaluate a hypothesis by comparing sample
point estimates of parameters to values predicted by the hypothesis.
How the decision relates to the true state of the null hypothesis:

If p >= 0.05, accept H0 (non-significant; conclusion negative):
  H0 true: correct decision, probability 1 - α (confidence level)
  H0 false: type 2 error, probability β

If p < 0.05, reject H0 (significant; conclusion positive):
  H0 true: type 1 error, probability α
  H0 false: correct decision, probability 1 - β (power of the test)
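As a worked example of this decision rule, a minimal one-sample t-test of the principal's claim from slide 16 (H0: the population mean score is 7.0), using scipy and invented scores:

```python
import numpy as np
from scipy import stats

# Invented exam scores for a sample of students
scores = np.array([6.2, 6.8, 7.1, 5.9, 6.5, 7.3, 6.0, 6.7, 6.4, 7.0])

t_stat, p_value = stats.ttest_1samp(scores, popmean=7.0)

# Decision rule from the table above
if p_value < 0.05:
    print(f"p = {p_value:.4f} < 0.05: reject H0 (significant)")
else:
    print(f"p = {p_value:.4f} >= 0.05: do not reject H0 (non-significant)")
```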
18. STATISTICAL PARAMETERS: NONLINEARITY AT THE OPTIMUM
It is useful to study the degree of nonlinearity of the model in a neighbourhood of the forecast.
Briefly, there exist methods of assessing the maximum degree of intrinsic nonlinearity that the model exhibits around the optimum found.
If the maximum nonlinearity is excessive, the confidence regions for one or more parameters obtained by applying the results of classical theory are not to be trusted.
In this case, alternative simulation procedures may be employed to provide empirical confidence regions.
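The slides do not name a specific simulation procedure; a common choice is the bootstrap. A minimal sketch, assuming an exponential model, simulated data, and scipy's curve_fit, that builds empirical confidence limits by refitting resampled data:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def model(t, a, k):                 # nonlinear model: a * exp(-k * t)
    return a * np.exp(-k * t)

# Simulated noisy observations standing in for real data
t = np.linspace(0, 10, 25)
y = model(t, 5.0, 0.4) + rng.normal(0, 0.2, t.size)

# Case-resampling bootstrap: resample (t, y) pairs, refit, collect estimates
boot = []
for _ in range(1000):
    idx = rng.integers(0, t.size, t.size)
    p, _ = curve_fit(model, t[idx], y[idx], p0=[5.0, 0.4])
    boot.append(p)
boot = np.array(boot)

# Empirical 95% limits: no linear (classical-theory) approximation needed
for name, col in zip(["a", "k"], boot.T):
    lo, hi = np.percentile(col, [2.5, 97.5])
    print(f"{name}: empirical 95% CI ({lo:.3f}, {hi:.3f})")
```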
19. Sensitivity analysis
It is the process by which the robustness of a cost-utility analysis (CUA) is assessed by examining the changes in the results of the analysis when key variables are varied.
Sensitivity analysis is a way to predict the outcome of a decision if a situation turns out to be different from the key prediction (see the sketch after the list below).
Sensitivity analysis is called for whenever you:
• Create a model
• Write a set of requirements
• Design a system
• Make a decision
• Do a trade-off study
• Originate a risk analysis
• Want to discover the cost drivers
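A minimal sketch of one-at-a-time sensitivity analysis on an invented cost model: each input is varied by ±20% around its base value to see which one drives the output, the "cost driver" idea from the list above:

```python
# One-at-a-time sensitivity analysis on a made-up cost model
def total_cost(units, unit_cost, overhead):
    return units * unit_cost + overhead

base = {"units": 1000, "unit_cost": 2.5, "overhead": 800}
base_out = total_cost(**base)

for key in base:
    for factor in (0.8, 1.2):
        trial = dict(base, **{key: base[key] * factor})
        change = (total_cost(**trial) - base_out) / base_out
        print(f"{key} x{factor}: output changes {change:+.1%}")
```

Inputs whose ±20% swings move the output the most are the cost drivers.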
20. OPTIMAL DESIGN
Introduction: It is the process of finding the best way of using the existing resources while taking into account all the factors that influence decisions in any experiment.
The objective of designing a quality formulation is achieved by various optimization techniques. In pharmacy, the word "optimization" is found in the literature referring to the study of the formula.
21. Advantages of optimal designs
• Optimal designs reduce the costs of experimentation by allowing statistical models to be estimated with fewer experimental runs.
• Optimal designs can accommodate multiple types of factors, such as process and discrete factors.
• Designs can be optimized even when the design space is constrained, for example when the mathematical process space contains factor settings that are practically infeasible (e.g. due to safety concerns); the sketch below illustrates the idea.
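As an illustration of the criterion behind such designs (the slides do not prescribe an algorithm), here is a minimal sketch of D-optimal point selection, assuming a quadratic model in one factor on a constrained space [-1, 1]; the point-exchange loop is illustrative, not a production method:

```python
import numpy as np

rng = np.random.default_rng(0)

def model_matrix(x):
    # Design matrix for the assumed quadratic model: columns 1, x, x^2
    x = np.asarray(x, dtype=float)
    return np.column_stack([np.ones_like(x), x, x ** 2])

def d_criterion(x):
    X = model_matrix(x)
    return np.linalg.det(X.T @ X)    # D-optimality: maximize det(X'X)

candidates = np.linspace(-1.0, 1.0, 21)         # feasible factor settings
design = list(rng.choice(candidates, size=6))   # random starting design

# Point exchange: swap any run for any candidate if det(X'X) improves
improved = True
while improved:
    improved = False
    for i in range(len(design)):
        for c in candidates:
            trial = design[:i] + [c] + design[i + 1:]
            if d_criterion(trial) > d_criterion(design) + 1e-9:
                design, improved = trial, True

print(sorted(design))   # runs typically concentrate at -1, 0 and +1
```

Infeasible settings are excluded simply by leaving them out of the candidate list.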
23. STATISTICAL PARAMETERS: POPULATION MODELING
• Introduction:
• Modeling and simulation have emerged as important tools for integrating data, knowledge and mechanisms to aid in arriving at rational decisions regarding drug use and development.
• Population modeling methods provide a framework for quantifying and explaining variability in drug exposure and response.
• Population modeling is a tool to identify and describe relationships between a subject's physiologic characteristics and observed drug exposure or response.
25. Traditional Standard Two-Stage Method
It is a traditional method.
It involves the study of a relatively small number of individuals subjected to intensive sampling.
The period of the study is short, since the individuals are usually categorised.
Advantages
It provides reliable and robust estimates when extensive numbers of samples are available for each individual.
It is a simple method.
It is a well-tried and straightforward method to implement.
Many software packages are available for this method.
Disadvantages
Being a controlled study design, it is very expensive and requires careful planning and implementation.
It gives unreliable results in the case of sparse data.
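A minimal sketch of the two stages, assuming a mono-exponential model, simulated subjects, and scipy's curve_fit: stage 1 fits each individual's intensive data separately; stage 2 summarizes the individual estimates:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def conc(t, c0, k):                  # mono-exponential elimination
    return c0 * np.exp(-k * t)

t = np.linspace(0.5, 12, 10)         # intensive sampling times per subject

# Stage 1: fit each simulated individual's rich data separately
k_estimates = []
for _ in range(6):
    k_true = rng.lognormal(np.log(0.3), 0.2)
    y = conc(t, 10.0, k_true) * rng.lognormal(0, 0.05, t.size)
    (c0_hat, k_hat), _ = curve_fit(conc, t, y, p0=[10.0, 0.3])
    k_estimates.append(k_hat)

# Stage 2: summarize the individual estimates across the population
k_estimates = np.array(k_estimates)
print(f"mean k = {k_estimates.mean():.3f}, SD = {k_estimates.std(ddof=1):.3f}")
```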
26. Naive Pooling Method
It is a traditional method.
In this method, the data from all individuals are pooled and analysed simultaneously, without consideration of the individual from whom the specific data were obtained.
Advantages
It may be the only viable approach in certain situations, e.g. with animal data, where each animal provides only one data point.
Disadvantages
This method is generally considered the least favourable. It is susceptible to bias and produces inaccurate estimates of pharmacokinetic parameters.
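A minimal sketch of naive pooling under the same assumed model: each simulated animal contributes a single point, and one curve is fitted to all points as if they came from one individual:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

def conc(t, c0, k):
    return c0 * np.exp(-k * t)

# One observation per animal (sparse data), pooled together
t_all, y_all = [], []
for _ in range(20):
    k_i = rng.lognormal(np.log(0.3), 0.3)   # each animal's own k, ignored below
    t_i = rng.uniform(1, 10)
    t_all.append(t_i)
    y_all.append(conc(t_i, 10.0, k_i))

# Single fit to the pooled data, with no notion of individuals
(c0_hat, k_hat), _ = curve_fit(conc, np.array(t_all), np.array(y_all),
                               p0=[10.0, 0.3])
print(f"pooled estimates: C0 = {c0_hat:.2f}, k = {k_hat:.3f}")
```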
27. Parametric Methods
Mixed Effect Modelling
Mixed effect modelling is a parametric method which assumes a specific distribution of pharmacokinetic parameters prior to estimation.
It is considered the optimum population modelling method.
The effects are of two types:
Fixed
Random
Fixed effects are components of the structural pharmacokinetic model. They do not include any unexplainable variation either between or within individuals. Fixed-effect parameters are represented by the symbol theta (θ).
Random effects: each individual in a population has a specific value for their pharmacokinetic parameter, which differs from the population typical value due to unexplainable variability.
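A minimal sketch of this parameterization (illustrative values only, not an estimation routine): the fixed effect θ is the population typical clearance, and each subject's random effect η shifts their own value away from it:

```python
import numpy as np

rng = np.random.default_rng(42)

theta_cl = 5.0      # fixed effect: population typical clearance (L/h)
omega = 0.3         # between-subject variability; eta_i ~ N(0, omega^2)

eta = rng.normal(0.0, omega, size=8)

# Each subject's clearance deviates from the typical value through an
# unexplained random effect (a common log-normal parameterization)
cl_individual = theta_cl * np.exp(eta)

for i, cl in enumerate(cl_individual, 1):
    print(f"subject {i}: CL = {cl:.2f} L/h")
```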
28. Non-Parametric Methods
Non-parametric methods do not assume any specific distribution of parameters about the population values, but rather allow for many possible distributions.
In this method, the entire population distribution of each parameter is estimated from the population data.
This permits visual inspection of the distribution before committing to one.
The different non-parametric methods are:
Non-parametric maximum likelihood [NPML]
Non-parametric expectation maximization [NPEM]
Semi/smooth non-parametric method [SNP]
29. NON-PARAMETRIC MAXIMUM LIKELIHOOD [NPML]
This method permits all forms of distributions, including those containing sharp changes, such as discontinuities and kinks.
It uses maximum likelihood as the estimator.
NON-PARAMETRIC EXPECTATION MAXIMIZATION [NPEM]
This method is preferred to any parametric method when there is an unexpected multimodal or non-normal distribution of at least one of the model parameters.
It eliminates the need for the initial guesses required by nonlinear least-squares procedures. It is preferable to traditional methods in the case of sparse data. It uses expectation maximization as the estimator.
SEMI/SMOOTH NON-PARAMETRIC METHOD [SNP]
This method places some restrictions on the types of distributions considered. Functions that are not permitted include those containing sharp edges and discontinuities.