Sensitivity Analysis, Optimal Design, Population Modeling
Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or numerical system can be divided and allocated to its various input sources.
Recalculating outcomes under alternative assumptions to determine the impact of a variable can be useful for a range of purposes, including:
1. Testing the robustness of the results of a model or system in the presence of uncertainty.
2. Improved understanding of the relationships between input and output variables in a model or system.
Sensitivity analysis methods:
There are a large number of methods for performing sensitivity analysis, many of which have been developed to address one or more of the constraints discussed below. They are distinguished by the type of sensitivity measure, be it based on variance decompositions, partial derivatives or elementary effects.
12. STATISTICAL PARAMETERS
Dispersion (also called Variability, Scatter, Spread)
It is the extent to which a distribution is stretched or squeezed.
Common examples of statistical dispersion are the variance, standard deviation and interquartile range.
Coefficient of Dispersion (COD)
It is a measure of spread that describes the amount of variability relative to the mean, and it is unitless.
COD = (σ / μ) × 100
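As a minimal illustration of the formula above (a sketch in Python with NumPy; the measurement values are hypothetical, not from the slides):

import numpy as np

x = np.array([12.0, 15.0, 9.0, 14.0, 10.0])  # hypothetical measurements
mu = x.mean()                                # mean
sigma = x.std()                              # population standard deviation (N in the denominator)
cod = sigma / mu * 100                       # coefficient of dispersion, expressed in percent
print(f"mean={mu:.2f}, sd={sigma:.2f}, COD={cod:.1f}%")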
13. Variance
It is the expectation of the squared deviation of a random variable from its mean, and it informally measures how far a set of random numbers are spread out from the mean.
It is calculated by taking the differences between each number in the set and the mean, squaring the differences (to make them positive) and dividing the sum of the squares by the number of values in the set.
The variance provides the user with a numerical measure of the scatter of the data.
σ² = Σ(X − μ)² / N = ΣX² / N − μ², where μ (mean) = ΣX / N
14. Standard Deviation (SD), σ
It is a measure used to quantify the amount of variation or dispersion of a set of data values.
It is a number that tells how measurements for a group are spread out from the average (mean) or expected value.
A low standard deviation means most of the numbers are very close to the average, while a high value indicates the data to be spread out.
The SD provides the user with a numerical measure of the scatter of the data.
σ = √( (1/N) Σ(X − μ)² )
15. Root Mean Squared Error (RMSE)
It is also termed the Root Mean Square Deviation (RMSD).
It is used to measure the differences between values (sample and population values) predicted by a model or an estimator and the values actually observed.
RMSE = √( Σ(X_observed − X_modelled)² / N )
16. Absolute Error (AE)
It is the magnitude of the difference between the exact value and the approximation.
The relative error is the absolute error divided by the magnitude of the exact value.
AE = |X_measured − X_actual|
17. Mean Square Error (MSE)
Also termed the Mean Square Deviation (MSD).
It measures the average of the squares of the errors or deviations, i.e. the difference between the estimator and what is estimated.
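A short sketch tying slides 13 to 17 together, computed with NumPy on hypothetical observed and modelled values (the data below are illustrative assumptions):

import numpy as np

observed = np.array([2.1, 3.9, 6.2, 7.8, 10.1])   # hypothetical measured values
modelled = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # hypothetical model predictions

mu = observed.mean()
variance = np.mean((observed - mu) ** 2)          # sigma^2 = sum((X - mu)^2) / N
sd = np.sqrt(variance)                            # sigma
abs_err = np.abs(observed - modelled)             # AE = |X_measured - X_actual|
mse = np.mean((observed - modelled) ** 2)         # mean square error
rmse = np.sqrt(mse)                               # root mean squared error
print(variance, sd, mse, rmse, abs_err.max())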
18. Factor Analysis
It is a useful tool for investigating variable relationships for complex concepts, allowing researchers to investigate concepts that are not easily measured directly by collapsing a large number of variables into a few interpretable underlying factors.
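A hedged sketch of this idea using scikit-learn's FactorAnalysis on simulated data; the two latent factors, the loadings and the noise level below are assumptions chosen only for illustration:

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))                        # two hypothetical underlying factors
loadings = rng.normal(size=(2, 8))                        # how 8 observed variables load on them
X = latent @ loadings + 0.3 * rng.normal(size=(200, 8))   # observed, noisy variables

fa = FactorAnalysis(n_components=2).fit(X)
print(fa.components_.shape)   # (2, 8): estimated loadings collapsing 8 variables onto 2 factors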
32. Confidence region
• In statistics, a confidence region is a multi-dimensional generalization of a confidence interval. It is a set of points in an n-dimensional space, often represented as an ellipsoid around a point which is an estimated solution to a problem, although other shapes can occur.
• Interpretation
• The confidence region is calculated in such a way that if a set of measurements were repeated many times and a confidence region calculated in the same way on each set of measurements, then a certain percentage of the time (e.g. 95%) the confidence region would include the point representing the "true" values of the set of variables being estimated.
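A minimal sketch of a 95% confidence ellipse (a two-dimensional confidence region) for two estimated parameters, assuming a normal approximation and a known estimator covariance matrix; the numerical values are hypothetical:

import numpy as np
from scipy.stats import chi2

theta_hat = np.array([1.2, 0.8])                 # hypothetical parameter estimates
cov = np.array([[0.04, 0.01], [0.01, 0.02]])     # hypothetical estimator covariance

level = chi2.ppf(0.95, df=2)                     # chi-squared boundary for 95% with 2 parameters
eigval, eigvec = np.linalg.eigh(cov)
half_axes = np.sqrt(level * eigval)              # semi-axis lengths of the ellipse
print("centre:", theta_hat, "semi-axes:", half_axes)

# a point theta lies inside the region if its Mahalanobis distance is below the boundary
theta = np.array([1.3, 0.75])
d2 = (theta - theta_hat) @ np.linalg.inv(cov) @ (theta - theta_hat)
print("inside 95% region:", d2 <= level)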
33. Nonlinear problems
• Confidence regions can be defined for any probability distribution. The experimenter can choose the significance level and the shape of the region, and then the size of the region is determined by the probability distribution. A natural choice is to use as a boundary a set of points with constant χ² (chi-squared) values.
• One approach is to use a linear approximation to the nonlinear model, which may be a close approximation in the vicinity of the solution, and then apply the analysis for a linear problem to find an approximate confidence region. This may be a reasonable approach if the confidence region is not very large and the second derivatives of the model are also not very large.
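A sketch of the linear-approximation approach just described, assuming a simple exponential model y = a·exp(b·t): the Jacobian at the fitted solution gives an approximate parameter covariance, which can then be fed into the chi-squared ellipse construction shown above. The fitted values and residual variance are hypothetical.

import numpy as np

t = np.linspace(0, 4, 20)
a_hat, b_hat = 2.0, -0.7        # hypothetical fitted parameters
s2 = 0.05 ** 2                  # hypothetical residual variance

# Jacobian of y = a*exp(b*t) with respect to (a, b), evaluated at the optimum
J = np.column_stack([np.exp(b_hat * t), a_hat * t * np.exp(b_hat * t)])
cov_approx = s2 * np.linalg.inv(J.T @ J)   # linearized covariance of (a_hat, b_hat)
print(cov_approx)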
34. Nonlinearity at the Optimum
• It is useful to study the degree of nonlinearity of our model in a neighbourhood of the forecast.
• Briefly, there exist methods of assessing the maximum degree of intrinsic nonlinearity that the model exhibits around the optimum found. If maximum nonlinearity is excessive, for one or more parameters the confidence regions obtained by applying the results of the classic theory are not to be trusted. In this case, alternative simulation procedures may be employed to provide empirical confidence regions.
35. SENSITIVITY ANALYSIS
• Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system (numerical or otherwise) can be divided and allocated to different sources of uncertainty in its inputs. A related practice is uncertainty analysis, which has a greater focus on uncertainty quantification and propagation of uncertainty; ideally, uncertainty and sensitivity analysis should be run in tandem.
• The process of recalculating outcomes under alternative assumptions to determine the impact of a variable under sensitivity analysis can be useful for a range of purposes, including:
36.
• Testing the robustness of the results of a model or system in the presence of uncertainty.
• Increased understanding of the relationships between input and output variables in a system or model.
• Uncertainty reduction, through the identification of model inputs that cause significant uncertainty in the output and should therefore be the focus of attention in order to increase robustness (perhaps by further research).
• Searching for errors in the model (by encountering unexpected relationships between inputs and outputs).
• Model simplification – fixing model inputs that have no effect on the output, or identifying and removing redundant parts of the model structure.
• Enhancing communication from modellers to decision makers (e.g. by making recommendations more credible, understandable, compelling or persuasive).
37. Settings and constraints
• The choice of method of sensitivity analysis is typically dictated by a number of problem constraints or settings. Some of the most common are:
• Computational expense: sensitivity analysis is almost always performed by running the model a (possibly large) number of times, i.e. a sampling-based approach.
• The model has a large number of uncertain inputs: sensitivity analysis is essentially the exploration of the multidimensional input space, which grows exponentially in size with the number of inputs. See the curse of dimensionality.
38.
• Correlated inputs: most common sensitivity analysis methods assume independence between model inputs, but sometimes inputs can be strongly correlated. This is still an immature field of research and definitive methods have yet to be established.
• Nonlinearity: some sensitivity analysis approaches, such as those based on linear regression, can inaccurately measure sensitivity when the model response is nonlinear with respect to its inputs. In such cases, variance-based measures are more appropriate.
39. Sensitivity analysis methods
• There are a large number of approaches to performing a sensitivity analysis, many of which have been developed to address one or more of the constraints discussed above. They are also distinguished by the type of sensitivity measure, be it based on (for example) variance decompositions.
• Regression analysis, in the context of sensitivity analysis, involves fitting a linear regression to the model response and using standardized regression coefficients as direct measures of sensitivity.
• Variance-based methods
• Variance-based methods are a class of probabilistic approaches which quantify the input and output uncertainties as probability distributions, and decompose the output variance into parts attributable to input variables and combinations of variables.
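A small sketch of the regression-based measure described above: sample the uncertain inputs, run the model, and use standardized regression coefficients (SRCs) as sensitivities. The toy model and input ranges are assumptions chosen only for illustration.

import numpy as np

rng = np.random.default_rng(1)
n = 2000
X = rng.uniform(0.0, 1.0, size=(n, 3))                                       # three uncertain inputs
y = 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.05, n)   # hypothetical model

# standardized regression coefficients: regress standardized output on standardized inputs
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print("standardized regression coefficients:", np.round(src, 3))  # larger magnitude => more influential input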
40. Applications
• Chemistry
• Sensitivity analysis is common in many areas of physics and chemistry.
• Sensitivity analysis has been proven to be a powerful tool to investigate a complex kinetic model.
• In a meta-analysis, a sensitivity analysis tests if the results are sensitive to restrictions on the data included. Common examples are large trials only, higher quality trials only, and more recent trials only. If results are consistent, it provides stronger evidence of an effect and of generalizability.
• Engineering
• Modern engineering design makes extensive use of computer models to test designs before they are manufactured. Sensitivity analysis allows designers to assess the effects and sources of uncertainties, in the interest of building robust models.
41. Optimal design
• In the design of experiments, optimal designs are a class of experimental designs that are optimal with respect to some statistical criterion.
• In the design of experiments for estimating statistical models, optimal designs allow parameters to be estimated without bias and with minimum variance.
• A non-optimal design requires a greater number of experimental runs to estimate the parameters with the same precision as an optimal design. In practical terms, optimal experiments can reduce the costs of experimentation.
42. Advantages
• Optimal designs reduce the costs of experimentation by allowing statistical models to be estimated with fewer experimental runs.
• Optimal designs can accommodate multiple types of factors, such as process, mixture, and discrete factors.
• Designs can be optimized when the design space is constrained, for example, when the mathematical process space contains factor settings that are practically infeasible (e.g. due to safety concerns).
43. Minimizing the variance of estimators
• Experimental designs are evaluated using statistical criteria.
• In the estimation theory for statistical models with one real parameter, the reciprocal of the variance of an ("efficient") estimator is called the "Fisher information" for that estimator. Because of this reciprocity, minimizing the variance corresponds to maximizing the information.
• When the statistical model has several parameters, however, the mean of the parameter estimator is a vector and its variance is a matrix.
• The inverse matrix of the variance matrix is called the "information matrix". Because the variance of the estimator of a parameter vector is a matrix, the problem of "minimizing the variance" is complicated.
• Using statistical theory, statisticians compress the information matrix using real-valued summary statistics; being real-valued functions, these "information criteria" can be maximized.
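A minimal sketch of compressing the information matrix into a real-valued criterion: for a linear model with design matrix X, the information matrix is proportional to XᵀX, and the D-criterion maximizes its determinant. The quadratic model and the two candidate designs below are hypothetical.

import numpy as np

def d_criterion(x_points):
    # design matrix for the model y = b0 + b1*x + b2*x^2 at the chosen design points
    X = np.column_stack([np.ones_like(x_points), x_points, x_points ** 2])
    return np.linalg.det(X.T @ X)   # D-criterion: determinant of the information matrix

spread_design = np.array([-1.0, -1.0, 0.0, 1.0, 1.0])   # points pushed to the extremes and the centre
naive_design = np.array([0.0, 0.25, 0.5, 0.75, 1.0])    # evenly spaced over half the range
print(d_criterion(spread_design), ">", d_criterion(naive_design))   # spread design carries more information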
44. POPULATION MODELLING
• A population model is a type of mathematical model that is applied to the study of population dynamics.
• Population modelling is a tool to identify and describe relationships between a subject's physiologic characteristics and observed drug exposure or response. Population pharmacokinetics (PK) modelling is not a new concept; it was first introduced in 1972 by Sheiner et al.
46. The figure represents a brief outline of some areas in which modelling and simulation are commonly employed during drug development. Appropriate models can provide a framework for predicting the time course of exposure and response for different dose regimens. Central to this evolution has been the widespread adoption of population modelling methods that provide a framework for quantitating and explaining variability in drug exposure and response.
Types of Models
PK models
PK models describe the relationship between drug concentration(s) and time. The building block of many PK models is a "compartment": a region of the body in which the drug is well mixed and kinetically homogeneous (and can therefore be described in terms of a single representative concentration at any time point).
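A hedged sketch of the simplest compartmental building block: a one-compartment model with first-order elimination after an intravenous bolus dose. The dose, volume and clearance values are purely illustrative, not taken from the slides.

import numpy as np

dose = 100.0        # mg, hypothetical IV bolus
V = 20.0            # L, volume of the single, well-mixed compartment
CL = 5.0            # L/h, clearance
ke = CL / V         # first-order elimination rate constant

t = np.linspace(0, 24, 49)               # hours
C = (dose / V) * np.exp(-ke * t)         # concentration-time course in the compartment
print(f"C(0)={C[0]:.2f} mg/L, C(12h)={C[t == 12.0][0]:.2f} mg/L")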
47. Disease progression models
• Disease progression models were first used in 1992 to describe the time course of a disease metric (e.g., ADAS-Cog in Alzheimer's disease).
• Such models also capture the inter-subject variability in disease progression, and the manner in which the time course is influenced by covariates or by treatment.
• They can be linked to a concurrent PK model and used to determine whether a drug exhibits symptomatic activity or affects progression.
• Models of disease progress in placebo groups are crucial for understanding the time course of the disease in treated groups, as well as for predicting the likely response in a placebo group in a clinical trial.
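A brief sketch contrasting the two drug-effect patterns mentioned above, assuming a simple linear disease-progression model: a symptomatic drug shifts the disease status, while a disease-modifying drug changes its slope. All parameter values are hypothetical.

import numpy as np

t = np.linspace(0, 36, 37)          # months
S0, slope = 20.0, 0.5               # hypothetical baseline status and natural progression rate

placebo = S0 + slope * t                        # untreated (placebo) time course
symptomatic = S0 - 3.0 + slope * t              # offset effect: benefit lost when the drug is stopped
disease_modifying = S0 + (slope - 0.2) * t      # slope effect: progression itself is slowed
print(placebo[-1], symptomatic[-1], disease_modifying[-1])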
48. References
• Computer Applications in Pharmaceutical Research and Development, Sean Ekins (ed.), 2006.
• Nonlinear Regression Analysis, Bates and Watts.
• www.slideshare.net
• www.google.com