Regulatory guidelines for conducting toxicity studies (ICH)
ICH is the "International Conference on Harmonization of
Technical Requirements for Registration of Pharmaceuticals for
Human Use".
ICH is a joint initiative involving both regulators and research-based industry representatives of the EU, Japan and the US in
scientific and technical discussions of the testing procedures required
to assess and ensure the safety, quality and efficacy of medicines.
REPRODUCTIVE TOXICITY STUDIES
Definition, Introduction, OECD guidelines for reproductive toxicity studies, Principle of the test, Description of Method, Procedure, Experimental Schedule, Data and Reporting, Results, Male Fertility Toxicological Studies
Ms. I. Sai Reddemma, Department of Pharmacology
Dermal Irritation and Dermal Toxicity Studies (Dinesh Gangoda)
Dermal irritation and corrosion test guideline: OECD Test No. 404.
Dermal irritation is the production of reversible damage of the skin following the application of a test chemical for up to 4 hours.
Corrosive reactions are typified by ulcers, bleeding, bloody scabs, and, by the end of observation at 14 days, by discolouration due to blanching of the skin, complete areas of alopecia, and scars. Histopathology should be considered to evaluate questionable lesions. [1]
Dermal corrosion is the production of irreversible damage of the skin; namely, visible necrosis through the epidermis and into the dermis, following the application of a test chemical for up to four hours.[2]
REFERENCES
OECD, Test No. 404: "Acute Dermal Irritation/Corrosion", OECD Guidelines for the Testing of Chemicals, OECD Publishing, Paris, 28 July 2015, pp. 1-8.
Turner, R.A., Screening Methods in Pharmacology, 1st edition, Academic Press (an imprint of Elsevier), pp. 279-281.
OECD Guideline for Testing of Chemicals (1981), "Repeated Dose Dermal Toxicity: 21/28-day Study".
Toxicology is the science dealing with the properties, actions, toxicity, and fatal doses of poisons, the detection and interpretation of the results of toxicological analysis, and the treatment of poisoning.
Toxicity studies help to avoid adverse effects and enhance the safety of a drug.
This slide provides information about toxicity screening in experimental animals.
Extrapolation of in vitro data to preclinical studies
This topic is included in the M.Pharmacy 1st-semester syllabus. It covers how preclinical data are handled to determine the NOAEL and LOAEL, from which the human dose of the drug can be calculated and further development carried out.
In vivo is a Latin term meaning "within the living body."
It refers to testing the effects of various biological entities on whole, living organisms or their cells, usually animals (including humans) and plants.
Animal testing and clinical trials are major elements of in vivo research.
In vivo testing is often employed over in vitro testing because it is better suited for observing the overall effects of an experiment on a living subject in drug discovery.
For example, verification of efficacy in vivo is crucial, because in vitro assays can sometimes yield misleading results with a drug.
Harry Smith found that sterile filtrates of serum from animals infected with Bacillus anthracis were lethal for other animals, whereas extracts of culture fluid from the same organism grown in vitro were not.
In microbiology, once cells are disrupted and their individual parts are tested or analyzed, the work is known as in vitro.
In vitro ("within the glass") studies are performed in a laboratory environment using test tubes, petri dishes, etc., whereas examples of in vivo investigations include studies of the pathogenesis of disease.
In vitro toxicology:
The bridge between new drug discovery and drug development.
Provides information on the mechanism of action of a drug.
Provides an early indication of the potential for some kinds of toxic effects, allowing a decision to terminate or to proceed further.
In vitro methods are widely used for:
Screening and ranking chemicals
Providing a platform for animal studies of physiological actions
Studying cell-, tissue-, or target-specific effects
Improving subsequent study design
Advantages and Disadvantages:
Faster than in vivo studies
Less expensive to run
Less predictive of toxicity in intact organisms
In vitro to in vivo extrapolation (IVIVE) refers to the qualitative or quantitative transposition of experimental results or observations made in vitro to predict phenomena in vivo, in biological organisms.
The problem of transposing in vitro results is particularly acute in areas such as toxicology where animal experiments are being phased out and are increasingly being replaced by alternative tests.
Results obtained from in vitro experiments cannot often be directly applied to predict biological responses of organisms to chemical exposure in vivo.
Therefore, it is extremely important to build a consistent and reliable in vitro to in vivo extrapolation method.
Two solutions are now commonly accepted:
Increasing the complexity of in vitro systems, where multiple cells can interact with each other, in order to recapitulate cell-cell interactions present in tissues (as in "human-on-chip" systems).
Using mathematical modeling to numerically simulate the behavior of a complex system, whereby in vitro data provides the parameter values for developing a model.
The two approaches can be applied simultaneously, allowing in vitro systems to provide adequate data for the development of mathematical models and complying with the push for the development of alternative testing methods.
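As a concrete example of quantitative IVIVE, the sketch below scales an in vitro intrinsic clearance measured in hepatocytes up to a whole-body hepatic clearance using the well-stirred liver model. Every scaling factor (hepatocellularity, liver weight, hepatic blood flow) is an assumed illustrative value, and the function name is ours, not from any particular library.

```python
# IVIVE sketch: in vitro hepatocyte CLint -> whole-body hepatic clearance.
# All scaling factors are assumed illustrative values, not reference data.

HEPATOCELLULARITY = 120e6  # cells per gram of liver (assumption)
LIVER_WEIGHT_G = 1800      # grams, adult human (assumption)
Q_H = 90                   # hepatic blood flow in L/h (assumption)

def hepatic_clearance(clint_ul_min_per_million_cells, fu=1.0):
    """Whole-liver clearance (L/h) from in vitro CLint; fu = unbound fraction."""
    clint_whole = (clint_ul_min_per_million_cells
                   * (HEPATOCELLULARITY / 1e6)  # -> uL/min per gram of liver
                   * LIVER_WEIGHT_G             # -> uL/min, whole liver
                   * 60 / 1e6)                  # -> L/h
    # Well-stirred model: clearance saturates toward hepatic blood flow Q_H.
    return Q_H * fu * clint_whole / (Q_H + fu * clint_whole)
```

Note the design of the well-stirred model: as the scaled intrinsic clearance grows, predicted hepatic clearance approaches hepatic blood flow rather than growing without bound.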
S3A: NOTE FOR GUIDANCE ON TOXICOKINETICS: THE ASSESSMENT OF SYSTEMIC EXPOSURE IN TOXICITY STUDIES
S3B: PHARMACOKINETICS: GUIDANCE FOR REPEATED DOSE TISSUE DISTRIBUTION STUDIES
Epidemiology designs for clinical trials (Pubrica)
1. Clinical trial study design
2. Cohort Study design
3. Case-Control Studies
4. Cross-Sectional Studies
5. Ecological Studies
6. Randomized Clinical Trials
UPDATED: Early Phase Drug Development and Population PK and Its Value (E. Dennis Bashaw)
Presentation given at the Regional AAPS DDDI Meeting in Baltimore. Similar to previous talks, but updated to include a discussion of BIA 10-2474 and an extended discussion of risk.
Excelsior College PBH 321
EXPERIMENTAL EPIDEMIOLOGICAL STUDIES
Epidemiologic studies are either observational or experimental. Observational studies, including ecologic,
cross-sectional, cohort, and case-control designs, are considered “natural” experiments, but experimental
studies are considered true experiments. We will spend the next 2 modules discussing these designs.
Before we begin to discuss study designs, we need a brief introduction to a concept that we will spend more
time discussing in later modules -- bias. The definition of bias is:
“Deviation of results or inferences from the truth, or processes leading to such deviation. Any trend in the
collection, analysis, interpretation, publication, or review of data that can lead to conclusions that are
systematically different from the truth.” (Last, J.M., A Dictionary of Epidemiology, 4th ed.)
Epidemiologists are naturally concerned whether the results of an epidemiologic study are biased, since many
important public health decisions are often drawn from epidemiologic research. The severity of the bias, that
is - how much it influences or distorts the results, is related to the study design as well as how information is
analyzed.
Experimental Studies
The defining feature of experimental studies is that the investigator assigns exposure to the study subjects.
Experimental studies most closely resemble controlled laboratory experiments and serve as models for the
conduct of observational studies, thus they are the “gold standard” of epidemiologic research. Experimental
studies have high validity (i.e., less bias), and can identify even very small effects. The most well known type of
experimental study is a randomized trial (sometimes referred to as a randomized controlled trial), where the
investigator randomly assigns exposure to the study subjects. In this type of study, the only expected
difference between the experimental and control groups is the outcome variable being studied.
Experimental designs like the randomized trial can assess preventive interventions, where a
prophylactic agent is given to healthy or high-risk individuals to prevent disease, as well as effects of
therapeutic treatment, such as treatments given to diseased individuals to reduce their risk of disease
recurrence, or to improve their survival or quality of life.
Preventive intervention: Does tamoxifen lower the incidence of breast cancer in women with high risk profile
compared to high risk women not given tamoxifen?
Therapeutic intervention: Do combinations of two or three antiretroviral drugs prolong survival of AIDS
patients as well as regimens of single drugs?
The investigator can assign exposures (or allocate interventions) to either individuals or to an entire
community.
Individual-level assignment: Do women with stage I breast cancer given a lumpectomy alone survive as long
without recurrence of disease as women given a lumpec ...
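The defining feature described above, that the investigator rather than the subjects assigns exposure, can be sketched as a simple randomization of individuals to treatment or control arms. The subject IDs, fixed seed, and even split below are illustrative assumptions.

```python
import random

# Sketch: investigator-assigned exposure via simple randomization of
# individuals to two arms. Subject IDs and the 50/50 split are made up.

def randomize(subjects, seed=0):
    """Shuffle subjects and split them evenly into two arms."""
    rng = random.Random(seed)   # fixed seed for reproducibility
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

treatment, control = randomize([f"S{i:03d}" for i in range(1, 9)])
```

Because assignment is random, the only expected difference between the two arms is the outcome under study, which is exactly the property the text attributes to randomized trials.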
EPA Guidelines for Carcinogen Risk Assessment
1. Presented by: Megh Pravin Vithalkar, Roll No. 07, M.Pharm Sem-II, Department of Pharmacology, Goa College of Pharmacy, Panaji-Goa.
25 November 2022
2. Dose-response assessment estimates potential risks to humans at
exposure levels of interest.
Dose-response assessments are useful in many applications:
estimating risk at different exposure levels, estimating the risk
reduction for different decision options, estimating the risk remaining
after an action is taken, providing the risk information needed for
benefit-cost analyses of different decision options, comparing risks
across different agents or health effects, and setting research
priorities.
The scope, depth, and use of a dose-response assessment vary in
different circumstances.
3. Although the quality of dose-response data is not necessarily related
to the weight of evidence descriptor, dose-response assessments are
generally completed for agents considered “Carcinogenic to Humans”
and “Likely to Be Carcinogenic to Humans.”
Dose-response assessments are generally not done when there is
inadequate evidence, although calculating a bounding estimate from
an epidemiologic or experimental study that does not show positive
results can indicate the study's level of sensitivity and capacity to
detect risk levels of concern.
Cancer is a collection of several diseases that develop through cell
and tissue changes over time.
Dose-response assessment procedures based on tumour incidence
have seldom taken into account the effects of key precursor events
within the whole biological process due to lack of empirical data and
understanding about these events.
4. Response data include measures of key precursor events considered
integral to the carcinogenic process in addition to tumour incidence.
These responses may include changes in DNA, chromosomes, or other
key macromolecules; effects on growth signal transduction, including
induction of hormonal changes; or physiological or toxic effects that
include proliferative events diagnosed as precancerous but not
pathology that is judged to be cancer.
Analysis of such responses may be done along with that of tumour
incidence to enhance the tumour dose-response analysis.
If dose-response analysis of non-tumour key events is more
informative about the carcinogenic process for an agent, it can be
used in lieu of, or in conjunction with, tumour incidence analysis for
the overall dose-response assessment.
5. For each effect observed, dose-response assessment should begin by
determining an appropriate dose metric.
Several dose metrics have been used, e.g., delivered dose, body
burden, and area under the curve, and others may be appropriate
depending on the data and mode of action.
The dose metric specifies:
• the agent measured, preferably the active agent (administered agent
or a metabolite);
• proximity to the target site (exposure concentration, potential dose,
internal dose, or delivered dose, reflecting increasing proximity); and
• the time component of the effective dose (cumulative dose, average
dose, peak dose, or body burden).
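Of the metrics listed above, the area under the concentration-time curve (AUC) is commonly computed from sampled plasma concentrations with the linear trapezoidal rule. The sampling schedule and concentrations below are invented for illustration.

```python
# Sketch: AUC dose metric from sampled concentration-time data, using the
# linear trapezoidal rule. Data points are hypothetical.

def auc_trapezoid(times, concentrations):
    """AUC by the linear trapezoidal rule; units follow the inputs."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for t1, t2, c1, c2 in zip(times, times[1:],
                                         concentrations, concentrations[1:]))

times = [0, 1, 2, 4, 8]            # hours (hypothetical sampling schedule)
conc  = [0.0, 4.0, 3.0, 1.5, 0.2]  # mg/L (hypothetical)
print(auc_trapezoid(times, conc))  # AUC in mg*h/L
```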
6. To facilitate comparing results from different study designs and to
make inferences about human exposures, a summary estimate of the
dose metric, whether the administered dose or inhalation exposure
concentration or an internal metric, may be derived for a complex
exposure regimen.
Toxicokinetic modelling is the preferred approach for estimating
dose metrics from exposure.
Toxicokinetic models generally describe the relationship between
exposure and measures of internal dose over time.
More complex models can reflect sources of intrinsic variation, such
as polymorphisms in metabolism and clearance rates.
7. For chronic exposure studies, the cumulative exposure or dose
administered often is expressed as an average over the duration of
the study, as one consistent dose metric.
This approach implies that a higher dose administered over a short
duration is equivalent to a commensurately lower dose
administered over a longer duration.
Uncertainty usually increases as the duration becomes shorter
relative to the averaging duration or the intermittent doses
become more intense than the averaged dose.
For mode of action studies, the dose metric should be calculated
over a duration that reflects the time to occurrence of the key
precursor effects.
Mode of action studies are often of limited duration, as the
precursors can be observed after less-than-chronic exposures.
When the experimental exposure regimen is specified on a weekly
basis (for example, 4 hours a day, 5 days a week), the daily
exposure may be averaged over the week, where appropriate.
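The weekly averaging just described is simple duration adjustment; a minimal sketch, with the regimen values assumed for illustration:

```python
# Sketch: express an intermittent exposure (e.g. 4 h/day, 5 days/week)
# as one continuous weekly-average concentration.

def weekly_average(concentration, hours_per_day, days_per_week):
    """Duration-adjusted average concentration over a full week."""
    return concentration * (hours_per_day / 24) * (days_per_week / 7)

# 60 ppm for 4 h/day, 5 days/week (hypothetical regimen)
print(weekly_average(60, 4, 5))  # continuous-equivalent ppm
```

This encodes the equivalence assumption stated above: a higher dose over a short duration is treated as a commensurately lower dose over a longer duration.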
8. In the absence of chemical-specific data, physiologically based
toxicokinetic modelling is potentially the most comprehensive way to
account for biological processes that determine internal dose.
Toxicokinetic models generally need data on absorption, distribution,
metabolism, and elimination of the administered agent and its
metabolites
Toxicokinetic models can improve dose-response assessment by
revealing and describing nonlinear relationships between applied and
internal dose.
Toxicokinetic modelling results may be presented alone as the
preferred method of estimating human equivalent exposures or doses,
or these results may be presented in parallel with default procedures
depending on the confidence in the modelling.
9. Standard cross-species scaling procedures are available when the data
are not sufficient to support a toxicokinetic model or when the
purpose of the assessment does not warrant developing one.
The aim is to define exposure levels for humans and animals that are
expected to produce the same degree of effect, taking into account
differences in scale between test animals and humans, such as size
and lifespan.
The two types of exposures are:
1. Oral
2. Inhalational
10. For oral and inhalational exposures, administered doses should be scaled
from animals to humans on the basis of equivalence of mg/kg^(3/4)/day.
The 3/4 power is consistent with current science, including empirical
data that allow comparison of potencies in humans and animals, and it
is also supported by analysis of the allo-metric variation of key
physiological parameters across mammalian species.
It is generally more appropriate at low doses, where sources of
nonlinearity such as saturation of enzyme activity are less likely to
occur.
The aim of these cross-species scaling procedures is to estimate
administered doses in animals and humans that result in equal lifetime
risks.
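A minimal sketch of the mg/kg^(3/4) default: since total dose is assumed to scale with body weight to the 3/4 power, a dose expressed per kg of body weight scales with the ratio of body weights to the 1/4 power. The body weights below are assumed reference values chosen for illustration.

```python
# Sketch: cross-species dose scaling under the BW^(3/4) default.
# A mg/kg dose scales by (BW_animal / BW_human)^(1/4).

def human_equivalent_dose(animal_dose_mg_per_kg, bw_animal_kg, bw_human_kg=70):
    """Human-equivalent dose (mg/kg/day) under BW^(3/4) scaling."""
    return animal_dose_mg_per_kg * (bw_animal_kg / bw_human_kg) ** 0.25

# 10 mg/kg/day in a 0.025 kg mouse (hypothetical study dose)
print(round(human_equivalent_dose(10, 0.025), 3))
```

Because the animal is lighter than the human, the exponent 1/4 shrinks the mg/kg dose, reflecting the assumption that smaller species tolerate higher doses per unit body weight.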
11. It is useful to recognize two components of this equivalence:
1. toxicokinetic equivalence, which determines administered doses in
animals and humans that yield equal tissue doses, and
2. toxicodynamic equivalence, which determines tissue doses in
animals and humans that yield equal lifetime risks.
When toxicokinetic modelling is used without toxicodynamic
modelling, the dose-response assessment develops and supports an
approach for addressing toxicodynamic equivalence, perhaps by
retaining some of the cross-species scaling factor.
When assessing risks from childhood exposure, the mg/kg^(3/4)-day scaling
factor does not use the child's body weight.
Children have faster rates of cell division than do adults, so scaling
across different life stages and species simultaneously may be
particularly uncertain.
12. The current default models for inhalational exposure
use parameters such as:
• inhalation rate and surface area of the affected part
of the respiratory tract for gases eliciting the response
locally,
• blood:gas partition coefficients for remote acting
gases,
• fractional deposition with inhalation rate and surface
area of the affected part of the respiratory tract for
particles eliciting the response locally, and
• fractional deposition with inhalation rate and body
weight for particles eliciting the response remotely.
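One of the defaults above, for gases eliciting a local respiratory-tract response, can be sketched as scaling the animal exposure by the cross-species ratio of minute ventilation (VE) to affected surface area (SA). The function name and any input values are illustrative assumptions, not a full dosimetry implementation.

```python
# Hedged sketch: human equivalent concentration (HEC) for a locally acting
# gas, scaled by the ratio of (minute ventilation / regional surface area)
# between species. Inputs are placeholders, not reference values.

def hec_local_gas(animal_conc, ve_animal, sa_animal, ve_human, sa_human):
    """HEC = animal concentration * [(VE/SA)_animal / (VE/SA)_human]."""
    ratio = (ve_animal / sa_animal) / (ve_human / sa_human)
    return animal_conc * ratio
```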
13. Analysis of epidemiologic studies depends on the type of study and
quality of the data, particularly the availability of quantitative
measures of exposure.
The objective is to develop a dose-response curve that estimates the
incidence of cancer attributable to the dose (as estimated from the
exposure) to the agent.
In some cases, e.g., tobacco smoke or occupational exposures, the
data are in the range of the exposures of interest.
In other cases, as with data from animal experiments, information
from the observable range is extrapolated to exposures of interest.
14. Toxicodynamic modelling is potentially the most comprehensive way
to account for the biological processes involved in a response.
Such models seek to reflect the sequence of key precursor events that
lead to cancer.
Toxicodynamic models can contribute to dose-response assessment by
revealing and describing nonlinear relationships between internal
dose and cancer response.
If a new model is developed for a specific agent, extensive data on
the agent are important for identifying the form of the model,
estimating its parameters, and building confidence in its results.
15. If a standard model already exists for the agent's mode of action, the
model can be adapted for the agent by using agent-specific data to
estimate the model's parameters.
An example is the two-stage clonal expansion model developed by
Moolgavkar and Knudson (1981) and Chen and Farland (1991). These
models continue to be improved as more information becomes
available.
This approach reduces model uncertainty and ensures that the model
does not give answers that are biologically unrealistic.
This approach also provides a robustness of results, where the results
are not likely to change substantially if fitted to slightly different
data.
Toxicodynamic modelling can provide insight into the relationship
between tumours and key precursor events.
16. When a toxicodynamic model is not available or when the purpose of
the assessment does not warrant developing such a model, empirical
modelling (sometimes called “curve fitting”) should be used in the
range of observation.
A model can be fitted to data on either tumour incidence or a key
precursor event.
Many different curve-fitting models have been developed, and those
that fit the observed data reasonably well may lead to several-fold
differences in estimated risk at the lower end of the observed range.
This form of model uncertainty reflects primarily the availability of
different computer models and not biological information about the
agent being assessed or about carcinogenesis in general.
17. In cases where curve-fitting models are used because the data are not
adequate to support a toxicodynamic model, there generally would be
no biological basis to choose among alternative curve-fitting models.
However, in situations where there are alternative models with
significant biological support, the decision maker can be informed by
the presentation of these alternatives along with their strengths and
uncertainties.
For incidence data on either tumours or a precursor, an established
empirical procedure is used to provide objectivity and consistency
among assessments.
The procedure models incidence, corrected for background, as an
increasing function of dose.
The models are sufficiently flexible in the observed range to fit linear
and nonlinear datasets.
Additional judgments and perhaps alternative analyses are used when
the procedure fails to yield reliable results.
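A toy version of such a procedure, assuming a one-stage ("one-hit") model and invented incidence data: incidence is first corrected for background as extra risk, then the model is fitted by a crude least-squares grid search. A real assessment would use maximum likelihood and dedicated benchmark-dose software; everything here is illustrative.

```python
import math

# Toy empirical dose-response fit (illustrative data and model choice).
doses     = [0, 10, 50, 100]          # mg/kg/day (hypothetical)
incidence = [0.02, 0.10, 0.30, 0.55]  # tumour-bearing fraction (hypothetical)

# Correct for background incidence ("extra risk").
p0 = incidence[0]
extra_risk = [(p - p0) / (1 - p0) for p in incidence]

def one_hit(d, q):
    """One-stage model: P(d) = 1 - exp(-q*d), increasing in dose."""
    return 1 - math.exp(-q * d)

# Crude least-squares grid search for the slope parameter q.
best_q = min((i / 1e5 for i in range(1, 10000)),
             key=lambda q: sum((one_hit(d, q) - r) ** 2
                               for d, r in zip(doses, extra_risk)))
```

The one-hit form is one of the flexible increasing functions the text refers to; as noted above, several models may fit the observed range comparably while diverging at lower doses.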
18. Exposure assessment is the determination (qualitative and
quantitative) of the magnitude, frequency, and duration of exposure
and internal dose.
This section of the guideline provides a brief overview of exposure
assessment principles, with an emphasis on issues related to
carcinogenic risk assessment.
Exposure assessment generally consists of four major steps:
1. defining the assessment questions,
2. selecting or developing the conceptual and mathematical models,
3. collecting data or selecting and evaluating available data, and
4. exposure characterization.
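The quantitative output of these steps is often an intake estimate such as a lifetime average daily dose (LADD). A sketch using the standard intake equation follows; every parameter value is an assumption chosen for illustration.

```python
# Sketch: lifetime average daily dose (LADD) for a chronic exposure
# scenario. All parameter values below are illustrative assumptions.

def ladd(conc_mg_per_l, intake_l_per_day, ef_days_per_year,
         ed_years, bw_kg=70, lifetime_years=70):
    """LADD in mg/kg-day = (C * IR * EF * ED) / (BW * AT)."""
    at_days = lifetime_years * 365  # averaging time over a full lifetime
    return (conc_mg_per_l * intake_l_per_day * ef_days_per_year
            * ed_years) / (bw_kg * at_days)

# Drinking water at 0.005 mg/L, 2 L/day, 350 days/yr, for 30 years
print(ladd(0.005, 2, 350, 30))
```

Averaging over the full lifetime (rather than the 30-year exposure duration) is what makes this a lifetime average, matching the magnitude/frequency/duration framing above.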
19. The exposure characterization is a technical characterization that
presents the assessment results and supports the risk
characterization.
It provides a statement of the purpose, scope, and approach used in
the assessment, identifying the exposure scenarios and population
subgroups covered.
It provides estimates of the magnitude, frequency, duration, and
distribution of exposures among members of the exposed population
as the data permit.
It identifies and compares the contribution of different sources,
pathways, and routes of exposure.
20. EPA has developed general guidance on risk characterization for use in
its risk assessment activities.
A risk characterization should be prepared in a manner that is clear,
transparent, reasonable, and consistent with other risk
characterizations of similar scope prepared across programs in the
Agency.
The nature of the risk characterization will depend upon the
information available, the regulatory application of the risk
information, and the resources (including time) available.
In all cases, however, the assessment should identify and discuss all
the major issues associated with determining the nature and extent of
the risk and provide commentary on any constraints limiting fuller
exposition.
21. The risk characterization includes a summary for the risk manager in a
nontechnical discussion that minimizes the use of technical terms.
It is an appraisal of the science that informs the risk manager in
public health decisions, as do other decision-making analyses of
economic, social, or technology issues.
The risk characterization also brings together the assessments of
hazard, dose response, and exposure to make risk estimates for the
exposure scenarios of interest.
The intent of this approach is to convey an estimate of risk in the
upper range of the distribution, but to avoid estimates that are
beyond the true distribution.
Overly conservative assumptions, when combined, can lead to
unrealistic estimates of risk.
The risk characterization presents an integrated and balanced picture
of the analysis of the hazard, dose-response, and exposure.
The risk analyst should provide summaries of the evidence and results
and describe the quality of available data and the degree of
confidence to be placed in the risk estimates.
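At a screening level, the integration described above can be as simple as multiplying a dose-response output (a slope factor) by an exposure output (a lifetime average daily dose), under a linear low-dose assumption. Both numbers below are hypothetical.

```python
# Sketch: screening-level lifetime excess cancer risk from a slope factor
# and a lifetime average daily dose (both hypothetical values).

def lifetime_excess_risk(slope_factor_per_mg_kg_day, ladd_mg_kg_day):
    """Linear low-dose approximation: risk = SF * LADD (valid for risk << 1)."""
    return slope_factor_per_mg_kg_day * ladd_mg_kg_day

print(lifetime_excess_risk(0.1, 5.9e-5))
```

As the text cautions, stacking conservative inputs into this product can yield estimates beyond the true distribution, so the inputs should each be characterized, not just multiplied.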