1) Analysis of variance (ANOVA) is a statistical technique introduced by R.A. Fisher around 1918 to analyze differences between group means and the procedures associated with them.
2) ANOVA partitions the total variation in a data set into parts attributable to different sources of variation: between groups, within groups, and so on.
3) There are two main classifications of ANOVA: one-way ANOVA, which examines the effect of a single factor on the dependent variable, and two-way ANOVA, which analyzes the effects of two factors.
4) ANOVA has many applications in fields such as pharmacy, biology, agriculture, and business research, where it is used to study the effects of different treatments, products, or interventions.
Analysis of variance (ANOVA) is an analysis tool used in statistics that splits an observed aggregate variability found inside a data set into two parts: systematic factors and random factors. The systematic factors have a statistical influence on the given data set, while the random factors do not.
Analysts use the ANOVA test to determine the influence that independent variables have on the dependent variable in a regression study.
Until Ronald Fisher created the analysis of variance method around 1918, the t- and z-test methods developed in the early 20th century were the standard tools for such statistical analysis.
ANOVA is also called the Fisher analysis of variance, and it extends the t- and z-tests. The term became well known in 1925, after appearing in Fisher's book, "Statistical Methods for Research Workers."
It was first employed in experimental psychology and later expanded to more complex subjects. ANOVA (Analysis of Variance) is a collection of statistical models used to assess the differences between the means of two or more independent groups by separating the variability into systematic and random factors. It helps to determine the effect of the independent variable on the dependent variable. Here are the three important ANOVA assumptions:
1. The group samples are drawn from normally distributed populations.
2. The populations have homogeneous variances.
3. All observations in a sample are drawn independently.
The ANOVA test has other, secondary assumptions as well:
1. The observations must be independent of each other and randomly sampled.
2. The effects of the factors are additive.
3. The sample size is commonly recommended to be greater than 10.
4. The sample population should be unimodal and symmetrical.
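As a rough sketch of how the homogeneous-variance assumption can be screened in practice: compare the largest and smallest group variances. The data below and the ratio-below-4 rule of thumb are illustrative assumptions, not part of the original deck.

```python
# Rule-of-thumb check of the homogeneous-variance assumption:
# the largest sample variance should not dwarf the smallest
# (a common heuristic is a ratio below about 4).
# The three groups below are hypothetical yields.
groups = [[6, 7, 3, 5], [5, 5, 3, 4], [5, 4, 3, 6]]

def sample_variance(xs):
    """Unbiased sample variance (divides by n - 1)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

variances = [sample_variance(g) for g in groups]
ratio = max(variances) / min(variances)
print(ratio)  # about 3.18 here, under the heuristic cutoff of 4
```

Formal tests of this assumption (e.g. Levene's test) exist in statistical packages; the ratio above is only a quick screen.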
TYPES OF ANOVA
1. ONE WAY ANOVA
One-way analysis of variance is commonly called a one-factor test: it relates a single independent variable (factor), with at least two levels, to the dependent variable. Statisticians use it to compare the means of groups that are independent of each other. With only two groups, one-way analysis of variance is closely related to the t-test.
2. TWO WAY ANOVA
The prerequisite for conducting a two-way ANOVA test is the presence of two independent variables; it can be performed in two ways:
Two-way ANOVA with replication (repeated measures analysis of variance) is done when the two independent groups with dependent variables perform different tasks.
Two-way ANOVA without replication is done when a single group has to be tested twice, as when a player is tested before and after a football game.
Today’s overwhelming number of data-analysis techniques makes it difficult to choose the most suitable approach while accounting for all the significant variables.
The analysis of variance has been studied from several approaches, the most common of which uses a linear model that relates the response to the treatments and blocks. Note that the model is linear in parameters but may be nonlinear across factor levels. Interpretation is easy when data is balanced across factors but much deeper understanding is needed for unbalanced data.
Analysis of variance (ANOVA) is a collection of statistical models and their associated estimation procedures (such as the "variation" among and between groups) used to analyze the differences among means. ANOVA was developed by the statistician Ronald Fisher. ANOVA is based on the law of total variance, where the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether two or more population means are equal, and therefore generalizes the t-test beyond two means. In other words, the ANOVA is used to test the difference between two or more means.
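The claim that ANOVA generalizes the t-test can be checked numerically: for exactly two groups, the one-way ANOVA F statistic equals the square of the pooled two-sample t statistic. A minimal sketch with made-up data (no external libraries assumed):

```python
import math

# Two hypothetical groups.
g1 = [5, 7, 6, 9]
g2 = [4, 3, 5, 4]

# Pooled two-sample t statistic.
n1, n2 = len(g1), len(g2)
m1, m2 = sum(g1) / n1, sum(g2) / n2
ss1 = sum((x - m1) ** 2 for x in g1)
ss2 = sum((x - m2) ** 2 for x in g2)
sp2 = (ss1 + ss2) / (n1 + n2 - 2)                    # pooled variance
t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# One-way ANOVA F statistic for the same two groups.
N = n1 + n2
grand = (sum(g1) + sum(g2)) / N
ss_between = n1 * (m1 - grand) ** 2 + n2 * (m2 - grand) ** 2
ss_within = ss1 + ss2
F = (ss_between / (2 - 1)) / (ss_within / (N - 2))

print(F, t ** 2)  # the two values coincide
```

With more than two groups the t-test no longer applies, but the F computation above carries over unchanged.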
Sir Ronald Fisher pioneered the development of ANOVA for analyzing results of agricultural experiments.1 Today, ANOVA is included in almost every statistical package, which makes it accessible to investigators in all experimental sciences. It is easy to input a data set and run a simple ANOVA, but it is challenging to choose the appropriate ANOVA for different experimental designs, to examine whether data adhere to the modeling assumptions, and to interpret the results correctly. The purpose of this report, together with the next 2 articles in the Statistical Primer for Cardiovascular Research series, is to enhance understanding of ANOVA and to promote its successful use in experimental cardiovascular research. My colleagues and I attempt to accomplish those goals through examples and explanation, while keeping within reason the burden of notation, technical jargon, and mathematical equations.
A full lecture presentation on ANOVA.
Areas covered include:
a. definition and purpose of ANOVA
b. one-way ANOVA
c. factorial ANOVA
d. multiple ANOVA
e. MANOVA
f. post-hoc tests: types
g. easy step-by-step process of calculating a post-hoc test
1. ANALYSIS OF VARIANCE (ANOVA)
DEPARTMENT OF PHARMACEUTICAL CHEMISTRY
GOKARAJU RANGARAJU COLLEGE OF PHARMACY
(Affiliated to Osmania University, approved by AICTE and PCI.)
Bachupally, Ranga Reddy, 72.
GUIDED BY: MRS. K. VINATHA, M.Sc. Maths
PRESENTED BY: K. LAXMIKANTHAM, R.NO: 170213884001
4. INTRODUCTION
The analysis of variance (ANOVA) was developed by R.A. Fisher around 1918.
If the number of samples is more than two, the z-test and t-test cannot be used.
The technique of variance analysis developed by Fisher is very useful in such cases: with its help it is possible to study the significance of the differences among the mean values of a large number of samples at the same time.
The technique of variance analysis originated in agricultural research, where the effect of various types of soils on output, or the effect of different types of fertilizers on production, had to be studied.
5. The technique of the analysis of variance is extremely useful in all types of research.
The variance analysis studies the significance of the difference in means by analysing variance.
The variances (between and within samples) would differ only when the means are significantly different.
The technique of the analysis of variance as developed by Fisher is capable of fruitful application in a variety of problems.
H0: variability within groups = variability between groups, which means that µ1 = … = µn
Ha: variability within groups ≠ variability between groups, i.e. not all µi are equal
6. F-STATISTICS
ANOVA measures two sources of variation in the data and compares their relative sizes.
• Variation BETWEEN groups: for each data value, look at the difference between its group mean and the overall mean.
• Variation WITHIN groups: for each data value, look at the difference between that value and the mean of its group.
7. The ANOVA F-statistic is the ratio of the between-group variation to the within-group variation:
F = (between-group variation) / (within-group variation)
A large F is evidence against H0, since it indicates that there is more difference between groups than within groups.
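The two sources of variation described above can be computed directly from data. In this illustrative sketch (the numbers are invented), the between-group part accumulates, once per value, the squared distance of that value's group mean from the overall mean, and the within-group part accumulates the squared distance of each value from its own group mean:

```python
# Three hypothetical groups of measurements.
groups = [[6, 7, 3], [5, 5, 3], [5, 4, 3]]

values = [x for g in groups for x in g]
grand_mean = sum(values) / len(values)
group_means = [sum(g) / len(g) for g in groups]

# Variation BETWEEN groups: (group mean - overall mean)^2, once per value.
between = sum(len(g) * (gm - grand_mean) ** 2
              for g, gm in zip(groups, group_means))

# Variation WITHIN groups: (value - group mean)^2 for every value.
within = sum((x - gm) ** 2
             for g, gm in zip(groups, group_means) for x in g)

# F compares the two, each divided by its degrees of freedom.
k, n = len(groups), len(values)
F = (between / (k - 1)) / (within / (n - k))
print(F)
```

These per-value sums are the same quantities that the computational formulas in the later slides obtain via column totals and the correction factor.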
8. TECHNIQUE OF ANALYSING VARIANCE
The technique of analysing the variance in the case of a single variable and in the case of two variables is similar.
In both cases a comparison is made between the variance of sample means and the residual variance.
However, in the case of a single variable, the total variance is divided into two parts only, viz., variance between the samples and variance within the samples. The latter variance is the residual variance.
In the case of two variables the total variance is divided into three parts, viz.:
(i) variance due to variable no. 1
(ii) variance due to variable no. 2
(iii) residual variance
10. ONE-WAY CLASSIFICATION
In one-way classification we take into account only one variable, say, the effect of different types of fertilizers on yield.
Other factors, such as differences in soil fertility or the availability of irrigation facilities, are not considered.
For one-way classification we may conduct the experiment through a number of sample studies.
Thus, if four different fertilizers are being studied, we may have four samples of, say, 10 fields each and conduct the experiment.
We note down the yield of each field in the various samples and then, with the help of the F-test, try to find out if there is a significant difference in the mean yields given by the different fertilizers.
11. a. We start with the null hypothesis that the mean yields of the fertilizers are not different in the universe, or
H0: µ1 = µ2 = µ3 = µ4
The alternative hypothesis is
Ha: µ1 ≠ µ2 ≠ µ3 ≠ µ4

             | Treatment 1 | Treatment 2 | Treatment 3
Replicant 1  | X11         | X12         | X13
Replicant 2  | X21         | X22         | X23
Replicant 3  | X31         | X32         | X33
Total        | ∑xC1        | ∑xC2        | ∑xC3
12. b. Compute the grand total, G = ∑xC1 + ∑xC2 + ∑xC3
Correction factor (C.F.) = G²/N, denoted D.
c. Total sum of squares, SST = A − D:
SST = ∑xC1² + ∑xC2² + ∑xC3² − G²/N
(where ∑xCi² is the sum of the squared observations in column i, so A is the raw sum of squares of all observations)
d. Sum of squares between samples (columns), SSC = B − D:
SSC = (∑xC1)²/nc1 + (∑xC2)²/nc2 + (∑xC3)²/nc3 − G²/N
where nc1 = number of elements in the first column, etc.
e. Sum of squares within samples, SSE = SST − SSC
SSE = A − D − (B − D) = A − B
13. f. The number of degrees of freedom between samples, ν1 = C − 1
g. The number of degrees of freedom within samples, ν2 = N − C
h. Mean square between columns, MSC = SSC/ν1 = SSC/(C − 1)
i. Mean square within samples, MSE = SSE/ν2 = SSE/(N − C)
F = MSC/MSE if MSC > MSE, or MSE/MSC if MSE > MSC
j. Conclusion: if Fcal < Ftab, accept H0
14.
Source of variance         | d.f.       | Sum of squares | Mean sum of squares | F-ratio
Between samples (columns)  | ν1 = C − 1 | SSC = B − D    | MSC = SSC/ν1        | F = MSC/MSE
Within samples (residual)  | ν2 = N − C | SSE = A − B    | MSE = SSE/ν2        |
Total                      | N − 1      | SST = A − D    |                     |
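The steps in slides 11-14 can be traced end to end on a small example. The fertilizer yields below are invented for illustration; A, B, and D follow the deck's shorthand (A = raw sum of squares, B = Σ(column total)²/nc, D = G²/N):

```python
# Hypothetical yields for three fertilizer treatments, three replicants each.
columns = {"F1": [6, 7, 3], "F2": [5, 5, 3], "F3": [5, 4, 3]}

N = sum(len(v) for v in columns.values())            # total observations
G = sum(sum(v) for v in columns.values())            # grand total
D = G ** 2 / N                                       # correction factor G^2/N

A = sum(x * x for v in columns.values() for x in v)  # raw sum of squares
B = sum(sum(v) ** 2 / len(v) for v in columns.values())

SST = A - D        # total sum of squares
SSC = B - D        # between samples (columns)
SSE = A - B        # within samples (residual)

C = len(columns)                  # number of columns
MSC = SSC / (C - 1)               # mean square between columns
MSE = SSE / (N - C)               # mean square within samples
F = MSC / MSE
print(F)  # compare with the tabulated F for (C - 1, N - C) degrees of freedom
```

Note that SST = SSC + SSE holds by construction, mirroring the A, B, D identities on slide 12.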
15. TWO-WAY CLASSIFICATION
1. In a one-way classification we take into account the effect of only one variable.
2. In a two-way classification the effect of two variables can be studied.
3. The procedure of analysis in a two-way classification is to total both the columns and the rows.
4. The effect of one factor is studied through the column-wise figures and totals, and that of the other through the row-wise figures and totals.
5. The variances are calculated for both the columns and the rows and compared with the residual variance, or error.
16. a. We start with the null hypothesis that the mean yields of the four fields are not different in the universe, or
H0: µ1 = µ2 = µ3 = µ4
The alternative hypothesis is
Ha: µ1 ≠ µ2 ≠ µ3 ≠ µ4
b. Compute the grand total, G = ∑xC1 + ∑xC2 + ∑xC3
Correction factor (C.F.) = G²/N, denoted D.
c. Total sum of squares, SST = A − D:
SST = ∑xC1² + ∑xC2² + ∑xC3² − G²/N
d. Sum of squares between samples (columns), SSC = B − D:
SSC = (∑xC1)²/nc1 + (∑xC2)²/nc2 + (∑xC3)²/nc3 − G²/N
where nc1 = number of elements in the first column, etc.
17. e. Sum of squares between rows, SSR = C − D:
SSR = (∑xr1)²/nr1 + (∑xr2)²/nr2 + (∑xr3)²/nr3 − G²/N
where nr1 = number of elements in the first row, etc. (here C denotes the row-totals term, not the number of columns).
f. Sum of squares within samples, SSE = SST − (SSC + SSR) = A − D − (B − D) − (C − D)
g. The number of d.f. between samples (columns), ν1 = c − 1
h. The number of d.f. between rows, ν2 = r − 1
i. The number of d.f. within samples, ν3 = (c − 1)(r − 1)
18. j. Mean square between columns, MSC = SSC/ν1 = SSC/(c − 1)
k. Mean square between rows, MSR = SSR/ν2 = SSR/(r − 1)
l. Mean square within samples, MSE = SSE/ν3 = SSE/((c − 1)(r − 1))
m. Between columns: F = MSC/MSE; if Fcal < Ftab, accept H0
n. Between rows: F = MSR/MSE; if Fcal < Ftab, accept H0
19. ANOVA TABLE FOR TWO-WAY
Source of variance         | d.f.                | Sum of squares          | Mean sum of squares | F-ratio
Between samples (columns)  | ν1 = c − 1          | SSC = B − D             | MSC = SSC/ν1        | F = MSC/MSE
Between replicants (rows)  | ν2 = r − 1          | SSR = C − D             | MSR = SSR/ν2        | F = MSR/MSE
Within samples (residual)  | ν3 = (c − 1)(r − 1) | SSE = SST − (SSC + SSR) | MSE = SSE/ν3        |
Total                      | N − 1               | SST = A − D             |                     |
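A sketch of the two-way computation in slides 16-19, on an invented 3×3 yield table (rows as replicants/blocks, columns as treatments). The letters follow the deck's shorthand, with Cterm standing for the row-totals term Σ(row total)²/nr:

```python
# Hypothetical 3x3 yield table: rows are replicants, columns are treatments.
table = [
    [6, 5, 5],
    [7, 5, 4],
    [3, 3, 3],
]
r, c = len(table), len(table[0])
N = r * c
G = sum(sum(row) for row in table)
D = G ** 2 / N                                    # correction factor G^2/N

A = sum(x * x for row in table for x in row)      # raw sum of squares
col_totals = [sum(row[j] for row in table) for j in range(c)]
B = sum(t ** 2 / r for t in col_totals)           # column-totals term
Cterm = sum(sum(row) ** 2 / c for row in table)   # row-totals term

SST = A - D
SSC = B - D                      # between columns (treatments)
SSR = Cterm - D                  # between rows (replicants)
SSE = SST - SSC - SSR            # residual

MSC = SSC / (c - 1)
MSR = SSR / (r - 1)
MSE = SSE / ((c - 1) * (r - 1))
F_columns = MSC / MSE            # tests the treatment effect
F_rows = MSR / MSE               # tests the replicant (block) effect
print(F_columns, F_rows)
```

Each F is then compared with the tabulated value at its own pair of degrees of freedom, as in the table above.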
20. APPLICATIONS OF ANOVA
Similar to the t-test, but more versatile.
ANOVA is the synthesis of several ideas and is used for multiple purposes.
The statistical analysis depends on the design; the discussion of ANOVA therefore includes common statistical designs used in pharmaceutical research.
21. This is particularly applicable to experiments that are otherwise difficult to implement, as is the case in clinical trials.
In bioequivalence studies, the similarities between the samples are analyzed with ANOVA.
Pharmacokinetic data are also evaluated using ANOVA.
Pharmacodynamic data (what the drug does to the body) are likewise analyzed with ANOVA.
That means we can analyze whether our drug shows significant pharmacological action or not.
22. Compare heights of plants with and without galls.
Compare birth weights of deer in different geographical regions.
Compare responses of patients to real medication vs. placebo.
Compare attention spans of undergraduate students in different programs at PC.
23. General Applications:
Pharmacy
Biology
Microbiology
Agriculture
Statistics
Marketing
Business research
Finance
Mechanical calculations
24. REFERENCES
Elhance DN, Aggarwal BM. Fundamentals of Statistics. Page nos. 25.1-25.19.
Gupta SC, Kapoor VK. Fundamentals of Applied Statistics. 4th ed. New Delhi: Sultan Chand and Sons; 2007. Page nos. 23.12-23.28.
Lewis AE. Biostatistics. 2nd ed. New York: Reinhold Publishers Corporation; 1984.
Arora PN, Malhan PK. Biostatistics. Mumbai: Himalaya Publishing House; 2008.
Bolton S, Bon C. Pharmaceutical Statistics. 4th ed. New York: Marcel Dekker Inc; 2004.