This document provides an overview of engineering statistics and probability concepts taught in an EN505 Engineering Statistics course. It defines key terms like random variables, sample spaces, events, probability, and distributions. Random variables can be either discrete or continuous, depending on whether they take countable or uncountable values. Probability is used to quantify the likelihood of events and is governed by rules like the addition rule and multiplication rule. Conditional probability and Bayes' theorem are also introduced to relate the probabilities of events given other information. Important discrete and continuous probability distributions are discussed.
EN505 Engineering Statistics: Student Notes
Fernando Tovia, Ph.D.
1 RANDOM VARIABLES AND PROBABILITY
1.1 Random Variables
Definition 1.1 A random experiment is an experiment such that the outcome cannot be
predicted in advance with absolute precision.
Definition 1.2 The set of all possible outcomes of a random experiment is called the
sample space. The sample space is denoted by Ω. An element of the sample space is
denoted by ω.
Example 1.1 Construct the sample space for each of the following random experiments:
1. flip a coin
2. toss a die
3. flip a coin twice
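As a quick illustration (not part of the original notes), these three sample spaces can be enumerated in Python; the labels H/T and 1-6 are just one naming convention:

from itertools import product

coin = {"H", "T"}                          # 1. flip a coin
die = {1, 2, 3, 4, 5, 6}                   # 2. toss a die
two_flips = set(product(coin, repeat=2))   # 3. flip a coin twice: {('H','H'), ('H','T'), ...}

print(coin, die, two_flips, sep="\n")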
Definition 1.3 A subset of Ω is called an event. Events are denoted by italicized, capital
letters.
Example 1.2 Consider the random experiment consisting of tossing a die. Describe the
following events.
1. A = the event that 2 appears
2. B = the event that an even number appears
3. C = the event that an odd number appears
4. D = the event that a number appears
5. E = the event that no number appears
The particular set that we are interested in depends on the problem being considered.
However, a good thing to do when beginning any probability modeling problem is to
clearly define all the events of interest.
One graphical method of describing events defined on a sample space is the Venn
diagram. The representation of an event using a Venn diagram is given in Figure 1.1. Note
that the rectangle corresponds to the sample space, and the shaded region corresponds to
the event of interest.
Figure 1.1 Venn Diagram for Event A
Definition 1.4 Let A and B be two events defined on a sample space Ω. A is a subset of B,
denoted by A ⊂ B, if and only if (iff) ∀ ω ∈ A, ω ∈ B. (Figure 1.2)
Figure 1.2 Venn Diagram for A ⊂ B
Definition 1.5 Let A be an event defined on a sample space Ω. ω ∈ Ac iff ω ∉ A. Ac is
called the complement of A. (Figure 1.3)
Figure 1.3 Venn Diagram for Ac
Definition 1.6 Let A and B be two events defined on the sample space Ω. ω ∈ A ∪ B iff
ω ∈ A or ω ∈ B (or both). A ∪ B is called the union of A and B (see Figure 1.4).
Figure 1.4 Venn Diagram for A ∪ B
Let {A1, A2, …} be a collection of events defined on a sample space. Then ω ∈ ∪_{j≥1} Aj
iff ω ∈ Aj for some j = 1, 2, …; the event ∪_{j≥1} Aj is called the union of {A1, A2, …}.
Definition 1.7 Let A and B be two events defined on the sample space Ω. ω ∈ A ∩ B iff
ω ∈ A and ω ∈ B. A ∩ B is called the intersection of A and B (see Figure 1.5).
Figure 1.5 Venn Diagram for A ∩ B
Let {A1, A2, …} be a collection of events defined on a sample space. Then ω ∈ ∩_{j≥1} Aj
iff ω ∈ Aj ∀ j = 1, 2, …; the event ∩_{j≥1} Aj is called the intersection of {A1, A2, …}.
Example 1.3 (Example 1.2 continued)
1. Bc = C
2. B ∪ C = D
3. A ∩ B = A
Theorem 1.1 Properties of Complements
Let A be an event defined on a sample space Ω. Then
(a)
(b)
Theorem 1.2 Properties of the Unions
Let A, B, C be events defined on a sample space Ω. Then
(a)
(b)
(c)
(d)
(e)
Example Prove Theorem 1.2 (c)
Theorem 1.3 Properties of the Intersection
Let A, B, and C be events defined on the sample space Ω. Then
(a)
(b)
(c)
(d)
(e)
Example 1.6 Prove theorem 1.3 (b)
Theorem 1.4 Distributive Laws for Union and Intersection
Let A, B and C be events defined in the sample space Ω. Then
(a) A ∩ (B ∪ C) = (A ∩ B) ∪ (A∩C)
(b) A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
Theorem 1.5 DeMorgan’s Laws
Let A and B be events defined on the sample space Ω. Then
(a) (A ∪ B)c = Ac ∩ Bc
(b) (A ∩ B)c = Ac ∪ Bc
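The distributive laws (Theorem 1.4) and DeMorgan’s laws (Theorem 1.5) can be checked numerically with Python sets; the events A, B, C below are arbitrary choices on the die-toss sample space, used only for this sketch:

# Check Theorems 1.4 and 1.5 on a small sample space (illustrative events)
omega = {1, 2, 3, 4, 5, 6}
A, B, C = {2}, {2, 4, 6}, {1, 3, 5}

def comp(E):
    """Complement of event E relative to omega."""
    return omega - E

assert A & (B | C) == (A & B) | (A & C)   # Theorem 1.4(a)
assert A | (B & C) == (A | B) & (A | C)   # Theorem 1.4(b)
assert comp(A | B) == comp(A) & comp(B)   # Theorem 1.5(a)
assert comp(A & B) == comp(A) | comp(B)   # Theorem 1.5(b)
print("All four identities hold for these events.")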
Definition 1.8 Let A and B be two events defined in the sample space Ω. A and B are said
to be mutually exclusive or disjoint iff A ∩ B = Ø (Figure 1.6). A collection of events
{A1, A2, … }, defined on a sample space Ω, is said to be disjoint iff every pair of events
in the collection is mutually exclusive.
Figure 1.6 Venn Diagram for Mutually Exclusive Events
Definition 1.9 A collection of events {A1, A2, …, An} defined on a sample space Ω, is
said to be a partition (Figure 1.7) of Ω iff
(a) the collection is disjoint
(b) A1 ∪ A2 ∪ ⋯ ∪ An = Ω
Figure 1.7 Venn Diagram for a Partition
Example 1.7 (Example 1.2 continued) Using the defined events, identify:
(a) a set of mutually exclusive events
(b) a partition of the sample space
Definition 1.10 A collection of events, F, defined on a sample space Ω, is said to be a
field iff
(a) Ω ∈ F,
(b) if A ∈ F, then Ac ∈ F,
(c) if A1, A2, …, An ∈ F, then A1 ∪ A2 ∪ ⋯ ∪ An ∈ F.
We use fields to represent all the events that we are interested in studying. To construct a
field:
1. we start with Ω
2. Ø is inserted by implication (Definition 1.10(b))
3. we then add the events of interest
4. we then add complements and unions
Example 1.8 Suppose we perform a random experiment which consists of observing the
type of shirt worn by the next person entering a room. Suppose we are interested in the
following events.
L = the shirt that has long sleeves
S = the shirt has short sleeves
N = the shirt has no sleeves
Assuming that {L, S, N} is a partition of Ω, construct an appropriate field.
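One way to see the construction for Example 1.8 is to generate every event obtainable as a union of blocks of the partition {L, S, N}; the eight resulting events (including Ø and Ω) form a field. The sketch below is illustrative only and represents the shirt types as one-letter strings:

from itertools import combinations

blocks = [frozenset({"L"}), frozenset({"S"}), frozenset({"N"})]

field = set()
for r in range(len(blocks) + 1):            # unions of 0, 1, 2, or 3 partition blocks
    for combo in combinations(blocks, r):
        field.add(frozenset().union(*combo))

print(len(field))                           # 8 events: Ø, L, S, N, L∪S, L∪N, S∪N, Ω
for event in sorted(field, key=len):
    print(set(event) or "Ø")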
Theorem 1.6 Intersections are in Fields
Let F be a field of events defined on the sample space Ω. If A1, A2, …, An ∈ F, then
A1 ∩ A2 ∩ ⋯ ∩ An ∈ F.
Example 1.9 Prove that if A, B ∈ F, then A ∩ B ∈ F.
Any meaningful expression containing events of interest, ∪, ∩, and c can be shown to be
in the field.
Definition 1.11 Consider a set of elements, such as S = {a, b, c}. A permutation of the
elements is an ordered sequence of elements. The number of permutations of n different
elements is n! where
n! = n x (n-1) x (n-2) x … x 2 x 1
Example 1.10 List all the permutations of the elements of S.
Definition 1.12 The number of permutations of subsets of r elements selected from a set
of n different elements is
Another counting problem of interest is the number of subsets of r elements that can be
selected from a set of n elements. Here the order is not important; such selections are
called combinations.
Definition 1.13 The number of combinations, subsets of size r that can be selected from a
set of n elements, is denoted as
Example 1.11 The EN505 class has 13 students. If teams of 2 students can be selected,
how many different teams are possible?
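Python's math module computes these counts directly; the n = 5, r = 2 permutation below is an arbitrary illustration, while the last line answers Example 1.11:

import math

print(math.factorial(3))   # permutations of S = {a, b, c}: 3! = 6
print(math.perm(5, 2))     # ordered selections of r = 2 from n = 5 distinct elements: 20
print(math.comb(13, 2))    # Example 1.11: teams of 2 chosen from 13 students = 78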
1.2 Probability
Probability is used to quantify the likelihood, or chance, that an outcome of a random
experiment will occur.
Definition 1.14 A random variable is a real-valued function defined on a sample space.
Random variables are typically denoted by italicized capital letters. Specific values taken
on by a random variable are typically denoted by italicized, lower-case letters.
Definition 1.15 A random variable that can take on a countable number of values is said
to be a discrete random variable.
Definition 1.16 A random variable that can take on an uncountable number of values is
said to be a continuous random variable.
Definition 1.17 The set of possible values for a random variable is referred to as the range
of the random variable.
Example 1.12 For each of the following random experiments, define a random variable,
identify its range, and classify it as discrete or continuous.
1. flip a coin
2. toss a die until a 6 appears
3. quality inspection of a shipment of manufactured items.
4. arrival of customers at a bank
Definition 1.18 Let Ω be the sample space for some random experiment. For any event
defined on Ω, Pr(·) is a function which assigns a number to the event. Pr(A) is called the
probability of event A provided the following conditions hold:
(a)
(b)
(c)
Probability is used to quantify the likelihood, or chance, that an event will occur within
the sample space.
Whenever a sample space consists of N equally likely outcomes, the probability of each outcome is 1/N.
Theorem 1.7 Probability Computational Rules
Let A and B events defined on a sample space Ω, and let {A1, A2, …, An} be a collection
of events defined on Ω. Then
(a)
(b)
(c)
(d)
(e)
(f)
Corollary 1.1 Union of Three or More Events
Let A, B, C and D be events defined on a sample space Ω. Then,
Pr(A ∪ B ∪ C) = Pr(A) + Pr(B) + Pr(C) − Pr(A ∩ B) − Pr(A ∩ C) − Pr(B ∩ C) + Pr(A ∩ B ∩ C)
and
Pr(A ∪ B ∪ C ∪ D) = Pr(A) + Pr(B) + Pr(C) + Pr(D) − Pr(A ∩ B) − Pr(A ∩ C) − Pr(A ∩ D)
− Pr(B ∩ C) − Pr(B ∩ D) − Pr(C ∩ D) + Pr(A ∩ B ∩ C) + Pr(A ∩ B ∩ D) + Pr(A ∩ C ∩ D)
+ Pr(B ∩ C ∩ D) − Pr(A ∩ B ∩ C ∩ D)
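The three-event formula can be sanity-checked by brute force on a small, equally likely sample space; the events below are arbitrary and serve only as an illustration:

from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}
def P(E):
    """Probability of event E under equally likely outcomes."""
    return Fraction(len(E), len(omega))

A, B, C = {2}, {2, 4, 6}, {1, 3, 5}
lhs = P(A | B | C)
rhs = P(A) + P(B) + P(C) - P(A & B) - P(A & C) - P(B & C) + P(A & B & C)
print(lhs, rhs, lhs == rhs)   # both sides equal 1 for this choice of events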
Example 1.11 Let A, B and C be events defined on a sample space Ω ∋
Pr(A) = 0.30
Pr(Bc) = 0.60
Pr(C) = 0.20
Pr(A ∪ B) = 0.50
Pr( B ∩ C ) = 0.05
A and C are mutually exclusive
Compute the following probabilities
(a) Pr(B)
(b) Pr(B ∪ C) =
(c) Pr(A ∩ B)
(d) Pr(A ∪ C)
(e) Pr(A ∩ C)
(f) Pr(B ∩ Cc)
(g) Pr(A ∪ B ∪ C) =
1.3 Independence
Two events A and B are independent if any one of the following equivalent statements is true:
(1) P(A|B) = P(A)
(2) P(B|A) = P(B)
(3) P(A ∩ B) = P(A)P(B)
Example 2.29 (book work in class)
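A quick numerical check of statement (3) for a single toss of a fair die, using illustrative events A = "even number" and B = "number at most 4" (these events are not from the notes):

from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}
def P(E):
    """Probability of event E under equally likely outcomes."""
    return Fraction(len(E), len(omega))

A = {2, 4, 6}       # even number
B = {1, 2, 3, 4}    # number at most 4

print(P(A & B) == P(A) * P(B))   # True: 1/3 == (1/2)(2/3), so A and B are independent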
1.4 Conditional Probability
Definition 1.19 Let A and B be events defined on a sample space Ω ∋ B ≠ Ø. We refer to
Pr(A|B) as the conditional probability of event A given the occurrence of event B, where
Pr(Ac | B) =
probability of not A given B
Note that, in general, Pr(A|Bc) ≠ 1 − Pr(A|B).
Example 1.12
A semiconductor manufacturing facility is controlled in a manner such that 2% of
manufactured chips are subjected to high levels of contamination. If a chip is subjected to
high levels of contamination, there is a 12% chance that it will fail testing. What is the
probability that a chip is subjected to high levels of contamination and fails upon testing?
c=
f=
Pr(High c level) =
Pr(Fail | high c level) =
Pr(F∩C) =
Example 1.13
An air quality test is designed to detect the presence of two molecules (molecule 1 and
molecule 2). 17% of all samples contain both molecules, and 48% of all samples contain
molecule 1. If a sample contains molecule 1, what is the probability that it also contains
molecule 2?
M1 = molecule 1
M2 = molecule 2
Pr(M1∩M2) =
Pr(M1) =
Pr(M2|M1) =
Theorem 1.8 Properties of Conditional Probability
Let A and B be non-empty events defined on a sample space Ω. Then
(a) If A and B are mutually exclusive, then Pr(A|B) = 0
(b) If A ⊂ B, then Pr(A|B) ≥ Pr(A)
(c) If B ⊂ A, then Pr(A|B) = 1
Theorem 1.9 Law of Total Probability – Part 1
Let A and B be events defined on a sample space Ω ∋ A ≠ Ø, B ≠ Ø, Bc ≠ Ø. Then
Example 1.16
A certain machine’s performance can be characterized by the quality of a key component.
94% of machines with a defective key component will fail, whereas only 1% of
machines with a non-defective key component will fail. 4% of machines have a defective
key component. What is the probability that the machine will fail?
F = fail
D = defective
Pr(D) =
Pr(F|D) =
Pr(F|Dc) =
Pr(F) =
Theorem 1.11 Bayes’ Theorem – Part 1
Let A and B be events defined on a sample space Ω ∋ A ≠ Ø, B ≠ Ø, Bc ≠ Ø. Then
Example 1.17 (Example 1.16 continued)
Suppose the machine fails. What is the probability that the key component was
defective?
Pr(D|F) =
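As an aside (not part of the original notes), the two calculations above can be checked numerically; this minimal Python sketch applies the law of total probability and Bayes' theorem to the numbers given in the example.

```python
# Sketch: law of total probability and Bayes' theorem for the machine-failure example.
p_D = 0.04           # Pr(D): machine has a defective key component
p_F_given_D = 0.94   # Pr(F | D)
p_F_given_Dc = 0.01  # Pr(F | D^c)

# Law of total probability: Pr(F) = Pr(F|D)Pr(D) + Pr(F|D^c)Pr(D^c)
p_F = p_F_given_D * p_D + p_F_given_Dc * (1 - p_D)

# Bayes' theorem: Pr(D|F) = Pr(F|D)Pr(D) / Pr(F)
p_D_given_F = p_F_given_D * p_D / p_F

print(p_F)           # 0.0472
print(p_D_given_F)   # about 0.797
```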
Theorem 1.12 Law of Total Probability – Part 2
Let A be a non-empty event defined on a sample space Ω, and let {B1, B2, …, Bn}be a
partition of Ω ∋ Bj ≠ Ø ∀ j =1, 2, …, n. Then
Pr( A) =
Theorem 1.13 Bayes’ Theorem – Part 2
Let A be a non-empty event defined on a sample space Ω, and let {B1, B2, …, Bn}be a
partition of Ω ∋ Bj ≠ Ø ∀ j =1, 2, …, n. Then
Pr(Bj | A) = Pr(A | Bj) Pr(Bj) / Pr(A) = Pr(A | Bj) Pr(Bj) / ∑i=1..n Pr(A | Bi) Pr(Bi)
2 DISCRETE RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS
A discrete random variable is a random variable that can take on at most a countable number of
values.
Definition 2.1 Let X be a discrete random variable having cumulative distribution
function F. Let x1, x2 , ... denote the possible values of X. Then f(x) is the probability
mass function (pmf) of X, if
a) f(x) = P(X = x)
b) f(xj) > 0, j = 1, 2, …
c) f(x) = 0 if x ≠ xj for all j = 1, 2, …
d) ∑j=1..∞ f(xj) = 1
Definition 2.2 The cumulative distribution function of a discrete random variable X is
denoted by F(x) and is given by
F(x) = ∑xj ≤ x f(xj)
and satisfies the following properties
a) F(x) = P(X ≤ x) = ∑xj ≤ x f(xj)
b) 0 ≤ F(x) ≤ 1
c) if x ≤ y, then F(x) ≤ F(y)
Example 2.1 Suppose X is a discrete random variable having pmf f and cdf F, where
f(1) = 0.1, f(2) = 0.4, f(3) = 0.2, f(4) = 0.3.
1. Construct the cumulative distribution function of X.
2. Compute Pr(X ≤ 2).
3. Compute Pr(X < 4).
4. Compute Pr(X ≥ 2).
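A small Python sketch (illustration only) that builds the cumulative distribution function from the pmf of Example 2.1 and evaluates the three probabilities:

```python
# Sketch: cdf and probability calculations for Example 2.1.
pmf = {1: 0.1, 2: 0.4, 3: 0.2, 4: 0.3}

def F(x):
    """F(x) = P(X <= x) for the discrete random variable X."""
    return sum(p for xj, p in pmf.items() if xj <= x)

print(F(2))        # Pr(X <= 2) = 0.5
print(F(3))        # Pr(X < 4) = Pr(X <= 3) = 0.7
print(1 - F(1))    # Pr(X >= 2) = 1 - Pr(X <= 1) = 0.9
```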
Definition 2.3 The mean or expected value of X, denoted as µ or E(X), is
µ = E(X) = ∑x x f(x)
The variance of X, denoted by Var(X), is given by
σ² = V(X) = E(X − µ)² = ∑x (x − µ)² f(x) = ∑x x² f(x) − µ²
The standard deviation of X is σ = √σ²
Definition 2.4 Let X be a discrete random variable with a probability mass function f(x).
The expected value of X is denoted by E(X) and given by
E(X) = ∑j=1..∞ xj f(xj)
2.1 Discrete Distributions
2.1.1 Discrete Uniform Distribution
Suppose a random experiment has a finite set of equally likely outcomes. If X is a random
variable such that there is one-to-one correspondence between the outcomes and the set
of integers {a, a + 1, … , b}, then X is a discrete uniform random variable having
parameters a and b.
Notation
Range
Probability Mass Function
Parameters
Mean
Variance.
Example 2.2 Let X ~ DU(1, 6).
1. Compute Pr(X = 2).
2. Compute Pr(X > 4)
2.1.2 The Bernoulli Random Variable
Consider a random experiment that either “succeeds” or “fails”. If the probability of
success is p, and we let X = 0 if the experiment fails and X = 1 if it succeeds, then X is a
Bernoulli random variable with probability p. Such a random experiment is referred to
as a Bernoulli trial.
Notation
Range
Probability Mass Function
Parameter
Mean
Variance
2.1.3 The Binomial Distribution
The binomial distribution denotes the number of successes in n independent Bernoulli
trials with probability p of success on each trial.
Notation
Range
Probability Mass Function
Cumulative Distribution Function
Parameters
Mean
Variance
Comments if n = 1, then X ~ Bernoulli(p)
Example 2.3 Each sample of air has a 10% chance of containing a particular rare
molecule.
1. Find the probability that in the next 18 samples, exactly 2 contain the rare molecule.
2. Determine the probability that at least four samples contain the rare molecule.
3. Determine the probability that at least one but fewer than four samples contain the rare molecule.
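One possible way to evaluate these binomial probabilities numerically is sketched below (scipy is used purely for illustration; X ~ bin(18, 0.1) as stated in the example).

```python
from scipy.stats import binom

n, p = 18, 0.10   # 18 air samples, each with a 10% chance of containing the molecule

print(binom.pmf(2, n, p))                        # 1. Pr(X = 2)
print(1 - binom.cdf(3, n, p))                    # 2. Pr(X >= 4)
print(binom.cdf(3, n, p) - binom.cdf(0, n, p))   # 3. Pr(1 <= X <= 3)
```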
2.1.4 The Negative Binomial Random Variable
The negative binomial random variable denotes the number of trials
until the kth success in a sequence of independent Bernoulli trials with
probability p of success on each trial.
Notation
Range
Probability Mass Function
Cumulative Distribution Function
Parameters
Mean
Variance
Example 2.4 A high-performance aircraft contains three identical computers. Only one is
used to operate the aircraft; the other two are spares that can be activated in case the
primary system fails. During one hour of operation, the probability of a failure in the
primary computer is 0.0005.
1. Assuming that each hour represents an independent trial, what is the mean time
to failure of all three computers?
2. What is the probability that all three computers fail within a 5-hour flight?
Comments: If k = 1, then X ~ geom(p), i.e. X is a geometric random variable having a
probability of success p
2.1.5 The Geometric Distribution
In a series of independent Bernoulli trials, with constant probability p of a success, let the
random variable X denote the number of trials until the first success. Then X has a
geometric distribution.
Notation
Range
Probability Mass Function
Cumulative Distribution Function
Parameters
Mean
Variance
Example 2.5 Consider a sequence of independent Bernoulli trials with a probability of
success p = 0.2 for each trial.
(a) What is the expected number of trials to obtain the first success?
(b) After the eighth success occurs, what is the expected number of trials to obtain
the ninth success?
2.1.6 The Hypergeometric Random Variable
Consider a population consisting of N members, K of which are denoted as successes.
Consider a random experiment during which n members are selected at random from the
population, and let X denote the number of successes in the random sample. If the
members in the sample are selected from the population without replacement, then X is
a hypergeometric random variable having parameters N, K and n.
Notation
Range
Probability mass function
Parameters
Comments If the sample is taken from the population with replacement, then
X ~ bin(n, K/N). Therefore, if n << N, we can use the approximation
HG(N, K, n) ≈ bin(n, K/N).
Example 2.6 Suppose a shipment of 5000 batteries is received, 150 of them being
defective. A sample of 100 is taken from the shipment without replacement. Let X denote
the number of defective batteries in the sample.
1. What kind of random variable is X, and what is the range of X?
2. Compute Pr(X = 5).
3. Approximate Pr(X = 5) using the binomial approximation to the hypergeometric.
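A sketch comparing the exact hypergeometric probability with its binomial approximation (scipy's hypergeom takes the population size, the number of successes in the population, and the sample size, in that order; illustration only):

```python
from scipy.stats import hypergeom, binom

N, K, n = 5000, 150, 100   # population size, defectives in population, sample size

p_exact  = hypergeom.pmf(5, N, K, n)   # P(X = 5), sampling without replacement
p_approx = binom.pmf(5, n, K / N)      # binomial approximation with p = K/N = 0.03

print(p_exact, p_approx)               # the two values are very close
```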
2.1.7 The Poisson Random Variable
The Poisson random variable denotes the number of events that occurs in an interval of
length t when events occur at a constant average rate λ.
Notation
Probability Mass Function
Cumulative Distribution Function
Parameters
Comments
The Poisson random variable X equals the number of counts in the time interval t. The
counts in disjoint subintervals are independent of one another.
If n is large and p is small (say p < 0.1), then we can use the approximation bin(n, p) ≈ Poisson(np).
Mean
Variance
It is important to use consistent units in the calculations of probabilities, means, and
variances involving a Poisson random variable.
Example 2.7 Contamination is a problem in the manufacture of optical storage disks. The
number of particles of contamination that occur on an optical disk has a Poisson
distribution, and the average number of particles per square centimeter of media surface
is 0.1. The area of a disk under study is 100 square centimeters.
a) Find the probability that 12 particles occur in the area of a disk under study.
b) Find the probability that zero particles occur in the area of the disk under study.
c) Find the probability that 12 or fewer particles occur in the area of a disk under study.
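A sketch of the three Poisson calculations, using mean λt = 0.1 × 100 = 10 particles for the disk under study (scipy used for illustration only):

```python
from scipy.stats import poisson

mean_particles = 0.1 * 100    # 0.1 particles/cm^2 over a 100 cm^2 disk

print(poisson.pmf(12, mean_particles))   # a) P(X = 12)
print(poisson.pmf(0, mean_particles))    # b) P(X = 0) = e^(-10)
print(poisson.cdf(12, mean_particles))   # c) P(X <= 12)
```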
2.1.8 Poisson Process
Up to this point in the course, we have discussed the assignment of probabilities to events
and random variables, and by manipulating these probabilities we can analyze “snapshots”
of system behavior at a certain point in time, or under certain conditions. Now,
we are going to study one of the most commonly recognized continuous-time stochastic
processes, which allows us to study important aspects of system behavior over a time
interval t.
Definition 2.5 Let {N(t), t ≥ 0} be a counting process. Then {N(t), t ≥ 0} is said to be a
Poisson process having rate λ, λ > 0, iff
a. N(0) = 0, i.e. we start counting from zero
b. The number of outcomes occurring in one time interval (or specified region) is
independent of the number that occurs in any other disjoint time interval (or
region of space); this can be interpreted as the Poisson process having no memory.
c. The number of events in any interval (s, s + t) is a Poisson random
variable with mean λt.
d. The probability that more than one outcome will occur at the same time is negligible.
This is denoted by N(t) ~ PP(λ), where λ refers to the average rate at which events occur.
Part (c) of the definition implies that
1)
2)
3)
Note that in order to have a Poisson process, the average event occurrence rate MUST BE
CONSTANT over time; otherwise the Poisson process would be an inappropriate model.
Also note that t can be interpreted as the specific “time”, “distance”, “area”, or “volume” of
interest.
Example 2.8 Customers arrive at a facility according to a Poisson process with rate λ =
120 customers per hour. Suppose we begin observing the facility at some point in time.
a) What is the probability that 8 customers arrive during a 5-minute interval?
b) On average, how many customers will arrive during a 3.2-minute interval?
c) What is the probability that more than 2 customers arrive during a 1-minute
interval?
d) What is the probability that 4 customers arrive during the interval that begins 3.3
minutes after we start observing and ends 6.7 minutes after we start observing?
e) On average, how many customers will arrive during the interval that begins 16
minutes after we start observing and ends 17.8 minutes after we start observing?
f) What is the probability that 7 customers arrive during the first 12.2 minutes we
observe, given that 5 customers arrive during the first 8 minutes?
g) If 3 customers arrive during the first 1.2 minutes of our observation period, on
average, how many customers will arrive during the first 3.7 minutes?
h) If 1 customer arrives during the first 6 seconds of our observations, what is the
probability that 2 customers arrive during the interval that begins 12 seconds after
we start observing and ends 30 seconds after we start observing?
i) If 5 customers arrive during the first 30 seconds of our observations, on average,
how many customers will arrive during the interval that begins 1 minute after we
start observing and ends 3 minutes after we start observing?
j) If 3 customers arrive during the interval that starts 1 minute after we start
observing and ends 2.2 minutes after we start observing, on average, how many
customers will arrive during the first 3.7 minutes?
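For parts (a)–(e), the key fact is that the number of arrivals in an interval of length t minutes is Poisson with mean λt, where λ = 120/60 = 2 customers per minute, and counts over disjoint intervals are independent. A minimal sketch of parts (a), (b) and (d) (illustration only):

```python
from scipy.stats import poisson

rate_per_min = 120 / 60               # 2 customers per minute

# a) P(8 arrivals in a 5-minute interval); mean = 2 * 5 = 10
print(poisson.pmf(8, rate_per_min * 5))

# b) expected number of arrivals in a 3.2-minute interval
print(rate_per_min * 3.2)             # 6.4 customers

# d) P(4 arrivals in the interval (3.3, 6.7]); length 3.4 min, mean = 6.8
print(poisson.pmf(4, rate_per_min * 3.4))
```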
Example 2.9 (Poisson approximation to the binomial)
In a manufacturing process where glass products are produced, defects or bubbles occur,
occasionally rendering the piece undesirable for marketing. It is known that, on average,
1 in every 1000 of these items produced has one or more bubbles. What is the probability
that a random sample of 8000 will yield fewer than 7 items possessing bubbles?
3 CONTINUOUS RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS
As stated earlier, a continuous random variable is a random variable that can take on an
uncountable number of values.
Definition 3.1 The probability density function of a continuous random variable X is a
nonnegative function f(x) defined ∀ real x ∋ for any set A of real numbers
Theorem 3.1 Integral of a Density Function
The function f is a density function iff
All probability computations for a continuous random variable can be answered using the
density function.
Theorem 3.2 Probability Computational Rules for Continuous Random Variables
Let X be a continuous random variable having cumulative distribution function F and
probability density function f. Then
(a)
(b)
(c)
(d)
(e)
The mean or expected value of X, denoted as µ or E(X), is
The variance of X, denoted by Var(X), is given by
Example 3.1 Consider a continuous random variable X having the following density
function where c is a constant
f(x) = c(1 − x²), 0 ≤ x ≤ 1
f(x) = 0, otherwise
1. What is the value of c?
2. Construct the cumulative distribution function of X.
3. Compute Pr(0.2 < X ≤ 0.8) =
4. Compute Pr(X >0.5) =
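The constant c and the two probabilities can be checked by direct integration of the density; the sketch below uses sympy for the symbolic work (illustration only, not part of the original notes).

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)
f = c * (1 - x**2)

# 1. choose c so the density integrates to 1 over [0, 1]  ->  c = 3/2
c_val = sp.solve(sp.integrate(f, (x, 0, 1)) - 1, c)[0]
f = f.subs(c, c_val)

# 3. Pr(0.2 < X <= 0.8) and 4. Pr(X > 0.5) as integrals of the density
print(sp.integrate(f, (x, sp.Rational(1, 5), sp.Rational(4, 5))))
print(sp.integrate(f, (x, sp.Rational(1, 2), 1)))
```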
Part (d) of Theorem 3.2 states that the probability density function is the derivative of
the cumulative distribution function. Although this is true, it does not provide adequate
intuition as to the interpretation of the density function. For a discrete random variable,
the probability mass function actually assigns probabilities to the possible values of the
random variable. Theorem 3.2 (b) states that the probability of any specific value for a
continuous random variable is 0. The probability density function is not the probability of
a specific value. It is, however, the relative likelihood (as compared to other possible
values) that the random variable will be near a certain value.
Continuous random variables are typically specified in terms of the form of their
probability density functions. In addition, some continuous random variables have been
widely-used in probability modeling. We will consider some of these more commonly-
used random variables, including:
1. the uniform random variable,
2. the exponential random variable,
3. the gamma random variable,
4. the Weibull random variable,
5. the normal random variable,
6. the lognormal random variable,
7. the beta random variable,
3.1 The Uniform Continuous Random Variable
Notation
Range
Probability Density Function
Cumulative Distribution Function
Parameters
Mean
Variance
Comments As its name implies, the uniform random variable is used to represent
quantities that occur randomly over some interval of the real line.
An observation of a U(0,1) random variable is referred to as a random number.
Example 3.2 Verify that the equation for the cumulative distribution of the uniform
random variable is correct.
Example 3.3 The magnitude (measured in N) of a load applied to a steel beam is
believed to be a U(2000, 5000) random variable. What is the probability that the load
exceeds 4200 N?
3.2 The Exponential Random Variable
The random variable X that equals the distance (time) between successive counts of a
Poisson process with rate λ (events per time unit, e.g. arrivals per hour, failures per
day, etc.) has an exponential distribution with parameter λ.
Notation
Range
Probability Density Function
Cumulative Distribution Function
Parameters
Mean
Variance
Comments λ is called the rate of the exponential distribution.
Example 3.4 In a large computer network, user log-ons to the system can be modeled as
a Poisson process with a mean of 25 log-ons per hour. What is the probability that there
are no log-ons in an interval of 6 minutes?
What is the probability that the time until the next log-on is between 2 and 3 minutes?
Upon converting all units to hours,
Determine the length of the time interval such that the probability that no log-on occurs in the interval is
0.90. The question asks for the length of time x such that Pr(X > x) = 0.90.
What is the mean time until the next log-on?
What is the standard deviation of the time until the next log-on?
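These exponential calculations can be checked with a short sketch; time is measured in hours and λ = 25 log-ons per hour, as in the example (illustration only).

```python
import math

lam = 25.0                             # log-ons per hour
F = lambda x: 1 - math.exp(-lam * x)   # exponential cdf

print(1 - F(6 / 60))                   # P(no log-ons in 6 minutes) = P(X > 0.1 h)
print(F(3 / 60) - F(2 / 60))           # P(2 min < X < 3 min)
print(-math.log(0.90) / lam)           # interval length x (hours) with P(X > x) = 0.90
print(1 / lam, 1 / lam)                # mean and standard deviation of X, both 0.04 h
```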
Theorem 3.3 The Memoryless Property of the Exponential Distribution
Let X be a continuous random variable. Then X is an exponential random variable iff
Theorem 3.4 The Conditional Form of the Memoryless Property
Let X be a continuous random variable. Then X is an exponential random variable iff
Furthermore, no other continuous random variable possesses this property. There are
several implications of the memoryless property of the exponential random variable.
First, if the exponential random variable is used to model the lifetime of a
device, then at every point in time until it fails, the device is as good as
new (from a probabilistic standpoint).
Second, if the exponential random variable is used to model an arrival time, then at
every point in time until the arrival occurs, it is as if we have just begun “waiting” for
the arrival.
Example 3.5 Suppose that the life length of a component is an exponential random
variable with rate 0.0001. Note that time units are hours. Determine the following.
a) What is the probability that the component lasts more than 2000 hours?
b) Given that the component lasts at least 1000 hours, what is the probability that
it lasts more than 2000 hours?
Theorem 3.5 Expectation under the Memoryless Property
Let X be an exponential random variable. Then
Example 3.6 (Example 3.5 continued)
a) Given that the component lasts at least 1000 hours, what is the expected value
of its life length?
b) Given that the component has survived 1000 hours, on average, how much
longer will it survive?
3.3 The Normal Distribution
Notation
Range
Probability Density Function
Cumulative Distribution Function no closed form expression
Parameters
Mean
Variance
Comments
Standard Normal Random Variable
If µ = 0 and σ = 1, then X is referred to as the standard normal random variable. The
standard normal random variable is often denoted by Z.
The cumulative distribution of the standard normal random variable is denoted as
Φ ( z ) = Pr( Z ≤ z )
Appendix A Table I provides cumulative probabilities for a standard normal random variable.
For example, assume that Z is a standard normal random variable. Appendix A Table I
provides probabilities of the form Pr(Z ≤ 1.53). Find 1.5 in the z column and 0.03 in the
column heading; the table entry gives Pr(Z ≤ 1.53) = 0.93699.
The same value can be obtained in Excel: click the function icon (fx), select the Statistical
category and the function NORMSDIST(z), and enter 1.53; the cell then shows
=NORMSDIST(1.53) = 0.936992
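The same cumulative probability can also be computed outside Excel; for example, a one-line Python sketch (illustration only):

```python
from scipy.stats import norm

print(norm.cdf(1.53))   # Phi(1.53) = Pr(Z <= 1.53), approximately 0.93699
```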
The function Φ(z) denotes a probability obtained from Appendix A Table I. It is the cumulative
distribution function of a standard normal random variable (see Figure 4-13, page 124, of the
Montgomery book).
Example 3.7 (Example 4-12, Montgomery)
Some useful results concerning a normal distribution are summarized in Fig. 4-14 of the
textbook. For any normal random variable,
1)
2)
3)
4)
If X ~ N(µ, σ²), then Z = (X − µ)/σ ~ N(0, 1), which is known as the z-transformation. That is, Z
is a standard normal random variable.
Suppose X is a normal random variable with mean µ and standard deviation σ. Then,
Example 3.8 One key characteristic of a certain type of drive shaft is its diameter, and
the diameter is a normally distributed random variable having µ = 5 cm and σ = 0.08 cm.
a) What is the probability that the diameter of a given drive shaft is between 4.9
and 5.05 cm?
b) What diameter is exceeded by 90% of drive shafts?
c) Provide tolerances, symmetric about the mean, that capture 99% of drive shafts.
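A sketch of the three parts using the z-transformation (scipy's norm provides Φ and its inverse; illustration only):

```python
from scipy.stats import norm

mu, sigma = 5.0, 0.08    # drive shaft diameter, cm

# a) P(4.9 < X < 5.05)
print(norm.cdf(5.05, mu, sigma) - norm.cdf(4.9, mu, sigma))

# b) diameter exceeded by 90% of drive shafts: the 10th percentile
print(norm.ppf(0.10, mu, sigma))

# c) symmetric tolerances capturing 99% of shafts: mu +/- z_{0.005} * sigma
z = norm.ppf(0.995)
print(mu - z * sigma, mu + z * sigma)
```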
Example 3.9
The diameter of a shaft in an optical storage drive is normally distributed with mean
0.2508 inch and standard deviation 0.0005 inch. The specifications on the shaft are
±0.0015 inch. What proportion of shafts conforms to specifications?
3.3.1 Normal Approximation to the Binomial and Poisson Distributions
Binomial Approximation
If X is a binomial random variable with parameters n and p, then
Z = (X − np) / √(np(1 − p))
is approximately a standard normal random variable. To approximate a binomial probability with
a normal distribution, a continuity correction factor is applied.
The approximation is good for np > 5 and n(1 − p) > 5.
Poisson Approximation
If X is a Poisson random variable with E(X) = λ and V(X) = λ, then
Z = (X − λ) / √λ
is approximately a standard normal random variable. The approximation is good for λ > 5.
Example 3.10
The manufacturing of semiconductor chips produces 2% defective chips. Assume that
chips are independent and that a lot contains 1000 chips.
a) Approximate the probability that more than 25 chips are defective.
b) Approximate the probability that between 20 and 30 chips are defective
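A sketch of the normal approximation with the continuity correction for this example (np = 20; part (b) is interpreted here as 20 ≤ X ≤ 30, and the exact binomial values are shown for comparison; illustration only):

```python
import math
from scipy.stats import norm, binom

n, p = 1000, 0.02
mu = n * p                           # 20
sd = math.sqrt(n * p * (1 - p))      # about 4.43

# a) P(X > 25) = P(X >= 26) ~ P(Z > (25.5 - mu)/sd), with continuity correction
print(1 - norm.cdf((25.5 - mu) / sd), 1 - binom.cdf(25, n, p))

# b) P(20 <= X <= 30) ~ P((19.5 - mu)/sd <= Z <= (30.5 - mu)/sd)
print(norm.cdf((30.5 - mu) / sd) - norm.cdf((19.5 - mu) / sd),
      binom.cdf(30, n, p) - binom.cdf(19, n, p))
```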
3.4 Lognormal Distribution
Variables in a system sometimes follow an exponential relationship in which the
exponent is a random variable, say W, so that X = exp(W). If W has a normal distribution,
then the distribution of X is called a lognormal distribution.
Notation
Range
Probability density function
Cumulative Distribution Function no closed form expression
Parameters
Comments If Y ~ N(µ, σ²) and X = e^Y, then X ~ LN(µ, σ²)
The lognormal random variable is often used to represent
elapsed times, especially equipment repair times, and
material properties.
Mean
Variance
Example 3.11 A wood floor system can be evaluated in one way by measuring its
modulus of elasticity (MOE), measured in 10^6 psi. One particular type of system is such
that its MOE is a lognormal random variable having µ = 0.375 and σ = 0.25.
1. What is the probability that a system’s MOE is less than 2?
2. Find the value of MOE that is exceeded by only 1% of the systems?
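A sketch of the two lognormal calculations (note that scipy parameterizes lognorm by the shape s = σ and scale = e^µ; illustration only):

```python
import math
from scipy.stats import lognorm

mu, sigma = 0.375, 0.25
moe = lognorm(s=sigma, scale=math.exp(mu))   # MOE ~ LN(mu, sigma^2)

print(moe.cdf(2.0))     # 1. P(MOE < 2)
print(moe.ppf(0.99))    # 2. MOE value exceeded by only 1% of systems
```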
3.5 The Weibull Distribution
The Weibull distribution is often used to model the time until failure of many different
physical systems. It is used in reliability models with time-dependent failures, where the
distribution can be used to model both increasing and decreasing failure rates.
Notation
Range
Probability Density Function
Cumulative Distribution Function
Parameters
Mean
Variance
Comments If β = 1, then X ~ expon(1/η)
The Weibull random variable is most often used to
represent elapsed time, especially time to failure of a unit
of equipment.
Example 3.12 The time to failure of a power supply is a Weibull random variable having
β = 2.0 and η = 1000.0 hours. The manufacturer sells a warranty such that only 5% of the
power supplies fail before the warranty expires. What is the time period of the warranty?
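The warranty period is the 5th percentile of the Weibull life distribution, which can be found by inverting the cdf F(t) = 1 − exp(−(t/η)^β); a sketch (illustration only):

```python
import math
from scipy.stats import weibull_min

beta, eta = 2.0, 1000.0     # shape and scale (hours)

# invert F(t) = 1 - exp(-(t/eta)**beta) = 0.05 by hand ...
t_manual = eta * (-math.log(1 - 0.05)) ** (1 / beta)

# ... or use the Weibull percent-point function
t_scipy = weibull_min.ppf(0.05, beta, scale=eta)

print(t_manual, t_scipy)    # both about 226.5 hours
```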
4 JOINT PROBABILITY DISTRIBUTIONS
Up to this point we have considered issues related to a single random variable. Now we
are going to consider situations in which we have two or more random variables that we
are interested in studying.
4.1 Two or more discrete random variables
Definition 4.1 The function f(x, y) is a joint probability distribution or probability
mass function of discrete random variables X and Y if
1.
2.
3.
Example 4.1 Let X denote the number of times a certain numerical control machine will
malfunction: 1, 2 or 3 times on a given day. Let Y denote the number of times a
technician is called on an emergency call. Their joint probability distribution is given as
f(x, y)      x = 1    x = 2    x = 3
y = 1        0.05     0.05     0.10
y = 2        0.05     0.10     0.35
y = 3        0.00     0.20     0.10
a) Find P(X<3, Y = 1)
b) Find the probability that the technician is called at least 2 times and the machine fails
no more than 1 time.
c) Find P(X > Y)
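A sketch that tallies the three probabilities directly from the joint pmf table above (the table is stored with keys (x, y); illustration only):

```python
# Joint pmf of Example 4.1, keyed by (x, y).
f = {(1, 1): 0.05, (2, 1): 0.05, (3, 1): 0.10,
     (1, 2): 0.05, (2, 2): 0.10, (3, 2): 0.35,
     (1, 3): 0.00, (2, 3): 0.20, (3, 3): 0.10}

print(sum(p for (x, y), p in f.items() if x < 3 and y == 1))   # a) P(X < 3, Y = 1)
print(sum(p for (x, y), p in f.items() if y >= 2 and x <= 1))  # b) P(Y >= 2, X <= 1)
print(sum(p for (x, y), p in f.items() if x > y))              # c) P(X > Y)
```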
When studying a joint probability distribution, we are also interested in the probability
distribution of each variable individually, which is referred to as the marginal probability
distribution.
Theorem 4.1 Let X and Y be discrete random variables having joint probability mass
function f(x, y). Let x1, x2, ... denote the possible values of X, and let y1, y2, ... denote the
possible values of Y. Let fx(x) denote the marginal probability mass function of X, and
let fy(y) denote the (marginal) probability mass function of Y. Then,
Example 4.2 Let X and Y be discrete random variables such that
f(1, 1) = 1/9 f(1, 2) = 1/6 f(1, 3) = 1/8
f(2, 1) = 1/18 f(2, 2) = 1/9 f(2, 3) = 1/9
f(3, 1) = 1/9 f(3, 2) = 1/9 f(3, 3) = 1/6
Find the marginal probability mass function of X and Y
Definition 4.2 The function f(x, y) is a joint probability density function of continuous
random variables X and Y if
1.
2.
3.
Example 4.3 A candy company distributes boxes of chocolates with a mixture of creams,
toffees, and nuts coated in both light and dark chocolates. For a randomly selected box,
let X and Y, respectively, be the proportions of the light and dark chocolates that are
creams and suppose that joint density function is
f(x, y) = (2/5)(2x + 3y), 0 ≤ x ≤ 1, 0 ≤ y ≤ 1
f(x, y) = 0, elsewhere
a) verify condition 2
c) Find P[(X, Y) ∈ A], where A = {(x, y) | 0 ≤ x ≤ 1/2, 1/4 ≤ y ≤ 1/2}
Theorem 4.2 Marginal Probability Density Function
Let X and Y be continuous random variables having joint probability density function f(x, y).
Let fx(x) denote the marginal probability density function of X, and let fy(y) denote
the (marginal) probability density function of Y. Then,
Example 4.4 Let X and Y be continuous random variables such that
f(x, y) = 0.75 e^(−0.3y)
Find the marginal probability density function of X and Y.
Theorem 4.3 The Law of the Unconscious Statistician
Let X and Y be discrete (continuous) random variables having joint probability mass
(density) function f(x,y). Let x1, x2, … denote the possible values of X, and let y1, y2, …
denote the possible values of Y. Let g(X,Y) be a real-valued function. Then
Example 4.5 Suppose X and Y are discrete random variables having joint probability
mass function f(x,y). Let x1, x2, … denote the possible values of X, and let y1, y2, … denote
the possible values of Y. What is E(X+Y)?
Theorem 4.4 Expectation of a Sum of Random Variables
Let X1, X2, …, Xn be random variables, and let a1, a2, …, an be constants. Then
Example 4.6 What is the E(3X – 2Y + 4)?
Theorem 4.5 Independent Discrete Random Variables Let X and Y be random
variables having joint probability mass function f(x, y). Let fx(x) denote the marginal
probability mass function of X, and let fy(y) denote the marginal probability mass
function of Y. Then X and Y are said to be independent iff
Theorem 4.6 Independent Continuous Random Variables Let X and Y be random
variables having joint probability density function f(x, y). Let fx(x) denote the marginal
probability density function of X, and let fy(y) denote the marginal probability density
function of Y. Then X and Y are said to be independent iff
Example 4.7 Consider Example 4.2. Are X and Y independent?
Example 4.8 Consider Example 4.4. Are X and Y independent?
Definition 4.3 Let X and Y be random variables. The covariance of X and Y is denoted as
Cov(X,Y) and given by
A positive covariance indicates that X tends to increase (decrease) as Y increases
(decreases). A negative covariance indicates that X tends to decrease (increase) as Y
increases (decreases).
Example 4.9 (Example 4.2 continued) Find the covariance of X and Y.
Theorem 4.7 Covariance of Independent Random Variables
Let X and Y be random variables. If X and Y are independent, then
Cov(X, Y) = 0.
Theorem 4.8 Variance of the Sum of Random Variables
Let X1, X2, … , XN be random variables. Then
Theorem 4.9 Variance of the Sum of Independent Random Variables
Let X1, X2, … , XN be independent random variables. Then
Definition 4.4 Let X and Y be two random variables. The correlation between X and Y is
denoted by ρxy and given by
Note that correlation and covariance have the same interpretation regarding the
relationship between the two variables. However, correlation does not have units and is
restricted to the range [−1, 1]. Therefore, the magnitude of the correlation provides some
idea of the strength of the relationship between the two random variables.
5 RANDOM SAMPLES, STATISTICS AND THE CENTRAL LIMIT THEOREM
Definition 5.1 Independent and identically distributed random variables X1, X2, …, Xn are called a random sample.
A randomly selected sample means that if a sample of n objects is selected, each subset
of size n is equally likely to be selected. If the number of objects in the population is
much larger than n, the random variables X1, X2, …, Xn that represent the observations
from the sample can be shown to be approximately independent random variables with
the same distribution.
Definition 5.2 A statistic is a function of the random variables in a random sample.
Given the data, we calculate statistics all the time, such as the sample mean X and the
sample standard deviation S. Each statistic has a distribution and it is the distribution that
determines how well it estimates a quantity such as μ.
We begin our discussions by focusing on a single random variable, X. To perform any
meaningful statistical analysis regarding X, we must have data.
Let X be some random variable of interest. A random sample on X consists of n
observations on X: x1, x2, … , xn. We assume that these observations are independent and
identically distributed. The value of n is referred to as the sample size.
Definition 5.3 Descriptive statistics refers to the process of collecting data on a random
variable and computing meaningful quantities (statistics) that characterize the underlying
probability distribution of the random variable.
There are three points of interest regarding this definition.
• Performing any type of statistical analysis requires that we collect data on one or
more random variables.
• A statistic is nothing more than a numerical quantity computed using collected
data.
• If we knew the probability distribution which governed the random variable of
interest, collecting data would be unnecessary.
Types of Descriptive Statistics
1. measures of central tendency
• sample mean (sample average)
• sample median
• sample mode (discrete random variables only)
2. measures of variability
• sample range
• sample variance
• sample standard deviation
• sample quartiles
Microsoft Excel has a Descriptive Statistics tool within its Data Analysis
ToolPak.
Computing the Sample Mean
• Most of your calculators have a built-in method for entering data and computing
the sample mean.
• Note the sample mean is a point estimate of the true mean of X. In other
words,
Computing the Sample Median
To compute the sample median, we first rank the data in ascending order and re-
number it: x(1), x(2), …. , x(n).
The sample median corresponds to the value that has 50% of the data above it and
50% of the data below it.
Computing the Sample Mode
The sample mode is the most frequently occurring value in the sample. It is
typically only of interest in sample data from a discrete random variable, because
sample data on a continuous random variable often does not have any repeated
values.
Computing the Sample Range
Computing the Sample Variance
• Why do we divide by n − 1? We divide by n − 1 because we have n − 1 degrees
of freedom. This refers to the fact that if we know the sample mean and n − 1
of the data values, we can compute the remaining data point.
• Note that the sample variance is a point estimate of the true variance. In
other words,
Computing the Sample Standard Deviation
• Note that the sample standard deviation is a point estimate of the true standard
deviation.
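A sketch of these descriptive statistics using Python's standard library; the data values below are made up purely for illustration.

```python
import statistics

data = [4.2, 3.9, 4.8, 4.1, 4.4, 3.7, 4.3]    # hypothetical sample, n = 7

print(statistics.mean(data))       # sample mean (point estimate of the true mean)
print(statistics.median(data))     # sample median
print(max(data) - min(data))       # sample range
print(statistics.variance(data))   # sample variance (divides by n - 1)
print(statistics.stdev(data))      # sample standard deviation
```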
Theorem 5.1. If X1, X2, …, Xn is a random sample of size n taken from a population with
mean μ and variance σ², and if X̄ is the sample mean, the limiting form of the distribution of

as n → ∞, is the standard normal distribution.
5.1 Populations and Random Samples
The field of statistical inference consists of those methods used to draw
conclusions about a population. These methods utilize the information contained in a
random sample of observations from the population.
Statistical inference may be divided into two major areas:
• parameter estimation
• hypothesis testing
Both of these areas require a random sample of observations from one or more
populations, therefore, we will begin our discussions by addressing the concepts of
random sampling.
Definition 5.4 A population consists of the totality of the observations with which
we are concerned.
• We almost always use a random variable/probability distribution to model the
behavior of a population.
Definition 5.5 The number of observations in the population is called the size of the
population.
• Populations may be finite or infinite. However, we can typically assume the
population is infinite.
• In some cases, a population is conceptual. For example, the population of items
to be manufactured is a conceptual population.
Definition 5.6 A sample is a subset of observations selected from a population.
• We model these observations using random variables.
• If our inferences are to be statistically valid, then the sample must be
representative of the entire population. In other words, we want to ensure that we
take a random sample.
Definition 5.7 The random variables X1, X2, … , Xn are a random sample of size n
if X1, X2, … , Xn are independent and identically distributed.
• After the data has been collected, the numerical values of the observations are
denoted as x1, x2, … , xn.
• The next step in statistical inference is to use the collected data to compute one or
more statistics of interest.
5.2 Point Estimates
Definition 5.8 A statistic, Θ̂, is any function of the observations in a random sample.
• In parameter estimation, statistics are used to estimate quantities of interest.
•
• The measures of central tendency and variability we considered in “Descriptive
Statistics” are all statistics.
•
Definition 5.9 A point estimate of some population parameter θ is a single
numerical value θ̂ of a statistic Θ̂.
•
Estimation problems occur frequently in engineering. The quantities that we will focus
on are:
• the mean µ of a population
• the standard deviation σ of a population
• the proportion p of items in a population that belong to a class of interest – p is the
probability of success for a Bernoulli trial
The point estimates that we use are:
•
•
•
5.3 Sampling Distributions
A statistic is a function of the observations in the random sample. These observations
are random variables, therefore, the statistic itself is a random variable. All random
variables have probability distributions.
Definition 5.10 The probability distribution of a statistic is called a sampling
distribution.
• The sampling distribution of a statistic depends on the probability distribution
which governs the entire population, the size of the random sample, and the
method of sample selection.
Theorem 5.3 The Sampling Distribution of the Mean
If X1, X2, … , Xn are IID N(µ,σ2) random variables, then the sample mean is a
normal random variable having mean and variance
.
Thus, if we are sampling from a normal population then the sampling distribution of the
mean is normal. But what if we are not sampling from a normal population?
Theorem 5.4 The Central Limit Theorem
If X1, X2, … , Xn is a random sample of size n taken from a population with mean
µ and variance σ2, then as n → ∞,
is a standard normal random variable.
• The quality of the normal approximation depends on the true probability
distribution governing the population and the sample size.
• For most cases of practical interest, n ≥ 30 ensures a relatively good
approximation.
• If n < 30, then the underlying probability distribution must not be severely
non-normal.
Example 5.1 A plastics company produces cylindrical tubes for various industrial
applications. One of their production processes is such that the diameter of a tube is
normally distributed with a mean of 1 inch and a standard deviation of 0.02 inch.
(a) What is the probability that a single tube has a diameter of more than
1.015 inches?
X = diameter of a tube (measured in inches) ~ N( )
(b) What is the probability that the average diameter of five tubes is more than
1.015 inches?
n= X = average diameter ~ N( )
(c) What is the probability that the average diameter of 25 tubes is more than
1.015 inches?
n= X = average diameter ~ N( )
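A sketch of parts (a)–(c), using the fact that the average of n IID N(µ, σ²) observations is N(µ, σ²/n) (illustration only):

```python
import math
from scipy.stats import norm

mu, sigma = 1.0, 0.02    # tube diameter, inches

def p_exceeds(n):
    """P(average diameter of n tubes > 1.015) when X-bar ~ N(mu, sigma^2 / n)."""
    return 1 - norm.cdf(1.015, mu, sigma / math.sqrt(n))

print(p_exceeds(1))      # (a) a single tube
print(p_exceeds(5))      # (b) average of 5 tubes
print(p_exceeds(25))     # (c) average of 25 tubes
```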
Example 5.2 The life length of an electronic component, T, is exponentially
distributed with a mean of 10,000 hours.
(a) What is the probability that a single component lasts more than 7500
hours?
(b) What is the probability that the average life length for 200 components is
more than 9500 hours?
E(T) = hours
σT = hours
Note that .
(c) What is the probability that the average life length for 10 components is
more than 9500 hours?
n is too small to use the CLT approximation
Note that T̄ = (T1 + T2 + … + T10)/10 .
If we had tried to use the CLT:
Now consider the case in which we are interested in studying two independent
populations. Let the first population have mean µ1 and standard deviation σ1, and let the
second population have mean µ2 and standard deviation σ2.
If we are interested in comparing the two means, then the obvious point estimate of
interest is
µ̂1 − µ̂2 = X̄1 − X̄2.
What is the sampling distribution of this statistic?
Theorem 5.5 The Sampling Distribution of the Difference in Two Means
If we have two independent populations with means µ1 and µ2 and standard
deviations σ1 and σ2, and if a random sample of size n1 is taken from the first population
and a random sample of size n2 is taken from the second population, then the sampling
distribution of
is standard normal as n1 and n2 → ∞. If the two populations are normal, then the
sampling distribution of Z is exactly standard normal.
• Again, the approximation is relatively accurate if n1 ≥ 30 and n2 ≥ 30.
Example 5.3 The life length of batteries produced by Battery Manufacturer A is a
continuous random variable having a mean of 1500 hours and a standard deviation of 100
hours. The life length of batteries produced by Battery Manufacturer B is a continuous
random variable having a mean of 1400 hours and a standard deviation of 200 hours.
(a) Suppose 50 batteries of each type are tested. What is the probability that Battery
Manufacturer A’s sample average life length exceeds Battery Manufacturer B’s
by more than 75 hours?
(b) How would your answer change if only 12 batteries of each type were tested?
There is not enough information to answer the question. If we assume normality,
then we could proceed.
5.4 Confidence Intervals
A point estimate provides only a single number for drawing conclusions about a
parameter. And if another random sample were selected, this point estimate would
almost certainly be different. In fact, this difference could be drastic.
For this reason, a point estimate typically does not supply adequate information to
an engineer. In such cases, it may be possible and useful to construct a confidence
interval which expresses the degree of uncertainty associated with a point estimate.
Definition 5.11 If θ is the parameter of interest, then the point estimate and
sampling distribution of θ can be used to identify a 100(1 − α )% confidence interval on
θ. This interval is of the form:
L and U are called the lower-confidence limit and upper-confidence limit. If L and U
are constructed properly, then
.
The quantity (1 − α) is called the confidence coefficient.
• The confidence coefficient is a measure of the accuracy of the confidence
interval. For example, if a 90% confidence interval is constructed, then the
probability that the true value of θ is contained in the interval is 0.9.
• The length of the confidence interval is a measure of the precision of the point
estimate. A general rule of thumb is that increasing the sample size improves the
precision of a point estimate.
Confidence intervals are closely related to hypothesis testing. Therefore, we will address
confidence intervals within the context of hypothesis testing.
6 FORMULATING STATISTICAL HYPOTHESES
For many engineering problems, a decision must be made as to whether a particular
statement about a population parameter is true or false. In other words, we must either
accept the statement as being true or reject the statement as being false.
Example 6.1 Consider the following statements regarding the population of engineering
students at the Philadelphia University.
1. The average GPA is 3.0.
2. The standard deviation of age is 5 years.
3. 30% are afraid to fly
4. The average age of mothers is the same as the average age of fathers.
Definition 6.1 A statistical hypothesis is a statement about the parameters of one or
more populations.
• It is worthwhile to note that a statistical hypothesis is a statement about the
underlying probability distributions, not the sample data.
Example 6.2 (Ex. 6.1 continued) Convert each of the statements into a statistical
hypothesis.
1.
2.
3.
4.
To perform a test of hypotheses, we must have a contradictory statement about the
parameters of interest.
Example 6.3 Consider the following contradictory statements.
1. No, it’s more than that.
2. No, it’s not.
3. No, it’s less than that.
4. No, fathers are older.
The result of our original statement and our contradictory statement is a set of two
hypotheses.
Example 6.4 (Ex. 6.1 continued) Combine the two statements for each of the examples.
1.
2.
3.
4.
Our original statement is referred to as the null hypothesis (H0).
• The value specified in the null hypothesis may be a previously established value
(in which case we are trying to detect changes to that value), a theoretical value
(in which case we are trying to verify the theory), or a design specification (in
which case we are trying to determine if the specification has been met).
The contradictory statement (H1) is referred to as the alternative hypothesis.
• Note that an alternative hypothesis can be one-sided (1, 3, 4) or two-sided (2).
• The decision as to whether the alternative hypothesis should be one-sided or two-
sided depends on the problem of interest.
Type I Error
Rejecting the null hypothesis H0 when it is true
For example, suppose the true mean GPA in statement 1 of Example 6.1 is 3.0. However, for the randomly selected
sample we could observe that the test statistic x̄ falls into the critical region. Therefore,
we would reject the null hypothesis in favor of the alternative hypothesis H1.
Type II Error
Failing to reject the null hypothesis when it is false.
6.1 Performing a Hypothesis Test
Definition 6.2 A procedure leading to a decision about a particular null and
alternative hypothesis is called a hypothesis test.
• Hypothesis testing involves the use of sample data on the population(s) of
interest.
• If the sample data is consistent with a hypothesis, then we “accept” that
hypothesis and conclude that the corresponding statement about the population is
true.
• We “reject” the other hypothesis, and conclude that the corresponding statement
is false. However, the truth or falsity of the statements can never be known with
certainty, so we need to define our procedure so that we limit the probability of
making an erroneous decision.
• The burden of proof is placed upon the alternative hypothesis.
Basic Hypothesis Testing Procedure
A random sample is collected on the population(s) of interest, a test statistic is computed
based on the sample data, and the test statistic is used to make the decision to either
accept (some people say “fail to reject”) or reject the null hypothesis.
Example 6.5 A manufactured product is used in such a way that its most important
dimension is its width. Let X denote the width of a manufactured product. Suppose
historical data suggests that X is a normal random variable having σ = 4 cm. However,
the mean can change due to fluctuations in the manufacturing process. Therefore, we
wish to perform the following hypothesis test.
H0:
H1:
The following procedure has been proposed.
Inspect a random sample of 25 products. Measure the width of each product. If the
sample mean is less than 188 cm or more than 192 cm, reject H0.
For the proposed procedure, identify the following:
(a) sample size
(b) test statistic
(c) critical region
(d) acceptance region
Is the procedure defined in Ex. 6.5 a good procedure? Since we are only taking a random
sample, we cannot guarantee that the results of the hypothesis test will lead to us making
the correct decision. Therefore, the question “Is this a good procedure?” can be broken
down into two additional questions.
1. If the null hypothesis is true, what is the probability that we accept H0?
2. If the null hypothesis is not true, what is the probability that we accept H0?
Example 6.6 (Ex. 6.5 continued) If the null hypothesis is true, what is the probability
that we accept H0?
note assumptions
Therefore, if the null hypothesis is true, then there is a 98.76% chance that we will make
the correct decision. However, that also means that there is a 1.24% chance that we will
make the incorrect decision (reject H0 when H0 is true).
• Such a mistake is called a Type I error, or a false positive.
• α = P(Type I error) = level of significance
• In our example, α = 0.0124. When constructing a hypothesis test, we get to
specify α.
If the null hypothesis is not true (i.e. the alternative hypothesis is true), then accepting H0
would be a mistake.
• Accepting H0 when H0 is false is called a Type II error, or a false negative.
•
•
• Unfortunately, we can’t answer this question (find a value for β) in general. Since
the alternative hypothesis is µ ≠ 190 cm, there are an uncountable number of
situations in which the alternative hypothesis is true.
• We must identify specific situations of interest and analyze each one individually.
Example 6.7 (Ex. 6.5 continued) Find the probability of a Type II error when µ = 189 cm
and µ = 193 cm.
For µ = 189 cm:
For µ = 193 cm:
Note that as µ moves away from the hypothesized value (190 cm), β decreases.
If we experiment with other sample sizes and critical/acceptance regions, we will see that
the values of α and β can change significantly. However, there are some general “truths”
for hypothesis testing.
1. We can explicitly control α (given that the underlying assumptions are true).
2. Type I and Type II error are inversely related.
3. Increasing the sample size is the only way to simultaneously reduce α and β.
4. We can only control β for one specific situation.
Since we can explicitly control α, the probability of a Type I error, rejecting H0 is a
strong conclusion. However, we can only control Type II errors in a very limited
fashion. Therefore, accepting H0 is a weak conclusion. In fact, many statisticians use
the terminology, “fail to reject H0” as opposed to “accept H0.”
• Since “reject H0” is a strong conclusion, we should put the statement about which
it is important to make a strong conclusion in the alternative hypothesis.
Example 6.8 How would the procedure change if we wished to perform the following
hypothesis test?
H0: µ ≥ 190 cm
H1: µ < 190 cm
Proposed hypothesis testing procedure: Inspect a random sample of 25 observations on
the width of a product. If the sample mean is less than 188 cm, reject H0.
6.1.1 Generic Hypothesis Testing Procedure
All hypothesis tests have a common procedure. The textbook identifies eight steps in this
procedure.
1. From the problem context and assumptions, identify the parameter of interest.
2. State the null hypothesis, H0.
3. Specify an appropriate alternative hypothesis, H1.
4. Choose a significance level α.
5. State an appropriate test statistic.
6. State the critical region for that statistic.
7. Collect a random sample of observations on the random variable (or from the
population) of interest, and compute the test statistic.
8. Compare the test statistic value to the critical region and decide whether or not to
reject H0.
6.2 Performing Hypothesis Tests on µ when σ is Known
In this section, we consider making inferences about the mean µ of a single population
where the population standard deviation σ is known.
• We will assume that a random sample X1, X2, … , Xn has been taken from the
population.
• We will also assume that either the population is normal or the conditions of the
Central Limit Theorem apply.
Suppose we wish to perform the following hypothesis test.
It is somewhat obvious that inferences regarding µ would be based on the value of the
sample mean. However, it is usually more convenient to standardize the sample mean.
Using what we know about the sampling distribution of the mean, it is reasonable to
conclude that the test statistic will be
If the null hypothesis is true, then the test statistic is a standard normal random variable.
Therefore, we only reject the null hypothesis if the value of Z0 is unusual for an
observation on a standard normal random variable.
Specifically, we reject H0 if:
where α is the specified level of significance. The acceptance region is therefore
Obviously, the acceptance and critical regions can be converted to expressions in terms of
the sample mean.
Reject H0 if X > a or X < b where
Example 6.9 Let X denote the GPA of an engineering student at the Philadelphia
University. It is widely known that, for this population, σ = 0.5. The population mean is
not widely known, however, it is commonly believed that the average GPA is 3.0. We
wish to test this hypothesis using a sample of size 25 and a level of significance of 0.05.
(a) Identify the null and alternative hypotheses.
(b) List any required assumptions.
(c) Identify the test statistic and the critical region.
Reject H0 if
(d) Suppose 25 students are sampled and the sample average GPA is 3.18. State and
interpret the conclusion of the test.
Z0 =
(e) What is the probability of a Type I error for this test?
(f) How would the results change if we had used α = 0.10?
Critical region changes.
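A sketch of the two-sided z-test for parts (d)–(f), including the P-value discussed in the next section (the observed sample mean 3.18 is from part (d); illustration only):

```python
import math
from scipy.stats import norm

mu0, sigma, n = 3.0, 0.5, 25
xbar = 3.18

z0 = (xbar - mu0) / (sigma / math.sqrt(n))    # test statistic, equals 1.8

for alpha in (0.05, 0.10):
    z_crit = norm.ppf(1 - alpha / 2)          # 1.96 or 1.645
    print(alpha, abs(z0) > z_crit)            # reject H0? False at 0.05, True at 0.10

print(2 * (1 - norm.cdf(abs(z0))))            # two-sided P-value, about 0.072
```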
We may also modify this procedure if the test is one-sided. This modification only
requires a change in the critical/acceptance regions. If the alternative hypothesis is
then a negative value of Z0 would not indicate a need to reject H0. Therefore, we
only reject H0 if
Likewise, if the alternative hypothesis is
Example 6.10 The Glass Bottle Company (GBC) manufactures brown glass beverage
containers that are sold to breweries. One of the key characteristics of these bottles is
their volume. GBC knows that the standard deviation of volume is 0.08 oz. They wish to
ensure that the mean volume is not more than 12.2 oz using a sample size of 30 and a
level of significance of 0.01.
(a) Identify the null and alternative hypotheses.
(b) Identify the test statistic and the critical region.
(c) Suppose 30 bottles are measured and the sample mean is 12.23. State and
interpret the conclusion of the test.
6.2.1 Computing P-Values
We have already seen that the choice of the value for the level of significance can
impact the conclusions derived from a test of hypotheses. As a result, we may be
interested in answering the question: How close did we come to making the opposite
conclusion? We answer this question using an equivalent decision approach that can be
used as an alternative to the critical/acceptance regions. This approach is called the P-
value approach.
Definition 6.3 The P-value for a hypothesis test is the smallest level of significance that
would lead to rejection of the null hypothesis.
How we compute the P-value depends on the form of the alternative hypothesis.
We reject H0 if
Example 6.11 (Ex. 6.9 continued) Compute the P-value for the test.
Note when α = 0.05,
But, when α = 0.10,
Example 6.12 (Ex. 6.10 continued) Compute the P-value for the test.
Since α = 0.01, P > α
6.2.2 Type II Error
In hypothesis testing, we get to specify the probability of a Type I error ( α). However,
the probability of a Type II error (β) depends on the choice of sample size (n).
Consider first the case in which the alternative hypothesis is H1: µ ≠ µ0.
Before we can proceed, we must be more specific about “H0 is false”. We will
accomplish this by saying:
where δ ≠ 0.
β = P( −Zα/2 ≤ (X̄ − µ0)/(σ/√n) ≤ Zα/2 | µ = µ0 + δ )
 = P( µ0 − Zα/2 σ/√n ≤ X̄ ≤ µ0 + Zα/2 σ/√n | µ = µ0 + δ )
 = P( (µ0 − Zα/2 σ/√n − (µ0 + δ))/(σ/√n) ≤ Z ≤ (µ0 + Zα/2 σ/√n − (µ0 + δ))/(σ/√n) )
 = P( −Zα/2 − δ√n/σ ≤ Z ≤ Zα/2 − δ√n/σ )
If the alternative hypothesis is H1: µ > µ0, then
.
If the alternative hypothesis is H1: µ < µ0.
.
Example 6.13 (Ex. 6.9 continued) Let X denote the GPA of an engineering student at
the Philadelphia University. It is widely known that, for this population, σ = 0.5. The
population mean is not widely known, however, it is commonly believed that the average
GPA is 3.0. We wish to test this hypothesis using a sample of size 25 and a level of
significance of 0.05. In Example 6.9, we formulated this hypothesis test as
The corresponding test statistic and critical region are given by
Z0 = (X̄ − 3.0) / (0.5/√25)
Reject H0 if Z0 < −Zα/2 = −Z0.025 = −1.96 or if Z0 > Zα/2 = 1.96
(a) If µ = 3.2, what is the Type II error probability for this test?
δ = µ − µ0 =
β = P( − 3.96 ≤ Z ≤ −0.04 ) = 0.4840
(b) If µ = 2.68, what is the Type II error probability for this test?
δ = µ − µ0 =
β = P(1.24 ≤ Z ≤ 5.16 ) = 0.1075
(c) If µ = 2.68, what is the power of the test?
power =
(d) If µ = 3.32, what is the power of the test?
power = 0.8925
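A sketch reproducing the β and power values in parts (a)–(d) from the expression β = P(−Zα/2 − δ√n/σ ≤ Z ≤ Zα/2 − δ√n/σ) derived above (illustration only):

```python
import math
from scipy.stats import norm

sigma, n, alpha = 0.5, 25, 0.05
z_half = norm.ppf(1 - alpha / 2)      # 1.96

def beta(delta):
    """Type II error probability of the two-sided z-test when mu = mu0 + delta."""
    shift = delta * math.sqrt(n) / sigma
    return norm.cdf(z_half - shift) - norm.cdf(-z_half - shift)

print(beta(0.20))        # (a) mu = 3.20 -> beta ~ 0.4840
print(beta(-0.32))       # (b) mu = 2.68 -> beta ~ 0.1075
print(1 - beta(-0.32))   # (c) power at mu = 2.68 ~ 0.8925
print(1 - beta(0.32))    # (d) power at mu = 3.32 ~ 0.8925
```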
Example 6.14 (Ex. 6.10 continued) The Glass Bottle Company (GBC) manufactures
brown glass beverage containers that are sold to breweries. One of the key characteristics
of these bottles is their volume. GBC knows that the standard deviation of volume is
0.08 oz. They wish to ensure that the mean volume is not more than 12.2 oz using a
sample size of 30 and a level of significance of 0.01. In Example 6.10, we formulated this
hypothesis test as
H0: µ ≤ 12.2
H1: µ > 12.2
The corresponding test statistic and critical region are given by
Reject H0 if
(a) If µ = 12.27 oz, what is the Type II error probability for this test?
δ = µ − µ0 = 0.07
β = P( Z ≤ 2.3263 − 0.07√30/0.08 ) = P( Z ≤ −2.47 ) = 0.0068
(b) If µ = 12.15 oz, what is the Type II error probability for this test?
This is a poor question. If µ = 12.15 oz, then “technically” the null hypothesis is
true. If we are truly concerned with detecting this, we should have used a two-
sided alternative hypothesis.
6.2.3 Choosing the Sample Size
The expressions for β allow the determination of an appropriate sample size. To choose
the proper sample size for our test, we must specify a value of β for a specified value of
δ.
For the case in which H1: µ ≠µ0, the symmetry of the test allows us to always specify a
positive value of δ. If we specify a relatively small value of β (≤ 0.1), then the lower side
of the equation becomes negligible. So, the equation for β reduces to:
β = P( Z ≤ Zα/2 − δ√n/σ )
This yields:
Z1−β = −Zβ = Zα/2 − δ√n/σ
δ√n/σ = Zα/2 + Zβ
n = (Zα/2 + Zβ)² σ² / δ²
For both cases in which the alternative hypothesis is one-sided:
n = (Zα + Zβ)² σ² / δ²
Example 6.15 (Ex. 6.9 continued) Let X denote the GPA of an engineering student at
the Philadelphia University. It is widely known that, for this population, σ = 0.5. The
population mean is not widely known, however, it is commonly believed that the average
GPA is 3.0. We wish to test this hypothesis using a sample of size n and a level of
significance of 0.05. In Example 6.9, we formulated this hypothesis test as
H0: µ = 3.0
H1: µ ≠ 3.0
The corresponding test statistic and critical region are given by
Z0 = (X̄ − 3.0) / (0.5/√n)
Reject H0 if Z0 < −Zα/2 = −Z0.025 = −1.96 or if Z0 > Zα/2 = 1.96
(a) If we want β = 0.10 at µ = 3.2, what sample size should we use?
δ = 0.2
n = (Z0.025 + Z0.10)² ⋅ 0.5² / 0.2² = (1.96 + 1.282)² ⋅ 0.5² / 0.2² = 65.7
n = 66
(b) If we want β = 0.10 at µ = 3.25, what sample size should we use?
δ = 0.25
n = (Z0.025 + Z0.10)² ⋅ 0.5² / 0.25² = (1.96 + 1.282)² ⋅ 0.5² / 0.25² = 42.04
n = 43
(c) If we want β = 0.05 at µ = 3.2, what sample size should we use?
δ = 0.2
n = (Z0.025 + Z0.05)² ⋅ 0.5² / 0.2² = (1.96 + 1.645)² ⋅ 0.5² / 0.2² = 81.2
n = 82
Example 6.15 (Ex. 6.11 continued) The Glass Bottle Company (GBC) manufactures
brown glass beverage containers that are sold to breweries. One of the key characteristics
of these bottles is their volume. GBC knows that the standard deviation of volume is
0.08 oz. They wish to ensure that the mean volume is not more than 12.2 oz using a
sample size of n and a level of significance of 0.01. In Example 6.11, we formulated this
hypothesis test as
H0: µ ≤ 12.2
H1: µ > 12.2
The corresponding test statistic and critical region are given by
Z0 = (X̄ − 12.2) / (0.08/√n)
Reject H0 if Z0 > Zα = Z0.01 = 2.3263
If we wish to have a test power of 0.95 at µ = 12.25 oz, what is the required sample size
for this test?
δ = 0.05
β = 0.05
n = (Z0.01 + Z0.05)² ⋅ 0.08² / 0.05² = (2.326 + 1.645)² ⋅ 0.08² / 0.05² = 40.4
n = 41
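Using the hypothetical sample_size helper sketched in Section 6.2.3, the remaining results above can be reproduced:

```python
# Example 6.14(b) and (c): two-sided test, alpha = 0.05
print(sample_size(delta=0.25, sigma=0.5, alpha=0.05, beta=0.10))                     # 43
print(sample_size(delta=0.2,  sigma=0.5, alpha=0.05, beta=0.05))                     # 82
# Example 6.15: one-sided test, alpha = 0.01, power 0.95 (beta = 0.05) at mu = 12.25
print(sample_size(delta=0.05, sigma=0.08, alpha=0.01, beta=0.05, two_sided=False))   # 41
```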
6.3 Statistical Significance
A hypothesis test is a test for statistical significance. When we reject H0, we are
stating that the data indicate a statistically significant difference between the true mean
and the hypothesized value of the mean. When we fail to reject H0, we are stating that
there is not a statistically significant difference. Statistical significance and practical
significance are not the same thing. This is especially important to recognize when the
sample size is large.
6.3.1 Introduction to Confidence Intervals
As we have previously discussed, the sample mean is the most often used point
estimate for the population mean. However, we also pointed out that two different
samples would most likely result in two different sample means. Therefore, we define
confidence intervals as a means of quantifying the uncertainty in our point estimate.
If θ is the parameter of interest, then the point estimate of θ and its sampling distribution
can be used to identify a 100(1 − α)% confidence interval on θ. This interval is of the
form:
L ≤ θ ≤ U.
L and U are called the lower-confidence limit and upper-confidence limit.
If L and U are constructed properly, then
P(L ≤ θ ≤ U) = 1 − α.
The quantity (1 − α) is called the confidence coefficient. The confidence coefficient is a
measure of the accuracy of the confidence interval. For example, if a 90% confidence
interval is constructed, then the probability that the interval contains the true value of θ is
0.9; that is, over repeated samples, 90% of the intervals constructed this way will contain θ.
The length of the confidence interval is a measure of the precision of the point
estimate. A general rule of thumb is that increasing the sample size improves the
precision of a point estimate.
6.3.2 Confidence Interval on µ when σ is Known
We can use what we have learned to construct a 100(1 − α )% confidence
interval on the mean, assuming that (a) the population standard deviation is known, and
(b) the population is normally distributed (or the conditions of the Central Limit Theorem
apply).
P(−Zα/2 ≤ Z ≤ Zα/2) = 1 − α
P(−Zα/2 ≤ (X̄ − µ)/(σ/√n) ≤ Zα/2) = 1 − α
P(X̄ − Zα/2 ⋅ σ/√n ≤ µ ≤ X̄ + Zα/2 ⋅ σ/√n) = 1 − α
Such a confidence interval is called a two-sided confidence interval. We can also
construct one-sided confidence intervals for the same set of assumptions (σ known,
normal population or Central Limit Theorem conditions apply).
The 100(1 − α)% upper-confidence interval is given by
P(µ ≤ X̄ + Zα ⋅ σ/√n) = 1 − α
and the 100(1 − α)% lower-confidence interval is given by
P(µ ≥ X̄ − Zα ⋅ σ/√n) = 1 − α.
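These three interval forms can be computed in a few lines. The following is a minimal sketch of my own (not from the notes), assuming Python with SciPy; the helper name z_confidence_interval is hypothetical.

```python
from math import sqrt
from scipy.stats import norm

def z_confidence_interval(xbar, sigma, n, conf=0.95, kind="two-sided"):
    """Confidence interval on mu with sigma known (normal population or CLT conditions)."""
    alpha = 1 - conf
    if kind == "two-sided":
        half = norm.ppf(1 - alpha / 2) * sigma / sqrt(n)
        return xbar - half, xbar + half
    half = norm.ppf(1 - alpha) * sigma / sqrt(n)
    if kind == "upper":                        # one-sided upper bound: mu <= U
        return float("-inf"), xbar + half
    return xbar - half, float("inf")           # one-sided lower bound: mu >= L
```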
Example 6.16 Let X denote the GPA of an engineering student at Philadelphia
University. It is widely known that, for this population, σ = 0.5. The population mean is
not widely known; however, we have collected a sample of size 25 from the population.
The resulting sample mean was 3.18.
(a) What assumptions, if any, are required to use this data to construct a confidence
interval on the mean GPA?
GPA is normally distributed.
(b) Construct a 95% confidence interval on µ and interpret its meaning.
X̄ ± Z0.025 ⋅ σ/√n = 3.18 ± 1.96 ⋅ 0.5/√25
2.984 ≤ µ ≤ 3.376
P( 2.984 ≤ µ ≤ 3.376 ) = 0.95
(c) Construct a 99% confidence interval on µ and compare it to the confidence
interval obtained in part (b).
X̄ ± Z0.005 ⋅ σ/√n = 3.18 ± 2.58 ⋅ 0.5/√25
2.922 ≤ µ ≤ 3.438
The 99% interval is more accurate (higher confidence level), but less precise (wider), than the 95% interval.
(d) Construct a 95% upper-confidence interval on µ and interpret its meaning.
X̄ + Z0.05 ⋅ σ/√n = 3.18 + 1.645 ⋅ 0.5/√25
µ ≤ 3.3445
P( µ ≤ 3.3445) = 0.95
(e) Construct a 95% lower-confidence interval on µ and interpret its meaning.
X̄ − Z0.05 ⋅ σ/√n = 3.18 − 1.645 ⋅ 0.5/√25
µ ≥ 3.0155
P( µ ≥ 3.0155) = 0.95
(f) Combine the two confidence intervals obtained in parts (d) and (e). Is this
confidence interval superior to the one constructed in part (b)?
3.0155 ≤ µ ≤ 3.3445
No, it is only a 90% confidence interval.
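The intervals in parts (b) through (e) can be reproduced with the hypothetical z_confidence_interval helper sketched above:

```python
print(z_confidence_interval(3.18, 0.5, 25, 0.95))                  # about (2.984, 3.376)
print(z_confidence_interval(3.18, 0.5, 25, 0.99))                  # about (2.922, 3.438); the notes round Z0.005 to 2.58
print(z_confidence_interval(3.18, 0.5, 25, 0.95, kind="upper"))    # (-inf, about 3.3445)
print(z_confidence_interval(3.18, 0.5, 25, 0.95, kind="lower"))    # (about 3.0155, inf)
```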
6.3.3 Choosing the Sample Size for a Confidence Interval on µ when σ is Known
The confidence level (percentage) of a confidence interval is a measure of the accuracy
of the confidence interval.
The half-width of the confidence interval, E, is a measure of the precision of the
confidence interval. For a two-sided confidence interval, E = (U − L)/2. For an upper-
confidence interval, E = U − θ̂, and for a lower-confidence interval, E = θ̂ − L, where θ̂
denotes the point estimate of θ.
For a given level of accuracy (α), we can control the precision of the confidence
interval using the sample size. For the two-sided confidence interval on µ, we specify a
value of E and note that:
E = Zα/2 ⋅ σ/√n.
Then, we can solve for n.
n = (Zα/2 ⋅ σ / E)²
For the one-sided confidence intervals:
n = (Zα ⋅ σ / E)².
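A small sketch of this calculation (my own, assuming Python with SciPy; the name ci_sample_size is hypothetical):

```python
from math import ceil
from scipy.stats import norm

def ci_sample_size(sigma, E, conf=0.95, two_sided=True):
    """Smallest n so the confidence-interval half-width is at most E (sigma known)."""
    alpha = 1 - conf
    z = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    return ceil((z * sigma / E) ** 2)

print(ci_sample_size(0.5, 0.1, 0.95))                    # 97, as in Example 6.17(a) below
print(ci_sample_size(0.5, 0.1, 0.95, two_sided=False))   # 68, as in Example 6.17(b) below
```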
Example 6.17 (Ex. 6.16 continued)
(a) If we wish to construct a 95% confidence interval on µ that has a half-width of
0.1, how many students should we survey?
n = (Z0.025 ⋅ σ / E)² = (1.96 ⋅ 0.5 / 0.1)² = 96.04
n = 97
(b) If we wish to construct a 95% upper-confidence interval on µ that has a half-
width of 0.1, how many students should we survey?
n = (Z0.05 ⋅ σ / E)² = (1.645 ⋅ 0.5 / 0.1)² = 67.65
n = 68
(c) If we wish to construct a 90% confidence interval on µ that has a half-width of
0.1, how many students should we survey?
n = (Z0.05 ⋅ σ / E)² = (1.645 ⋅ 0.5 / 0.1)² = 67.65
n = 68
6.3.4 Using Confidence Intervals to Perform Hypothesis Tests on µ when σ is
Known
Thus far, we have considered two methods of evaluating hypothesis tests: critical
regions and P-values. A third, equivalent method is to use a confidence interval.
1. Specify: µo, α, n
2. If H1: µ ≠ µo, construct a 100(1 − α)% confidence interval on µ.
If H1: µ > µo, construct a 100(1 − α)% lower-confidence interval on µ.
If H1: µ < µo, construct a 100(1 − α)% upper-confidence interval on µ.
3. Reject H0 if µ0 is not contained in that confidence interval (see the sketch below).
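The following sketch implements this three-step procedure, reusing the hypothetical z_confidence_interval helper from the Section 6.3.2 sketch; the function name test_via_ci and the alternative labels are my own choices, not from the notes.

```python
def test_via_ci(mu0, xbar, sigma, n, alpha, alternative="two-sided"):
    """Decide a z-test by checking whether mu0 lies in the matching confidence interval."""
    kind = {"two-sided": "two-sided", "greater": "lower", "less": "upper"}[alternative]
    low, high = z_confidence_interval(xbar, sigma, n, conf=1 - alpha, kind=kind)
    return "fail to reject H0" if low <= mu0 <= high else "reject H0"
```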
Example 6.17 (Ex. 6.10 continued) Let X denote the GPA of an engineering student at
Philadelphia University. It is widely known that, for this population, σ = 0.5. The
population mean is not widely known; however, it is commonly believed that the average
GPA is 3.0. We wish to test this hypothesis using a sample of size 25 and a level of
significance of 0.05.
From Ex. 6.10:
H0: µ = 3.0
H1: µ ≠ 3.0
Suppose the sample mean is 3.18. Use a confidence interval to evaluate the hypothesis
test.
α = 0.05 and H1: µ ≠ 3.0, so construct a 95% (two-sided) confidence interval.
From Ex. 6.16:
2.984 ≤ µ ≤ 3.376
3.0 is in the confidence interval
fail to reject H0
Example 6.18 (Ex. 6.11 continued) The Glass Bottle Company (GBC) manufactures
brown glass beverage containers that are sold to breweries. One of the key characteristics
of these bottles is their volume. GBC knows that the standard deviation of volume is
0.08 oz. They wish to ensure that the mean volume is not more than 12.2 oz using a
sample size of 30 and a level of significance of 0.01.
From Ex. 6.11:
H0: µ ≤ 12.2
H1: µ > 12.2
Suppose the sample mean is 12.23. Use a confidence interval to evaluate the hypothesis
test.
α = 0.01 and H1: µ > 12.2, so construct a 99% lower-confidence interval.
X̄ − Z0.01 ⋅ σ/√n = 12.23 − 2.3263 ⋅ 0.08/√30
µ ≥ 12.1960
12.2 is in the confidence interval
fail to reject H0
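Both of these confidence-interval tests can be reproduced with the hypothetical test_via_ci sketch from Section 6.3.4:

```python
print(test_via_ci(3.0, 3.18, 0.5, 25, 0.05))                             # fail to reject H0
print(test_via_ci(12.2, 12.23, 0.08, 30, 0.01, alternative="greater"))   # fail to reject H0
```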
6.4 Hypothesis Tests on µ when σ is Unknown
What if σ is Unknown?
Suppose we are interested in studying the mean of a population, but we do not
know the value of the population standard deviation σ.
• We can use the procedures defined in section 2.3 and replace σ with S, provided
that the sample size is large (n ≥ 30).
• When the sample size is small and σ is unknown, then we must assume that the
population is normally distributed.
The t Distribution
Suppose we wish to perform the following hypothesis test.
H0: µ = µ0
H1: µ ≠ µ0
Suppose we have collected a random sample of size n and that we have used this
sample data to compute the sample mean X̄ and the sample standard deviation S.
If σ were known then we would compute the test statistic:
Z0 = (X̄ − µ0) / (σ/√n).
Therefore, a logical approach is to replace σ with S. The resulting test statistic is:
T0 = (X̄ − µ0) / (S/√n).
Before we can proceed, we should analyze the sampling distribution of this test statistic.
Theorem 6.1 The t Distribution
Let X1, X2, … , Xn be a random sample from a normal population having mean µ.
The quantity
T = (X̄ − µ) / (S/√n)
has a t distribution with n − 1 degrees of freedom.
While we won’t discuss the details of the t distribution, it is important to recognize
two points regarding the t probability density function.
• First, it is symmetric about 0.
• Second, as the number of degrees of freedom increases, the t distribution
approaches the standard normal distribution. This explains why it is OK to use
the procedures from section 2.3 when n ≥ 30 (at 29 degrees of freedom there is
little difference between t and Z; the short numerical comparison below illustrates this).
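A minimal check of that second point (my own sketch, assuming Python with SciPy):

```python
from scipy.stats import norm, t

# Upper-2.5% critical values: t with 29 degrees of freedom vs. the standard normal
print(round(t.ppf(0.975, df=29), 4))   # about 2.0452
print(round(norm.ppf(0.975), 4))       # about 1.96
```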
Example 6.19 Suppose T has a t distribution with 7 degrees of freedom. Find the
following:
(a) P(T > 2.365)
Using the Excel function TDIST(x, degrees of freedom, tails): TDIST(2.365, 7, 1) = 0.025.
Note that with tails = 1, Excel returns the upper-tail probability P(T > x).
(b) P(T > 1.415)
0.10
(c) P(T < −3.499)
P(T > 3.499) = 0.005
(d) P(T > −2.8) = 1 − P(T > 2.8) = 0.9867 (by symmetry)
(e) the value of a such that P(T > a) = 0.05
a = t0.05,7 = 1.895
(f) the value of a such that P(T > a) = 0.01
a = t0.01,7 = 2.998
(g) the value of a such that P(T < a) = 0.9975
a = t0.0025,7 = 4.029
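The same probabilities and percentage points can be obtained with SciPy's t distribution (a sketch of my own; the Excel route noted above works equally well):

```python
from scipy.stats import t

df = 7
print(round(t.sf(2.365, df), 3))     # (a) P(T > 2.365)  -> about 0.025
print(round(t.sf(1.415, df), 3))     # (b) P(T > 1.415)  -> about 0.100
print(round(t.cdf(-3.499, df), 3))   # (c) P(T < -3.499) -> about 0.005
print(round(t.sf(-2.8, df), 4))      # (d) P(T > -2.8)   -> about 0.9867
print(round(t.isf(0.05, df), 3))     # (e) t_{0.05,7}    -> 1.895
print(round(t.isf(0.01, df), 3))     # (f) t_{0.01,7}    -> 2.998
print(round(t.ppf(0.9975, df), 3))   # (g) a with P(T < a) = 0.9975 -> 4.029
```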