This document provides information on measures of position including z-scores, percentiles, quartiles, and outliers. It includes definitions and formulas for calculating z-scores, percentiles, and quartiles. Examples are provided to demonstrate calculating these measures of position for given data sets and identifying outliers. The document also discusses exploratory data analysis and box-and-whisker plots.
This document defines and discusses quartiles, deciles, and percentiles. Quartiles divide a data set into four equal parts, with the first quartile (Q1) representing the lowest 25% of values. Deciles divide data into ten equal parts. Percentiles indicate the value below which a certain percentage of observations fall. Examples are provided for calculating Q1, Q3, D1 using formulas for grouped and ungrouped data sets. Quartiles, deciles, and percentiles are commonly used to summarize and report on statistical data.
The heart receives its blood supply from two coronary arteries - the right and left coronary arteries. The right coronary artery supplies the right atrium, right ventricle, parts of the left atrium and ventricles. The left coronary artery is larger and divides into the anterior interventricular and circumflex arteries. These arteries and their branches supply the remaining parts of the heart. The arteries anastomose to allow for blood flow if one gets blocked. Most venous blood from the heart drains into the coronary sinus, which empties into the right atrium.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help boost feelings of calmness, happiness and focus.
The document provides examples and explanations for calculating and interpreting quartiles and box-and-whisker plots. It defines key terms like lower quartile, upper quartile, median, minimum, and maximum. Examples show how to find the quartiles for data sets and construct box-and-whisker plots. The document also includes practice problems for students to find quartiles and interpret box-and-whisker plots.
This document discusses different methods for organizing data, including percentiles, quartiles, and deciles. It provides the definitions and formulas for calculating each. Percentiles indicate the value below which a given percentage of observations fall. Quartiles divide a data set into four equal parts, with the median (Q2) separating the lower and upper halves. Deciles divide a data set into ten equal parts. The document gives examples of calculating percentiles, quartiles, and deciles for sample data sets.
This document provides information about various statistical measures of central tendency including the median, mode, and quartiles. It defines each measure and provides examples of how to calculate them from both grouped and ungrouped data sets. Formulas are given for calculating the median, quartiles, deciles, and percentiles for grouped data. The mode is defined as the value that occurs most frequently in a data set, and a formula is provided for calculating it from grouped frequency distributions.
Electricity is associated with the presence and motion of electric charge. There are two types of electricity: static electricity and current electricity. Static electricity results from an imbalance of negative and positive charges in an object that can build up until being discharged. Electric charge is measured in coulombs and there are two types: positive and negative.
The electric field is the region of space surrounding an electrically charged object where an electric force can be detected. It is represented by electric field lines. The electric field intensity is the electric force per unit charge and is measured in newtons per coulomb. Coulomb's law describes the electric force between two point charges. Gauss's law relates the electric flux through a closed surface to the net
This document defines and discusses quartiles, deciles, and percentiles. Quartiles divide a data set into four equal parts, with the first quartile (Q1) representing the lowest 25% of values. Deciles divide data into ten equal parts. Percentiles indicate the value below which a certain percentage of observations fall. Examples are provided for calculating Q1, Q3, D1 using formulas for grouped and ungrouped data sets. Quartiles, deciles, and percentiles are commonly used to summarize and report on statistical data.
The heart receives its blood supply from two coronary arteries - the right and left coronary arteries. The right coronary artery supplies the right atrium, right ventricle, parts of the left atrium and ventricles. The left coronary artery is larger and divides into the anterior interventricular and circumflex arteries. These arteries and their branches supply the remaining parts of the heart. The arteries anastomose to allow for blood flow if one gets blocked. Most venous blood from the heart drains into the coronary sinus, which empties into the right atrium.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help boost feelings of calmness, happiness and focus.
The document provides examples and explanations for calculating and interpreting quartiles and box-and-whisker plots. It defines key terms like lower quartile, upper quartile, median, minimum, and maximum. Examples show how to find the quartiles for data sets and construct box-and-whisker plots. The document also includes practice problems for students to find quartiles and interpret box-and-whisker plots.
This document discusses different methods for organizing data, including percentiles, quartiles, and deciles. It provides the definitions and formulas for calculating each. Percentiles indicate the value below which a given percentage of observations fall. Quartiles divide a data set into four equal parts, with the median (Q2) separating the lower and upper halves. Deciles divide a data set into ten equal parts. The document gives examples of calculating percentiles, quartiles, and deciles for sample data sets.
This document provides information about various statistical measures of central tendency including the median, mode, and quartiles. It defines each measure and provides examples of how to calculate them from both grouped and ungrouped data sets. Formulas are given for calculating the median, quartiles, deciles, and percentiles for grouped data. The mode is defined as the value that occurs most frequently in a data set, and a formula is provided for calculating it from grouped frequency distributions.
Electricity is associated with the presence and motion of electric charge. There are two types of electricity: static electricity and current electricity. Static electricity results from an imbalance of negative and positive charges in an object that can build up until being discharged. Electric charge is measured in coulombs and there are two types: positive and negative.
The electric field is the region of space surrounding an electrically charged object where an electric force can be detected. It is represented by electric field lines. The electric field intensity is the electric force per unit charge and is measured in newtons per coulomb. Coulomb's law describes the electric force between two point charges. Gauss's law relates the electric flux through a closed surface to the net
The document provides definitions and concepts related to matrices and determinants. It begins with definitions of matrices, operations on matrices like transpose and trace. It then discusses row echelon form, elementary row operations, and using matrices to represent systems of linear equations. The document will cover topics like inverse matrices, matrix rank and nullity, polynomials of matrices, properties of determinants, minors and cofactors, and Cramer's rule.
Weighted arithmetic mean assigns different weights or levels of importance to values before calculating the average. It is calculated by multiplying each value by its weight, summing these products, and dividing by the total of the weights. Combined arithmetic mean calculates the average of two or more groups by taking the weighted average of the means of each group, where the weights are the number of items in each group. Weighted and combined arithmetic means are useful when not all values have equal importance or when calculating averages of related subgroups.
This document discusses quantiles, which are statistical measures used to divide a frequency distribution into equal parts. It defines quartiles, deciles, and percentiles as quantiles that partition the distribution into 4, 10, and 100 equal parts, respectively. The median is the second quartile. Formulas are provided to calculate quantiles from cumulative frequency data. An example calculates the first quartile, sixth decile, and seventeenth percentile of childcare manager ages. Quantiles are useful for describing the relative location of data values and comparing data, as illustrated by an example of comparing oil company sales.
This document discusses logic and truth tables which are used in mathematics and computer science. It defines primitive statements, logical connectives like conjunction, disjunction, negation, implication and biconditional. Truth tables are used to determine the truth values of compound statements formed using these connectives. Examples are given to show how compound statements can be written symbolically and their truth values determined from truth tables. Decision structures like if-then and if-then-else used in programming languages are also discussed.
In the previous lesson we discussed a measure of location known as the measure of central tendency. There are other measures of location which are useful in describing the distribution of the data set. These measures of location include the maximum, minimum, percentiles, deciles and quartiles. How to compute and interpret these measures are also discussed in this lesson.
Install Addin Excel - Data Analysis Tool Pak - ThiyaguThiyagu K
The Analysis ToolPak is an Excel add-in program
that provides data analysis tools for
financial, statistical and engineering data analysis. We can do statistical analysis in very easily with the use of Data Analysis Tool Pak of Excel Addin. This presentation describes the steps of installing the Addin Software - Data Analysis ToolPak in Excel.
This document provides an overview of measures of central tendency and dispersion from a statistics textbook. It defines and explains the mean, median, mode, standard deviation, and other key concepts. Examples are provided to demonstrate how to calculate the mean, median, mode, and weighted mean from raw data and frequency tables. Factors that can impact the mean and other measures are discussed. The document also covers skewed distributions and how measures of central tendency and dispersion help analyze and understand data.
This document contains the scores of 8 students in a management statistics course. It shows the individual scores ranging from 17 to 37. It then calculates the first quartile (Q1) as 21.5 and the third quartile (Q3) as 30.5. The interquartile range (IQR) is calculated as the difference between Q3 and Q1, which is 4.5. Formulas are also provided for calculating Q1 and Q3 based on the class size, total number of scores, and cumulative frequency.
Quartiles divide a sorted data set into quarters based on the values. The first quartile (Q1) is the median between the smallest number and the overall median. The second quartile (Q2) is the median. The third quartile (Q3) is the median between the overall median and highest value. In an example data set of 11 numbers, the quartiles were Q1=5, Q2=7, and Q3=9.
1. The document provides information about measures of position (quartiles, deciles, percentiles) and how to calculate them. It gives an example of finding the first quartile (Q1), second quartile (Q2), and third quartile (Q3) from a data set of students' test scores.
2. Steps for calculating quartiles include arranging the data in order, dividing it into four equal parts, and finding the values that split the data into the 25th, 50th, and 75th percentiles.
3. Interpolation may be needed if the quartile value falls between two data points; this involves calculating the difference between points and multiplying by the decimal portion.
This document discusses basic matrix operations including:
- Defining a matrix as a rectangular arrangement of numbers in rows and columns with an order specified by the number of rows and columns.
- Adding and subtracting matrices requires they have the same order and involves adding or subtracting corresponding entries.
- Multiplying a matrix by a scalar involves multiplying each entry in the matrix by the scalar value.
- Matrix multiplication is not commutative and can only be done if the number of columns in the first matrix equals the number of rows in the second matrix. It involves multiplying entries and summing the products based on their positions.
The cardiac cycle consists of two main phases - diastole and systole. During diastole, the ventricles relax and fill with blood from the atria through the open atrioventricular valves. Atrial contraction occurs at the end of diastole, pushing additional blood into the ventricles. Systole begins with ventricular contraction, causing the atrioventricular valves to close and isolating the ventricles as pressure rises. When pressure exceeds that in the arteries, the semilunar valves open and ejection occurs, followed by their closing as pressure falls at the end of systole. The cycle then repeats with rapid ventricular filling and relaxation.
Measures of central tendency describe the middle or center of a data set using a single value. The three most common measures are the mode, median, and mean. The mode is the most frequently occurring value, the median is the middle value when data are ordered from lowest to highest, and the mean is the average calculated by summing all values and dividing by the total count. Each measure provides a different perspective on the center of the data set.
The document discusses the venous anatomy of the heart, including the coronary sinus and persistent left superior vena cava (LSVC). It begins with the embryological development of the venous system. It then describes the various tributaries that drain into the coronary sinus and provides an overview of the venous drainage patterns. It discusses surgical implications of anomalies such as LSVC connection variations, coronary sinus atresia, and partial unroofing of the coronary sinus.
This document provides information about frequency distributions and constructing frequency distribution tables. It defines a frequency distribution as a representation of data in a tabular format showing the number of observations within intervals. It then outlines the general process for constructing a frequency table which includes determining the range, number of classes, class width, and recording the frequencies in a table. An example is provided of constructing a frequency table from data on the ages of 50 men who died from gunfire using 7 classes. Guidelines for constructing frequency tables are also listed.
This document contains a 50-question mathematics examination covering topics like permutations, combinations, probability, and statistics. The exam asks students to identify terms, calculate outcomes of experiments and events, determine probabilities, and solve word problems involving arrangements of objects and sampling with or without replacement. It provides context that the exam was administered to 10th grade mathematics students in the Philippines and includes instructions to write answers in capital letters on a half-sheet of paper, with no erasures or superimpositions allowed.
Human: You are an expert at summarizing documents. You provide concise summaries in 3 sentences or less that provide the high level and essential information from the document. Summarize the following document. Begin your response with "[SUMMARY
Aortic dissection occurs when a tear forms in the inner layer of the aorta, allowing blood to flow between the layers. This blood flow dissects the medial layer and causes the layers to separate longitudinally. The blood-filled space between the layers is called the false lumen. Aortic dissection can occur in the ascending aorta, descending aorta, or abdominal aorta. The most common causes are hypertension, cystic medial degeneration, and connective tissue disorders. Imaging techniques like CT and MRI are effective for diagnosing aortic dissection and determining its location and extent.
The document discusses various statistical concepts including range, mean deviation, variance, and standard deviation. It provides formulas and steps to calculate each measure. The range is the distance between the highest and lowest values. Mean deviation measures the average deviation from the mean. Variance is the average of the squared deviations from the mean and standard deviation is the square root of the variance, representing the average distance from the mean. Examples are given to demonstrate calculating each measure for both ungrouped and grouped data.
This document is a lesson on calculating quartiles, deciles, and percentiles from grouped and ungrouped data. It provides examples and step-by-step instructions on arranging data in ascending order and using the Mendenhall-Sincich method and formulas to determine the lower quartile, upper quartile, 5th decile, 50th percentile, and other values. It then provides practice problems for the student to solve involving grouped data from test scores and smoking levels. The document emphasizes rounding rules and identifying the correct interval before applying the formulas.
The document contains calculations of hydraulic conductivity (K) for various materials including concrete, soil, filters, and dam sections. K values are calculated using Darcy's law with hydraulic head (H) and velocity (v) terms provided. The materials and flow directions are identified for 44 pairs labeled A1 through A44 and B1 through B44.
The document provides definitions and concepts related to matrices and determinants. It begins with definitions of matrices, operations on matrices like transpose and trace. It then discusses row echelon form, elementary row operations, and using matrices to represent systems of linear equations. The document will cover topics like inverse matrices, matrix rank and nullity, polynomials of matrices, properties of determinants, minors and cofactors, and Cramer's rule.
Weighted arithmetic mean assigns different weights or levels of importance to values before calculating the average. It is calculated by multiplying each value by its weight, summing these products, and dividing by the total of the weights. Combined arithmetic mean calculates the average of two or more groups by taking the weighted average of the means of each group, where the weights are the number of items in each group. Weighted and combined arithmetic means are useful when not all values have equal importance or when calculating averages of related subgroups.
This document discusses quantiles, which are statistical measures used to divide a frequency distribution into equal parts. It defines quartiles, deciles, and percentiles as quantiles that partition the distribution into 4, 10, and 100 equal parts, respectively. The median is the second quartile. Formulas are provided to calculate quantiles from cumulative frequency data. An example calculates the first quartile, sixth decile, and seventeenth percentile of childcare manager ages. Quantiles are useful for describing the relative location of data values and comparing data, as illustrated by an example of comparing oil company sales.
This document discusses logic and truth tables which are used in mathematics and computer science. It defines primitive statements, logical connectives like conjunction, disjunction, negation, implication and biconditional. Truth tables are used to determine the truth values of compound statements formed using these connectives. Examples are given to show how compound statements can be written symbolically and their truth values determined from truth tables. Decision structures like if-then and if-then-else used in programming languages are also discussed.
In the previous lesson we discussed a measure of location known as the measure of central tendency. There are other measures of location which are useful in describing the distribution of the data set. These measures of location include the maximum, minimum, percentiles, deciles and quartiles. How to compute and interpret these measures are also discussed in this lesson.
Install Addin Excel - Data Analysis Tool Pak - ThiyaguThiyagu K
The Analysis ToolPak is an Excel add-in program
that provides data analysis tools for
financial, statistical and engineering data analysis. We can do statistical analysis in very easily with the use of Data Analysis Tool Pak of Excel Addin. This presentation describes the steps of installing the Addin Software - Data Analysis ToolPak in Excel.
This document provides an overview of measures of central tendency and dispersion from a statistics textbook. It defines and explains the mean, median, mode, standard deviation, and other key concepts. Examples are provided to demonstrate how to calculate the mean, median, mode, and weighted mean from raw data and frequency tables. Factors that can impact the mean and other measures are discussed. The document also covers skewed distributions and how measures of central tendency and dispersion help analyze and understand data.
This document contains the scores of 8 students in a management statistics course. It shows the individual scores ranging from 17 to 37. It then calculates the first quartile (Q1) as 21.5 and the third quartile (Q3) as 30.5. The interquartile range (IQR) is calculated as the difference between Q3 and Q1, which is 4.5. Formulas are also provided for calculating Q1 and Q3 based on the class size, total number of scores, and cumulative frequency.
Quartiles divide a sorted data set into quarters based on the values. The first quartile (Q1) is the median between the smallest number and the overall median. The second quartile (Q2) is the median. The third quartile (Q3) is the median between the overall median and highest value. In an example data set of 11 numbers, the quartiles were Q1=5, Q2=7, and Q3=9.
1. The document provides information about measures of position (quartiles, deciles, percentiles) and how to calculate them. It gives an example of finding the first quartile (Q1), second quartile (Q2), and third quartile (Q3) from a data set of students' test scores.
2. Steps for calculating quartiles include arranging the data in order, dividing it into four equal parts, and finding the values that split the data into the 25th, 50th, and 75th percentiles.
3. Interpolation may be needed if the quartile value falls between two data points; this involves calculating the difference between points and multiplying by the decimal portion.
This document discusses basic matrix operations including:
- Defining a matrix as a rectangular arrangement of numbers in rows and columns with an order specified by the number of rows and columns.
- Adding and subtracting matrices requires they have the same order and involves adding or subtracting corresponding entries.
- Multiplying a matrix by a scalar involves multiplying each entry in the matrix by the scalar value.
- Matrix multiplication is not commutative and can only be done if the number of columns in the first matrix equals the number of rows in the second matrix. It involves multiplying entries and summing the products based on their positions.
The cardiac cycle consists of two main phases - diastole and systole. During diastole, the ventricles relax and fill with blood from the atria through the open atrioventricular valves. Atrial contraction occurs at the end of diastole, pushing additional blood into the ventricles. Systole begins with ventricular contraction, causing the atrioventricular valves to close and isolating the ventricles as pressure rises. When pressure exceeds that in the arteries, the semilunar valves open and ejection occurs, followed by their closing as pressure falls at the end of systole. The cycle then repeats with rapid ventricular filling and relaxation.
Measures of central tendency describe the middle or center of a data set using a single value. The three most common measures are the mode, median, and mean. The mode is the most frequently occurring value, the median is the middle value when data are ordered from lowest to highest, and the mean is the average calculated by summing all values and dividing by the total count. Each measure provides a different perspective on the center of the data set.
The document discusses the venous anatomy of the heart, including the coronary sinus and persistent left superior vena cava (LSVC). It begins with the embryological development of the venous system. It then describes the various tributaries that drain into the coronary sinus and provides an overview of the venous drainage patterns. It discusses surgical implications of anomalies such as LSVC connection variations, coronary sinus atresia, and partial unroofing of the coronary sinus.
This document provides information about frequency distributions and constructing frequency distribution tables. It defines a frequency distribution as a representation of data in a tabular format showing the number of observations within intervals. It then outlines the general process for constructing a frequency table which includes determining the range, number of classes, class width, and recording the frequencies in a table. An example is provided of constructing a frequency table from data on the ages of 50 men who died from gunfire using 7 classes. Guidelines for constructing frequency tables are also listed.
This document contains a 50-question mathematics examination covering topics like permutations, combinations, probability, and statistics. The exam asks students to identify terms, calculate outcomes of experiments and events, determine probabilities, and solve word problems involving arrangements of objects and sampling with or without replacement. It provides context that the exam was administered to 10th grade mathematics students in the Philippines and includes instructions to write answers in capital letters on a half-sheet of paper, with no erasures or superimpositions allowed.
Human: You are an expert at summarizing documents. You provide concise summaries in 3 sentences or less that provide the high level and essential information from the document. Summarize the following document. Begin your response with "[SUMMARY
Aortic dissection occurs when a tear forms in the inner layer of the aorta, allowing blood to flow between the layers. This blood flow dissects the medial layer and causes the layers to separate longitudinally. The blood-filled space between the layers is called the false lumen. Aortic dissection can occur in the ascending aorta, descending aorta, or abdominal aorta. The most common causes are hypertension, cystic medial degeneration, and connective tissue disorders. Imaging techniques like CT and MRI are effective for diagnosing aortic dissection and determining its location and extent.
The document discusses various statistical concepts including range, mean deviation, variance, and standard deviation. It provides formulas and steps to calculate each measure. The range is the distance between the highest and lowest values. Mean deviation measures the average deviation from the mean. Variance is the average of the squared deviations from the mean and standard deviation is the square root of the variance, representing the average distance from the mean. Examples are given to demonstrate calculating each measure for both ungrouped and grouped data.
This document is a lesson on calculating quartiles, deciles, and percentiles from grouped and ungrouped data. It provides examples and step-by-step instructions on arranging data in ascending order and using the Mendenhall-Sincich method and formulas to determine the lower quartile, upper quartile, 5th decile, 50th percentile, and other values. It then provides practice problems for the student to solve involving grouped data from test scores and smoking levels. The document emphasizes rounding rules and identifying the correct interval before applying the formulas.
The document contains calculations of hydraulic conductivity (K) for various materials including concrete, soil, filters, and dam sections. K values are calculated using Darcy's law with hydraulic head (H) and velocity (v) terms provided. The materials and flow directions are identified for 44 pairs labeled A1 through A44 and B1 through B44.
Solution to first semester soil 2015 16chener Qadr
This document contains a soil mechanics exam with four questions. Question 1 involves determining properties like void ratio, dry density, and bulk density of a saturated soil sample. It also asks about changes if saturation is reduced and calculating soil requirements for an embankment. Question 2 asks to classify a soil sample using soil classification charts and determine its suitability as backfill. Question 3 involves calculating seepage flow from a canal into a ditch based on soil properties. Question 4, which is not fully summarized, involves analyzing seepage in an embankment dam.
A comparison on slope stability analysis of aydoghmoosh earth damdgjd
1. The document compares slope stability analysis of the Aydoghmoosh Earth Dam in Iran using limit equilibrium methods, finite element analysis, and finite difference methods.
2. Safety factors calculated using the simplified Bishop method and finite element analysis were similar at 1.494 and 1.596, respectively.
3. The finite difference method produced a safety factor of 1.79, around 12% higher than the finite element method. This is because the finite element method accounts for elasticity modulus in its calculations.
The document contains two examples involving binomial distributions:
College of Engineering – Engineering Statistics
Department of Dam & Water Resources – Lecturer: Goran Adil & Chenar
-------------------------------------------------------------------------------------------------------------------------------
Chapter 2, Part 5 (2.5): Measure of Position
Objectives
Determine and interpret z-scores
Determine and interpret percentile
Determine and interpret quartile
Check a set of data for outliers
5.1: Z-Score (or standard score)
The z-score is the number of standard deviations that a given value x lies above or below the mean.
The following formula converts a raw score, x, from a data set to its equivalent standard value, z, in a new data set with a mean of zero and a standard deviation of one.
A z-score can be positive or negative:
positive z-score – raw score greater than the mean
negative z-score – raw score less than the mean
Whenever a value is less than the mean, its corresponding z-score is negative.
Ordinary values: z-score between –2 and 2
Unusual values: z-score < –2 or z-score > 2
Sample:      z = (x – x̄) / s
Population:  z = (x – μ) / σ
Example 2.21 Annual rainfall.
The annual rainfalls in a certain city over a 5-year period are 1, 22, 26, 33, and 123 cm.
Determine the z-score for each raw score. Discuss the values: are there any unusual data?
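The z-score calculation above can be sketched in a few lines of Python (a minimal illustration; the helper name `z_scores` is ours, not from the lecture):

```python
from statistics import mean, stdev

def z_scores(data):
    # Sample z-score: z = (x - x̄) / s; stdev() uses the n-1 (sample) denominator
    m, s = mean(data), stdev(data)
    return [(x - m) / s for x in data]

rain = [1, 22, 26, 33, 123]                           # annual rainfall, cm
zs = z_scores(rain)
unusual = [x for x, z in zip(rain, zs) if abs(z) > 2] # unusual if |z| > 2
```

Note that with the sample standard deviation (s ≈ 47.4 cm here), even 123 cm has z ≈ 1.73, so by the ±2 rule none of these values is flagged as unusual.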
5.2: Position
There are several ways of measuring the relative position of a specific member of a data set.
5.2.1: Quartiles
Quartiles split a set of ordered data into four parts.
Q1 is the First Quartile
25% of the observations are smaller than Q1 and 75% of the observations are larger.
Q2 is the Second Quartile
50% of the observations are smaller than Q2 and 50% of the observations are larger. Q2 is the same as the median; it is also the 50th percentile.
Q3 is the Third Quartile
75% of the observations are smaller than Q3 and 25% of the observations are larger.
Q1, Q2, and Q3 divide the ranked scores into four equal parts.
The lower quartile is the median of the lower half of the data, and the upper quartile is the median of the upper half.
The median divides the data in half; the upper and lower quartiles then divide each half into two parts.
Example 2.22
Find Q1, Q2, and Q3 for the following data set.
15, 13, 6, 5, 12, 50, 22, 18
Sort in ascending order.
5, 6, 12, 13, 15, 18, 22, 50
Q1 = median of the lower half = (6 + 12) / 2 = 9
Q2 = median of the whole set  = (13 + 15) / 2 = 14
Q3 = median of the upper half = (18 + 22) / 2 = 20
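The median-of-halves rule used in this example can be written as a short Python helper (a sketch; `quartiles` is our own name, following the convention that the middle value is excluded from both halves when n is odd):

```python
from statistics import median

def quartiles(data):
    # Q1/Q3 are the medians of the lower and upper halves; Q2 is the overall median
    s = sorted(data)
    n = len(s)
    lower = s[: n // 2]          # lower half (middle value dropped if n is odd)
    upper = s[(n + 1) // 2:]     # upper half
    return median(lower), median(s), median(upper)

print(quartiles([15, 13, 6, 5, 12, 50, 22, 18]))   # (9.0, 14.0, 20.0)
```

This reproduces Q1 = 9, Q2 = 14, Q3 = 20 from the worked example.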
5.2.2: Percentiles
Just as there are quartiles separating data into four parts, there are 99 percentiles, denoted P1, P2, . . ., P99, which partition the data into 100 groups.
A percentile is the value of a variable below which a certain percent of observations fall. So the 20th percentile is the value (or score) below which 20 percent of the observations may be found.
A percentile rank of 20 means that a person scored the same as or better than 20 percent of the group.
Percentile of value x = (number of values less than x / total number of values) × 100
Example 2.23
The following series is the minimum monthly flow (m³/s) in each of the 10 years of a certain river:
36, 4, 21, 21, 23, 11, 10, 10, 12, 17
- What percentage of the data is less than 11?
- What percentage of the data is less than 23?
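The percentile-of-a-value formula can be checked with a small Python sketch (`percentile_of` is our own helper name, not from the lecture):

```python
def percentile_of(x, data):
    # Percentile of x = (number of values less than x) / (total number of values) * 100
    below = sum(1 for v in data if v < x)
    return 100 * below / len(data)

flows = [36, 4, 21, 21, 23, 11, 10, 10, 12, 17]   # minimum monthly flows, m³/s
print(percentile_of(11, flows))   # 3 of 10 values are below 11 -> 30.0
print(percentile_of(23, flows))   # 8 of 10 values are below 23 -> 80.0
```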
5.2.3: Converting from the kth Percentile to the Corresponding Data Value
n  = total number of values in the data set
k  = percentile being used
L  = locator that gives the position of a value
Pk = kth percentile
L = (k / 100) × n
HINT
Step 3: If L is not a whole number, round up to the next whole number. If L is a whole number, use the value halfway between the Lth and (L+1)th values.
Example 2.24
A teacher gives a 20-point test to 10 students. Find the value corresponding to the 25th and 60th percentiles.
18, 15, 12, 6, 8, 2, 3, 5, 20, 10
Sort in ascending order:
2, 3, 5, 6, 8, 10, 12, 15, 18, 20
a) For the 25th percentile:
L = (k / 100) × n = (25 / 100) × 10 = 2.5
The L value is not a whole number (2.5), so L is rounded up to the next whole number (2.5 rounds up to 3). The value corresponding to the 25th percentile is therefore the 3rd from the lowest.
The value 5 corresponds to the 25th percentile.
(A student who scored 5 did the same as or better than 25 percent of all students.)
b) For the 60th percentile:
L = (k / 100) × n = (60 / 100) × 10 = 6
The L value is a whole number (6), so from the HINT we use the value halfway between the 6th value (10) and the 7th value (12):
(10 + 12) / 2 = 11
Hence, a score of 11 corresponds to the 60th percentile.
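The locator rule L = (k/100) × n, with the round-up / halfway hint, can be sketched as a small function (`kth_percentile` is our own name):

```python
import math

def kth_percentile(data, k):
    # Locator L = (k/100) * n; round up if fractional,
    # else average the Lth and (L+1)th values (1-based positions)
    s = sorted(data)
    L = k / 100 * len(s)
    if L != int(L):
        return s[math.ceil(L) - 1]      # Lth value from the lowest
    i = int(L)
    return (s[i - 1] + s[i]) / 2        # halfway between Lth and (L+1)th values

scores = [18, 15, 12, 6, 8, 2, 3, 5, 20, 10]
print(kth_percentile(scores, 25))   # L = 2.5 -> 3rd value -> 5
print(kth_percentile(scores, 60))   # L = 6   -> (10 + 12) / 2 -> 11.0
```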
5.3: Exploratory Data Analysis
Exploratory Data Analysis is the process of using statistical tools (such as graphs, measures of centre, and measures of variation) to investigate data sets in order to understand their important characteristics.
5.3.1: Outlier
An outlier is a value that is located very far away from almost all the other values.
Important Principles
An outlier can have a dramatic effect on the mean.
An outlier can have a dramatic effect on the standard deviation.
An outlier can have a dramatic effect on the scale of the histogram, so that the true nature of the distribution is totally obscured.
5.3.2: Box-Plots (Box-and-Whisker Diagram)
A box-and-whisker plot shows the spread of a data set. It displays 5 key points: the minimum and maximum values, the median, and the first and third quartiles.
A box-plot is a graph of the five-number summary:
A central box spans the quartiles;
A line in the box marks the median;
Lines extend from the box out to the smallest and largest observations.
Box-plots are useful for side-by-side comparison of several distributions.
Example 2.25: Making a Box-and-Whisker Plot and Finding the Interquartile Range
Make a box-and-whisker plot of the data. Find the interquartile range.
{6, 8, 7, 5, 10, 6, 9, 8, 4}
Step 1 Order the data from least to greatest.
4, 5, 6, 6, 7, 8, 8, 9, 10
Step 2 Find the minimum, maximum, median, and quartiles.
Step 3 Draw a box-and-whisker plot.
Draw a number line, and plot a point above each of the five values. Then draw a box from the
first quartile to the third quartile with a line segment through the median. Draw whiskers from
the box to the minimum and maximum.
IQR = Q3 – Q1 = 8.5 – 5.5 = 3
The interquartile range is 3, the length of the box in the diagram.
For the data set {13, 14, 18, 13, 12, 17, 15, 12, 13, 19, 11, 14, 14, 18, 22, 23}:
Step 1 Order the data from least to greatest.
11, 12, 12, 13, 13, 13, 14, 14, 14, 15, 17, 18, 18, 19, 22, 23
Step 2 Find the minimum, maximum, median, and quartiles.
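The five-number summary behind a box-and-whisker plot can be computed with a short sketch (same halving convention as earlier; `five_number_summary` is our own name):

```python
from statistics import median

def five_number_summary(data):
    # minimum, Q1, median, Q3, maximum (middle value excluded from halves if n is odd)
    s = sorted(data)
    n = len(s)
    q1 = median(s[: n // 2])
    q3 = median(s[(n + 1) // 2:])
    return min(s), q1, median(s), q3, max(s)

lo, q1, med, q3, hi = five_number_summary([6, 8, 7, 5, 10, 6, 9, 8, 4])
print((lo, q1, med, q3, hi))   # (4, 5.5, 7, 8.5, 10)
print(q3 - q1)                 # IQR = 3.0
```

This matches Example 2.25: the box spans 5.5 to 8.5, the median line sits at 7, and the whiskers reach 4 and 10.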
Method of detecting an outlier
To determine whether a data value can be considered as an outlier:
Step 1: Compute Q1 and Q3.
Step 2: Find the IQR = Q3 – Q1.
Step 3: Compute (1.5)(IQR).
Step 4: Compute Q1 – (1.5)(IQR) and Q3 + (1.5)(IQR).
Step 5: Compare the data value (say X) with Q1 – (1.5)(IQR) and Q3 + (1.5)(IQR).
If X < Q1 – (1.5)(IQR) or
if X > Q3 + (1.5)(IQR), then X is considered an outlier.
Example 2.26
Given the data set 5, 6, 12, 13, 15, 18, 22, 50, can the value of 50 be considered as an
outlier?
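Example 2.26 can be checked mechanically by coding the five steps above (a sketch; `outliers` is our own helper, reusing the median-of-halves quartiles):

```python
from statistics import median

def outliers(data):
    # Steps 1-5: compute Q1 and Q3, then IQR, then the 1.5*IQR fences,
    # and flag any value falling outside them
    s = sorted(data)
    n = len(s)
    q1 = median(s[: n // 2])
    q3 = median(s[(n + 1) // 2:])
    iqr = q3 - q1
    low = q1 - 1.5 * iqr
    high = q3 + 1.5 * iqr
    return [x for x in s if x < low or x > high]

print(outliers([5, 6, 12, 13, 15, 18, 22, 50]))   # [50]: 50 > Q3 + 1.5*IQR = 36.5
```

Here Q1 = 9, Q3 = 20, IQR = 11, so the fences are –7.5 and 36.5; since 50 > 36.5, the value 50 is an outlier.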
Tutorial on 2.5
Examples 2.17
The following series is the minimum monthly flow (m³/s) in each of the 20 years 1957 to 1976 at Bywell on the River Tyne: 21, 36, 4, 16, 21, 21, 23, 11, 46, 10, 25, 12, 9, 16, 10, 6, 11, 12, 17, and 3
- Find Q1, Q2, and Q3 for the data set.
- Determine the z-score for each raw score. Discuss the values: are there any unusual data?
- What percentage of the data are less than 21?
- What percentage of the data are less than 36?
- Find the values corresponding to the 25th, 60th, and 91st percentiles.
- Make a box-and-whisker plot of the data. Find the interquartile range.
Example 2.18
A data set has a mean of 10 and a standard deviation of 2. Find a value that is:
(i) 3 standard deviations above the mean
(ii) 2 standard deviations below the mean
Example 2.19
Find Q1, Q2, and Q3 for the data set.
121, 129, 116, 106, 114, 122, 109, 125.
Example 2.20
Given a data set with a mean of 10 and a standard deviation of 2, determine the z-score for each of the following raw scores: 8, 10, 16.
Exercise 2.21
Compute the quartiles from the following data.
Examples 2.22
Make a box-and-whisker plot of the data. Find the interquartile range. {13, 14, 18, 13, 12,
17, 15, 12, 13, 19, 11, 14, 14, 18, 22, 23}
Exercise 2.23
Compute the quartiles from the following data.
Marks No. of students
1-10 3
11-20 16
21-30 26
31-40 31
41-50 16
51-60 8
Exercise 2.24
Stream flow velocity. A practical example of the mean is the determination of the mean velocity of a stream based on measurements of travel times over a given reach of the stream using a floating device. For instance, if 10 velocities are calculated as follows:
Velocity (m/s): 0.20, 0.20, 0.21, 0.42, 0.24, 0.16, 0.55, 0.70, 0.43, 0.34
- Find Q1, Q2, and Q3 for the data set.
- Determine the z-score for each raw score. Discuss the values: are there any unusual data?
- What percentage of the velocities are less than 0.24 m/s?
- What percentage of the data are less than 0.34 m/s?
- Find the velocity values corresponding to the 25th, 51st, 70th, and 91st percentiles.
- Make a box-and-whisker plot of the data. Find the interquartile range.
Exercise 2.25
Make a box-and-whisker plot of the data. Find the interquartile range.
{6, 8, 7, 5, 10, 6, 9, 8, 4}
Exercise 2.26
Make a box-and-whisker plot of the data. Find the interquartile range. {13, 14, 18, 13, 12,
17, 15, 12, 13, 19, 11, 14, 14, 18, 22, 23}
Exercise 2.28
The data values in the table below depict the maximum monthly discharges of a certain river for nine consecutive months. Create a box-and-whisker plot to display the data.
Month:     April  May  June  July  August  September  October  November  December
Discharge: 110    98   91    102   89      95         108      118       152
Exercise 2.29
Concrete cube test. From 28-day concrete cube tests made in England in 1990, the following
results of maximum load at failure in kilonewtons and compressive strength in newtons per
square millimeter were obtained:
Maximum load: 950, 972, 981, 895, 908, 995, 646, 987, 940, 937, 846, 947, 827,
961, 935, 956.
Compressive strength: 42.25, 43.25, 43.50, 39.25, 40.25, 44.25, 28.75, 44.25, 41.75,
41.75, 38.00, 42.50, 36.75, 42.75, 42.00, 33.50.
- Find Q1, Q2, and Q3 for each data set.
- Determine the z-score for each raw score and discuss the values. Are there any unusual data?
- What percentage of the maximum loads are less than 895 kN? Less than 950 kN?
- What percentage of the compressive strengths are less than 42.00 N/mm2?
- Find the maximum load and compressive strength values corresponding to the 25th, 51st,
70th, and 91st percentiles.
- Make two separate box-and-whisker plots, one for each data set. Find the interquartile
range of each.
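The percentage and percentile parts can be sketched in Python. Here percentile_value uses the (n + 1) position method with linear interpolation, one of several common conventions:

```python
def pct_below(data, x):
    """Percentage of observations strictly less than x."""
    return 100 * sum(v < x for v in data) / len(data)

def percentile_value(data, p):
    """p-th percentile via the (n + 1) position method with linear interpolation."""
    s = sorted(data)
    pos = p / 100 * (len(s) + 1)            # 1-based position
    i = max(1, min(int(pos), len(s) - 1))   # clamp so interpolation stays in range
    return s[i - 1] + (pos - i) * (s[i] - s[i - 1])

loads = [950, 972, 981, 895, 908, 995, 646, 987, 940, 937, 846, 947, 827,
         961, 935, 956]                  # maximum loads in kN
below_950 = pct_below(loads, 950)        # 9 of 16 values, i.e. 56.25%
p25 = percentile_value(loads, 25)        # position 4.25: 895 + 0.25 * (908 - 895)
```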
Exercise 2.30
An experiment measuring the percent shrinkage on drying of 50 clay specimens produced the
following data:
18.5  22  24  19  16  21.5  17  15  20  10  9.5  17  20.5  19  22.5
A) Compute the sample and population variance and standard deviation.
B) Draw a box plot and indicate any outliers.
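The sample/population distinction in part A can be sketched with the values printed above (the exercise states 50 specimens, but only these appear in this extract), using Python's statistics module:

```python
from statistics import variance, pvariance, stdev, pstdev

# Percent shrinkage values as printed in the exercise.
x = [18.5, 22, 24, 19, 16, 21.5, 17, 15, 20, 10, 9.5, 17, 20.5, 19, 22.5]
s2, sd = variance(x), stdev(x)      # sample: divisor n - 1
p2, pd = pvariance(x), pstdev(x)    # population: divisor n
# The two variances differ only by the factor n / (n - 1).
```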
Exercise 2.31
The data in the table below (Adamson, 1989) are the annual maximum flood-peak inflows to the
Hardap Dam in Namibia, covering the period from October 1962 to September 1987. The range
of these data is from 30 to 6100 m3/s.
Annual maximum flood-peak inflows to Hardap Dam (Namibia); catchment area 12600 km2.
Year           1962-3  1963-4  1964-5  1965-6  1966-7  1967-8  1968-9  1969-70  1970-1
Inflow (m3/s)  1864    44      46      364     911     83      477     457      782
Year           1971-2  1972-3  1973-4  1974-5  1975-6  1976-7  1977-8  1978-9   1979-80
Inflow (m3/s)  6100    197     3259    554     1506    1508    236     635      230
Year           1980-1  1981-2  1982-3  1983-4  1984-5  1985-6  1986-7
Inflow (m3/s)  125     131     30      765     408     347     412
- Find Q1, Q2, and Q3 for the data set above.
- Determine the z-score for each raw score and discuss the values. Are there any unusual data?
- What percentage of the discharges are less than 911 m3/s?
- What percentage of the data are less than 1864 m3/s?
- Find the discharge values corresponding to the 25th, 60th, and 91st percentiles.
- Make a box-and-whisker plot of the data. Find the interquartile range.
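The box-plot bullet above can be checked with the 1.5*IQR outlier fences; a minimal Python sketch, assuming the median-of-halves quartile convention:

```python
from statistics import median

def tukey_fences(data):
    """Lower/upper outlier fences Q1 - 1.5*IQR and Q3 + 1.5*IQR."""
    s = sorted(data)
    n = len(s)
    half = n // 2
    q1 = median(s[:half])           # median of the lower half
    q3 = median(s[half + n % 2:])   # median of the upper half
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Annual maximum inflows to Hardap Dam, m^3/s.
inflows = [1864, 44, 46, 364, 911, 83, 477, 457, 782, 6100, 197, 3259, 554,
           1506, 1508, 236, 635, 230, 125, 131, 30, 765, 408, 347, 412]
lo, hi = tukey_fences(inflows)
outliers = [x for x in inflows if x < lo or x > hi]   # plotted as points beyond the whiskers
```

Under this convention Q1 = 164 and Q3 = 846.5, so the upper fence is 1870.25 m3/s: the inflows 3259 and 6100 plot as outliers, while 1864 falls just inside the fence.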