This document provides an overview of quantitative methods topics covered in a course, including preliminary data analysis techniques. It discusses measuring central tendency through modes, medians, and means. It also covers measuring dispersion through range, mean deviation, variance, and standard deviation. Examples are provided to demonstrate calculating and interpreting these preliminary analysis metrics.
An introduction to SigmaXL's various graphical tools
Established in 1998, SigmaXL Inc. is a leading provider of user-friendly Excel Add-ins for Lean Six Sigma graphical and statistical tools and Monte Carlo simulation.
SigmaXL® customers include market leaders like Agilent, Diebold, FedEx, Microsoft, Motorola and Shell. SigmaXL® software is also used by numerous colleges, universities and government agencies.
Our flagship product, SigmaXL®, was designed from the ground up to be a cost-effective, powerful, and easy-to-use tool that enables users to measure, analyze, improve and control their service, transactional, and manufacturing processes. As an add-in to the already familiar Microsoft Excel, SigmaXL® is ideal for Lean Six Sigma training and application, or use in a college statistics course.
DiscoverSim™ enables you to quantify your risk through Monte Carlo simulation and minimize your risk with global optimization. Business decisions are often based on single-point estimates or averages, which can lead to unexpected outcomes.
DiscoverSim™ allows you to model the uncertainty in your inputs so that you know what to expect in your outputs.
This document provides an overview of key concepts in statistics including:
- Descriptive statistics such as frequency distributions which organize and summarize data
- Inferential statistics which make estimates or predictions about populations based on samples
- Types of variables including quantitative, qualitative, discrete and continuous
- Levels of measurement including nominal, ordinal, interval and ratio
- Common measures of central tendency (mean, median, mode) and dispersion (range, standard deviation)
This document provides an introduction to business statistics. It defines statistics as the science of collecting, organizing, summarizing, presenting, analyzing, and drawing conclusions from data. The document outlines the key components of statistics including descriptive statistics, which summarizes data, and inferential statistics, which makes generalizations about a population based on a sample. It also discusses different types of data, data sources, and the scope and importance of statistics in business decision making.
The document discusses various measures of variability that can be used to describe the spread or dispersion of data, including the range, interquartile range, mean absolute deviation, variance, standard deviation, and coefficient of variation. It also covers how to calculate and interpret these measures of variability for both ungrouped and grouped data. Various other concepts are introduced such as the empirical rule, z-scores, skewness, the 5-number summary, and how to construct and interpret a box-and-whisker plot.
Frequency Measures for Healthcare Professionals (alberpaules)
Frequency distributions summarize data by grouping values of a variable and counting the number of observations in each group. This document discusses measures used to describe frequency distributions, including measures of central tendency (mode, median, mean) and measures of variability. The mode is the most frequent value, median is the middle value, and mean averages all values. These measures summarize the central or typical value in a data set.
This document provides an overview of descriptive statistics as taught in a statistics course (STS 102) at Crescent University, Nigeria. It covers topics like statistical data collection methods, presentation of data through tables and graphs, measures of central tendency and dispersion. The key objectives of descriptive statistics are to summarize and describe characteristics of data through measures, charts and diagrams. Inferential statistics is also introduced as a way to make inferences about populations based on samples.
This document provides an overview of key numerical measures used to describe data, including measures of central tendency (mean, median, mode) and measures of dispersion (range, variance, standard deviation). It defines each measure, provides examples of calculating them, and discusses their characteristics, uses, and advantages/disadvantages. The document also covers weighted means, geometric means, Chebyshev's theorem, and calculating measures for grouped data.
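Two of the less common measures listed above, the weighted mean and the geometric mean, can be sketched in a few lines of Python; the marks and weights below are made-up illustration values, not data from the document:

```python
import math

def weighted_mean(values, weights):
    """Mean in which each value contributes in proportion to its weight."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def geometric_mean(values):
    """n-th root of the product, e.g. for averaging growth rates."""
    return math.prod(values) ** (1 / len(values))

print(weighted_mean([80, 90, 70], [2, 3, 5]))  # (160 + 270 + 350) / 10 = 78.0
print(geometric_mean([2, 8]))                  # sqrt(16) = 4.0
```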
The mean deviation is a measure of how spread out values are from the average. It is calculated by:
1) Finding the mean of all values.
2) Calculating the distance between each value and the mean.
3) Taking the average of those distances. This provides the mean deviation, which tells us how far on average values are from the central mean. Examples show calculating mean deviation for both grouped and ungrouped data sets.
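The three steps above translate directly into Python; the data set below is a small hypothetical example, not one from the document:

```python
def mean_deviation(values):
    """Average absolute distance of each value from the mean."""
    mean = sum(values) / len(values)             # step 1: find the mean
    distances = [abs(x - mean) for x in values]  # step 2: distance of each value from the mean
    return sum(distances) / len(distances)       # step 3: average those distances

data = [3, 6, 6, 7, 8, 11, 15, 16]
print(mean_deviation(data))  # mean is 9, mean deviation is 3.75
```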
1) The document discusses variance and standard deviation, including their definitions and formulas. Variance measures how far data points are from the mean, while standard deviation describes how dispersed the data are from the mean.
2) Examples are provided to demonstrate calculating variance and standard deviation step-by-step. This includes finding the mean, deviations from the mean, summing the squared deviations, and taking the square root.
3) Formulas are given for calculating the mean, variance, and standard deviation of discrete random variables from their probability mass functions.
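The step-by-step calculation in point 2, and the pmf formulas in point 3, can be sketched as follows; the data set and pmf are illustrative, not taken from the document:

```python
import math

def variance(values, sample=False):
    """Average squared deviation from the mean (divide by n - 1 for a sample)."""
    mean = sum(values) / len(values)             # find the mean
    squared = [(x - mean) ** 2 for x in values]  # squared deviations from the mean
    n = len(values) - 1 if sample else len(values)
    return sum(squared) / n                      # sum and average them

def std_dev(values, sample=False):
    """Square root of the variance."""
    return math.sqrt(variance(values, sample))

def pmf_mean_var(pmf):
    """Mean and variance of a discrete random variable from its pmf {x: P(x)}."""
    mu = sum(x * p for x, p in pmf.items())
    var = sum((x - mu) ** 2 * p for x, p in pmf.items())
    return mu, var

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(variance(data), std_dev(data))             # 4.0 2.0
print(pmf_mean_var({0: 0.25, 1: 0.5, 2: 0.25}))  # (1.0, 0.5)
```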
The document discusses using Pareto analysis to minimize monthly electricity bills. It provides a table showing the electricity consumption of different devices in the home. It then explains that Pareto analysis involves organizing factors contributing to a problem by level of impact. Typically, 20% of factors account for 80% of the problem. The document suggests using Pareto analysis to identify which devices consume the most electricity and focusing on reducing usage of those devices to minimize the monthly bill.
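The Pareto procedure described above can be sketched as follows; the device names and kWh figures are hypothetical stand-ins for the document's table:

```python
# Sort devices by consumption and find the smallest set accounting for ~80% of the total.
consumption_kwh = {
    "air conditioner": 320,
    "water heater": 180,
    "refrigerator": 95,
    "lighting": 40,
    "television": 25,
    "router": 10,
}

total = sum(consumption_kwh.values())
ranked = sorted(consumption_kwh.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0.0
vital_few = []                       # the "vital few" devices to target first
for device, kwh in ranked:
    vital_few.append(device)
    cumulative += kwh
    if cumulative / total >= 0.80:   # stop once ~80% of consumption is covered
        break

print(vital_few)
```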
When fitting loss data (insurance) to a distribution, often the parameters that provide a good overall fit will understate the density in the tail.
This method allows one to split the distribution into 2 portions, and use a Pareto distribution to fit the tail.
Presented at the CAS Spring Meeting in Seattle, May 2016.
Statistical Analysis using Central Tendencies (Celia Santhosh)
This document discusses various statistical measures of central tendency, including the mean, median, and mode. It provides definitions and formulas for calculating the arithmetic mean using direct, shortcut, and step deviation methods for individual, discrete, and continuous data series. It also discusses how to calculate the median and weighted mean. The document compares the merits and demerits of the arithmetic mean and provides examples to illustrate the different calculation techniques for central tendencies.
The document discusses various measures of central tendency including the mean, median, and mode. It provides definitions and formulas for calculating the arithmetic mean, weighted arithmetic mean, harmonic mean, and geometric mean. Examples are given to demonstrate calculating each type of mean from both ungrouped and grouped data. The properties, merits, and limitations of each mean are also outlined. Relationships among the different means are explained.
Accuracy refers to how close a measurement is to the true value, while precision refers to the reproducibility of measurements. Accuracy is determined by calculating percentage error compared to the accepted value. Precision depends on the number of significant figures in a measurement as determined by the measuring tool. Random and systematic errors can affect accuracy, while random errors affect precision. The uncertainty of a measurement combines its precision and accuracy errors and is reported with the mean value and at a given confidence level, typically 95%. Propagation of error calculations allow determining the total uncertainty when a value depends on multiple measurements.
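The percentage-error and error-propagation calculations mentioned above can be sketched in Python; the measured and accepted values are made up, and the propagation shown is the usual root-sum-of-squares rule for independent uncertainties:

```python
import math

def percent_error(measured, accepted):
    """How far a measurement is from the accepted (true) value, in percent."""
    return abs(measured - accepted) / abs(accepted) * 100

def combined_uncertainty(*uncertainties):
    """Combine independent uncertainties in quadrature (root sum of squares)."""
    return math.sqrt(sum(u * u for u in uncertainties))

print(round(percent_error(9.98, 9.81), 2))  # 1.73 (percent)
print(combined_uncertainty(3, 4))           # 5.0
```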
This was a presentation I gave to my firm's internal CPE in December 2012. It related to correlation and simple regression models and how we can utilize these statistics in both income and market approaches.
- Forecast products with uncertain demand
- Choose a suitable location for production
- Produce better forecasts
- Use both quantitative and qualitative methods
- Analyze historical data to forecast future trends
- Use weighted factors (a weighted average of independent forecasts)
- Shorten the forecasting duration
Basic Statistics for Class 11, B.Com, BSW, B.A, BBA, MBA (Gaurav Rana)
The document provides an overview of key concepts in statistics for social work. It discusses topics such as data collection methods, organization and presentation of data, measures of central tendency including mean, median and mode, and measures of dispersion. For example, it explains how to calculate the arithmetic mean for both grouped and ungrouped data using direct, assumed mean and step deviation methods. It also discusses how to calculate the median and mode for discrete and continuous data series.
Here are the steps I would take to work out the correlation in each pair:
1. Adult IQ and Annual Income: Both variables are cardinal. Use Pearson's correlation coefficient formula.
2. Consumer Price Index and Sensex: Both variables are cardinal indexes. Use Pearson's formula.
3. Dealer Seniority and Dealer Performance: Seniority is ordinal, performance could be cardinal or ordinal. Convert performance to ranks and use rank correlation coefficient formula.
4. Gold Prices and Real Estate Prices: Both variables are prices and cardinal. Use Pearson's formula.
5. Birth Rate in Germany and Voter Turnout in Kerala: Both variables are percentages and cardinal. Use Pearson's formula.
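A from-scratch sketch of the two formulas those steps call for: Pearson's r for the cardinal pairs, and the rank (Spearman) correlation for the ordinal pair. The seniority and performance numbers are hypothetical:

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient for two cardinal variables."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def ranks(values):
    """Rank each value (1 = smallest); ties receive the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                       # extend over any tied values
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def spearman_rho(xs, ys):
    """Rank correlation: Pearson's r applied to ranks (e.g. seniority vs performance)."""
    return pearson_r(ranks(xs), ranks(ys))

seniority = [1, 2, 3, 4, 5]      # hypothetical dealer data
performance = [2, 1, 4, 3, 5]
print(spearman_rho(seniority, performance))  # 0.8
```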
This document discusses various quantitative forecasting techniques. It describes time series forecasting and the components of time series data including trend, seasonal, cyclical, and random variations. It then explains different forecasting methods such as the naive approach, moving averages, exponential smoothing, and least squares regression. It provides examples of how to calculate forecasts using these methods and compares their forecast errors using measures like mean absolute deviation, mean squared error, and mean absolute percent error to evaluate forecast accuracy.
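Two of the named methods, the moving average and simple exponential smoothing, together with mean absolute deviation (MAD) as an accuracy measure, can be sketched as follows (the demand figures are hypothetical):

```python
def moving_average_forecast(history, window=3):
    """Forecast the next period as the mean of the last `window` actuals."""
    return sum(history[-window:]) / window

def exponential_smoothing(history, alpha=0.3):
    """F(t+1) = F(t) + alpha * (A(t) - F(t)), seeded with the first actual."""
    forecast = history[0]
    for actual in history[1:]:
        forecast = forecast + alpha * (actual - forecast)
    return forecast

def mad(actuals, forecasts):
    """Mean absolute deviation between actuals and forecasts."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

demand = [120, 130, 110, 140, 150, 135]
print(moving_average_forecast(demand))  # (140 + 150 + 135) / 3
print(exponential_smoothing(demand))
print(mad([100, 110], [90, 120]))       # 10.0
```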
This document provides an overview of risk management and quality control using statistical process control charts. It discusses [1] managing quality risk through control charts, [2] different types of risks including material, consequential, social, legal, and political risks, and [3] best practices for risk management including policies, methodologies, and resources. The document also covers control chart fundamentals, calculating control limits, identifying assignable causes, and process improvement.
A Monte Carlo simulation involves modeling a system with random variables to estimate outcomes. It repeats calculations using randomly generated values for the variables and averages the results. The document discusses using Monte Carlo simulations to model demand in business situations with uncertain variables. Examples show generating random numbers to simulate daily product demand over multiple days and calculating the average demand from the results.
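A minimal version of the demand simulation described above; the demand levels and their probabilities are illustrative assumptions, not figures from the document:

```python
import random

random.seed(42)  # fixed seed for reproducible runs

demand_levels = [10, 20, 30, 40]
probabilities = [0.2, 0.4, 0.3, 0.1]  # must sum to 1

def simulate_average_demand(days):
    """Sample daily demand from the distribution and average over many days."""
    total = 0
    for _ in range(days):
        total += random.choices(demand_levels, weights=probabilities)[0]
    return total / days

avg = simulate_average_demand(10_000)
print(avg)  # close to the expected value 10*0.2 + 20*0.4 + 30*0.3 + 40*0.1 = 23
```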
Why Study Statistics, Arunesh Chand Mankotia, 2004 (Consultonmic)
Statistics is the study of collecting, organizing, summarizing, and interpreting data. It helps understand uncertainty and make informed decisions based on data. Key concepts in statistics include measures of central tendency like mean, median and mode, measures of variability like range and standard deviation, and understanding data distributions and shapes. Statistical thinking focuses on understanding variation in systems and using descriptive and inferential statistics to transform data into information and knowledge.
This document discusses measures of variation in data, including range, variance, and standard deviation. It provides examples of calculating these measures for both individual data points and grouped data. The key measures are:
- Range is the highest value minus the lowest value.
- Variance is the average of the squared distances from the mean.
- Standard deviation is the square root of the variance, measuring average deviation from the mean.
- Coefficient of variation allows comparison of variables with different units by expressing standard deviation as a percentage of the mean.
- Chebyshev's theorem and the empirical rule specify what proportion of data falls within a given number of standard deviations of the mean.
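The coefficient of variation and Chebyshev's bound from the list above can be sketched in a few lines (population formulas are used; the data set is illustrative):

```python
import math

def coefficient_of_variation(values):
    """Standard deviation expressed as a percentage of the mean."""
    mean = sum(values) / len(values)
    var = sum((x - mean) ** 2 for x in values) / len(values)
    return math.sqrt(var) / mean * 100

def chebyshev_min_proportion(k):
    """At least 1 - 1/k^2 of any data lies within k standard deviations of the mean."""
    return 1 - 1 / k ** 2

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(coefficient_of_variation(data))  # std 2 / mean 5 * 100 = 40.0
print(chebyshev_min_proportion(2))     # 0.75
```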
This document provides an overview of introductory statistics concepts including:
- Descriptive statistics such as frequency distributions, histograms, and measures of central tendency are used to summarize and present data.
- Inferential statistics such as estimation and hypothesis testing are used to draw conclusions about populations based on sample data.
- Data can be organized and presented through tables, graphs including bar charts, pie charts, and scatter plots.
This document discusses metrics to measure the adoption of a new application and user satisfaction. It examines adoption time, average customer satisfaction, and most used features. Adoption time measures how long it takes users to reach normal efficiency with the new application. User satisfaction is measured through customer satisfaction surveys and ease of use. The document also provides recommendations to improve the user interface and keyboard for greater usability.
The document discusses variance and standard deviation. Variance measures how dispersed or spread out values are from the mean, while standard deviation is the positive square root of variance. Standard deviation indicates the average amount of variation from the mean. A low standard deviation means values are close to the mean, while a high standard deviation means more variation and dispersion from the mean. The coefficient of variation measures standard deviation relative to the mean and is used to compare the variability of different data sets even if the means are different.
The document discusses various measures of dispersion used to quantify how data values are spread around the average value. It describes measures such as range, interquartile range, mean deviation, standard deviation, variance, and coefficient of variation. Standard deviation is highlighted as the most important measure of dispersion as it is widely used and capable of further algebraic treatment. Different methods for calculating standard deviation for individual series, discrete series, and continuous series are provided along with examples. The key properties and appropriate uses of each measure are also outlined.
The document discusses various measures used to summarize sample data, including measures of central tendency (location) and spread (dispersion). It describes how to calculate the arithmetic mean, mode, and median of raw data and frequency tables. The mean is the average value, the mode is the most frequent observation, and the median is the middle value when data is ordered from lowest to highest. For skewed data, the mode or median may better indicate central tendency than the mean. The document also introduces the interquartile range as a measure of spread and shows how to calculate percentiles from raw and grouped frequency data.
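The median and percentile calculations described above can be sketched for raw (ungrouped) data; the percentile uses linear interpolation between order statistics, which is one of several common conventions:

```python
def median(values):
    """Middle value of the ordered data (average of the two middle values if n is even)."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def percentile(values, p):
    """p-th percentile by linear interpolation between sorted order statistics."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

data = [7, 1, 3, 5, 9]
print(median(data))                                     # 5
print(percentile(data, 25))                             # 3.0
print(percentile(data, 75) - percentile(data, 25))      # interquartile range: 4.0
```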
The document discusses Toyota's recalls of millions of vehicles in 2009-2010 due to issues with accelerators getting stuck. This damaged Toyota's reputation for quality and reliability. Rapid global expansion may have compromised quality systems as production moved overseas. Solutions included overhauling quality processes, communicating directly with customers, and regaining trust through a public relations campaign. The recalls significantly hurt Toyota's financial performance in the short term through lost sales and costs. Marketing would play a role in rebuilding Toyota's brand image and regaining customer confidence.
Reebok entered the Indian sportswear market in 1995 through a joint venture. It targeted the premium segment with higher priced shoes. Through customization, extensive retail presence, and endorsement of Indian cricket stars, Reebok established itself as the market leader with 51% share by 2007. In contrast, Nike entered through a licensing agreement and was slow to develop products for India, position itself effectively, and expand retail presence, allowing Reebok to outperform it initially in the Indian market.
The document discusses Reebok and Nike's entry and performance in the Indian sportswear market in the 1990s and 2000s. Some key points:
- Reebok entered India in 1995 through a joint venture, while Nike entered through a licensing agreement. Reebok customized products for the Indian market and established an extensive retail presence, becoming the market leader with 51% share by 2007.
- In contrast, Nike was slow to develop products for India and relied on its licensing partner for distribution, limiting its market penetration. It initially positioned itself as a lifestyle brand rather than focusing on sports.
- To compete with Reebok's strong cricket brand associations, Nike became the official app
Toyota recalled over 6 million vehicles in the US in late 2009 and early 2010 due to issues with accelerators sticking in certain models. This was a major blow to Toyota's reputation for quality and reliability. Toyota suspended sales and production of some models as a result. The recalls pointed to potential dangers large corporations face in a global economy and the importance of quality for Toyota's operations and brand image, which had been built on its quality systems and processes. However, some analysts felt Toyota may have sacrificed quality for rapid global expansion and the goal of becoming the largest automaker.
The document provides guidance on creating an effective press release. It defines a press release as a written communication distributed to media to provide information and draw attention to something. It should be 1-2 pages, written clearly and concisely. The document outlines key sections of a press release, including the headline, dateline, lead paragraph answering who, what, when, where, why and how, quotes, and ending with "-30-". It emphasizes keeping press releases short, factual, and written from a journalist's perspective.
This document provides information on how to write reports. It discusses that reports present focused content to a specific audience, often as the result of an investigation. Reports serve to give information, record events for decision making, and recommend specific actions. The document outlines the different types of reports and the typical five stages of report writing: defining the problem and purpose, identifying issues, conducting research, analyzing data, and providing conclusions and recommendations. It also discusses the common structure and layout of reports, including front matter, main body, and back matter sections.
This document discusses different types of claims made in business communications. It defines a claim as a request for an adjustment and distinguishes between routine claims, which assume quick approval due to guarantees or contracts, and persuasive claims, which require explanations and arguments to obtain approval. The document advises businesses to welcome all claims as a way to improve customer satisfaction, retain customers, and gain a positive reputation for addressing issues. It also provides tips for writing effective persuasive claims.
The document provides information about memos, including their purpose, characteristics, formats, parts, and an example. Memos are used for inter-office communication to bring attention to problems, provide information, and persuade. They are short, direct, and use a block or modified block format. Key parts include the header, opening, summary, discussion, action, and attachments. The example memo discusses changing an advertising strategy based on market research findings. It focuses the discussion on internet and television advertising that targets young adults.
1) Meetings provide an opportunity for stakeholders to come to a common understanding and allow for discussion to move things forward.
2) An effective meeting follows the PROOF framework: Planning, Reaching out, Organizing, Orchestrating, and Following through.
3) Preparing for a meeting requires clarity on purpose, participants, expectations, time, and logistics. The agenda should be circulated in advance.
The document discusses the various parts of a formal business letter and different letter formats. It outlines the standard elements of a letter which typically include: the heading with return address and date, inside address, salutation/greeting, subject line (optional), body paragraphs, complimentary close, signature, and other optional elements like enclosures. It also compares the American and British styles of letter formatting and punctuation. Finally, it provides examples of three common letter formats: block, modified block, and semi-block indented.
The document provides principles for effective business communication. It discusses how most people are poor communicators and listeners. It emphasizes the importance of clear, concise written communication and provides 12 principles to improve writing skills, including: orienting writing towards the receiver; using simple vocabulary; using concrete rather than abstract words; using active voice; and ensuring coherence, unity, and flow. The document also covers style and tone considerations for business writing.
This document discusses working capital management. It defines current assets and outlines factors that influence working capital requirements, such as a firm's nature of business and production seasonality. The document also discusses determining the optimal level of current assets by balancing liquidity and carrying costs. Additionally, it examines financing current assets through a mix of long-term and short-term sources and calculating cash requirements for working capital based on a firm's operating cycle.
Dividends and _dividend_policy_powerpoint_presentation[1]Pooja Sakhla
The document discusses various aspects of dividends and dividend policy. It begins by defining different types of cash dividends that companies can issue, such as regular cash dividends paid quarterly. It also explains the dividend payment process and timeline. The document then discusses whether dividend policy truly matters or if it is irrelevant under certain assumptions. It also outlines different dividend policies companies may follow, such as residual dividend policies, and considers why companies may prefer high or low dividend payouts. The document concludes by discussing stock repurchases and stock dividends as alternatives to cash dividends.
The document outlines the systems development process and project management. It discusses the importance of involving end users and using prototyping. The systems development life cycle includes systems investigation, feasibility study, systems analysis, systems design, and implementation. The feasibility study evaluates if a project is operationally, economically, technically, legally and humanly feasible. Systems analysis studies user information needs and produces functional requirements. Systems design develops the logical and physical design of the system. Prototyping allows for rapid testing and refinement of designs with end users.
The document outlines the systems development process and project management. It discusses the importance of involving users, prototyping, and following steps like feasibility analysis, systems analysis, design, and implementation. The learning objectives cover using the systems development process as a problem-solving framework, describing the development cycle steps, explaining prototyping, understanding project management, and identifying implementation and evaluation activities.
This document discusses decision support systems and artificial intelligence applications in business. It covers topics like management information systems, online analytical processing, dashboards, expert systems, neural networks, and more. The key learning objectives are to identify how these technologies can support business decisions and to give examples of their uses. Case studies provide real-world illustrations of dashboard tools, automated decision making, and AI implementation challenges.
This chapter discusses decision support systems (DSS) and how they differ from traditional management information systems (MIS). DSS provide interactive support to managers during semistructured decision making through tools like analytical models, databases, and computer modeling. MIS produce predefined reports to support more structured decisions. The chapter outlines several types of DSS including executive information systems, enterprise portals, online analytical processing (OLAP), geographic information systems, and data visualization systems. It also discusses how various analytical techniques can be used in DSS to support decision making.
The document outlines the key learning objectives of Chapter 1 which introduce fundamental concepts of information systems. It provides examples of how information systems support business functions at a company called Sew What? Inc. The chapter defines what an information system is, the difference between an information system and information technology, and the types of systems used by businesses like transaction processing, management information, and expert systems. It also discusses the challenges and opportunities of information technology and careers in the field.
An information system is defined as software that helps organize and analyze data to turn it into useful information for decision making in an organization. The document discusses the need for information systems and their structure, providing an example. It introduces information and information systems, and explains that the purpose of an IS is to take raw data and make it into useful information that can be used for decision making.
1) The document discusses pricing strategies and promotional schemes used by domestic airlines in India to compete in the oligopolistic aviation market. It describes schemes like APEX fares that offered discounted tickets for advance purchases.
2) It provides details on various schemes launched by different airlines like "Wings of Freedom" by Indian Airlines and "Sixer" by Air Sahara to attract customers. It also explains concepts like kinked demand curves that are characteristics of oligopoly markets.
3) The implementation of discounted fare schemes through innovations like APEX led to increased air travel among the middle class in India and benefited the tourism industry. However, continued growth of the aviation industry remains dependent on improving infrastructure
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Things to Consider When Choosing a Website Developer for your Website | FODUUFODUU
Choosing the right website developer is crucial for your business. This article covers essential factors to consider, including experience, portfolio, technical skills, communication, pricing, reputation & reviews, cost and budget considerations and post-launch support. Make an informed decision to ensure your website meets your business goals.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
CAKE: Sharing Slices of Confidential Data on BlockchainClaudio Di Ciccio
Presented at the CAiSE 2024 Forum, Intelligent Information Systems, June 6th, Limassol, Cyprus.
Synopsis: Cooperative information systems typically involve various entities in a collaborative process within a distributed environment. Blockchain technology offers a mechanism for automating such processes, even when only partial trust exists among participants. The data stored on the blockchain is replicated across all nodes in the network, ensuring accessibility to all participants. While this aspect facilitates traceability, integrity, and persistence, it poses challenges for adopting public blockchains in enterprise settings due to confidentiality issues. In this paper, we present a software tool named Control Access via Key Encryption (CAKE), designed to ensure data confidentiality in scenarios involving public blockchains. After outlining its core components and functionalities, we showcase the application of CAKE in the context of a real-world cyber-security project within the logistics domain.
Paper: https://doi.org/10.1007/978-3-031-61000-4_16
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
2. varsha Varde 2
Course Coverage
• Essential Basics for Business Executives
• Data Classification & Presentation Tools
• Preliminary Analysis & Interpretation of Data
• Correlation Model
• Regression Model
• Time Series Model
• Forecasting
• Uncertainty and Probability
• Sampling Techniques
• Estimation and Testing of Hypothesis
4. Preliminary Analysis of Data
Central Tendency of the Data at Hand:
• Need to Size Up the Data at a Glance
• Find a Single Number (an Average) to Summarize the Huge Mass of Data Meaningfully
• Tools: Mode, Median, Arithmetic Mean, Weighted Average
5. Mode, Median, and Mean
• Mode: Most Frequently Occurring Score
• Median: That Value of the Variable Above Which Exactly Half of the Observations Lie
• Arithmetic Mean: Ratio of the Sum of the Values of a Variable to the Total Number of Values
• Mode Is Found by Mere Observation, Median Needs Counting, Mean Requires Computation
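As a quick sketch of these three measures, Python's standard statistics module computes all of them directly (the ages below are a hypothetical sample, not data from the slides):

```python
import statistics

# Hypothetical sample of participant ages
ages = [22, 25, 25, 27, 30, 31, 35]

print(statistics.mode(ages))    # most frequent value: 25
print(statistics.median(ages))  # middle value of the ordered list: 27
print(statistics.mean(ages))    # sum of values / number of values
```

Note how the mode can be read off at a glance, the median requires ordering and counting, and the mean requires actual computation.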
7. This Group
This Group of Participants:
• Mode of Age is ____ Years
• Median is ____ Years
• Arithmetic Mean is ____ Years
8. Arithmetic Mean - Example
Product   Return on Investment (%)
A         10
B         30
C         5
D         20
Total     65
9. Arithmetic Mean - Example
• Arithmetic Mean: 65 / 4 = 16.25%
• Query: But Are All Products of Equal Importance to the Company?
• For Instance, What Are the Sales Volumes of Each Product? Are They Identical?
• If Not, the Arithmetic Mean Can Mislead.
10. Weighted Average - Example
Product   RoI   Sales (Mn Rs)   Weight   RoI x W
A         10    400             0.20     2.00
B         30    200             0.10     3.00
C         5     900             0.45     2.25
D         20    500             0.25     5.00
Total     65    2000            1.00     12.25 (Wt. Av.)
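The weighted average above can be sketched in a few lines of Python (the RoI and sales figures are from the slide; variable names are our own):

```python
# Weighted average RoI, using sales volumes as weights
roi   = [10, 30, 5, 20]        # RoI (%) of products A, B, C, D
sales = [400, 200, 900, 500]   # sales in Mn Rs

# Each weight is the product's share of total sales: 0.20, 0.10, 0.45, 0.25
weights = [s / sum(sales) for s in sales]
weighted_avg = sum(r * w for r, w in zip(roi, weights))
print(weighted_avg)  # 12.25, versus the unweighted mean of 16.25
```

Because product C (the lowest RoI) carries the largest sales weight, the weighted average falls well below the simple mean.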
11. A Comparison
• Mode: Easiest, At a Glance, but Crude
• Median: Disregards Magnitude of Observations, Only Counts Their Number
• Arithmetic Mean: Outliers Vitiate It
• Weighted Average: Useful for Averaging Ratios
• Symmetrical Distribution: Mode = Median = Mean
• Positively Skewed Distribution: Mode < Mean
• Negatively Skewed Distribution: Mode > Mean
12. Preliminary Analysis of Data
Measure of Dispersion in the Data:
• An 'Average' Is Insufficient to Summarize Huge Data Spread over a Wide Range
• Need Another Number to Know How Widely the Values Are Spread
• Tools: Range & Mean Deviation; Variance & Standard Deviation; Coefficient of Variation
13. Range and Mean Deviation
• Range: Difference Between the Smallest and the Largest Observation
• Mean Deviation: Arithmetic Mean of the Deviations of the Observations from an Average, Usually the Mean
14. Computing Mean Deviation
• Select a Measure of Average, Say, the Mean.
• Compute the Absolute Difference Between Each Value of the Variable and the Mean (Disregard the Sign).
• Multiply Each Difference by the Concerned Frequency.
• Sum Up the Products.
• Divide by the Sum of All Frequencies.
• Mean Deviation Is Thus a Weighted Average of the Absolute Deviations.
19. Mean Deviation
• Sum of the Products: 318.12
• Sum of All Frequencies: 50
• Mean Deviation: 318.12 / 50 = 6.36
• Let Us Compute with a Simpler Example
20. Machine Downtime Data in Minutes per Day for 100 Working Days
Frequency Distribution
Downtime in Minutes   No. of Days
00 – 10               20
10 – 20               40
20 – 30               20
30 – 40               10
40 – 50               10
Total                 100
21. Machine Downtime Data in Minutes per Day for 100 Working Days
Frequency Distribution
Downtime Midpoints   No. of Days
05                   20
15                   40
25                   20
35                   10
45                   10
Total                100
22. Arithmetic Mean
Downtime Midpoints   No. of Days   Product
05                   20            05 x 20 = 100
15                   40            15 x 40 = 600
25                   20            25 x 20 = 500
35                   10            35 x 10 = 350
45                   10            45 x 10 = 450
Total                100           2000
23. Arithmetic Mean
• Arithmetic Mean Is the Average of the Observed Downtimes.
• Arithmetic Mean = Total Observed Downtime / Total Number of Days
• Arithmetic Mean = 2000 / 100 = 20 Minutes
• Average Machine Downtime Is 20 Minutes.
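The grouped-data mean above (sum of midpoint-times-frequency products, divided by total frequency) can be sketched as:

```python
# Machine downtime frequency distribution from the slides
midpoints = [5, 15, 25, 35, 45]    # class midpoints (minutes)
days      = [20, 40, 20, 10, 10]   # frequencies (no. of days)

# Weighted sum of midpoints / total number of days
mean = sum(m * f for m, f in zip(midpoints, days)) / sum(days)
print(mean)  # 20.0 minutes
```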
24. Mean Deviation
Downtime Midpoints   No. of Days   Deviation from Mean
05                   20            |05 – 20| = 15
15                   40            |15 – 20| = 05
25                   20            |25 – 20| = 05
35                   10            |35 – 20| = 15
45                   10            |45 – 20| = 25
Total                100
25. Mean Deviation
Downtime Midpoints   No. of Days   Deviation from Mean   Products
05                   20            |05 – 20| = 15        15 x 20 = 300
15                   40            |15 – 20| = 05        05 x 40 = 200
25                   20            |25 – 20| = 05        05 x 20 = 100
35                   10            |35 – 20| = 15        15 x 10 = 150
45                   10            |45 – 20| = 25        25 x 10 = 250
Total                100                                 1000
26. Mean Deviation
• Definition: Mean Deviation Is the Mean of the Deviations (Disregarding Sign) of the Observed Values from the Average.
• In This Example, Mean Deviation Is the Weighted Average (with Frequencies as Weights) of the Deviations of the Observed Downtimes from the Average Downtime.
• Mean Deviation = 1000 / 100 = 10 Minutes
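The same calculation in Python: take the absolute deviation of each midpoint from the mean, weight it by its frequency, and divide by the total frequency (data from the downtime table):

```python
midpoints = [5, 15, 25, 35, 45]    # class midpoints (minutes)
days      = [20, 40, 20, 10, 10]   # frequencies (no. of days)
n = sum(days)

mean = sum(m * f for m, f in zip(midpoints, days)) / n  # 20.0 minutes

# Mean deviation: frequency-weighted average of |value - mean|
mean_dev = sum(abs(m - mean) * f for m, f in zip(midpoints, days)) / n
print(mean_dev)  # 10.0 minutes
```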
27. Variance
• Definition: Variance Is the Average of the Squares of the Deviations of the Observed Values from the Mean.
28. Standard Deviation
• Interpretation: Standard Deviation Indicates the Typical Amount by Which the Values Differ from the Mean, Ignoring the Sign of the Difference.
• Formula: Positive Square Root of the Variance.
29. Variance
Downtime Midpoints   No. of Days   Difference from Mean   Square   Products
05                   20            05 – 20 = -15          225      225 x 20 = 4500
15                   40            15 – 20 = -05          25       25 x 40 = 1000
25                   20            25 – 20 = 05           25       25 x 20 = 500
35                   10            35 – 20 = 15           225      225 x 10 = 2250
45                   10            45 – 20 = 25           625      625 x 10 = 6250
Total                100                                           14500
30. Variance & Standard Deviation
• Variance = 14500 / 100 = 145 Minutes Squared
• Standard Deviation = Square Root of 145 = 12.04 Minutes
• Exercise: For This Group of 65, Compute the Variance & Standard Deviation of Age
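Variance and standard deviation from the same frequency table, following the definition (average of squared deviations, then the positive square root):

```python
import math

midpoints = [5, 15, 25, 35, 45]    # class midpoints (minutes)
days      = [20, 40, 20, 10, 10]   # frequencies (no. of days)
n = sum(days)

mean = sum(m * f for m, f in zip(midpoints, days)) / n  # 20.0 minutes

# Variance: frequency-weighted average of squared deviations from the mean
variance = sum((m - mean) ** 2 * f for m, f in zip(midpoints, days)) / n
std_dev = math.sqrt(variance)  # positive square root of the variance

print(variance)             # 145.0 (minutes squared)
print(round(std_dev, 2))    # 12.04 minutes
```

Note the variance carries squared units; taking the square root restores the original unit (minutes), which is why the standard deviation is the more interpretable of the two.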
31. Simpler Formula for Variance
• Logical Definition: Variance Is the Average of the Squares of the Deviations of the Observed Values from the Mean.
• Simpler Formula: Variance Is the Mean of the Squares of the Values Minus the Square of the Mean of the Values.
32. Variance (by Simpler Formula)
Downtime Midpoints   No. of Days   Squares   Products
05                   20            25        25 x 20 = 500
15                   40            225       225 x 40 = 9000
25                   20            625       625 x 20 = 12500
35                   10            1225      1225 x 10 = 12250
45                   10            2025      2025 x 10 = 20250
Total                100                     54500
33. Variance (by Simpler Formula)
• Mean of the Squares of Values = 54500 / 100 = 545
• Square of the Mean of Values = 20 x 20 = 400
• Variance = Mean of Squares of Values Minus Square of Mean of Values = 545 – 400 = 145
• Standard Deviation = Square Root of 145 = 12.04
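The simpler (shortcut) formula can be checked in code: mean of the squares minus the square of the mean gives the same 145 as the definitional calculation:

```python
midpoints = [5, 15, 25, 35, 45]    # class midpoints (minutes)
days      = [20, 40, 20, 10, 10]   # frequencies (no. of days)
n = sum(days)

mean_of_squares = sum(m ** 2 * f for m, f in zip(midpoints, days)) / n  # 545.0
mean = sum(m * f for m, f in zip(midpoints, days)) / n                  # 20.0

# Shortcut formula: Var = mean of squares - square of mean
variance = mean_of_squares - mean ** 2
print(variance)  # 545 - 400 = 145.0
```

The shortcut needs only one pass over the data, which is why it is preferred for hand computation.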
34. Significance of Std. Deviation
In a Normal Frequency Distribution:
• 68% of Values Lie in the Span of Mean Plus/Minus One Standard Deviation.
• 95% of Values Lie in the Span of Mean Plus/Minus Two Standard Deviations.
• 99.7% of Values Lie in the Span of Mean Plus/Minus Three Standard Deviations.
This Holds Roughly for Marginally Skewed Distributions.
35. Machine Downtime Data in Minutes per Day for 100 Working Days
Frequency Distribution
Downtime in Minutes   No. of Days
00 – 10               20
10 – 20               40
20 – 30               20
30 – 40               10
40 – 50               10
Total                 100
36. Interpretation from Mean & Std Dev: Machine Downtime Data
• Mean = 20 and Standard Deviation = 12
• Span of One Std. Dev. = 20 – 12 to 20 + 12 = 8 to 32: 60% of Values
• Span of Two Std. Dev. = 20 – 24 to 20 + 24 = -4 to 44: 95% of Values
• Span of Three Std. Dev. = 20 – 36 to 20 + 36 = -16 to 56: 100% of Values
38. Interpretation from Mean & Std Dev: Sales Orders Data
• Mean = 9.82 & Standard Deviation = 6.36
• Round Off to: Mean 10 and Std. Dev. 6
• Span of One Std. Dev. = 10 – 6 to 10 + 6 = 4 to 16: 31 Values (62%)
• Span of Two Std. Dev. = 10 – 12 to 10 + 12 = -2 to 22: 45 Values (90%)
• Span of Three Std. Dev. = 10 – 18 to 10 + 18 = -8 to 28: 47 Values (94%)
39. varsha Varde 39
BIENAYME_CHEBYSHEV RULE
• For any distribution, the percentage of observations lying within ±k standard deviations of the mean is at least (1 - 1/k²) x 100% for k > 1.
• For k = 2, at least (1 - 1/4) x 100% = 75% of observations are contained within 2 standard deviations of the mean.
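The rule can be written as a one-line function; `chebyshev_bound` is an illustrative name:

```python
def chebyshev_bound(k):
    """Minimum percentage of observations within k standard
    deviations of the mean, valid for any distribution (k > 1)."""
    if k <= 1:
        raise ValueError("bound is informative only for k > 1")
    return (1 - 1 / k ** 2) * 100

print(chebyshev_bound(2))            # 75.0
print(round(chebyshev_bound(3), 1))  # 88.9
```

Note how much weaker these guarantees are than the normal-distribution figures (95% and 99.7%): Chebyshev holds for any shape of distribution.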
40. Coefficient of Variation
• Std. Deviation and Other Measures of Dispersion Carry Units of Measurement.
• To Compare Dispersion in Many Sets of Data (Absenteeism, Production, Profit), We Must Eliminate the Unit of Measurement.
• Otherwise it's Apples vs. Oranges vs. Mangoes.
• Coefficient of Variation is the Ratio of Standard Deviation to Arithmetic Mean.
• CoV is Free of Unit of Measurement.
41. Coefficient of Variation
• In Our Machine Downtime Example,
Coefficient of Variation is 12.04 / 20 = 0.6
or 60%
• In Our Sales Orders Example, Coefficient
of Variation is 6.36 / 9.82 = 0.65 or 65%
• The series for which CV is greater is said to be more variable, or less consistent, less uniform, less stable, or less homogeneous.
43. Example
• Mean and SD of dividends on equity stocks of
TOMCO & Tinplate for the past six years is as
follows
• Tomco: Mean = 15.42%, SD = 4.01%
• Tinplate: Mean = 13.83%, SD = 3.19%
• CV: Tomco = 4.01/15.42 = 26.01%, Tinplate = 3.19/13.83 = 23.07%
• Since the CV of Tinplate's dividend is lower, the return on Tinplate stock is more stable
• For an investor seeking stable returns, it is better to invest in the scrips of Tinplate
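The comparison can be reproduced with a small sketch (illustrative names; figures from the slide):

```python
def coefficient_of_variation(mean, sd):
    """CV as a ratio; multiply by 100 for a percentage."""
    return sd / mean

cv_tomco    = coefficient_of_variation(15.42, 4.01)   # dividend mean, SD in %
cv_tinplate = coefficient_of_variation(13.83, 3.19)
print(round(cv_tomco * 100, 2))      # 26.01
print(round(cv_tinplate * 100, 2))   # 23.07
```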
44. Exercise
• List Ratios Commonly used in Cricket.
• Study Individual Scores of Indian Batsmen
at the Last One Day Cricket Match.
• Are they Nominal, Ordinal or Cardinal
Numbers? Discrete or Continuous?
• Find Median & Arithmetic Mean.
• Compute Range, Mean Deviation, Variance, Standard Deviation & CoV.
45. Steps in Constructing a Frequency Distribution (Histogram)
1. Determine the number of classes
2. Determine the class width
3. Locate class boundaries
4. Use Tally Marks for Obtaining
Frequencies for each class
46. Rule of Thumb
• Not too few, to avoid losing information content, and not too many, to avoid losing the pattern
• The number of classes chosen is usually between 6 and 15
• Subject to the above, the number of classes may be taken as the square root of the number of data points
• The more data one has, the larger the number of classes
47. Rule of Thumb
• Every item of data should be included in
one and only one class
• Adjacent classes should not have a gap between them
• Classes should not overlap
• Class intervals should be of the same
width to the extent possible
48. Illustration
Frequency and relative frequency distributions
(Histograms):
Example
Weight Loss Data
20.5 19.5 15.6 24.1 9.9
15.4 12.7 5.4 17.0 28.6
16.9 7.8 23.3 11.8 18.4
13.4 14.3 19.2 9.2 16.8
8.8 22.1 20.8 12.6 15.9
• Objective: Provide a useful summary of the available
information
49. Illustration
• Method: Construct a statistical graph called a “histogram” (or frequency distribution)
Weight Loss Data
class   boundaries   freq, f   rel. freq, f/n
1       5.0-9.0      3         3/25 (.12)
2       9.0-13.0     5         5/25 (.20)
3       13.0-17.0    7         7/25 (.28)
4       17.0-21.0    6         6/25 (.24)
5       21.0-25.0    3         3/25 (.12)
6       25.0-29.0    1         1/25 (.04)
Totals               25        1.00
Let
• k = # of classes
• max = largest measurement
• min = smallest measurement
• n = sample size
• w = class width
50. Formulas
• k = Square Root of n
• w = (max - min)/k
• Square Root of 25 = 5, but we used k = 6
• w = (28.6 - 5.4)/6 = 3.87, rounded up to w = 4.0
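The four construction steps can be sketched end-to-end on the weight-loss data; `lower = 5.0` is the slide's chosen lower boundary, just below min = 5.4:

```python
import math

data = [20.5, 19.5, 15.6, 24.1, 9.9, 15.4, 12.7, 5.4, 17.0, 28.6,
        16.9, 7.8, 23.3, 11.8, 18.4, 13.4, 14.3, 19.2, 9.2, 16.8,
        8.8, 22.1, 20.8, 12.6, 15.9]

k = 6                                         # sqrt(25) = 5, raised to 6 as on the slide
w = math.ceil((max(data) - min(data)) / k)    # (28.6 - 5.4)/6 = 3.87, rounded up to 4
lower = 5.0                                   # lower boundary of the first class

freqs = [0] * k
for x in data:                                # tally each value into its class
    idx = min(int((x - lower) // w), k - 1)
    freqs[idx] += 1

print(w, freqs)                               # 4 [3, 5, 7, 6, 3, 1]
```

The resulting frequencies match the table on the previous slide.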
51. Numerical Methods
• Measures of Central Tendency
1. Mean (Arithmetic, Geometric, Harmonic)
2. Median
3. Mode
• Measures of Dispersion (Variability)
1. Range
2. Mean Absolute Deviation (MAD)
3. Variance
4. Standard Deviation
52. Measures of Central Tendency
• Given a sample of measurements (x1, x2, · · ·, xn) where
n = sample size
xi = value of the ith observation in the sample
• 1. Arithmetic Mean
AM of x = (x1 + x2 + ··· + xn)/n = ∑xi/n
• 2. Geometric Mean
GM of x = (x1 · x2 · x3 ··· xn)^(1/n)
• 3. Weighted Average = (w1x1 + w2x2 + ··· + wnxn)/(w1 + w2 + ··· + wn) = ∑wixi/∑wi
53. Example
• Given a sample of 5 test grades
(90, 95, 80, 60, 75)
Then n=5; x1=90,x2=95,x3=80,x4=60,x5=75
• AM of x =( 90 + 95 + 80 + 60 + 75)/5 = 400/5=80
• GM of x = (90 x 95 x 80 x 60 x 75)^(1/5) = (3,078,000,000)^(1/5) ≈ 79
• Weighted Average: w1=1, w2=2, w3=2, w4=3, w5=2
WM of x = (1x90 + 2x95 + 2x80 + 3x60 + 2x75)/10 = 770/10 = 77
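The three averages can be checked with a short sketch:

```python
grades  = [90, 95, 80, 60, 75]
weights = [1, 2, 2, 3, 2]

am = sum(grades) / len(grades)        # 400/5 = 80.0
product = 1
for g in grades:
    product *= g                      # 3,078,000,000
gm = product ** (1 / len(grades))     # fifth root, about 79
wm = sum(w * g for w, g in zip(weights, grades)) / sum(weights)  # 770/10 = 77.0
print(am, round(gm), wm)              # 80.0 79 77.0
```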
54. Measures of Central Tendency
• Sample Median
• The median of a sample (data set) is the middle number when the measurements are arranged in ascending order.
• Note:
If n is odd, the median is the middle number.
If n is even, the median is the average of the middle two numbers.
• Example 1: Sample (9, 2, 7, 11, 14), n = 5
• Step 1: arrange in ascending order
• 2, 7, 9, 11, 14
• Step 2: med = 9.
• Example 2: Sample (9, 2, 7, 11, 6, 14), n = 6
• Step 1: 2, 6, 7, 9, 11, 14
• Step 2: med = (7+9)/2=8
Remarks:
• (i) AM of x is sensitive to extreme values
• (ii) the median is insensitive to extreme values (because the median is a measure of location or position)
• 3. Mode
• The mode is the value of x (observation) that occurs with the greatest frequency.
• Example: Sample: (9, 2, 7, 11, 14, 7, 2, 7), mode = 7
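Python's standard `statistics` module implements both measures and reproduces the examples above:

```python
import statistics

# Example 1: odd n, median is the middle value
print(statistics.median([9, 2, 7, 11, 14]))         # 9
# Example 2: even n, median is the mean of the middle two
print(statistics.median([9, 2, 7, 11, 6, 14]))      # 8.0
# Mode: the most frequently occurring observation
print(statistics.mode([9, 2, 7, 11, 14, 7, 2, 7]))  # 7
```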
55. Choosing Appropriate Measure of Location
• If data are symmetric, the mean, median,
and mode will be approximately the same.
• If data are multimodal, report the mean,
median and/or mode for each subgroup.
• If data are skewed, report the median.
• The AM is the most commonly used and is preferred unless precluding circumstances (such as skewness or outliers) are present.
56. Measures of Variation
• Sample range
• Sample variance
• Sample standard deviation
• Sample interquartile range
57. Sample Range
R = largest obs. - smallest obs.
or, equivalently
R = xmax - xmin
58. Coefficient of Range
CR = (largest obs. - smallest obs.) / (largest obs. + smallest obs.)
or, equivalently
CR = (xmax - xmin)/(xmax + xmin)
61. What Is a Standard Deviation?
• It is the typical (standard) difference (deviation) of an observation from the mean
• Think of it as the average distance a data point is from the mean, although this is not strictly true
63. Quartile Deviation
• Q.D. = (third quartile - first quartile)/2 = (Q3 - Q1)/2
• (Median - Q.D.) to (Median + Q.D.) covers around 50% of the observations, as economic or business data are seldom perfectly symmetrical
• Coefficient of Quartile Deviation = (Q3 - Q1)/(Q3 + Q1)
64. Measures of Variation - Some Comments
• Range is the simplest, but is very sensitive
to outliers
• Interquartile range is mainly used with
skewed data (or data with outliers)
• We will use the standard deviation as a
measure of variation often in this course
65. Measures of Variability
• Given: a sample of size n
• sample: (x1, x2, · · ·, xn)
• 1. Range:
• Range = largest measurement - smallest
measurement
• or Range = max - min
• Example 1: Sample (90, 85, 65, 75, 70, 95)
• Range = max - min = 95-65 = 30
66. Measures of Variability
• 2. Mean Absolute Deviation
• MAD = AM of Absolute Deviations = ∑|xi - x̄|/n
Example 2: Same sample (90, 85, 65, 75, 70, 95); x̄ = 80
x       x - x̄    |x - x̄|
90      10       10
85      5        5
65      -15      15
75      -5       5
70      -10      10
95      15       15
Totals  480  0   60
• MAD = 60/6 = 10
Remarks:
• (i) MAD is a good measure of variability
• (ii) It is difficult for mathematical manipulations
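The MAD calculation above, as a sketch:

```python
sample = [90, 85, 65, 75, 70, 95]
mean = sum(sample) / len(sample)                        # 480/6 = 80.0
mad = sum(abs(x - mean) for x in sample) / len(sample)  # 60/6 = 10.0
print(mad)
```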
67. Measures of Variability
• 3. Standard Deviation
• Example: Same sample as before (AM of x = 80); n = 6
x       x - x̄    (x - x̄)²
90      10       100
85      5        25
65      -15      225
75      -5       25
70      -10      100
95      15       225
Totals  480  0   700
• Therefore
• Sample Variance of x = 700/(n - 1) = 700/5 = 140
• Standard Deviation of x = √140 = 11.83
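The same table in code, dividing the sum of squared deviations by n - 1 as on the slide:

```python
sample = [90, 85, 65, 75, 70, 95]
n = len(sample)
mean = sum(sample) / n                       # 80.0
ss = sum((x - mean) ** 2 for x in sample)    # sum of squared deviations = 700.0
variance = ss / (n - 1)                      # 700/5 = 140.0
sd = variance ** 0.5                         # 11.83
print(variance, round(sd, 2))
```

Python's `statistics.variance` and `statistics.stdev` use the same n - 1 divisor and give the same result.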
68. Finite Populations
• Let N = population size.
• Data: {x1, x2, · · · , xN}
• Population mean: μ = (x1 + x2 + ··· + xN)/N
• Population variance: σ² = [(x1 - μ)² + (x2 - μ)² + ··· + (xN - μ)²]/N
• Population standard deviation: σ = √σ²
69. Population Parameters vs. Sample Statistics
• Sample statistics: x̄, s², s
• Population parameters: μ, σ², σ
• Approximation: s ≈ range/4
• Coefficient of variation (c.v.) = s/x̄
70. Percentiles
• Using percentiles is useful if the data are badly skewed.
• Let x1, x2, . . . , xn be a set of measurements
arranged in increasing order.
• Definition. Let 0 < p < 100. The pth percentile is
a number x such that p% of all measurements
fall below the pth percentile and (100 − p)% fall
above it.
74. Sample Mean and Variance for Grouped Data
• Example: Weight Loss Data
class   boundaries   mid-pt. x   freq. f   x·f    x²·f
1       5.0-9.0       7          3          21      147
2       9.0-13.0     11          5          55      605
3       13.0-17.0    15          7         105    1,575
4       17.0-21.0    19          6         114    2,166
5       21.0-25.0    23          3          69    1,587
6       25.0-29.0    27          1          27      729
Totals                           25        391    6,809
• Let k = number of classes.
• Formulas:
• AM = (x1f1 + x2f2 + ··· + xkfk)/(f1 + f2 + ··· + fk) = 391/25 = 15.64
• Variance = Mean of Squares minus Square of Mean = 6809/25 - (15.64)² = 272.36 - 244.61 = 27.75
• SD = √27.75 = 5.27
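A sketch of the grouped-data calculation, using the same mean-of-squares minus square-of-mean shortcut as the earlier downtime example:

```python
midpoints = [7, 11, 15, 19, 23, 27]   # class mid-points
freqs     = [3, 5, 7, 6, 3, 1]        # class frequencies

n = sum(freqs)                                                  # 25
mean = sum(x * f for x, f in zip(midpoints, freqs)) / n         # 391/25 = 15.64
mean_sq = sum(x * x * f for x, f in zip(midpoints, freqs)) / n  # 6809/25 = 272.36
variance = mean_sq - mean ** 2                                  # about 27.75
print(mean, round(variance, 2), round(variance ** 0.5, 2))
```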
75. Mode for Grouped Data
• Mode = Lmo + [(f - f1)/(2f - f1 - f2)] x w
• Lmo = Lower limit of the modal class
• f1, f2 = Frequencies of the classes preceding and succeeding the modal class
• f = Frequency of the modal class
• w = Width of the class interval
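As an illustration (not worked on the slides), the formula can be applied to the weight-loss data, whose modal class is 13-17 with f = 7, f1 = 5, f2 = 6 and w = 4:

```python
def grouped_mode(l_mo, f, f1, f2, w):
    """Mode = Lmo + (f - f1)/(2f - f1 - f2) * w."""
    return l_mo + (f - f1) / (2 * f - f1 - f2) * w

# Modal class 13-17 of the weight-loss data (illustrative application)
print(round(grouped_mode(13, 7, 5, 6, 4), 2))   # 15.67
```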
77. Formulas for Quartiles
• Q1 = Lq + [((N+1)/4 - (F+1))/fq] x W
Where, Lq = Lower limit of the quartile class
N = Total frequency
F = Cumulative frequency up to the quartile class
fq = Frequency of the quartile class
W = Width of the class interval
The first quartile class is that which includes observation no. (N+1)/4
78. Formulas for Quartiles
• Q1 = Lq + [((N+1)/4 - (F+1))/fq] x W
Where, Lq = Lower limit of the quartile class = 9
N = Total frequency = 25
F = Cumulative frequency up to the quartile class = 3
fq = Frequency of the quartile class = 5
W = Width of the class interval = 4
The first quartile class is that which includes observation no. (N+1)/4 = 6.5
Q1 = 9 + [{(6.5 - 4)/5} x 4] = 9 + 2 = 11
79. Formulas for Quartiles
• Q3 = Lq + [(3(N+1)/4 - (F+1))/fq] x W
Where, Lq = Lower limit of the quartile class
N = Total frequency
F = Cumulative frequency up to the quartile class
fq = Frequency of the quartile class
W = Width of the class interval
The third quartile class is that which includes observation no. 3(N+1)/4
80. Formulas for Quartiles
• Q3 = Lq + [(3(N+1)/4 - (F+1))/fq] x W
Where, Lq = Lower limit of the quartile class = 17
N = Total frequency = 25
F = Cumulative frequency up to the quartile class = 15
fq = Frequency of the quartile class = 6
W = Width of the class interval = 4
The third quartile class is that which includes observation no. 3(N+1)/4 = 19.5
Q3 = 17 + [{(19.5 - 16)/6} x 4] = 17 + 2.33 = 19.33
81. Formulas for Quartiles
• Q2 = Lq + [(2(N+1)/4 - (F+1))/fq] x W
Where, Lq = Lower limit of the quartile class
N = Total frequency
F = Cumulative frequency up to the quartile class
fq = Frequency of the quartile class
W = Width of the class interval
The second quartile class is that which includes observation no. (N+1)/2
82. Formulas for Quartiles
• Q2 = Lq + [(2(N+1)/4 - (F+1))/fq] x W
Where, Lq = Lower limit of the quartile class = 13
N = Total frequency = 25
F = Cumulative frequency up to the quartile class = 8
fq = Frequency of the quartile class = 7
W = Width of the class interval = 4
The second quartile class is that which includes observation no. (N+1)/2 = 13
Q2 = 13 + [{(13 - 9)/7} x 4] = 13 + 2.29 = 15.29
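All three quartile formulas differ only in the observation position i(N+1)/4, so one sketch covers them; `grouped_quartile` is an illustrative helper applied to the weight-loss frequency distribution:

```python
def grouped_quartile(i, boundaries, freqs):
    """Qi = Lq + [(i(N+1)/4 - (F+1)) / fq] * W for grouped data."""
    n = sum(freqs)
    pos = i * (n + 1) / 4               # observation no. of the i-th quartile
    cum = 0                             # F: cumulative frequency before the class
    for (lo, hi), f in zip(boundaries, freqs):
        if cum + f >= pos:              # quartile class found
            return lo + ((pos - (cum + 1)) / f) * (hi - lo)
        cum += f

bounds = [(5.0, 9.0), (9.0, 13.0), (13.0, 17.0),
          (17.0, 21.0), (21.0, 25.0), (25.0, 29.0)]
freqs  = [3, 5, 7, 6, 3, 1]             # weight-loss data

print(grouped_quartile(1, bounds, freqs))            # 11.0
print(round(grouped_quartile(2, bounds, freqs), 2))  # 15.29
print(round(grouped_quartile(3, bounds, freqs), 2))  # 19.33
```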
83. Empirical Mode
• Where the mode is ill defined, its value may be ascertained by using the following formula:
• Mode = 3 x Median - 2 x Mean