This document provides information about quantitative methods and statistics. It discusses different types of data including discrete and continuous variables. Discrete data can only take certain values while continuous data can theoretically take any value. The document also covers different scales of measurement for data like nominal, ordinal, interval and ratio scales. Finally, it discusses tools for data classification including arrays, frequency arrays and frequency distributions which arrange data according to a common characteristic.
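The arrays and frequency distributions mentioned above can be sketched in a few lines of Python; the raw scores here are invented for illustration:

```python
from collections import Counter

# Hypothetical discrete data: number of items purchased per customer
raw = [2, 3, 2, 5, 3, 3, 1, 2, 5, 2]

# An array: the data arranged in order of magnitude
array = sorted(raw)

# A frequency distribution: each distinct value with its count
freq = Counter(raw)

print(array)
for value in sorted(freq):
    print(value, freq[value])
```

The same idea extends to grouped frequency distributions for continuous data, where counts are taken over class intervals rather than individual values.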
Validate data
Questionnaire checking
Edit acceptable questionnaires
Code the questionnaires
Keypunch the data
Clean the data set
Statistically adjust the data
Store the data set for analysis
Analyse data
1. The document discusses PASW Statistics (SPSS), a software package used for statistical analysis. SPSS can be used to summarize, analyze, and visualize data to determine if hypotheses are supported.
2. Key aspects of SPSS covered include the data editor, which allows viewing and editing data in variable or data views, and transforming data using computations or recodes. Descriptive statistics, such as frequencies, means, and standard deviations, can be generated.
3. The median, mode, variance and other statistical techniques are defined to help understand how to analyze data in SPSS. Questions and examples are provided about loading data, recoding variables, and generating frequency tables and histograms.
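The descriptive statistics SPSS produces here (means, medians, modes, variances, standard deviations) can be reproduced for illustration with Python's standard library; the survey scores below are hypothetical, not from the document:

```python
import statistics

# Hypothetical survey responses on a 1-5 scale
scores = [4, 5, 3, 4, 2, 4, 5, 3, 4, 1]

mean = statistics.mean(scores)          # arithmetic average
median = statistics.median(scores)      # middle value of the sorted data
mode = statistics.mode(scores)          # most frequent value
variance = statistics.variance(scores)  # sample variance
stdev = statistics.stdev(scores)        # sample standard deviation
```

In SPSS itself the equivalent output comes from Analyze > Descriptive Statistics > Frequencies, with the statistics options ticked.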
This document provides information about data interpretation and different ways to present data. It discusses numerical data tables including time series tables, spatial tables, frequency distribution tables, and cumulative frequency tables. Examples are given to show how to calculate capacity utilization, sales growth percentages, and solve other problems using the data in tables. Cartesian graphs are also introduced as a way to show the variation of a quantity with respect to two parameters on the X and Y axes.
This document provides an introduction and overview of SPSS (Statistical Package for the Social Sciences). It discusses what SPSS is, the research process it supports, how questionnaires are translated into SPSS, different question and response formats, and levels of measurement. It also briefly outlines some of SPSS's data editing, analysis, and output features.
This document provides an introduction to statistics and data analysis concepts. It discusses descriptive statistics such as mean, variance, standard deviation, mode and median. It also covers inferential statistics, exploratory data analysis, probability, and probability concepts like joint probability and conditional probability. Examples and diagrams are provided to illustrate key statistical terms and how they relate to data analysis.
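Joint and conditional probability, mentioned above, can be illustrated with a small contingency table; the counts and category names below are invented:

```python
# Hypothetical 2x2 table of joint counts: smoking status by disease status
counts = {("smoker", "disease"): 30, ("smoker", "no_disease"): 70,
          ("nonsmoker", "disease"): 10, ("nonsmoker", "no_disease"): 90}
total = sum(counts.values())  # 200 observations

# Joint probability P(smoker AND disease)
p_joint = counts[("smoker", "disease")] / total

# Marginal probability P(smoker)
p_smoker = (counts[("smoker", "disease")]
            + counts[("smoker", "no_disease")]) / total

# Conditional probability P(disease | smoker) = P(smoker AND disease) / P(smoker)
p_cond = p_joint / p_smoker
```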
The document discusses various techniques for data analysis. It begins by explaining the concept of data analysis and its categories, such as descriptive and statistical/mathematical analysis. Common statistical methods are described, including descriptive statistics, which summarize the features of sample data, and inferential statistics, which use samples to infer population parameters and relationships. Examples of descriptive statistics like the mean, median, and quartiles are provided. The document concludes by emphasizing the importance of choosing the right technique for the research problem and avoiding common mistakes in data analysis.
SPSS can be used for data entry, cleaning, analysis, and presentation. It is important to prepare a data dictionary specifying variable names, codes, ranges, and missing values before entering data. Errors may occur during data collection or entry and can be detected using descriptive statistics, frequency distributions, logical checks, and double data entry. Suspicious values should be investigated rather than automatically changed to avoid correcting valid data.
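The range and logical checks described above might be sketched as follows; the variable names, codes, and valid ranges are assumptions for illustration, not a real data dictionary:

```python
# Hypothetical rules: age must lie in 0-120, and anyone coded as
# employed must be at least 15 years old.
records = [
    {"id": 1, "age": 34, "employed": True},
    {"id": 2, "age": 150, "employed": False},  # out-of-range age
    {"id": 3, "age": 12, "employed": True},    # logical inconsistency
]

suspicious = []
for rec in records:
    if not (0 <= rec["age"] <= 120):
        suspicious.append((rec["id"], "age out of range"))
    elif rec["employed"] and rec["age"] < 15:
        suspicious.append((rec["id"], "employed but under 15"))

# Flag records for manual review rather than changing values automatically,
# in line with the advice above about not "correcting" valid data.
```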
This document provides an overview of descriptive and inferential statistical procedures for analyzing data, including summarizing data using descriptive statistics and graphs, assessing reliability, comparing groups using t-tests and ANOVA, and testing associations using non-parametric tests and regression. It also discusses analyzing customer satisfaction data through reliability analysis, chi-square tests, and comparing service across shops.
This document provides an overview of a 2-day workshop on SPSS syntax that will be held on October 28th and 29th, 2010. The workshop will be organized by the Indian Institute of Psychometry in Kolkata and led by Dr. Debdulal Dutta Roy of the Psychology Research Unit at the Indian Statistical Institute in Kolkata. Topics that will be covered include an introduction to SPSS, its features and interfaces, how to write SPSS syntax for data management and analysis tasks, how to check data quality using syntax, and how to perform statistical analyses like correlations and descriptive statistics using syntax. Assignments involving practicing these skills with sample data will also be part of the workshop.
SPSS (Statistical Package for the Social Sciences) is statistical software used for data management and analysis. It allows users to process questionnaires, report data in tables and graphs, and analyze data through various tests like means, chi-square, and regression. Originally called SPSS Inc., it is now owned by IBM and known as IBM SPSS Statistics. The document provides an introduction to SPSS and outlines how to define variables, enter data, select cases, run descriptive statistics like frequencies and crosstabs, and manipulate output files.
This document provides an overview of data analysis using SPSS. It discusses key concepts like variables, measurement scales, data types, statistical terminology, and the steps involved in data analysis using SPSS. The document defines nominal, ordinal, interval and ratio scales of measurement. It also describes the nature of data as categorical or metric, and the types of categorical and metric data. Furthermore, it outlines tasks like data preparation, coding, cleaning and the appropriate use of statistical tools for analysis in SPSS.
This document provides an introduction to SPSS (Statistical Package for Social Sciences) software. It discusses opening and closing SPSS, the structure and windows of SPSS including the Data View and Variable View windows for entering data. It defines key concepts in SPSS like variables, different types of variables (nominal, ordinal, interval, ratio), and the process of defining variables in the Variable View window by specifying name, type, width, labels, values etc. before entering data. Examples are given around designing an experiment with independent and dependent variables and dealing with extraneous variables.
Statistical Data Analysis | Data Analysis | Statistics Services | Data Collec... (Statswork)
This article helps students in the USA, the UK, and Australia pursuing postgraduate business and marketing degrees to identify the right topic in the area of marketing in business. These topics are researched in depth at universities such as Columbia, Brandeis, Coventry, Idaho, and many more. Statswork offers UK dissertation topic services in business. When you order Statswork dissertation services at Tutors India, we promise the following: plagiarism-free work, on-time delivery, outstanding customer support, writing to standard, unlimited revision support, and high-quality subject-matter experts.
SPSS is a statistical software package used for analyzing data. It was developed in 1968 at Stanford University. SPSS stands for Statistical Package for the Social Sciences. The document discusses the types of variables in SPSS including qualitative (string) and quantitative (numeric) variables. It also covers defining variables such as variable name, type, width and labels to describe the values. Proper coding and labeling helps facilitate analysis and interpretation of results.
This document provides an overview of a business statistics course, including:
- Descriptions of fundamental statistical concepts like populations, samples, and types of data including nominal, ordinal, interval, and ratio scales.
- Examples of how to compute and represent different data types through frequency tables, histograms, scatter plots, and other graphs.
- Applications of statistics in fields like demography, econometrics, and the growing area of business analytics.
At the end of this lesson (Part 1), students should be able to understand the following:
Introduction
Data Entry
Variable and Value Label
Entering Data
File management
Descriptive statistics
Editing and modifying the data
This document provides an introduction to using SPSS (Statistical Package for the Social Sciences) for data analysis. It discusses the four main windows in SPSS - the data editor, output viewer, syntax editor, and script window. It also covers the basics of managing data files, including opening SPSS, defining variables, and sorting data. Several basic analysis techniques are introduced, such as frequencies, descriptives, and linear regression. Examples are provided for how to conduct these analyses and interpret the outputs.
Tribhuvan University
M.A. Population Studies
Research Methods for Population Analysis
Data Processing, Editing and Coding
If you find any mistakes, please suggest improvements. Thank you; I hope it is useful for all. :)
This document provides an overview of key concepts in statistics including definitions of statistics, variables, data, descriptive vs inferential statistics, populations vs samples, types of variables and data, levels of measurement, methods of data collection including surveys, sampling methods, types of statistical studies including observational and experimental, and some examples of proper and improper uses of statistics.
The document contains lecture slides on introductory statistics topics including definitions of population, quantitative vs qualitative data, types of measurement scales, and examples of different sampling methods and study designs. Key points covered are the definition of a population as the complete collection of all elements, examples of quantitative data like weights vs qualitative nominal categories, ordinal scales involving ranking, and retrospective study designs using existing historical data.
This document provides an overview of key concepts in descriptive statistics, including measures of center, variation, and relative standing. It discusses the mean, median, mode, range, standard deviation, z-scores, percentiles, quartiles, interquartile range, and boxplots. Formulas and properties of these statistical concepts are presented along with guidelines for interpreting and applying them to describe data distributions.
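A short sketch of the z-score, quartile, and interquartile-range calculations mentioned above, using an invented sample:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical sample

mean = statistics.mean(data)      # 5.0
stdev = statistics.pstdev(data)   # population standard deviation, 2.0

# z-score: how many standard deviations a value lies from the mean
z = (9 - mean) / stdev

# Quartiles (exclusive method) and the interquartile range
q1, q2, q3 = statistics.quantiles(data, n=4)
iqr = q3 - q1
```

Values beyond Q1 - 1.5*IQR or Q3 + 1.5*IQR are the usual boxplot outlier candidates.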
This chapter discusses decision analysis and various techniques for decision making under certainty, uncertainty, and risk. It covers decision tables, decision trees, expected monetary value, utility theory, and revising probabilities based on sample information. The key techniques taught are maximax, maximin, Hurwicz criterion, minimax regret, expected value, and expected value of perfect and sample information. Decision analysis provides strategies to evaluate alternatives and make optimal decisions under different conditions.
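The maximax, maximin, and expected monetary value criteria can be illustrated with a small payoff table; the alternatives, payoffs, and state probabilities below are invented:

```python
# Rows = alternatives, columns = states of nature (e.g. strong/fair/poor market)
payoffs = {
    "expand":   [80, 30, -20],
    "maintain": [40, 35, 20],
    "sell":     [25, 25, 25],
}
probs = [0.3, 0.5, 0.2]  # assumed state probabilities (decision under risk)

# Maximax: pick the alternative with the best best-case payoff (optimistic)
maximax = max(payoffs, key=lambda a: max(payoffs[a]))

# Maximin: pick the alternative with the best worst-case payoff (pessimistic)
maximin = max(payoffs, key=lambda a: min(payoffs[a]))

# Expected monetary value: probability-weighted payoff per alternative
emv = {a: sum(p * v for p, v in zip(probs, row)) for a, row in payoffs.items()}
best_emv = max(emv, key=emv.get)
```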
This document provides an overview of Chapter 7 from a statistics textbook. The chapter covers sampling and sampling distributions. It has 6 main learning objectives, including determining when to use sampling vs a census, distinguishing random and nonrandom sampling, and understanding the impact of the central limit theorem. The chapter outline lists 7 sections that will be covered, such as sampling, sampling distributions of the mean and proportion, and key terms. It provides examples to illustrate the central limit theorem and formulas from it.
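The central limit theorem can be illustrated with a quick simulation: means of samples drawn from a uniform (clearly non-normal) population cluster around the population mean, with spread close to sigma divided by the square root of n. The sample size and trial count below are arbitrary choices:

```python
import random
import statistics

random.seed(42)
n, trials = 30, 2000

# Draw many samples of size n from Uniform(0, 1) and record each sample mean
sample_means = [statistics.mean(random.uniform(0, 1) for _ in range(n))
                for _ in range(trials)]

grand_mean = statistics.mean(sample_means)  # close to the population mean 0.5
se = statistics.pstdev(sample_means)        # close to sqrt(1/12)/sqrt(30), about 0.053
```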
This chapter introduces simple (bivariate, linear) regression analysis. It covers computing the regression line equation from sample data and interpreting the slope and intercept. It also discusses residual analysis to test regression assumptions and examine model fit, and computing measures like the standard error of the estimate and coefficient of determination to evaluate the model. The chapter teaches how to use the regression model to estimate y values and test hypotheses about the slope and model. The overall goal is for students to understand and apply the key concepts of simple regression.
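The regression quantities the chapter describes (slope, intercept, coefficient of determination) can be computed from first principles; the sample data are invented:

```python
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Slope b1 = SSxy / SSxx, intercept b0 = ybar - b1 * xbar
ss_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
ss_xx = sum((xi - mean_x) ** 2 for xi in x)
b1 = ss_xy / ss_xx
b0 = mean_y - b1 * mean_x

# Coefficient of determination: r^2 = 1 - SSE/SST
pred = [b0 + b1 * xi for xi in x]
sse = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
sst = sum((yi - mean_y) ** 2 for yi in y)
r2 = 1 - sse / sst
```

The fitted line can then be used to estimate y for a new x, subject to the residual-analysis checks the chapter discusses.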
This document provides an outline and overview of Chapter 3: Descriptive Statistics from a statistics textbook. It discusses key concepts in descriptive statistics including measures of central tendency (mean, median, mode), measures of variability (range, standard deviation), measures of shape (skewness, kurtosis), and correlation. The chapter will cover calculating these statistics for both ungrouped and grouped data, and interpreting them to describe data distributions. It emphasizes that descriptive statistics are used to numerically summarize and characterize data sets.
This chapter introduces fundamental statistical concepts for managers. It defines key terms like population, sample, and parameter and discusses descriptive and inferential statistics. The chapter outlines different data collection methods and sampling techniques, including probability and non-probability samples. It also covers data types, levels of measurement, evaluating survey quality, and sources of survey error. The goal is to explain why understanding statistics is important for managers to analyze data and make informed decisions.
What is a Spearman's Rank Order Correlation (independence)? (Ken Plummer)
This document provides an overview of Spearman's rank-order correlation test. It explains that Spearman's rho is a non-parametric analogue of the Pearson product-moment correlation coefficient that measures the strength of association between two ranked variables. It compares Spearman's rho to other correlation tests and notes that it is equivalent to a Pearson correlation computed on the ranks of the data, which lets it handle ordinal data and situations where variables are skewed or contain tied ranks.
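The rank-based equivalence can be checked directly: computing a Pearson correlation on the ranks of the data yields Spearman's rho. The data below are invented, and the rank helper averages tied ranks:

```python
import statistics

def ranks(values):
    """Rank values from 1 upward, averaging the ranks of ties."""
    sorted_vals = sorted(values)
    return [sum(i + 1 for i, v in enumerate(sorted_vals) if v == x)
            / sorted_vals.count(x) for x in values]

def pearson(a, b):
    """Pearson product-moment correlation coefficient."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

# Spearman's rho = Pearson correlation applied to the ranks of the data
x = [10, 20, 30, 40, 50]
y = [1, 2, 4, 3, 5]
rho = pearson(ranks(x), ranks(y))
```

For this data the shortcut formula 1 - 6*sum(d^2)/(n*(n^2-1)) gives the same value, 0.9.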
This document provides an overview of corporate restructuring and industrial sickness. It defines corporate restructuring as assessing and altering a firm's capital structure, assets, and organization to improve performance and shareholder value. Reasons for restructuring include globalization, policy changes, and gaining economies of scale. Techniques include mergers, divestitures, and strategic alliances. Industrial sickness is defined under Indian law and occurs when accumulated losses exceed net worth or a firm fails to repay debts. Common causes are poor planning, financial management, and working capital management. Turnaround management elements to address sickness include changing management, cost reductions, and cash generation.
This document discusses quantitative data analysis. It defines quantitative data as numerical data that can be statistically analyzed. There are different types of quantitative data like counts, measurements, sensory calculations, and projections. Data coding is explained as the process of assigning codes to raw data to organize and summarize it for analysis. Visual aids like tables, bar charts, pie charts, scatter plots, and line graphs are described as ways to present quantitative data visually to identify patterns and relationships. Statistics can then be used to analyze the coded and visualized quantitative data.
This document discusses different types of measurement scales used in research. It defines measurement as assigning numbers or symbols to characteristics according to rules, while scaling involves placing measured objects on a continuum. The primary scales of measurement are nominal, ordinal, interval, and ratio. Nominal scales use numbers as labels, ordinal scales reflect ranking, interval scales have equal differences, and ratio scales have a true zero point. Examples of scales discussed include Likert scales, semantic differentials, and constant sum scales for measuring attitudes and importance of attributes.
C++ computer programming language: data types, operators, Lecture 03-04 (jabirMemon)
Data types
Data literals
Variables
Constants
Rules for naming variable and constant
Operators, problem solving
Six steps towards problem solution
Basic problem solving concepts
This document discusses measurement and scaling. It defines measurement as assigning numbers and scaling as placing respondents on a continuum. There are four types of measurement scales: nominal, ordinal, interval, and ratio scales. Nominal scales use numbers for identification while ordinal scales show more or less of a characteristic. Interval and ratio scales can be added and subtracted. The document also covers scaling techniques like Likert scales, semantic differentiation, and paired comparisons. It concludes with discussing criteria for good measurement including validity, reliability, and sensitivity.
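Scoring a Likert item can be sketched as follows; the responses and value labels below are invented. Because Likert responses are ordinal, the median and mode are the safer summaries, though the mean is often reported in practice (which implicitly treats the scale as interval):

```python
import statistics

# Hypothetical value labels for a 5-point Likert item
labels = {1: "strongly disagree", 2: "disagree", 3: "neutral",
          4: "agree", 5: "strongly agree"}

# Hypothetical coded responses
responses = [4, 5, 4, 3, 2, 4, 5, 4]

median = statistics.median(responses)  # appropriate for ordinal data
mode = statistics.mode(responses)      # appropriate for ordinal data
mean = statistics.mean(responses)      # common, but assumes interval spacing
```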
The document provides an overview of a 3-day data analytics training program held in Jakarta, Indonesia from April 24-26, 2019. It discusses topics that will be covered including big data overview, data for business analysis, data analytics concepts, and data analytics tools. The training is led by Dr. Ir. John Sihotang and is aimed at management trainees of the company Sucofindo.
Research Methodology: Questionnaire, Sampling, Data Preparation (amitsethi21985)
As per PTU's MBA Syllabus, Unit No. 2: Sources Of Data: Primary And Secondary; Data Collection Methods; Questionnaire Designing: Construction, Types And Developing A Good Questionnaire. Sampling Design and Techniques, Scaling Techniques, Meaning, Types, Data Processing Operations, Editing, Coding, Classification, Tabulation. Research Proposal/Synopsis Writing. Practical Framework
The document discusses various techniques for measurement and scaling in research. It begins by defining measurement as assigning numbers or symbols to object characteristics according to rules, while scaling creates a continuum to locate measured objects. There are four primary scales of measurement: nominal, ordinal, interval, and ratio. Nominal involves labels, ordinal involves ranking, interval involves equal distances between numbers, and ratio has a true zero point. Comparative techniques like paired comparisons and rank ordering involve direct object comparisons, while noncomparative techniques scale objects independently. Constant sum and Likert scaling are provided as examples.
The document discusses various techniques for measuring and scaling objects, characteristics, and attitudes. It begins by defining measurement as assigning numbers or symbols to objects according to rules, while scaling creates a continuum to locate measured objects. It then covers primary scales of measurement (nominal, ordinal, interval, ratio) and provides examples. Several comparative and non-comparative scaling techniques are described in detail, including paired comparison, rank ordering, constant sum, Likert scales, semantic differentials, and continuous rating scales. Advantages and disadvantages of different methods are also reviewed.
The document discusses various techniques for measuring and scaling objects, characteristics, and attitudes. It begins by defining measurement as assigning numbers or symbols to objects according to rules, while scaling creates a continuum to locate measured objects. It then covers primary scales of measurement (nominal, ordinal, interval, ratio) and provides examples. Several comparative and non-comparative scaling techniques are described in detail, including paired comparison, rank ordering, constant sum, Likert scales, semantic differentials, and continuous rating scales. Advantages and disadvantages of different methods are also reviewed.
Here are the 3 types of slowly changing dimensions:
Type 1 SCD - Overwrite current attribute value with new value. Only current value is stored.
Type 2 SCD - Add a new row with a new surrogate key and mark old row as inactive and new row as active. Both old and new values are stored.
Type 3 SCD - Add new columns to capture attribute changes rather than new rows. New columns capture attribute history.
This document discusses measurement and scaling techniques used in marketing research. It defines measurement as assigning numbers to characteristics according to rules, while scaling creates a continuum to locate measured objects. There are four primary scales of measurement - nominal, ordinal, interval, and ratio - which differ in the types of mathematical operations and statistics permitted. Comparative scaling techniques like paired comparisons and rank ordering require direct object comparisons, while noncomparative techniques scale objects independently. The appropriate scale must match the research problem and inform questionnaire design and data analysis.
This document provides an overview of a course on data analysis using SPSS. The course objectives are to teach students statistical analysis concepts, grasp psychological research concepts, and learn how to appropriately process, analyze, interpret and report on research data using SPSS. The course will cover introductory topics like launching SPSS and understanding its interface, as well as more advanced topics like conducting different statistical tests and interpreting outputs. Students will apply their learning through a group project involving data collection, analysis and reporting.
Big Data LDN 2018: TIPS AND TRICKS TO WRANGLE BIG, DIRTY DATAMatt Stubbs
Date: 14th November 2018
Location: Data Ops Theatre
Time: 11:50 - 12:20
Speaker: Marion Azoulai
Organisation: TIBCO
About: Data science may be “one of the sexiest jobs of the 21st Century,” but it’s likely your most valuable analytics employees are spending too much time on the most mundane tasks: prepping data for analysis. Make it easy to clean and work with data to give time back to your analytics talent so they can focus on answering questions, solving problems, and discovering opportunities to innovate. Join this session to learn practical tips and tricks to significantly reduce the time needed to transform and wrangle data and leave more time for generating insights.
This document discusses data collection methods. It begins by defining data collection as the systematic process of gathering observations or measurements. It then outlines the main steps in data collection: 1) defining the research aim, 2) choosing a data collection method such as experiments, surveys, interviews etc., and 3) planning data collection procedures such as sampling and standardizing. It also discusses different measurement scales such as nominal, ordinal, interval and ratio scales that are used to quantify variables. Finally, it covers scaling techniques including comparative scales like paired comparisons and ranking as well as non-comparative scales like Likert scales.
Measurement involves assigning numbers or symbols to characteristics according to prespecified rules with a one-to-one correspondence between the numbers and characteristics. Scaling creates a continuum to locate measured objects. There are four primary scales of measurement - nominal, ordinal, interval, and ratio - which differ in the types of statistical analyses permitted and operations allowed on the assigned numbers.
This document provides an overview of research methodology concepts including:
1. It defines research and discusses the characteristics of scientific methods and research objectives.
2. It covers developing hypotheses, research design, levels of measurement, and scaling techniques.
3. It describes different types of scaling including comparative, non-comparative, continuous rating, itemized rating, Likert, semantic differential, and Stapel scales.
2. varsha varde 2
Varsha Varde
• M.Sc., Ph.D. in Statistics (O.R.)
• Taught advanced statistics to PG students
• Quantitative faculty at NIBM
• Visiting faculty at JBIMS
• Officer in Bank of India
• General Manager at AFC
• Handled consultancy in various fields
3. QUANTITATIVE METHODS
• It is a broad term
• The two branches of relevance to us are statistics and operations research
• Each of these offers several tools and techniques to tackle real-life problems in a scientific manner
4. STATISTICS
• The word derives from the Latin word status
• It came into existence as the collection of certain data of states
• It has continued to expand its boundaries to include the planning and organising of data collection, the analysis of data, and the drawing of meaningful conclusions from data
• Data are the input, statistics is the process, and information is the output
5. TOOLS IN STATISTICS
Broadly classified into:
• Descriptive statistics: describes the principal features of the collected data
• Inferential statistics: says something about the future, or about the present but for a larger group of data than actually collected
• Sampling: the design of sample surveys and the selection of a representative sample
• Probability: quantifying uncertainties
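These descriptive tools can be sketched with Python's standard statistics module (the sales figures below are illustrative, not from the slides):

```python
import statistics

# Monthly zonal sales (Rs. in million) -- illustrative figures
sales = [483, 738, 265, 567]

# Descriptive statistics summarise the principal features of collected data
print(statistics.mean(sales))             # central tendency: 513.25
print(round(statistics.stdev(sales), 2))  # spread (sample standard deviation): 196.59
```

Inferential statistics would go a step further, using such summaries of a sample to say something about a larger population.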
6. History of OR
• Origin: research in military operations
• 1930s: British scientists helped solve problems of military operations, such as effective use of radar, anti-submarine warfare, civilian defence, and deployment of convoy vessels
• Team: experts from various disciplines
• The interdisciplinary character of OR still continues
• World War II: military operations research in the US
7. History of OR
• Post World War II: the military continued using OR analysts
• But OR as a discipline was not accepted in the outside world
• Reason: OR was seen as solving only military problems
• Two events helped it spread to non-military establishments:
• Development of the Simplex method in 1947
• Development and usage of high-speed computers
• OR as a discipline came into existence in the 1950s
• OR: a systematic and scientific approach to problem solving
8. Models in Operations Research
• Linear programming
• Transportation
• Assignment
• Inventory
• Queuing
• Project scheduling
• Simulation
• Decision analysis
9. Statistical Problems
1. A market analyst wants to know the effectiveness of a new diet.
2. A pharmaceutical company wants to know if a new drug is superior to already existing drugs, or its possible side effects.
3. How fuel efficient is a certain car model?
10. Statistical Problems
4. Is there any relationship between your grades and employment opportunities?
5. If you answer all questions on a true/false (or multiple choice) examination completely randomly, what are your chances of passing?
6. What is the effect of package design on sales?
11. Statistical Problems
7. How to interpret polls: how many individuals do you need to sample for your inferences to be acceptable? What is meant by the margin of error?
8. What is the effect of market strategy on market share?
9. How to pick the stocks to invest in?
12. Course Coverage
• Essential Basics Management
• Data Classification & Presentation Tools
• Preliminary Analysis & Interpretation of Data
• Correlation Model
• Regression Model
• Time Series Model
• Forecasting
• Uncertainty and Probability
• Probability Distributions
• Sampling and Sampling Distributions
• Estimation and Testing of Hypothesis
• Chi-Square and Analysis of Variance
• Decision Theory
• Linear Programming
13. Suggested Reading
• Statistics for Management by Richard I Levin – Prentice Hall of India, New Delhi
• David C. Howell (2003)
• Quantitative Techniques for Management Decisions by U K Srivastava & Others – New Age International, New Delhi
• Quantitative Methods for Business by David R Anderson & Others – Thomson Learning, New Delhi
• Business Statistics by David M Levine & Others – Pearson Education, Delhi, 2004
16. Nominal Numbers
• Purpose: identification of an object
• Examples: house number (10 Janpath), telephone number, smart card PIN, number on a cricket T-shirt
• No quantitative properties except equivalence: two different nominal numbers indicate two different objects
17. Silent Disaster
• Nominal numbers look like normal numerals
• Prime Foods CEO's tel. no.: 23249843
• Prime Foods Ltd. sales: Rs. 23249843
• No computer will stop you if you ask it to add nominal numbers (or multiply or divide them)
• But the resultant figure makes no sense
• Still, this mistake is made occasionally.
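A minimal Python sketch of the pitfall above, using the slide's own figures; storing nominal numbers as text is a common safeguard, offered here as a suggestion rather than something from the slide:

```python
# Nominal numbers identify objects; arithmetic on them is meaningless.
phone_number = 23249843      # nominal: identifies a telephone line
sales_rupees = 23249843      # cardinal: a true quantity

# Python (like any computer) will happily "add" two phone numbers...
nonsense = phone_number + phone_number
print(nonsense)              # 46499686 -- a figure with no meaning

# Keeping nominal numbers as strings makes the mistake impossible:
phone_as_text = "23249843"
# phone_as_text + 1  ->  TypeError: can only concatenate str (not "int") to str
```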
18. Ordinal Numbers
• Purpose: represent position or ranking
• Examples: WTA ranking of Sania Mirza, salary grade, floor number, performance rating
• No quantitative properties except order and equivalence: different ordinal numbers indicate different objects in some kind of relationship with each other
19. Silent Disaster
• Ordinal numbers look like normal numerals
• Sania Mirza's weight (kg): 53
• Sania Mirza's WTA ranking: 53
• You can safely add weights and divide them
• No computer will stop you if you ask it to add ordinal numbers (or multiply or divide them)
• But the resultant figure makes no sense
• Still, this blunder is committed frequently.
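The same check in Python, with hypothetical rankings: the median respects the ordering, while the mean produces a number with no interpretation:

```python
import statistics

# WTA-style rankings of four hypothetical players (ordinal data)
rankings = [1, 3, 53, 200]

# Order-based statistics are meaningful for ordinal data:
print(statistics.median(rankings))   # 28.0 -- the middle of the ordering

# The mean "works" mechanically, but the result is nonsense here:
print(statistics.mean(rankings))     # 64.25 -- rank 64.25 of what?
```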
20. Cardinal Numbers
• Purpose: represent quantity
• Examples: sales turnover in million Rs., production in tons, number of employees, earnings per share
• Truly quantitative
• Follow all mathematical properties: order, equivalence, +, -, ×, ÷
21. Interval and Ratio Scales
• An interval scale employs an arbitrary zero point
• A ratio scale employs a true zero point
• Only the ratio scale permits statements concerning ratios of numbers in the scale; e.g., 4 kg is to 2 kg as 2 kg is to 1 kg
• Temperature measured in Celsius is an interval scale
• Height as measured from a table top has an interval scale
• Height as measured from the floor has a ratio scale
• Apart from the difference in the nature of the zero point, interval and ratio scales have the same properties, and both employ cardinal numbers
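A quick numeric check of why ratios are meaningless on an interval scale, comparing Celsius against Kelvin (the temperatures are illustrative):

```python
# Temperature in Celsius is an interval scale: its zero point is arbitrary.
celsius_a, celsius_b = 40.0, 20.0

# The naive ratio suggests "twice as hot" -- an artefact of where
# the Celsius zero happens to sit:
print(celsius_a / celsius_b)                 # 2.0

# Kelvin has a true zero (absolute zero), i.e. a ratio scale:
kelvin_a = celsius_a + 273.15
kelvin_b = celsius_b + 273.15
print(round(kelvin_a / kelvin_b, 3))         # 1.068 -- not "twice as hot"
```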
22. Example
Zone      Code No.  Sales (Rs. in Million)  Rank
Northern  01        483                     3
Western   02        738                     1
Eastern   03        265                     4
Southern  04        567                     2
Type:     ?         ?                       ?
23. Example
Zone      Code No.  Sales (Rs. in Million)  Rank
Northern  01        483                     3
Western   02        738                     1
Eastern   03        265                     4
Southern  04        567                     2
Type:     Nominal   Cardinal                Ordinal
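The three column types determine which statistics are permissible. A sketch in Python using the table's own figures:

```python
import statistics

# The zonal sales table, one list per column
zones = ["Northern", "Western", "Eastern", "Southern"]  # nominal labels
codes = ["01", "02", "03", "04"]                        # nominal (kept as text)
sales = [483, 738, 265, 567]                            # cardinal (Rs. in million)
ranks = [3, 1, 4, 2]                                    # ordinal

# Cardinal data supports full arithmetic:
print(sum(sales))                    # 2053
print(statistics.mean(sales))        # 513.25

# Ordinal data supports order-based statistics:
print(statistics.median(ranks))      # 2.5

# Nominal data supports only counting, e.g. how many zones:
print(len(zones))                    # 4
```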
24. Primary Scales of Measurement
[Figure: runners in a race illustrating the four primary scales]
• Nominal: numbers assigned to runners (e.g., 7, 38)
• Ordinal: rank order of winners (third place, second place, first place)
• Interval: performance rating on a 0 to 10 scale (8.2, 9.1, 9.6)
• Ratio: time to finish (15.2, 14.1, 13.4)
25. Primary Scales of Measurement
Nominal Scale
• The numbers serve only as labels or tags for identifying
and classifying objects.
• When used for identification, there is a strict one-to-one
correspondence between the numbers and the objects.
• The numbers do not reflect the amount of the
characteristic possessed by the objects.
• The only permissible operation on the numbers in a
nominal scale is counting.
• Only a limited number of statistics, all of which are based
on frequency counts, are permissible, e.g., percentages,
and mode.
27. Primary Scales of Measurement
Ordinal Scale
• A ranking scale in which numbers are assigned to
objects to indicate the relative extent to which the objects
possess some characteristic.
• Can determine whether an object has more or less of a
characteristic than some other object, but not how much
more or less.
• Any series of numbers can be assigned that preserves
the ordered relationships between the objects.
• In addition to the counting operation allowable for
nominal scale data, ordinal scales permit the use of
statistics based on centiles, e.g., percentile, quartile,
median.
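For instance, the median is safe on ordinal codes while the arithmetic mean is not, since the distances between adjacent codes are undefined. A minimal sketch with hypothetical satisfaction ratings (1 = low, 2 = medium, 3 = high):

```python
import statistics

# Hypothetical ordinal ratings: 1 < 2 < 3, but the gaps carry no meaning.
ratings = [1, 2, 2, 3, 3, 3, 1]

med = statistics.median(ratings)   # permissible: depends only on order
# statistics.mean(ratings) would also run, but its value has no interpretation
```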
28. Primary Scales of Measurement
Interval Scale
• Numerically equal distances on the scale represent
equal values in the characteristic being measured.
• It permits comparison of the differences between
objects.
• The location of the zero point is not fixed. Both the zero
point and the units of measurement are arbitrary.
• Any positive linear transformation of the form y = a + bx
will preserve the properties of the scale.
• It is not meaningful to take ratios of scale values.
• Statistical techniques that may be used include all of
those that can be applied to nominal and ordinal data,
and in addition the arithmetic mean, standard deviation,
and other statistics commonly used in marketing
research.
29. Primary Scales of Measurement
Ratio Scale
• Possesses all the properties of the nominal, ordinal, and
interval scales.
• It has an absolute zero point.
• It is meaningful to compute ratios of scale values.
• Only proportionate transformations of the form y = bx,
where b is a positive constant, are allowed.
• All statistical techniques can be applied to ratio data.
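The two permissible transformations can be contrasted in code. The numbers and the rescaling constants below are hypothetical; the point is which comparisons survive each transformation.

```python
# Interval scale: any y = a + b*x (b > 0) preserves the scale's properties.
x = [10.0, 20.0, 40.0]
a, b = 32.0, 2.0                     # hypothetical positive linear rescaling
y = [a + b * v for v in x]           # [52.0, 72.0, 112.0]

# Ratios of differences survive the transformation...
interval_ok = (x[2] - x[1]) / (x[1] - x[0]) == (y[2] - y[1]) / (y[1] - y[0])
# ...but ratios of values do not:
value_ratio_changes = x[1] / x[0] != y[1] / y[0]

# Ratio scale: only y = b*x is allowed, and value ratios then survive.
z = [b * v for v in x]
ratio_ok = z[1] / z[0] == x[1] / x[0]
```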
30. Primary Scales of Measurement

Nominal
• Basic characteristics: numbers identify & classify objects
• Common examples: Social Security nos., numbering of football players
• Marketing examples: brand nos., store types
• Permissible descriptive statistics: percentages, mode
• Permissible inferential statistics: chi-square, binomial test

Ordinal
• Basic characteristics: numbers indicate the relative positions of objects but not the magnitude of differences between them
• Common examples: quality rankings, rankings of teams in a tournament
• Marketing examples: preference rankings, market position, social class
• Permissible descriptive statistics: percentile, median
• Permissible inferential statistics: rank-order correlation

Interval
• Basic characteristics: differences between objects can be compared
• Common examples: temperature (Fahrenheit)
• Marketing examples: attitudes, opinions, index numbers
• Permissible descriptive statistics: range, arithmetic mean, standard deviation
• Permissible inferential statistics: correlation, t-tests, ANOVA

Ratio
• Basic characteristics: zero point is fixed; ratios of scale values can be compared
• Common examples: length, weight
• Marketing examples: age, sales, income, costs
• Permissible descriptive statistics: geometric mean, harmonic mean
• Permissible inferential statistics: coefficient of variation
31.
Basic Definitions
• Constant: A Characteristic that never
changes its Value (Your Height after 20)
• Variable: A Characteristic that assumes
different Values (Your Weight after 20)
• Discrete Variable: Cannot take a Value
Between Any Two Values (Staff Strength)
• Continuous Variable: Can take a Value
Between Any Two Values (P-E Ratio)
32.
Discrete Measurement Data
Only certain values are possible (there
are gaps between the possible values).
Continuous Measurement
Data
Theoretically, any value within an
interval is possible with a fine enough
measuring device.
33.
Discrete data: gaps between the possible values (e.g., the integers 0, 1, 2, 3, 4, 5, 6, 7).
Continuous data: theoretically, no gaps between possible values (e.g., any value between 0 and 1000).
34.
Examples:
Discrete Measurement Data
• Number of students late for class
• Number of crimes reported in a police
station
• Number of times a particular word is used
• Number of defectives in a lot
Generally, discrete data are counts.
35.
Examples:
Continuous Measurement Data
• Cholesterol level
• Height
• Age
• Time to complete a homework assignment
Generally, continuous data come from
measurements.
36.
Who Cares?
The type(s) of data
collected in a study
determine the type of
statistical analysis used.
37.
For example ...
• Categorical data are commonly
summarized using “percentages” (or
“proportions”).
– 31% of students have a passport
– 2%, 33%, 39%, and 26% of the students in class are, respectively, engineering, science, commerce, and arts graduates
38.
And for example …
• Measurement data are typically summarized using “averages” (or “means”).
– Average weight of male students of this batch
is 75 kg.
– Average weight of female students of this
batch is 55 kg.
– Average growth rate of sales of a company is
18%.
39.
Course Coverage
• Essential Basics for Business Executives
• Data Classification & Presentation Tools
• Preliminary Analysis & Interpretation of Data
• Correlation Model
• Regression Model
• Time Series Model
• Forecasting
• Uncertainty and Probability
• Sampling Techniques
• Estimation and Testing of Hypothesis
41.
Data Classification
• First Step: Organize Data Systematically
• Arrange the Data According to a Common
Characteristic Possessed by All Items
• Methods: Array
Frequency Array
Frequency Distribution
48.
Constructing
Frequency Distribution
• Find Maximum & Minimum Values in Data.
• Make Sub-Intervals to Cover Entire Range
• They are Called the ‘Class Intervals’.
• Class Intervals Need Not Be of Equal
Length. But, it is Useful if They Are.
• Note the Number of Observations that Belong to Each Class Interval.
• They are Called the ‘Frequencies’.
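The steps above can be sketched in a few lines of Python. The order counts are made-up data (50 hypothetical sales executives), and a class width of 5 is assumed, matching the table that follows.

```python
import random

# Made-up order counts for 50 sales executives (hypothetical data).
random.seed(1)
orders = [random.randint(0, 44) for _ in range(50)]

lo, hi = min(orders), max(orders)        # step 1: find max & min values
width = 5                                # step 2: equal class intervals
start = lo - lo % width                  # align the first interval boundary
n_classes = (hi - start) // width + 1

# step 3: count the observations belonging to each class interval
freq = {}
for k in range(n_classes):
    left = start + k * width
    label = f"{left:02d} - {left + width - 1:02d}"
    freq[label] = sum(left <= x < left + width for x in orders)
```

Every observation falls in exactly one class interval, so the frequencies always sum to the number of observations.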
49.
Frequency Distribution
Number of Orders Number of SEs
00 – 04 14
05 – 09 19
10 – 14 07
15 – 19 04
20 – 24 02
25 – 29 01
30 – 34 02
35 – 39 00
40 – 44 01
TOTAL 50
50.
In This Example
• What is the Variable? Sales Executives or
Sales Orders?
• Is it Nominal, Ordinal or Cardinal?
• Is it Discrete or Continuous?
• What are the frequencies (sometimes called frequency values or scores)?
51.
Data Presentation
• Some People are Averse to Numbers
• They Can’t Grasp Tabulated Data
• Pictures Speak with Them; Figures Don’t.
• Pictures Tell Them What A Thousand
Numbers Can’t.
• If your Boss Fits in This Category, You
Must Learn the Art and Methods of Data
Presentation.
52.
For Nominal & Ordinal Variables
Bar Chart:
• Horizontal Diagram of Bars of Equal Width
But of Different Heights
• Bars Stand on a Common Base Line
• Horizontal Axis: Nominal/Ordinal Variables
• Vertical Axis: Their Frequencies
• Height of Bar is Prop. to Frequency Value
• Bars are Separated by Equal Distance
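The height-proportional-to-frequency rule can be sketched even in plain text, with each bar drawn as a run of `#` characters. The plant production figures are hypothetical.

```python
# A text sketch of the bar-chart rule: bar length proportional to frequency.
# Plant production figures are hypothetical (tons per month).
production = {"Plant A": 30, "Plant B": 15, "Plant C": 25, "Plant D": 10}

bars = [f"{plant:8s} {'#' * tons} {tons}" for plant, tons in production.items()]
for line in bars:
    print(line)
```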
53. Plant-wise Production

[Bar chart: production in tons per month, April 2006, for Plants A, B, C, and D; vertical axis scaled 0 to 35.]
54.
For Nominal & Ordinal Variables
Component Bar Chart:
• Illustration of A Total Divided Into Parts
• Divide Simple Bars Into Component Parts
• Part Prop. to Component Freq. Value
Multiple Bar Chart:
• Direct Comparison Among Variables
• Draw Bars By the Side of Each Other
55. Multiple Bar Chart

[Multiple bar chart: quarterly figures (1st–4th Qtr) for East, West, and North; vertical axis scaled 0 to 90.]
56.
For Nominal & Ordinal Variables
Pie Chart:
• Divide A Circle Into Sectors (Pie)
• Area of Each Sector Proportionate to
Component Frequency Value
• Also called ‘Pizza Chart’
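The "area proportional to frequency" rule translates to sector angles of 360° times each component's share. A minimal sketch, using the hypothetical sales shares from the pie chart below:

```python
# Sector angle proportional to component frequency: angle = 360 * share.
# Sales shares are hypothetical percentages.
sales = {"A": 37, "B": 15, "C": 7, "D": 11, "E": 30}
total = sum(sales.values())
angles = {zone: 360 * share / total for zone, share in sales.items()}
```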
57. varsha varde 5757
A Pie Chart
SALES(Rs Crores)
A
37%
B
15%
C
7%
D
11%
E
30%
A
B
C
D
E
59.
For Cardinal Variables
Histogram:
• A Graph of Columns, Each Having a Class
Interval as Base and Frequency as Height
• Plot Class Intervals Along Horizontal Axis
• Erect A Rectangle On Each Class Interval
• Area of Rectangle Prop. to Freq. Value
• Rectangles Touch Each Other
61.
For Cardinal Variables
Frequency Polygon:
• Plot Mid Points of Class Intervals Along
Horizontal Axis
• Concerned Frequencies on Vertical Axis
• Join All These Points
Frequency Curve:
• Join All These Points by a Smooth Curve
64.
Visual Characteristics of
Frequency Distributions
• Skewness
• Kurtosis
• Modality
65.
Skewness
• Symmetrical Distribution (Normal Distn.)
• Asymmetrical Distribution: Positively
Skewed or Negatively Skewed
• Symmetrical Distributions are Easy to
Handle Mathematically.
• But, Asymmetric Distributions Are More
Commonly Found.
• That Is Why We Need Statistical Methods.
66.
Shapes of Frequency Distribution
• Draw Histogram on Paper.
• Fold Paper In Half the Long Way.
• If Distribution Is Symmetrical, the Left
Side of Histogram Would Be Mirror Image
of the Right Side.
• Life is Rarely Symmetrical.
• If Distribution Is Asymmetrical, Two
Sides Will Not Be Mirror Images of Each
Other.
67.
Positively Skewed Distribution
• Frequencies Cluster Toward the Lower
End of The Scale (That Is, The Smaller
Numbers).
• Increasingly Fewer Scores At the Upper
End of The Scale (That Is, The Larger
Numbers).
69.
Negatively Skewed Distribution
• Negatively Skewed Distribution Is Exactly
The Opposite.
• Most of The Scores Occur Toward The
Upper End of The Scale (That Is, The
Larger Numbers).
• Increasingly Fewer Scores Occur Toward
The Lower End (That Is, The Smaller
Numbers).
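The asymmetry described above can be quantified with the moment coefficient of skewness, g1 = m3 / m2^(3/2): zero for a symmetric distribution, positive when scores cluster at the lower end, negative when they cluster at the upper end. A minimal sketch on made-up scores:

```python
# Moment coefficient of skewness: g1 = m3 / m2**1.5
def skewness(xs):
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n   # second central moment
    m3 = sum((x - mean) ** 3 for x in xs) / n   # third central moment
    return m3 / m2 ** 1.5

symmetric = [1, 2, 3, 4, 5]       # mirror-image halves
right_tail = [1, 1, 2, 2, 10]     # most scores small, a few large
```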
71.
Kurtosis
• Relative Concentration of Scores in the
Center, the Upper and Lower Ends and
the Shoulders of a Distribution
• Platykurtic: Flatter Curve
• Leptokurtic: More Peaked
• Mesokurtic: Medium Peaked
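One common numerical measure of this is excess kurtosis, g2 = m4 / m2^2 − 3: negative for platykurtic (flatter), positive for leptokurtic (more peaked), near zero for mesokurtic. A sketch on made-up scores:

```python
# Excess kurtosis: g2 = m4 / m2**2 - 3
def excess_kurtosis(xs):
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n   # second central moment
    m4 = sum((x - mean) ** 4 for x in xs) / n   # fourth central moment
    return m4 / m2 ** 2 - 3

flat = [1, 2, 3, 4]                    # evenly spread -> platykurtic
peaked = [3, 3, 3, 3, 3, 3, 0, 6]      # heavy centre, far tails -> leptokurtic
```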
72.
Modality
• Unimodal: Only One Major "Peak" in the
Distribution of Scores When Represented
as a Histogram
• Bimodal: Two Major Peaks
• Multimodal: More Than Two Major Peaks