This document provides an overview of descriptive statistics concepts including frequency distributions, measures of central tendency, measures of variability, and the normal distribution. It discusses numerical and graphical representations of univariate and bivariate frequency distributions using examples from an overachievement study dataset. Key measures like the mean, median, mode, range, standard deviation, and normal curve are defined. Standard scores and their uses are also introduced.
1. Topics: Descriptive Statistics
• A road map
• Examining data through frequency
distributions
• Measures of central tendency
• Measures of variability
• The normal curve
• Standard scores and the standard normal
distribution
2. The Role of Description
• Description as a purpose of research
• Choosing the right statistical procedures
4. Frequency Distributions
• A method of summarizing and highlighting
aspects of the data in a data matrix, showing
the frequency with which each value occurs.
• Numerical Representations: a tabular
arrangement of scores
• Graphical Representations: a pictorial
arrangement of scores
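A tabular frequency distribution like the ones on the following slides can be built in a few lines of code. A minimal sketch in Python (the sample of majors here is illustrative, not the study's actual data matrix):

```python
from collections import Counter

# Hypothetical sample of 10 students' majors (illustrative data only)
majors = ["PHYSICS", "BIOLOGY", "ENGLISH", "BIOLOGY", "DESIGN",
          "PHYSICS", "ENGLISH", "BIOLOGY", "SOCIOLOGY", "ENGLISH"]

counts = Counter(majors)  # frequency with which each value occurs
n = len(majors)

# Print value, frequency, percent, and cumulative percent,
# ordered by descending frequency
cum = 0.0
for value, freq in counts.most_common():
    pct = 100.0 * freq / n
    cum += pct
    print(f"{value:<12} {freq:>4} {pct:>7.1f} {cum:>7.1f}")
```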
7. Frequency Distribution: Major
MAJOR
                                               Valid     Cum
Value Label    Value  Frequency  Percent  Percent  Percent
PHYSICS         1.00          5     12.5     12.5     12.5
CHEMISTRY       2.00          4     10.0     10.0     22.5
BIOLOGY         3.00          7     17.5     17.5     40.0
ENGINEERING     4.00          5     12.5     12.5     52.5
ANTHROPOLOGY    5.00          5     12.5     12.5     65.0
SOCIOLOGY       6.00          4     10.0     10.0     75.0
ENGLISH         7.00          7     17.5     17.5     92.5
DESIGN          8.00          3      7.5      7.5    100.0
                       ---------  -------  -------
Total                         40    100.0    100.0
Valid cases 40   Missing cases 0
8. Frequency Distribution: Major Group
MAJORGRP
                                                    Valid     Cum
Value Label           Value  Frequency  Percent  Percent  Percent
SCIENCE & ENGINEERIN   1.00         21     52.5     52.5     52.5
SOCIAL SCIENCE         2.00          9     22.5     22.5     75.0
HUMANITIES             3.00         10     25.0     25.0    100.0
                             ---------  -------  -------
Total                               40    100.0    100.0
17. Frequency Polygon: SAT Scores
(From Ungrouped Data)
[Figure: frequency polygon of ungrouped SAT scores; x-axis: SAT (1000.00 to 1200.00); y-axis: Count (0 to 8)]
18. Cumulative Frequency Polygon: SAT
Scores
[Figure: cumulative frequency polygon of SAT scores; x-axis: SAT (1000.00 to 1200.00); y-axis: Cumulative Frequency (0 to 50)]
22. Relative Frequency Polygon: GPA
Comparison of Majors
[Figure: relative frequency polygons of GPA (2.00 to 3.60) by MAJORGRP — SCIENCE & ENGINEERIN, SOCIAL SCIENCE, HUMANITIES; y-axis: Percent (0 to 40)]
23. Relative Frequency Polygon: GPA
Comparison of Gender
[Figure: relative frequency polygons of GPA (2.00 to 3.60) by SEX — MALE, FEMALE; y-axis: Percent (0 to 30)]
24. What Can Be Seen in Frequency
Distributions
• Shape
• Central Tendency
• Variability
26. Shapes of Distributions
• SYMMETRIC: bell-shaped; the prototype is the
normal distribution
• NEGATIVELY SKEWED: hump in the distribution
at the high-score end, tail at the low-score end
• POSITIVELY SKEWED: hump in the distribution
at the low-score end, tail at the high-score end
• LEPTOKURTIC: very peaked in the center
compared to the normal distribution
• MESOKURTIC: peaked just like the normal
distribution
• PLATYKURTIC: flat in the center compared to
the normal distribution
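Skewness can also be quantified rather than just eyeballed from the polygon. A minimal sketch in Python of the population Fisher-Pearson skewness coefficient (the `skewness` helper and both data sets are illustrative, not from the deck):

```python
import statistics

def skewness(xs):
    """Population Fisher-Pearson skewness: mean of cubed z-scores.
    Positive => tail at the high-score end; negative => tail at the low end."""
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

# Hump at the low end, tail toward high scores -> positively skewed
right_skewed = [1, 1, 2, 2, 2, 3, 3, 4, 6, 10]
# Perfectly symmetric set -> skewness of exactly 0
symmetric = [1, 2, 3, 4, 5, 6, 7]

print(skewness(right_skewed))  # > 0
print(skewness(symmetric))     # 0
```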
28. Definitions:
Measures of Central Tendency
• Mean:
– “Arithmetic mean”
– “Center of gravity” such that the “weight” of the scores
above the mean exactly balances the “weight” of the
scores below the mean
• Median:
– The number that lies at the midpoint of the distribution
of scores; divides the distribution into two equal halves
• Mode:
– Most frequently occurring score
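The three measures just defined are one call each in Python's standard library. A minimal sketch (the score list is illustrative, not the study's SAT data):

```python
import statistics

# Hypothetical SAT scores (illustrative only)
scores = [1000, 1100, 1100, 1100, 1150, 1200, 1200]

mean = statistics.mean(scores)       # arithmetic mean: sum / count
median = statistics.median(scores)   # midpoint of the sorted scores
modes = statistics.multimode(scores) # most frequent value(s); a list, since
                                     # a distribution can be bimodal

print(mean, median, modes)
```

`multimode` returns a list because, as in the Sciences row of a later slide (modes 1150 and 1200), more than one score can tie for most frequent.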
29. Mean, Median, Mode:
SAT Scores by Gender
Group Mode Median Mean
Male 1200 1112.50 1112.00
Female 1100 1122.50 1129.50
Total 1100.00 1122.50 1122.75
30. Mean, Median, Mode:
SAT Scores by Area
Group Mode Median Mean
Humanities 1100 1092.50 1095.00
Social Sciences 1100 1100.00 1108.89
Sciences 1150,1200 1150.00 1138.10
Total 1100 1122.50 1122.75
32. Definitions:
Measures of Variability
• Range:
– Difference between the highest and lowest score
• Inter-quartile Range:
– The spread of the middle 50% of the scores
– The difference between the upper quartile (Q3, the 75th
percentile) and the lower quartile (Q1, the 25th percentile)
• Standard Deviation:
– The average dispersion or deviation of scores around the mean, measured
in original score units
• Variance:
– The average variability of scores, measured in squared units of the
original scores (the square of the standard deviation)
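All four measures are available in Python's standard library. A minimal sketch (illustrative scores; note that `quantiles` uses an interpolation method, so quartile values may differ slightly from other software):

```python
import statistics

# Hypothetical SAT scores (illustrative only)
scores = [1000, 1050, 1100, 1100, 1150, 1200]

rng = max(scores) - min(scores)  # range: highest minus lowest score

# quantiles(..., n=4) returns the three quartile cut points Q1, Q2, Q3
q1, q2, q3 = statistics.quantiles(scores, n=4)
iqr = q3 - q1                    # spread of the middle 50% of the scores

sd = statistics.stdev(scores)    # sample standard deviation (score units)
var = statistics.variance(scores)  # sample variance = sd squared
```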
33. Range, Interquartile Range, and Standard
Deviation: SAT Scores by Area
Group            Range  IQ Range  Standard Deviation
Humanities         200     35.00               55.88
Social Sciences     95     15.00               28.59
Sciences           200     27.50               57.00
34. Range, Interquartile Range, and Standard
Deviation: SAT Scores by Gender
Group Range IQ Range Standard
Deviation
Males 200 100 60.92
Females 175 75 46.02
Total 200 70 54.02
35. Properties of Normal Distribution
• Bell-shaped (unimodal)
• Symmetric about the mean
• Mode, median, and mean are equal (though exact
equality rarely occurs in real data)
• Asymptotic (the curve approaches but never
touches the abscissa)
37. Definitions: Standard Scores
• Standard Scores: scores expressed as the number
of SDs away from the mean (z-scores)
• Obtained by finding how far a score is
above or below the mean and dividing that
difference by the SD
• Changes the mean to 0 and the SD to 1, but does
not change the shape of the distribution (the
result is called the Standard Normal Distribution)
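The definition above translates directly to code. A minimal sketch in Python (illustrative scores; population SD is used here, which is one common convention for z-scores):

```python
import statistics

# Hypothetical SAT scores (illustrative only)
scores = [1000, 1050, 1100, 1150, 1200]

mean = statistics.mean(scores)
sd = statistics.pstdev(scores)  # population SD; stdev() would give sample SD

# z-score: how far a score lies above (+) or below (-) the mean, in SD units
z_scores = [(x - mean) / sd for x in scores]
```

After standardizing, the z-scores have mean 0 and SD 1, but the shape of the distribution is unchanged.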
38. Uses of Standard Normal
Distribution
• What proportion of scores falls between the mean
and a given raw score
• What proportion of scores falls above or below a
given raw score
• What proportion of scores falls between two raw
scores
• What raw score falls above (or below) a certain
percentage of scores
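Each of these questions can be answered with the normal CDF and its inverse. A minimal sketch using Python's `statistics.NormalDist` (the mean of 1100 and SD of 100 are illustrative parameters, not the study's values):

```python
import statistics

# Hypothetical normal distribution of SAT scores (illustrative parameters)
dist = statistics.NormalDist(mu=1100, sigma=100)

# Proportion of scores below / above a given raw score
below_1200 = dist.cdf(1200)
above_1200 = 1 - dist.cdf(1200)

# Proportion of scores between two raw scores
between = dist.cdf(1200) - dist.cdf(1000)

# Raw score that falls above 90% of scores (the 90th percentile)
cutoff = dist.inv_cdf(0.90)
```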