This document discusses various types of errors that can occur in regression analysis including multicollinearity, heteroscedasticity, and autocorrelation. It defines each error, provides practical examples, and outlines methods for detecting each type of error, including statistical tests like variance inflation factor, Breusch-Pagan test, Durbin-Watson test, and Run test. Remedial measures are also discussed.
This document discusses various methods to detect errors in regression models such as multicollinearity, heteroscedasticity, and autocorrelation. It defines each error and provides practical examples. Detection methods are then presented, including variance inflation factor, Breusch-Pagan test, Durbin-Watson test, and others. Specific steps are outlined for applying each test to determine if errors are present based on the test statistics.
Here are my responses to the guide questions:
1. I decided to teach in SHS because I wanted to help guide students in their transition to college and career. I find it rewarding to support students' personal and academic growth during this important stage of their lives.
2. Two of the most significant experiences I've had teaching Research involve seeing students get excited about their topics and taking ownership of their work. It's amazing to see their eyes light up when they discover something interesting during the research process. I also appreciate witnessing students' confidence grow as they learn to independently plan and conduct research. These experiences are meaningful because they show the positive impact of research skills on student learning and development.
3. One of my most
The document discusses various statistical methods for analyzing relationships between variables, including chi-square tests, measures of association like lambda and gamma, and rank correlation. Chi-square tests can be used to test for independence and goodness of fit between nominal or ordinal variables. Measures like lambda and gamma indicate the strength of association: lambda ranges from 0 to 1, while gamma ranges from -1 to +1. Rank correlation assesses relationships between variables when only ordinal data is available by analyzing the agreement between ranks. Cross tabulation allows investigating patterns of bivariate association through distribution analysis.
This document provides an overview of non-parametric statistical tests and how to perform them using SPSS. It discusses the assumptions and advantages of parametric vs non-parametric tests. The document is divided into chapters that cover different types of non-parametric tests for relationships between variables, independent samples, related samples, and exact tests. Examples are provided for tests including chi-square, binomial, Mann-Whitney U, and Wilcoxon signed-rank. Steps for running these tests in SPSS are outlined.
The document discusses non-parametric statistical tests and provides examples of their use in SPSS. It introduces key non-parametric tests including the chi-square test, binomial test, run test, Kolmogorov-Smirnov test, Mann-Whitney U test, and Kruskal-Wallis H test. Each test is explained and an example is provided demonstrating how to conduct the test in SPSS, interpret the output, and determine if results are statistically significant. The document serves as a hands-on guide for using various non-parametric tests to analyze data when parametric assumptions are not met.
The document discusses chi-square test and its properties. It defines chi-square as a non-parametric statistical test used for discrete data to test for independence and goodness of fit between observed and expected frequencies. The chi-square test has some key assumptions including independent random samples, nominal or ordinal level data, and no expected cell counts below 5. It is calculated by subtracting expected from observed frequencies, squaring the differences, and dividing by expected counts. The chi-square test can identify if there is a significant association between variables but does not measure the strength of the association.
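The calculation described above (observed minus expected, squared, divided by expected) can be sketched in a few lines of Python; the frequencies below are made up purely for illustration:

```python
# Chi-square statistic for observed vs. expected frequencies:
# chi2 = sum((O - E)^2 / E). Counts are illustrative, not from the document.
observed = [18, 22, 28, 32]
expected = [25, 25, 25, 25]

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi2, 2))
```

The resulting statistic is then compared against a chi-square critical value for the appropriate degrees of freedom; as the summary notes, a significant result indicates association but says nothing about its strength.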
1. Linearity is evaluated by visually inspecting a plot of analytical results versus analyte concentration and determining if the relationship is linear.
2. If linear, the data should be statistically analyzed using methods like regression analysis and calculation of the correlation coefficient.
3. The Cochran test is used to determine if linearity data is homoscedastic or heteroscedastic by comparing the ratio of the largest variance to the sum of the variances against a critical value. If the ratio is below the critical value, the data is homoscedastic.
This document summarizes a study that used canonical correlation analysis to detect potential bias in faculty promotion scores at American University of Nigeria. The study aimed to test if canonical correlation could identify bias scoring, determine the influence of individual assessors' scores, and discriminate between promotable and non-promotable candidates. The results showed that canonical correlation could detect bias and influence with over 90% confidence and correctly classified candidates into promotable and non-promotable groups, rejecting the null hypotheses. Thus, canonical correlation was found to be an effective statistical tool for unbiased promotion scoring and decision making at the university.
Correlational research examines relationships between two or more variables without manipulating them. It investigates whether changes in one variable are associated with changes in another. Correlational studies describe relationships using a correlation coefficient and can be used to predict scores on one variable based on scores on another. Common correlational techniques include scatterplots, regression analysis, and factor analysis. Threats to internal validity like subject characteristics, mortality, history, and instrumentation must be controlled.
The document discusses simple linear regression analysis. It provides definitions and formulas for simple linear regression, including that the regression equation is y = a + bx. An example is shown of using the stepwise method to determine if there is a significant relationship between number of absences (x) and grades (y) for students. The analysis finds a significant negative relationship, meaning more absences correlated with lower grades. Formulas are provided for calculating the slope, intercept, and testing significance of the regression model.
The document discusses simple linear regression analysis. It provides definitions and formulas for simple linear regression, including that the regression equation is y = a + bx. An example is shown of using the stepwise method to determine if there is a significant relationship between number of absences (x) and grades (y) for students. The analysis finds a significant negative relationship, meaning more absences correlated with lower grades. The document also discusses using the regression equation to predict outcomes and the significance test for the slope of the regression line.
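The slope and intercept formulas behind y = a + bx can be sketched directly; the absence/grade values below are hypothetical stand-ins for the example data, not figures from the document:

```python
# Least-squares fit of y = a + b*x.
# b = (n*Sxy - Sx*Sy) / (n*Sxx - Sx^2),  a = mean(y) - b*mean(x)
x = [0, 1, 2, 3, 4, 5]          # number of absences (hypothetical)
y = [90, 88, 84, 80, 78, 74]    # grades (hypothetical)

n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))

b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = sy / n - b * sx / n
print(a, b)   # negative slope: more absences, lower predicted grade
```

With the fitted a and b, a predicted grade for any number of absences is just a + b * absences, which mirrors the prediction use described above.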
The document discusses the chi-square test of independence, which determines if there is a relationship between two categorical variables. It explains that the chi-square test compares observed and expected category frequencies to assess if the null hypothesis of independence is true. An example calculates the chi-square test to see if gender and education level are related using a sample of 45 people's education data. The chi-square test value is greater than the critical value, so the null hypothesis of no relationship is rejected.
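The independence test can be sketched as follows, with expected counts computed from the row and column margins; the contingency counts are invented for illustration (they are not the study's actual data, though they total 45):

```python
# Chi-square test of independence on a 2x3 contingency table
# (gender x education level); counts are illustrative only.
table = [[10, 8, 7],    # male:   HS, college, graduate
         [6, 9, 5]]     # female: HS, college, graduate

rows = [sum(r) for r in table]
cols = [sum(c) for c in zip(*table)]
total = sum(rows)

# expected count for cell (i, j) = row_total_i * col_total_j / grand_total
chi2 = sum((table[i][j] - rows[i] * cols[j] / total) ** 2
           / (rows[i] * cols[j] / total)
           for i in range(len(rows)) for j in range(len(cols)))
dof = (len(rows) - 1) * (len(cols) - 1)
print(round(chi2, 3), dof)
```

The statistic is compared to the chi-square critical value at (rows − 1) × (columns − 1) degrees of freedom; exceeding it leads to rejecting the null hypothesis of independence, as the summary describes.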
This document discusses non-parametric tests, which are statistical tests that can be used when assumptions of parametric tests are not met. It provides examples of common non-parametric tests including the sign test and Wilcoxon signed-rank test. The sign test analyzes differences between paired observations by assigning a "+" or "-" sign. The Wilcoxon signed-rank test also assigns ranks to the differences and calculates the sum of positive and negative ranks to determine if the null hypothesis can be rejected. The document provides step-by-step explanations and examples of applying these non-parametric tests to paired sample data to test for differences between populations.
The document discusses the sign test, a nonparametric hypothesis test that does not require assumptions about the population distribution. The sign test can be used to test claims involving matched pairs, nominal data with two categories, or the population median. The document provides guidelines for performing the sign test in each of these cases, including stating hypotheses, determining sample sizes and test statistics, and making conclusions. Examples are also given to illustrate the sign test for matched pairs, nominal data, and testing the population median.
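For the matched-pairs case described above, the sign test reduces to counting "+" and "-" differences (dropping ties) and referring the count to a Binomial(n, 0.5) distribution. A minimal sketch with hypothetical paired scores:

```python
# Sign test for matched pairs: count "+" and "-" differences (ties dropped)
# and compare the smaller count to a Binomial(n, 0.5) reference.
from math import comb

before = [72, 75, 68, 80, 77, 71, 69, 74]   # hypothetical paired data
after_ = [78, 74, 73, 85, 80, 70, 75, 74]   # last pair ties and is dropped

diffs = [a - b for a, b in zip(after_, before) if a != b]
plus = sum(d > 0 for d in diffs)
n = len(diffs)
k = min(plus, n - plus)

# two-sided p-value: 2 * P(X <= k) under Binomial(n, 0.5)
p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
print(plus, n, round(min(p, 1.0), 3))
```

If the p-value is below the chosen significance level, the null hypothesis of no difference between the paired populations is rejected.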
This document discusses non-parametric statistical tests, which make few assumptions about the distribution of the underlying population. It provides examples of non-parametric tests like the sign test, Wilcoxon rank sum test, and Kruskal-Wallis test. These tests involve ranking all observations from different groups together and applying statistical tests to the ranks rather than the original values. Non-parametric tests are useful when assumptions of parametric tests may not hold but lack power with small samples.
A hypothesis is usually considered the principal instrument in research and quality control. Its main function is to suggest new experiments and observations, and many experiments are carried out with the deliberate object of testing a hypothesis. Decision makers often face situations in which they are interested in testing hypotheses on the basis of available information and then making decisions based on that testing. In Six Sigma methodology, hypothesis testing is a tool of substance used in the Analyze phase of a Six Sigma project so that improvement proceeds in the right direction.
This document discusses correlation analysis and the different statistical tests used to analyze the relationship between two variables. It explains that correlation determines how strongly two variables are related and describes the correlation coefficient and p-value. Pearson, Kendall's Tau, and Spearman's rank correlation tests are presented for measuring the correlation between two interval/ratio variables, while chi-square is used for categorical variables. The assumptions of each test are provided along with guidelines for selecting the appropriate analysis and interpreting the results.
The document discusses Spearman's rank correlation coefficient, a nonparametric measure of statistical dependence between two variables. It assumes values between -1 and 1, with -1 indicating a perfect negative correlation and 1 a perfect positive correlation. The steps involve converting values to ranks, calculating the differences between ranks, and determining if there is a statistically significant correlation based on the test statistic and critical values. An example calculates Spearman's rho using rankings of cricket teams in test and one day international matches.
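The rank-difference steps above follow the standard formula rho = 1 − 6·Σd² / (n(n² − 1)). A minimal sketch, using invented team rankings (not the document's cricket data) and assuming no ties:

```python
# Spearman's rho from two sets of ranks (no ties assumed):
# rho = 1 - 6*sum(d^2) / (n*(n^2 - 1))
test_rank = [1, 2, 3, 4, 5, 6]   # rank in Test matches (hypothetical)
odi_rank  = [2, 1, 4, 3, 6, 5]   # rank in ODI matches (hypothetical)

n = len(test_rank)
d2 = sum((t - o) ** 2 for t, o in zip(test_rank, odi_rank))
rho = 1 - 6 * d2 / (n * (n * n - 1))
print(round(rho, 3))   # close to +1: strong agreement between rankings
```

The computed rho is then compared against the critical value for n pairs to decide whether the correlation is statistically significant.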
Decision between homoscedasticity or heteroscedasticity for linearity data (C... by Chandra Prakash Singh
This document discusses testing for homoscedasticity or heteroscedasticity in linearity data using the Cochran test. The Cochran test compares the ratio of the largest variance to the sum of the variances (the C value) against the critical C value. If C is less than the critical value, the data is homoscedastic. If C is greater than or equal to the critical value, the data is heteroscedastic. An example is provided to calculate variances, the C value, and the comparison to the critical value to determine if three data sets exhibit homoscedasticity or heteroscedasticity.
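The C-value calculation can be sketched directly; the three data sets below are invented for illustration, and the tabulated critical value itself is not computed here:

```python
# Cochran's C statistic: C = max(variance) / sum(variances).
# Compare C to the tabulated critical value (looked up, not computed here).
from statistics import variance

groups = [                       # three hypothetical linearity data sets
    [4.9, 5.1, 5.0, 5.2],
    [9.8, 10.1, 10.0, 10.3],
    [14.7, 15.2, 15.0, 15.1],
]

variances = [variance(g) for g in groups]  # sample variances
C = max(variances) / sum(variances)
print(round(C, 3))
# If C < C_critical the data are homoscedastic; otherwise heteroscedastic.
```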
Nonparametric methods and chi square tests (1) by Shakeel Nouman
This document discusses nonparametric statistical methods and chi-square tests. It introduces several nonparametric tests that do not rely on assumptions about the population distribution, including the sign test for paired comparisons, the runs test for detecting randomness, and ranks tests like the Mann-Whitney U test for comparing two populations and the Wilcoxon signed-rank test for paired comparisons. It also discusses the Kruskal-Wallis and Friedman tests for comparing multiple populations and chi-square tests for goodness of fit, independence, and equality of proportions. Examples are provided to demonstrate how to perform and interpret these various nonparametric tests.
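Of the rank tests mentioned above, the Mann-Whitney U computation is easy to sketch: pool the two samples, rank them, and apply U1 = n1·n2 + n1(n1+1)/2 − R1. The samples below are hypothetical and assumed tie-free:

```python
# Mann-Whitney U from ranks of the pooled sample (no ties assumed):
# U1 = n1*n2 + n1*(n1+1)/2 - R1, where R1 is the rank sum of group 1.
group1 = [12, 15, 11, 19, 14]   # hypothetical samples
group2 = [16, 18, 21, 20, 17]

pooled = sorted(group1 + group2)
rank = {v: i + 1 for i, v in enumerate(pooled)}   # ranks 1..n

n1, n2 = len(group1), len(group2)
r1 = sum(rank[v] for v in group1)
u1 = n1 * n2 + n1 * (n1 + 1) / 2 - r1
u2 = n1 * n2 - u1
print(u1, u2)   # the smaller U is compared to the critical value
```

With ties, average ranks would be assigned instead; the simple dictionary lookup above is only valid for distinct values.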
This ppt includes Student's t-test, paired t-test, chi-square test, and the chi-square test for population variance, covering their introduction, characteristics, assumptions, applications, and formulas. It is useful for second-year BBA or BBM students studying research methodology.
Introduction to correlation and regression analysis by Farzad Javidanrad
This document provides an introduction to correlation and regression analysis. It defines key concepts like variables, random variables, and probability distributions. It discusses how correlation measures the strength and direction of a linear relationship between two variables. Correlation coefficients range from -1 to 1, with values closer to these extremes indicating stronger correlation. The document also introduces determination coefficients, which measure the proportion of variance in one variable explained by the other. Regression analysis builds on correlation to study and predict the average value of one variable based on the values of other explanatory variables.
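The correlation coefficient and the determination coefficient (r²) described above can be computed from the standard sums; the paired observations below are invented for illustration:

```python
# Pearson correlation coefficient r and coefficient of determination r^2.
# r = (n*Sxy - Sx*Sy) / sqrt((n*Sxx - Sx^2) * (n*Syy - Sy^2))
from math import sqrt

x = [1, 2, 3, 4, 5]        # hypothetical paired observations
y = [2, 4, 5, 4, 6]

n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
syy = sum(v * v for v in y)
sxy = sum(a * b for a, b in zip(x, y))

r = (n * sxy - sx * sy) / sqrt((n * sxx - sx ** 2) * (n * syy - sy ** 2))
print(round(r, 3), round(r * r, 3))   # r in [-1, 1]; r^2 = share of variance explained
```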
This document discusses correlation coefficient and path coefficient analysis. It defines correlation as a statistical method to analyze the relationship between two or more variables. Correlation determines the degree of relationship but not causation. The document then discusses different types of correlation including positive, negative, linear, non-linear, simple, multiple and partial correlation. It also discusses methods to measure correlation including scatter diagrams, Karl Pearson's coefficient, Spearman's coefficient and concurrent deviation method. Finally, it explains path analysis which can be used to partition correlations into direct and indirect effects when studying causal relationships between variables.
Correlation research examines the relationships between two or more non-manipulated variables without changing any variables. It can be used to predict scores on one variable based on scores of another predictor variable. Common techniques include explanatory design to look for associations between variables and prediction design to identify predictors of outcomes. Tools to analyze correlations include scatter plots, correlation coefficients, and regression analysis.
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
This presentation covers the basics of PCOS, its pathology and treatment, as well as the Ayurvedic correlation of PCOS and the Ayurvedic line of treatment described in the classics.
How to Fix the Import Error in the Odoo 17 by Celine George
An import error occurs when a program fails to import a module or library, disrupting its execution. In languages like Python, this issue arises when the specified module cannot be found or accessed, hindering the program's functionality. Resolving import errors is crucial for maintaining smooth software operation and uninterrupted development processes.
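In Python specifically, the failure mode described above surfaces as an ImportError (or its subclass ModuleNotFoundError), and a common pattern is to catch it and fall back to an alternative. A minimal sketch, where "ujson" is just an example of an optional third-party module:

```python
# Catching an import error and falling back gracefully.
# "ujson" here is only an example of an optional third-party module.
try:
    import ujson as json          # faster parser, may not be installed
except ImportError:
    import json                   # standard-library fallback

print(json.loads('{"ok": true}')["ok"])
```

Either branch leaves a working `json` name bound, so the rest of the program runs regardless of whether the optional dependency is present.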
This slide deck is intended for master's students (MIBS & MIFB) at UUM. It is also useful for readers interested in the topic of contemporary Islamic banking.
How to Setup Warehouse & Location in Odoo 17 Inventory by Celine George
In this slide, we'll explore how to set up warehouses and locations in Odoo 17 Inventory. This will help us manage our stock effectively, track inventory levels, and streamline warehouse operations.
A Hindi alphabet (varnamala) PPT presentation covering Hindi vowels (svar) and consonants (vyanjan), with varnamala practice for children, by Dr. Mulla Adam Ali (https://www.drmullaadamali.com).
The simplified electron and muon model, Oscillating Spacetime: The Foundation... by RitikBhardwaj56
Discover the Simplified Electron and Muon Model: A New Wave-Based Approach to Understanding Particles delves into a groundbreaking theory that presents electrons and muons as rotating soliton waves within oscillating spacetime. Geared towards students, researchers, and science buffs, this book breaks down complex ideas into simple explanations. It covers topics such as electron waves, temporal dynamics, and the implications of this model on particle physics. With clear illustrations and easy-to-follow explanations, readers will gain a new outlook on the universe's fundamental nature.
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and... by PECB
Denis is a dynamic and results-driven Chief Information Officer (CIO) with a distinguished career spanning information systems analysis and technical project management. With a proven track record of spearheading the design and delivery of cutting-edge Information Management solutions, he has consistently elevated business operations, streamlined reporting functions, and maximized process efficiency.
Certified as an ISO/IEC 27001: Information Security Management Systems (ISMS) Lead Implementer, Data Protection Officer, and Cyber Risks Analyst, Denis brings a heightened focus on data security, privacy, and cyber resilience to every endeavor.
His expertise extends across a diverse spectrum of reporting, database, and web development applications, underpinned by an exceptional grasp of data storage and virtualization technologies. His proficiency in application testing, database administration, and data cleansing ensures seamless execution of complex projects.
What sets Denis apart is his comprehensive understanding of Business and Systems Analysis technologies, honed through involvement in all phases of the Software Development Lifecycle (SDLC). From meticulous requirements gathering to precise analysis, innovative design, rigorous development, thorough testing, and successful implementation, he has consistently delivered exceptional results.
Throughout his career, he has taken on multifaceted roles, from leading technical project management teams to owning solutions that drive operational excellence. His conscientious and proactive approach is unwavering, whether he is working independently or collaboratively within a team. His ability to connect with colleagues on a personal level underscores his commitment to fostering a harmonious and productive workplace environment.
Date: May 29, 2024
Tags: Information Security, ISO/IEC 27001, ISO/IEC 42001, Artificial Intelligence, GDPR
-------------------------------------------------------------------------------
Find out more about ISO training and certification services
Training: ISO/IEC 27001 Information Security Management System - EN | PECB
ISO/IEC 42001 Artificial Intelligence Management System - EN | PECB
General Data Protection Regulation (GDPR) - Training Courses - EN | PECB
Webinars: https://pecb.com/webinars
Article: https://pecb.com/article
-------------------------------------------------------------------------------
For more information about PECB:
Website: https://pecb.com/
LinkedIn: https://www.linkedin.com/company/pecb/
Facebook: https://www.facebook.com/PECBInternational/
Slideshare: http://www.slideshare.net/PECBCERTIFICATION
How to Build a Module in Odoo 17 Using the Scaffold MethodCeline George
Odoo provides an option for creating a module by using a single line command. By using this command the user can make a whole structure of a module. It is very easy for a beginner to make a module. There is no need to make each file manually. This slide will show how to create a module using the scaffold method.
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
7. Multicollinearity, Heteroscedasticity & Autocorrelation
Department Of Statistics
Begum Rokeya University, Rangpur

Practical Examples
1. Temperatures on different days in a month
2. Daily stock returns in a regression analysis using time series data
3. Similar answers from people in nearby geographic locations compared with geographically distant people
4. Similar performance among students of the same class compared with students of different classes
5. Household expenditure influenced by the expenditure of the preceding month
8. Detection Methods
Multicollinearity can be detected through:
• High R² but few significant t ratios
• High pair-wise correlations among regressors
• Eigenvalues & condition index
• Tolerance & variance inflation factor (VIF)
9. Detection of Multicollinearity: High R² But Few Significant t Ratios
If R² ≈ 1, the regressors jointly explain the dependent variable very well, yet the t tests show that none or very few of the individual predictors are statistically different from zero. This combination is a classic symptom of multicollinearity.
10. Detection of Multicollinearity: High Pair-wise Correlations Among Regressors
If the pair-wise correlation between two independent variables is high (greater than 0.8), multicollinearity is suspected. However, while high zero-order correlations may suggest collinearity, they need not be high for collinearity to exist in any specific case.
11. Detection of Multicollinearity: Eigenvalues and Condition Index
Condition number: k = (maximum eigenvalue) / (minimum eigenvalue)
Condition index: CI = √[(maximum eigenvalue) / (minimum eigenvalue)] = √k
Rules of thumb:
• k < 100: no multicollinearity; 100 ≤ k ≤ 1000: moderate to strong; k > 1000: severe
• CI < 10: no multicollinearity; 10 ≤ CI ≤ 30: moderate to strong; CI > 30: severe
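The thresholds above can be checked numerically from the eigenvalues of X'X. Below is a minimal numpy sketch on simulated data; the design matrix, seed, and noise scale are illustrative assumptions, not from the slides:

```python
import numpy as np

# Illustrative design matrix with two nearly collinear regressors
# (simulated data; not from the slides).
rng = np.random.default_rng(0)
x2 = rng.normal(size=50)
x3 = x2 + rng.normal(scale=0.01, size=50)   # x3 is almost identical to x2
X = np.column_stack([np.ones(50), x2, x3])

# Condition number k = max eigenvalue / min eigenvalue of X'X,
# condition index CI = sqrt(k)
eigvals = np.linalg.eigvalsh(X.T @ X)
k = eigvals.max() / eigvals.min()
ci = np.sqrt(k)
print(f"k = {k:.1f}  (> 1000 suggests severe multicollinearity)")
print(f"CI = {ci:.1f}  (> 30 suggests severe multicollinearity)")
```

Because x3 differs from x2 only by tiny noise, both k and CI land deep in the "severe" range.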
12. Detection of Multicollinearity: Tolerance & Variance Inflation Factor (VIF)
VIF = 1 / (1 − R²), where R² comes from regressing one regressor on the others
Tolerance: TOL = 1 / VIF = 1 − R²
Rules of thumb:
• VIF < 5: no multicollinearity; 5 ≤ VIF ≤ 10: moderate to strong; VIF > 10: severe
• TOL > 0.2: no multicollinearity; 0.1 ≤ TOL ≤ 0.2: moderate to strong; TOL < 0.1: severe
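As an illustration, VIF and tolerance can be computed via the auxiliary regressions described above. This is a minimal numpy sketch on simulated data; the function name `vif` and the simulated regressors are assumptions for demonstration only:

```python
import numpy as np

def vif(X):
    """VIF_j = 1/(1 - R_j^2), where R_j^2 comes from regressing
    column j of X on the remaining columns (intercept included)."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        tss = (y - y.mean()) @ (y - y.mean())
        r2 = 1.0 - (resid @ resid) / tss
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Simulated regressors: x3 is strongly related to x2, x4 is independent
rng = np.random.default_rng(1)
x2 = rng.normal(size=100)
x3 = 0.95 * x2 + 0.1 * rng.normal(size=100)
x4 = rng.normal(size=100)
vifs = vif(np.column_stack([x2, x3, x4]))
tols = 1.0 / vifs   # tolerance TOL = 1/VIF
```

The collinear pair (x2, x3) produces VIFs well above 10 (severe), while the independent regressor x4 stays below 5.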
14. Detection of Heteroscedasticity: Park's Test
1. Run OLS on the data and obtain the squared residuals
2. Take the natural log of the squared residuals
3. Take the natural log of the independent variable suspected of heteroscedastic behavior
4. Run OLS of the log of the squared residuals on the log of that regressor
If the model is insignificant, there is no heteroscedasticity in the error variance.
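The steps above can be sketched in numpy; the simulated data-generating process (error s.d. proportional to x) is an illustrative assumption, not from the slides:

```python
import numpy as np

# Simulated data whose error s.d. grows with x (heteroscedastic by design)
rng = np.random.default_rng(2)
n = 200
x = rng.uniform(1.0, 10.0, size=n)
y = 2.0 + 3.0 * x + rng.normal(scale=x, size=n)

# Step 1: OLS on the data, keep the squared residuals
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
u2 = (y - X @ beta) ** 2

# Steps 2-4: regress ln(u^2) on ln(x)
Z = np.column_stack([np.ones(n), np.log(x)])
g, *_ = np.linalg.lstsq(Z, np.log(u2), rcond=None)
resid = np.log(u2) - Z @ g

# t statistic of the slope: a significant slope signals heteroscedasticity
s2 = (resid @ resid) / (n - 2)
se = np.sqrt(s2 * np.linalg.inv(Z.T @ Z)[1, 1])
t_slope = g[1] / se
```

With the error s.d. proportional to x, the slope of ln(u²) on ln(x) is near 2 and clearly significant.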
15. Detection of Heteroscedasticity: Glejser Test
1. Run OLS on the data set and find the residuals
2. Take the absolute values of the residuals
3. Regress the absolute residuals on the regressor, using functional forms such as:
   |r_i| = β1 + β2·x_i + v_i
   |r_i| = β1 + β2·√x_i + v_i
   |r_i| = β1 + β2·(1/x_i) + v_i
   |r_i| = β1 + β2·(1/√x_i) + v_i
If the model is insignificant, there is no heteroscedasticity in the error variance.
16. Detection of Heteroscedasticity: Spearman's Rank Correlation Test
1. Fit the regression model of Y on X and obtain the residuals u_i
2. Rank the absolute values of the residuals and the independent variable; let d_i be the difference between the two ranks for observation i
3. Compute Spearman's rank correlation coefficient:
   r_s = 1 − 6·Σd_i² / [n(n² − 1)]
4. Compute the t statistic with n − 2 df:
   t = r_s·√(n − 2) / √(1 − r_s²)
If t_cal > t_tab, we may reject the null hypothesis and conclude that there is heteroscedasticity in the error variance.
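The four steps can be sketched directly in numpy; the simulated data and the inline `rank` helper are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
x = rng.uniform(1.0, 10.0, size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=x, size=n)   # heteroscedastic errors

# Step 1: fit the regression and obtain residuals
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
u = y - X @ beta

# Step 2: rank |residuals| and the regressor, take rank differences d_i
rank = lambda a: np.argsort(np.argsort(a)) + 1
d = rank(np.abs(u)) - rank(x)

# Steps 3-4: Spearman's coefficient and its t statistic (n - 2 df)
rs = 1.0 - 6.0 * (d @ d) / (n * (n**2 - 1))
t = rs * np.sqrt(n - 2) / np.sqrt(1 - rs**2)
```

Since the error spread grows with x, |residuals| and x agree in rank, so r_s is positive and the t statistic exceeds the usual critical values.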
17. Detection of Heteroscedasticity: Goldfeld–Quandt Test
1. Rank the observations in ascending order according to X_i
2. Omit c central observations and divide the remaining (n − c) observations into two groups (if n ≈ 30, take c = 4; if n ≈ 60, take c = 8)
3. Fit OLS separately to the two groups and find the residual sums of squares RSS1 and RSS2
4. Compute F = (RSS2/df) / (RSS1/df), where each group has df = (n − c)/2 − k
If F_cal > F_tab, we may reject the null hypothesis and conclude that there is heteroscedasticity in the error variance.
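A minimal numpy sketch of these steps follows, using the slide's n ≈ 60, c = 8 rule; the simulated data and the `rss` helper are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n, c, k = 60, 8, 2                 # sample size, omitted center, parameters
x = rng.uniform(1.0, 10.0, size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=x, size=n)   # variance rises with x

# Step 1: sort the observations by x
order = np.argsort(x)
x, y = x[order], y[order]

# Step 2: omit c central observations, split the rest into two groups
m = (n - c) // 2

def rss(xs, ys):
    X = np.column_stack([np.ones(len(xs)), xs])
    b, *_ = np.linalg.lstsq(X, ys, rcond=None)
    r = ys - X @ b
    return r @ r

# Steps 3-4: F = (RSS2/df)/(RSS1/df), df = (n - c)/2 - k per group
df = m - k
F = (rss(x[n - m:], y[n - m:]) / df) / (rss(x[:m], y[:m]) / df)
```

The high-x group has far larger residual variance, so F greatly exceeds the 5% critical value F(24, 24) ≈ 1.98.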
18. Detection of Heteroscedasticity: Breusch–Pagan–Godfrey Test
1. Fit the regression model Y_i = β1 + β2·x_2i + … + u_i and obtain the residuals u_i
2. Obtain σ̃² = Σu_i²/n and construct the variables p_i = u_i²/σ̃²
3. Construct the auxiliary model p_i = α1 + α2·x_2i + α3·x_3i + … + v_i
4. Obtain the ESS of this auxiliary model (ESS = total sum of squares − residual sum of squares) and compute Θ = ESS/2, where Θ ~ χ²(m − 1) and m is the number of parameters in the auxiliary model
If χ²_cal > χ²_tab, we may reject the null hypothesis and conclude that there is heteroscedasticity in the error variance.
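These four steps can be sketched in numpy; the simulated single-regressor model (so m − 1 = 1 df) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
x2 = rng.uniform(1.0, 10.0, size=n)
y = 1.0 + 2.0 * x2 + rng.normal(scale=x2, size=n)   # variance rises with x2

# Step 1: fit the model and obtain the residuals
X = np.column_stack([np.ones(n), x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
u = y - X @ beta

# Step 2: sigma-tilde^2 = sum(u_i^2)/n and p_i = u_i^2 / sigma-tilde^2
p = u**2 / ((u @ u) / n)

# Steps 3-4: regress p on the regressors; Theta = ESS/2 ~ chi2(m - 1)
g, *_ = np.linalg.lstsq(X, p, rcond=None)
fitted = X @ g
ess = ((fitted - p.mean()) ** 2).sum()   # explained sum of squares
theta = ess / 2.0
```

Here Θ far exceeds the 5% chi-square critical value with 1 df (3.84), so the null of homoscedasticity is rejected.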
20. Detection of Autocorrelation: Durbin–Watson Test
1. Fit a model Y_t = β1 + β2·X_2t + β3·X_3t + u_t
2. Find the residuals u_t and take the lag u_{t−1}
3. Calculate d = Σ_{t=2..n}(u_t − u_{t−1})² / Σ_{t=1..n} u_t²; d lies between 0 and 4
4. For the given sample size and number of regressors, find the critical values d_L and d_U
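Computing the d statistic itself is a one-liner once the residuals are in hand. A minimal numpy sketch with simulated AR(1) errors (the data-generating process is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
x = rng.normal(size=n)

# AR(1) errors with rho = 0.8: positive autocorrelation by construction
e = np.empty(n)
e[0] = rng.normal()
for t in range(1, n):
    e[t] = 0.8 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

# Fit the model and get the residuals
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
u = y - X @ beta

# d = sum_{t=2..n}(u_t - u_{t-1})^2 / sum_{t=1..n} u_t^2, with 0 <= d <= 4
d = np.sum(np.diff(u) ** 2) / np.sum(u**2)
```

Since d ≈ 2(1 − ρ̂), strong positive autocorrelation drives d well below 2.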
21. Durbin–Watson Test: Decision Rules
• 0 ≤ d < d_L: reject H0 (positive autocorrelation)
• d_L ≤ d ≤ d_U: inconclusive
• d_U < d < 4 − d_U: do not reject H0 (no evidence of autocorrelation)
• 4 − d_U ≤ d ≤ 4 − d_L: inconclusive
• 4 − d_L < d ≤ 4: reject H0 (negative autocorrelation)
22. Detection of Autocorrelation: Run Test
1. Fit a regression model and find the residuals
2. Note the signs (+ or −) of the residuals
3. Count the number of runs R and define the length of each run
Under the null hypothesis of no autocorrelation, R is approximately normal with
Mean: E(R) = 2·N1·N2/N + 1
Variance: σ_R² = 2·N1·N2·(2·N1·N2 − N) / [N²(N − 1)]
where N1 = number of "+" signs, N2 = number of "−" signs, N = N1 + N2, and R = number of runs.
The 95% confidence interval is then Prob[E(R) − 1.96·σ_R ≤ R ≤ E(R) + 1.96·σ_R] = 0.95.
23. Run Test: Decision Rules
H0: no autocorrelation
H1: there is autocorrelation
Accept H0 if R lies in the confidence interval; otherwise reject it.
Too few runs (small R) suggest positive autocorrelation; too many runs (large R) suggest negative autocorrelation.
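The run test can be sketched end-to-end in numpy; the simulated AR(1) errors are an illustrative assumption to make positive autocorrelation visible:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
x = rng.normal(size=n)

# Strongly positively autocorrelated AR(1) errors
e = np.empty(n)
e[0] = rng.normal()
for t in range(1, n):
    e[t] = 0.9 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

# Step 1: fit the model and find the residuals
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
u = y - X @ beta

# Steps 2-3: note the residual signs and count the runs R
signs = u > 0
R = 1 + int(np.sum(signs[1:] != signs[:-1]))
N1, N2 = int(signs.sum()), int((~signs).sum())
N = N1 + N2

# E(R), Var(R) and the 95% interval under H0 (no autocorrelation)
ER = 2 * N1 * N2 / N + 1
VR = 2 * N1 * N2 * (2 * N1 * N2 - N) / (N**2 * (N - 1))
low, high = ER - 1.96 * np.sqrt(VR), ER + 1.96 * np.sqrt(VR)
```

Positive autocorrelation makes residuals of like sign cluster, so the observed R falls well below E(R).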
25. Remedial Measures for Multicollinearity: A Priori Information
Consider a regression model:
Y_i = β1 + β2·X_2i + β3·X_3i + u_i
Suppose we believe a priori that β3 = 0.10·β2. Then run the model:
Y_i = β1 + β2·X_2i + 0.10·β2·X_3i + u_i = β1 + β2·X_i + u_i
where X_i = X_2i + 0.10·X_3i.
26. Remedial Measures for Multicollinearity: Combining Cross-sectional and Time Series Data
Suppose we wish to fit a model on time series data:
ln Y_t = β1 + β2·ln P_t + β3·ln I_t + u_t
where P and I are highly correlated. If an estimate of β3 is available from cross-sectional data, we instead fit:
Y_t* = β1 + β2·ln P_t + u_t
where Y_t* = ln Y_t − β̂3·ln I_t. Here Y_t* represents the value of Y after removing the effect of I, and with it the multicollinearity problem.
27. Remedial Measures for Multicollinearity: Dropping a Variable and Specification Bias
Consider a regression model:
Y_i = β1 + β2·X_2i + β3·X_3i + u_i
• If the regressors in the model are highly correlated, drop one of them and fit a model free of the multicollinearity problem. Note, however, that dropping a relevant variable may introduce specification bias.
28. Remedial Measures for Multicollinearity: Transformation of Variables
Regression model for time series data:
Y_t = β1 + β2·X_2t + β3·X_3t + u_t
Fit the same model at time t − 1:
Y_{t−1} = β1 + β2·X_2,t−1 + β3·X_3,t−1 + u_{t−1}
Subtracting the two models gives the first-difference form:
Y_t − Y_{t−1} = β2·(X_2t − X_2,t−1) + β3·(X_3t − X_3,t−1) + v_t, where v_t = u_t − u_{t−1}
This transformed model may reduce the multicollinearity problem.
29. Remedial Measures for Multicollinearity: Additional or New Data
Multicollinearity is a sample feature, so we can add another sample on the same variables. Increasing the sample size may reduce the multicollinearity problem.
30. Remedial Measures for Heteroscedasticity
There are two approaches to remediation:
• when σ² is known
• when σ² is unknown
31. When σ² is Known: Method of Weighted Least Squares
Consider the regression model:
Y_i = β1 + β2·X_2i + β3·X_3i + u_i
If V(u_i) = σ_i², heteroscedasticity is present. Dividing every term by σ_i, run the model:
Y_i/σ_i = β1*·(1/σ_i) + β2*·(X_2i/σ_i) + β3*·(X_3i/σ_i) + u_i/σ_i
The transformed error then has constant variance:
V(u_i*) = V(u_i/σ_i) = (1/σ_i²)·V(u_i) = (1/σ_i²)·σ_i² = 1
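The weighting step above amounts to dividing every column, including the intercept, by σ_i and running ordinary least squares. A minimal numpy sketch on simulated data; the assumed-known σ_i and the data-generating process are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
x = rng.uniform(1.0, 10.0, size=n)
sigma = 0.5 + 0.3 * x                  # assumed-known error s.d. per obs
y = 2.0 + 3.0 * x + rng.normal(scale=sigma, size=n)

# Divide every term, including the intercept column, by sigma_i:
# Y_i/sigma_i = b1*(1/sigma_i) + b2*(X_i/sigma_i) + u_i/sigma_i
Xw = np.column_stack([1.0 / sigma, x / sigma])
yw = y / sigma
b_wls, *_ = np.linalg.lstsq(Xw, yw, rcond=None)

# The transformed residuals now have (approximately) unit variance
uw = yw - Xw @ b_wls
```

The WLS estimates recover the true coefficients, and the transformed residual variance sits near 1, as the derivation predicts.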
32. When σ² is Unknown
Assume a plausible pattern for the heteroscedasticity and transform the model accordingly:
• The error variance is proportional to X_i²: E(u_i²) = σ²·X_i² (divide the model through by X_i)
• The error variance is proportional to X_i: E(u_i²) = σ²·X_i (divide the model through by √X_i)
• The error variance is proportional to the square of the mean value of Y: E(u_i²) = σ²·[E(Y_i)]² (divide the model through by the fitted values)
• A log transformation such as ln Y_i = β1 + β2·ln X_i + u_i often reduces heteroscedasticity
34. Remedial Measures for Autocorrelation: First-difference Transformation
If the autocorrelation is of AR(1) type, we have:
u_t − ρ·u_{t−1} = v_t
• Assume ρ = 1 and run the first-difference model (taking the first difference of the dependent variable and all regressors):
Y_t − Y_{t−1} = β2·(X_t − X_{t−1}) + (u_t − u_{t−1})
35. Remedial Measures for Autocorrelation: Generalized Transformation
Estimate ρ by regressing the residuals on their lagged values, then use that estimate to run the transformed regression:
Y_t = β1 + β2·X_t + u_t
Y_{t−1} = β1 + β2·X_{t−1} + u_{t−1}
Multiply the lagged equation by ρ and subtract it from the first:
ρ·Y_{t−1} = ρ·β1 + ρ·β2·X_{t−1} + ρ·u_{t−1}
Y_t − ρ·Y_{t−1} = β1·(1 − ρ) + β2·(X_t − ρ·X_{t−1}) + (u_t − ρ·u_{t−1})
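The estimate-ρ-then-transform procedure can be sketched in numpy; the simulated AR(1) errors with true ρ = 0.7 are an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 300
x = rng.normal(size=n)

# AR(1) errors with true rho = 0.7
e = np.empty(n)
e[0] = rng.normal()
for t in range(1, n):
    e[t] = 0.7 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

# Step 1: OLS residuals from the original model
X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
u = y - X @ b

# Step 2: estimate rho by regressing u_t on u_{t-1}
rho = (u[1:] @ u[:-1]) / (u[:-1] @ u[:-1])

# Step 3: quasi-differenced (transformed) regression
ys = y[1:] - rho * y[:-1]
Xs = np.column_stack([(1 - rho) * np.ones(n - 1), x[1:] - rho * x[:-1]])
bs, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

# The transformed residuals should show much weaker autocorrelation
us = ys - Xs @ bs
rho_after = (us[1:] @ us[:-1]) / (us[:-1] @ us[:-1])
```

After the transformation, the residual autocorrelation is close to zero while the slope estimate is preserved.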
36. Remedial Measures for Autocorrelation: Newey–West Method
Generates HAC (heteroscedasticity and autocorrelation consistent) standard errors.
39. Plot Explanation
In this figure, each plot exhibits the potential existence of heteroscedasticity, with various relationships between the residual variance (squared residuals) and the values of the independent variable X.