This document provides an overview of the statistical analyses involved in building a complete regression model, including validity testing, reliability testing, descriptive statistics, correlation analysis, multicollinearity testing, autocorrelation testing, heteroscedasticity testing, normality testing, linearity testing, conceptual framework development, the regression equation, F-statistic testing, t-statistic testing, the coefficient of determination, path analysis, and Sobel testing. Examples are provided for validity testing, reliability testing, descriptive statistics, and correlation analysis using sample data.
Aminullah assagaf model regresi lengkap (ada sobel & peth) 4 agst 2021, by Aminullah Assagaf
This document discusses various statistical tests that will be used in a research study, including: validity testing, reliability testing, descriptive statistics, correlation analysis, and regression analysis. Validity and reliability testing are used to ensure the measurement instrument is accurately measuring the intended constructs. Descriptive statistics will be used to summarize the basic characteristics of the data. Correlation analysis and regression analysis will be used to determine the relationship between variables and develop predictive models.
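Reliability of a survey instrument is commonly reported as Cronbach's alpha. As an illustration of the computation behind the kind of SPSS output the document describes, here is a minimal pure-Python sketch; the item scores are hypothetical.

```python
def variance(xs):
    """Sample variance (n - 1 denominator), as used in the alpha formula."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item."""
    k = len(items)                                    # number of items
    item_vars = sum(variance(it) for it in items)     # sum of item variances
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    total_var = variance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Three Likert items answered by five respondents (hypothetical data).
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(round(cronbach_alpha(items), 3))  # alpha above ~0.7 is conventionally "reliable"
```

This is illustrative only; in practice the alpha would come from a reliability procedure in a statistics package rather than hand-rolled code.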
1) This document provides an overview of statistical analysis techniques for regression modeling, including validity testing, reliability testing, descriptive statistics, correlation analysis, and regression analysis.
2) Steps are outlined for testing the validity and reliability of survey instruments, as well as performing descriptive statistics, correlation analysis, and various regression diagnostics.
3) Examples are provided applying these statistical techniques to sample data, including interpreting outputs from SPSS.
The document describes the steps in conducting a complete regression analysis, including validity testing, reliability testing, descriptive statistics, correlation analysis, and various regression diagnostics. It provides examples of analyzing data from a study using SPSS to test the validity and reliability of survey instruments, summarize variables using descriptive statistics, examine correlations between variables, and ensure the assumptions of regression are met. The document serves as a guide for properly specifying, estimating, and evaluating a regression model.
Aminullah assagaf model regresi lengkap 10 agustus 2021_(sobel, path, outlier), by Aminullah Assagaf
This document provides information on statistical analysis techniques used to analyze survey data, including validity testing, reliability testing, descriptive statistics, correlation analysis, and regression analysis. It discusses conducting validity and reliability tests on survey instruments before collecting data. It also covers descriptive statistics such as measures of central tendency and dispersion. Correlation analysis is used to determine the strength and direction of relationships between variables. Regression analysis involves developing a regression model and testing hypotheses about variables.
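The strength and direction of a relationship between two variables is usually quantified with Pearson's product-moment correlation. A minimal sketch of the computation, using made-up data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired observations.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
print(round(pearson_r(x, y), 3))  # r near +1 indicates a strong positive relationship
```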
Statistika Dasar (15) statistika non_parametrik, by jayamartha
This document summarizes several nonparametric statistical tests that can be applied to ordinal or nominal data and do not require assumptions about the population distribution. It discusses the sign test, Wilcoxon signed-rank test, Kruskal-Wallis test, Spearman rank correlation, and provides examples of applying each test. The examples analyze consumer preferences, delivery times, investment risk ratings, and determine if there are differences or correlations between groups or variables.
The document outlines the key topics covered in Chapter 15, which include frequency distribution; measures of location, variability, and shape related to frequency distributions; hypothesis-testing procedures; and cross-tabulation. It provides examples of computing common statistics like the mean, median, range, and standard deviation. The chapter introduces hypothesis-testing methodology, outlining steps like formulating hypotheses, selecting a test, and determining significance levels and types of errors. Examples are given of computing test statistics and determining probabilities. Cross-tabulation and related statistics like chi-square are also listed as chapter topics.
This document discusses several nonparametric tests:
1. The Sign Test is used for paired data and makes no assumptions about the distribution of the data. It looks at the signs of differences between pairs to determine if the median difference is zero.
2. The Mann-Whitney U Test compares two independent groups and uses ranks rather than raw values. It does not assume normality or equal variance like the t-test.
3. The Kruskal-Wallis H Test compares more than two populations and ranks all measurements jointly to compare distributions using rank sums.
It also briefly outlines Spearman's rank correlation test, the run test for randomness, and the Cox-Stuart test.
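The sign test described above reduces to a binomial calculation: under the null hypothesis of a zero median difference, each nonzero difference is equally likely to be positive or negative. A minimal sketch with hypothetical paired data:

```python
from math import comb

def sign_test_p(diffs):
    """Two-sided sign-test p-value for H0: median difference = 0.
    Zero differences are discarded, per the usual convention."""
    signs = [d for d in diffs if d != 0]
    n = len(signs)
    plus = sum(1 for d in signs if d > 0)
    k = min(plus, n - plus)
    # Two-sided p-value from the Binomial(n, 0.5) tail.
    tail = sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(1.0, 2 * tail)

# Hypothetical before/after differences for eight paired observations.
diffs = [2, 1, 3, -1, 2, 2, 1, 4]
print(round(sign_test_p(diffs), 4))  # compare against the chosen significance level
```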
This chapter discusses analysis of variance (ANOVA) techniques. It outlines one-way ANOVA, which involves one categorical independent variable (factor) and a continuous dependent variable. The chapter describes how to conduct a one-way ANOVA by identifying variables, decomposing total variation, measuring effect sizes, testing significance, and interpreting results. An example is provided to illustrate these steps using data on store sales and in-store promotion levels.
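The decomposition of total variation described above can be sketched directly: the F-ratio is the between-group mean square over the within-group mean square. The data below are hypothetical stand-ins for the sales/promotion example.

```python
def one_way_anova_F(groups):
    """F-ratio for one-way ANOVA: between-group over within-group mean square."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)          # grand mean
    k, n = len(groups), len(all_vals)
    # Between-group sum of squares: group sizes times squared mean deviations.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: deviations from each group's own mean.
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical sales under three promotion levels.
groups = [[10, 9, 10, 8], [7, 7, 6, 8], [5, 4, 5, 6]]
print(round(one_way_anova_F(groups), 2))  # large F suggests unequal group means
```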
Non-parametric analysis: Wilcoxon, Kruskal Wallis & Spearman, by Azmi Mohd Tamil
This document discusses non-parametric statistical tests including the Wilcoxon rank sum test, Kruskal-Wallis test, and Spearman/Kendall correlation. It provides an overview of when to use these tests, their assumptions, procedures, advantages and disadvantages. Examples are given to illustrate how to perform the Wilcoxon rank sum test, Kruskal-Wallis test, and Wilcoxon signed rank test step-by-step. SPSS output is also shown for these tests.
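Spearman's correlation, mentioned above, is just Pearson's correlation applied to ranks; with no ties it simplifies to the well-known formula based on rank differences. A minimal sketch with hypothetical data:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via 1 - 6*sum(d^2)/(n(n^2-1)); assumes no ties."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Two hypothetical sets of scores on nine subjects (no tied values).
x = [35, 23, 47, 17, 10, 43, 9, 6, 28]
y = [30, 33, 45, 23, 8, 49, 12, 4, 31]
print(round(spearman_rho(x, y), 3))
```

With ties, statistical packages instead compute Pearson's r on mid-ranks; the shortcut formula here is only exact for untied data.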
This document provides an overview of inferential statistics and statistical analysis techniques. It discusses measures of difference between groups such as the t-test, analysis of variance, and chi-square test. It also covers regression analysis techniques including simple linear regression, multiple regression, and logistic regression. Key concepts explained include standardized scores, degrees of freedom, assumptions of regression, and examining regression output for diagnostics like collinearity. Stepwise regression and procedures for conducting analyses in SPSS are also outlined.
This chapter outline describes one-way analysis of variance (ANOVA). It introduces key concepts like decomposing total variation into between- and within-group components to test if group means are equal. The chapter will cover conducting a one-way ANOVA, including identifying variables, decomposing variation, measuring effects with statistics like η2, testing significance with the F-ratio, and interpreting results. Examples and assumptions of one-way ANOVA will also be discussed.
The document discusses several nonparametric statistical tests:
1) The Wilcoxon signed-rank test can be used to test hypotheses about one population median with minimal assumptions about the distribution.
2) The Mann-Whitney U-test compares two independent samples and can detect differences in population medians.
3) The Kruskal-Wallis test compares more than two population medians and is a nonparametric equivalent of one-way ANOVA.
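The Kruskal-Wallis statistic in item 3 is computed from rank sums over the jointly ranked pooled sample. A minimal sketch, assuming no tied values (real packages apply a tie correction); the samples are hypothetical.

```python
def kruskal_wallis_H(groups):
    """Kruskal-Wallis H statistic (no tie correction; assumes distinct values)."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}   # rank of each pooled value
    n = len(pooled)
    # Sum over groups of (rank sum)^2 / group size.
    s = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12 / (n * (n + 1)) * s - 3 * (n + 1)

# Three hypothetical independent samples with distinct values.
groups = [[27, 2, 4, 18, 7], [20, 8, 14, 36, 21], [34, 31, 3, 23, 30]]
print(round(kruskal_wallis_H(groups), 3))  # compare H to chi-square with k-1 df
```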
The document discusses the Friedman test, a non-parametric statistical test used to detect differences in treatments across multiple test attempts. It provides information on the history, assumptions, general procedure, applications, advantages and disadvantages of the Friedman test. An example is also included to demonstrate how to perform the Friedman test and analyze the results.
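The Friedman statistic ranks the treatments within each subject (block) and then compares the treatment rank sums. A minimal sketch, assuming no tied values within a block; the ratings are hypothetical.

```python
def friedman_stat(data):
    """Friedman chi-square statistic. data: one row per block (subject),
    one column per treatment; assumes no ties within a row."""
    n, k = len(data), len(data[0])
    rank_sums = [0] * k
    for block in data:
        order = sorted(range(k), key=lambda j: block[j])  # rank within the block
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    s = sum(r ** 2 for r in rank_sums)
    return 12 / (n * k * (k + 1)) * s - 3 * n * (k + 1)

# Four hypothetical subjects rating three treatments (no ties per row).
data = [[7, 9, 8], [6, 5, 8], [9, 7, 6], [5, 4, 6]]
print(round(friedman_stat(data), 2))  # compare to chi-square with k-1 df
```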
This document discusses inferential statistics and various statistical tests used to analyze differences between groups. It describes measures of difference such as the t-test, analysis of variance (ANOVA), chi-square test, Mann-Whitney test, and Kruskal-Wallis test. It also covers regression analysis techniques like simple and multiple linear regression. Key steps are outlined for conducting t-tests, ANOVA, and interpreting their results from SPSS output. Degrees of freedom and their role in statistical tests are also explained.
This document provides an overview of key concepts in quantitative data analysis, including:
1. It describes four scales of measurement (nominal, ordinal, interval, ratio) and warns against using statistics inappropriate for the scale of data.
2. It distinguishes between parametric and non-parametric statistics, descriptive and inferential statistics, and the types of variables and analyses.
3. It explains important statistical concepts like hypotheses, one-tailed and two-tailed tests, distributions, significance, and avoiding type I and II errors in hypothesis testing.
Correlation is a statistical technique used to determine the degree of relationship between two variables. Correlational research aims to identify and describe relationships but does not imply causation. Positive correlation indicates high scores on one variable are associated with high scores on the other, while negative correlation means high scores on one variable are associated with low scores on the other. Correlational research can be used for explanatory or predictive purposes. More complex techniques like multiple regression allow prediction using combinations of variables. Threats to internal validity like subject characteristics must be controlled.
Statistics is the science of collecting, organizing, presenting, analyzing, and interpreting numerical data. It helps make better decisions by extracting information from data. There are two main types: descriptive statistics which describe data through methods like averages and distributions, and inferential statistics which make estimates, predictions, or generalizations about a population based on a sample. Key concepts in statistics include populations, samples, parameters which describe populations, and statistics which describe samples. The level of measurement of data, such as nominal, ordinal, interval, or ratio, determines what calculations and tests can be done.
This ppt includes basic concepts about data types, levels of measurements. It also explains which descriptive measure, graph and tests should be used for different types of data. A brief of Pivot tables and charts is also included.
Non-parametric tests are used when data are not normally distributed. They analyze rankings of raw scores rather than means. The Mann-Whitney and Wilcoxon rank-sum tests compare two independent groups and are the non-parametric equivalent of a t-test. They ignore group membership, rank all data points jointly, and expect similar ranks across groups if the groups do not differ. The Kruskal-Wallis test compares multiple groups and is akin to an ANOVA. The chi-square test examines relationships between categorical variables by comparing observed and expected frequencies in a contingency table to determine whether differences are due to chance.
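The observed-versus-expected comparison behind the chi-square test can be sketched directly; expected counts come from the row and column totals. The 2 x 2 table below is hypothetical.

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns.
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Hypothetical counts: rows = two groups, columns = outcome yes/no.
table = [[30, 20], [15, 35]]
print(round(chi_square_stat(table), 3))  # compare to chi-square with (r-1)(c-1) df
```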
Introduction to Business Analytics Course Part 9, by Beamsync
Beamsync provides "Business Analytics Training in Bangalore" with experienced faculty. If you are looking for analytics courses in Bengaluru, consult Beamsync.
For more details visit: http://beamsync.com/business-analytics-training-bangalore/
Introduction to Business Analytics Course Part 10, by Beamsync
Are you looking for Business Analytics training courses in Bangalore? Then consult Beamsync.
Beamsync provides business analytics training in Bengaluru / Bangalore with experienced trainers. For schedules visit: http://beamsync.com/business-analytics-training-bangalore/
This document provides an overview of a data analysis course covering various statistical techniques including correlation, regression, hypothesis testing, clustering, and time series analysis. The course covers descriptive statistics, data exploration, probability distributions, simple and multiple linear regression analysis, logistic regression analysis, and model building for credit risk analysis. Notes are provided on correlation calculation and its properties. Assumptions and interpretations of linear regression are also summarized. The document is intended as a high-level overview of topics covered in the course rather than an in-depth treatment.
This document discusses quality management systems in education. It provides information on the structure of quality management departments, including analysis and control, social and psychological research, and testing. It also outlines the main activities of quality management in education, such as analyzing trends, coordinating improvement efforts, and evaluating performance. Finally, it describes several quality management tools used in education, including check sheets, control charts, Pareto charts, and scatter plots.
This document provides an overview of statistical analysis techniques for regression modeling, including validity testing, reliability testing, descriptive statistics, correlation analysis, and regression modeling. It discusses conducting validity testing using Pearson correlation and corrected item-total correlation. It also covers reliability testing using Cronbach's alpha, descriptive statistics such as measures of central tendency and dispersion, correlation analysis to examine relationships between variables, and multiple linear regression modeling. The document contains examples of conducting these analyses in SPSS and interpreting the results.
1) This document provides an overview of statistical analysis techniques for regression modeling, including validity testing, reliability testing, descriptive statistics, correlation analysis, and regression modeling procedures.
2) Statistical tests covered include validity, reliability, descriptive statistics such as mean and standard deviation, correlation analysis using Pearson's correlation coefficient, and regression modeling procedures like the F-test and t-test.
3) Examples are provided applying these techniques to sample data, demonstrating how to conduct the analyses in SPSS.
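The t-test in item 2 checks whether an individual regression coefficient differs from zero: the estimate divided by its standard error. A minimal sketch for the slope in simple linear regression, with hypothetical data:

```python
import math

def slope_t_stat(x, y):
    """OLS slope of y on x and its t-statistic (slope / standard error)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    b1 = sxy / sxx                       # slope estimate
    b0 = my - b1 * mx                    # intercept estimate
    sse = sum((b - (b0 + b1 * a)) ** 2 for a, b in zip(x, y))
    se_b1 = math.sqrt(sse / (n - 2) / sxx)   # standard error of the slope
    return b1, b1 / se_b1

# Hypothetical predictor/response data.
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9]
b1, t = slope_t_stat(x, y)
print(round(b1, 3), round(t, 2))  # compare t to the t-distribution with n-2 df
```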
1) This document provides an overview of statistical analysis techniques for regression modeling, including validity testing, reliability testing, descriptive statistics, correlation analysis, and regression modeling procedures.
2) Statistical tests covered include validity, reliability, descriptive statistics such as mean and standard deviation, correlation analysis using Pearson's correlation coefficient, and regression modeling procedures like the F-test and t-test.
3) Examples are provided applying these techniques to sample data, including interpreting outputs from the SPSS statistical software package.
Aminullah assagaf model regresi lengkap 8 agustus 2021_(sobel, path, outlier), by Aminullah Assagaf
The document discusses various statistical tests that are commonly used when conducting regression analysis, including: validity testing, reliability testing, descriptive statistics, correlation analysis, multicollinearity testing, autocorrelation testing, heteroscedasticity testing, normality testing, linearity testing, conceptual framework development, regression equation testing, F-statistic testing, t-statistic testing, coefficient of determination, path analysis, and Sobel testing. It provides details on how to perform each test in SPSS and interpret the resulting output.
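The Sobel test mentioned above assesses whether a mediated (indirect) effect a*b is significant, where a is the X-to-mediator path and b is the mediator-to-Y path. A minimal sketch of the z-statistic, with hypothetical coefficients and standard errors:

```python
import math

def sobel_z(a, sa, b, sb):
    """Sobel z-statistic for the indirect effect a*b in a mediation model.
    a, sa: X -> M coefficient and its standard error;
    b, sb: M -> Y coefficient (controlling for X) and its standard error."""
    return (a * b) / math.sqrt(b ** 2 * sa ** 2 + a ** 2 * sb ** 2)

# Hypothetical unstandardized path coefficients and standard errors.
z = sobel_z(a=0.50, sa=0.10, b=0.40, sb=0.12)
print(round(z, 3))  # |z| > 1.96 suggests a significant indirect effect at the 5% level
```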
This document provides several examples of statistical analysis results from hypothesis testing in different formats, including SPSS, Elsevier-Scopus, and regression analysis. It also includes examples of descriptive statistics, correlation analysis, and comparisons between qualitative and quantitative research methods.
Modul Ajar Statistika Inferensia ke-12: Uji Asumsi Klasik pada Regresi Linier..., by Arif Rahman
1. The document discusses statistical analysis methods, including regression analysis and classical assumptions for regression models.
2. It explains the differences between correlation and regression, and covers simple and multiple linear regression analysis.
3. Key classical assumptions discussed include the assumptions of linearity, no multicollinearity, normality of residuals, homoscedasticity, and that covariates are uncorrelated with residuals. Methods for testing some of these assumptions are also presented.
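The multicollinearity assumption in item 3 is commonly checked with the variance inflation factor (VIF). For a model with exactly two predictors it reduces to 1 / (1 - r^2), where r is their correlation; this sketch uses that special case with hypothetical data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def vif_two_predictors(x1, x2):
    """VIF for either predictor in a two-predictor model: 1 / (1 - r^2).
    With more predictors, r^2 generalizes to the R^2 from regressing
    one predictor on all the others."""
    r = pearson_r(x1, x2)
    return 1 / (1 - r ** 2)

# Two hypothetical, strongly related predictors.
x1 = [2, 4, 6, 8, 10, 12]
x2 = [1, 2, 4, 3, 5, 6]
print(round(vif_two_predictors(x1, x2), 2))  # VIF above ~10 is a common warning sign
```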
This document provides information on analyzing data using SPSS 13.0. It discusses the objectives of data analysis, an overview of SPSS, data management and entry, and analysis techniques including descriptive analysis, reliability analysis, factor analysis, and tests of differences and relationships. Descriptive analysis is used to describe distributions. Reliability analysis and factor analysis are used to assess validity and reliability. Appropriate techniques for comparing groups and analyzing relationships depend on measurement levels and the number of variables. Examples of outputs from SPSS are also included.
This document discusses test validity, reliability, and item analysis. It provides details on the following stages of test construction: planning the test, trying out the test, establishing validity, establishing reliability, and interpreting scores. Item analysis involves calculating difficulty and discrimination indices to evaluate individual test items and improve the test. Validity refers to how well a test measures the intended construct. Reliability measures the consistency of test scores and can be estimated through stability, equivalence, internal consistency, and other methods. The document provides formulas and steps for conducting these analyses to evaluate and improve assessments.
APPLICATION OF TOOLS OF QUALITY IN ENGINEERING EDUCATIONSyed Raza Imam
This document provides an overview of a project that aims to apply quality tools to analyze the performance of first-year engineering students at Manipal Institute of Technology. CGPA data was collected from 300 randomly selected first-year students out of a total batch of 1270 through systematic random sampling. Quality tools like histograms, control charts, and cause-and-effect diagrams were then applied to the CGPA data to identify factors affecting student performance and determine whether the educational process was under statistical control. The analysis found a mean CGPA of 6.5593 and standard deviation of 2.2291. Control limits were also calculated for X-bar and R control charts.
This document provides an overview of data preparation and analysis techniques. It discusses data editing, coding, tabulation and exploratory data analysis. It then covers various bivariate techniques including correlation, regression and two-way ANOVA. Multivariate techniques like factor analysis, discriminant analysis, cluster analysis, multidimensional scaling and conjoint analysis are also summarized. Finally, it discusses the use of statistical software packages for data analysis.
Factor analysis is a statistical technique used to reduce the dimensionality of a set of correlated variables by identifying underlying factors. It seeks to explain the variance between observed variables in terms of a smaller number of latent factors. The document describes how factor analysis works, including that it begins with a correlation matrix and aims to group highly correlated variables together into factors while variables with low correlations are separated into different factors. Factor analysis can help provide a clearer understanding of the relationships in a dataset and enable subsequent analyses using the identified factors.
This document provides an overview of survey and correlational research methods. It defines survey research as collecting data using instruments like questionnaires to answer questions about people's opinions or characteristics. The main purposes of surveys are to gather information about groups and sample populations. Correlational research determines if and how strongly two or more variables are related by calculating correlation coefficients. Relationship studies explore factors related to complex variables, while prediction studies use correlations to predict outcomes. The document outlines different survey and correlational research designs, procedures, analyses, and considerations.
SPSS is a popular statistical software package that allows users to perform complex data analysis with simple instructions. It requires variables, data, measurement scales, and a code book to be defined. The document then describes different variable types (independent, dependent), measurement scales (nominal, ordinal, interval, ratio), how to start and use SPSS, and basic functions for data entry, analysis including frequencies, descriptives, correlation, and reliability which can be measured using Cronbach's alpha.
This document provides an overview of a regression modelling course. It includes contact details for the lecturer and tutor. It outlines the lecture and tutorial times, textbook information, and assessment details. It also provides hints for success, including attending all classes, doing the required readings and tutorials, and using R to complete assignments. The document then begins covering key concepts in regression modelling, including the history, different types of relationships between variables, and how to construct regression models.
This document provides information on conducting item analysis to improve test quality and instruction. It discusses the purposes of item analysis which include providing more diagnostic student information, building future tests by revising items, and improving test writing skills. It outlines the classical item analysis statistics of reliability at the test level and difficulty and discrimination at the item level. The document then describes the step-by-step process for conducting item analysis which involves administering a test, coding responses, analyzing data in SPSS, and summarizing results to identify items to retain or revise. The goal of item analysis is to use results to improve both test items and instruction.
PILOT STUDY RESULTS TURNAROUND - EDITED.pptxMatataMuthoka1
The pilot study tested the reliability of survey questions that would be used to examine the relationship between turnaround strategies (operational restructuring, management restructuring, and diversification) and performance of selected public universities in Kenya. Reliability tests showed that the survey questions had high levels of internal consistency, with Cronbach's alpha values above 0.70 for each category of questions and 0.762 overall, indicating good reliability. Diagnostic tests also confirmed there were no issues with multicollinearity or heteroscedasticity and that the data was normally distributed. The results supported the reliability and validity of using the survey to collect data for the full study.
This document discusses the validity and reliability of measurement instruments. It defines validity as the degree to which an instrument measures what it intends to measure. There are different types of validity discussed, including content validity, construct validity, and criterion validity. Reliability is defined as the consistency of measurement, or the degree to which an instrument produces stable and consistent results. Common methods for assessing reliability are test-retest reliability, parallel forms, and internal consistency. Formulas for calculating reliability coefficients like Cronbach's alpha and Kuder-Richardson 20 are also provided.
This document provides an overview of structural equation modeling (SEM) techniques. It discusses key SEM concepts like latent constructs, measurement models, structural models, and model estimation. The document also covers advanced SEM topics such as comparing covariance-based and variance-based approaches, handling formative versus reflective measures, and analyzing moderators and mediators. Overall, the document serves as a guide for researchers to understand and apply quantitative data analysis methods using SEM.
Here are my responses to the guide questions:
1. I decided to teach in SHS because I wanted to help guide students in their transition to college and career. I find it rewarding to support students' personal and academic growth during this important stage of their lives.
2. Two of the most significant experiences I've had teaching Research involve seeing students get excited about their topics and taking ownership of their work. It's amazing to see their eyes light up when they discover something interesting during the research process. I also appreciate witnessing students' confidence grow as they learn to independently plan and conduct research. These experiences are meaningful because they show the positive impact of research skills on student learning and development.
3. One of my most
Research is a systematic and scientific method of finding solutions by obtaining various types of data and systematic analysis of the multiple aspects of the issues related.
The techniques or the specific procedure which helps to identify, choose, process, and analyze information about a subject is called Research Methodology
Experimental design is a statistical tool for improving product design and solving production problems.
Aminullah assagaf model regresi lengkap 10 agustus 2021_(sobel, path, outlier)
2. CONTENT
1) Validity test
2) Reliability test
3) Descriptive statistics
4) Correlation
5) Multicollinearity test
6) Autocorrelation test
7) Heteroscedasticity test
8) Normality test
9) Linearity test
10) Conceptual framework
11) Regression equation
12) F-statistic test
13) t-statistic test
14) Coefficient of determination (Adjusted R2)
15) Path test (test of indirect effects through an intervening variable)
16) SOBEL test (test of the INTERVENING variable)
17) Dropping outliers (bias) when the data are not normally distributed
https://www2.slideshare.net/AminullahAssagaf1/aminullah-assagaf-model-regresi-lengkap-ada-sobel-amp-peth-4-agst-2021
3. VALIDITY & RELIABILITY TESTING
• Before an instrument is used to collect research data, the questionnaire should be pilot-tested to establish the validity and reliability of that instrument.
• Validity and reliability tests are used to examine data obtained from a list of questions or a questionnaire answered by respondents.
• Validity and reliability testing can demonstrate that the questionnaire items are accurate and that respondents' answers to the questions posed are consistent.
• The validity test shows whether the instrument is valid, meaning that it accurately measures the variable it is intended to measure.
• After the validity test, a reliability test of the data must follow. A reliable instrument necessarily consists of valid items; hence everything reliable is necessarily valid, but not everything valid is necessarily reliable.
• Reliability is the dependability or consistency (stability) of an instrument: it measures how stable and consistent respondents are in answering the question constructs that form the dimensions of a variable and are arranged in a questionnaire.
4. 1) VALIDITY TEST
QUESTIONNAIRE VALIDITY TEST
• The validity test shows how accurately an instrument (a sampling technique or data measurement) measures what it is intended to measure or study.
• To test the validity of a questionnaire, the Pearson Correlation method (Pearson product moment) and the Corrected Item-Total Correlation method are used.
6. 1) VALIDITY TEST
1. Pearson Correlation method (Pearson product moment)
Steps (SPSS): Analyze → Correlate → Bivariate → move all items and the total into the Variables box → OK
For variable Y, the correlations of items Y1 through Y7 with the total score range from 0.716 to 0.884, each with a significance (2-tailed) of 0.000.
Variable Y is declared valid, because every item has Sig. (2-tailed) 0.000 < 0.01 (1%), or equivalently the correlation of each item with the total exceeds the r-table value of 0.505 (1% or 0.01, with n = 25).
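The item-total computation behind this SPSS step can be sketched outside SPSS as well. The sketch below uses NumPy on a small hypothetical item matrix (6 respondents, 7 items), not the study's n = 25 data:

```python
import numpy as np

# Hypothetical questionnaire scores: 6 respondents x 7 items (Y1..Y7).
# These are illustrative numbers, not the study's data set.
items = np.array([
    [4, 5, 3, 4, 4, 5, 4],
    [2, 3, 2, 3, 2, 2, 3],
    [5, 5, 4, 5, 5, 4, 5],
    [3, 3, 3, 2, 3, 3, 2],
    [4, 4, 5, 4, 4, 4, 5],
    [1, 2, 1, 2, 1, 2, 1],
], dtype=float)

total = items.sum(axis=1)  # total score per respondent

# Pearson correlation of each item with the total score
r_item_total = np.array([
    np.corrcoef(items[:, j], total)[0, 1] for j in range(items.shape[1])
])
print(np.round(r_item_total, 3))
# An item is declared valid when its item-total correlation exceeds the
# critical r-table value (0.505 at the 1% level with n = 25 in the slide).
```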
8. 1) VALIDITY TEST
2. Corrected Item-Total Correlation method
Steps (SPSS): Analyze → Scale → Reliability Analysis → move all items (except the total) into the Items box → Statistics → check "Scale if item deleted" → Continue → OK
For variable Y, the "Corrected Item-Total Correlation" values of items Y1 through Y7 range from 0.616 to 0.836.
Because every item has a Corrected Item-Total Correlation greater than the r-table value of 0.505 (0.01, n = 25), all items of variable Y are declared valid.
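The "Scale if item deleted" statistic can likewise be reproduced by correlating each item with the total score minus that item; a minimal sketch on hypothetical data:

```python
import numpy as np

# Hypothetical scores: 6 respondents x 7 items (same layout as the slide's
# variable Y, but not the study's data).
items = np.array([
    [4, 5, 3, 4, 4, 5, 4],
    [2, 3, 2, 3, 2, 2, 3],
    [5, 5, 4, 5, 5, 4, 5],
    [3, 3, 3, 2, 3, 3, 2],
    [4, 4, 5, 4, 4, 4, 5],
    [1, 2, 1, 2, 1, 2, 1],
], dtype=float)

total = items.sum(axis=1)

# Corrected item-total correlation: correlate each item with the total
# score EXCLUDING that item, as in SPSS "Scale if item deleted".
corrected = np.array([
    np.corrcoef(items[:, j], total - items[:, j])[0, 1]
    for j in range(items.shape[1])
])
print(np.round(corrected, 3))
```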
11. 2) RELIABILITY TEST
QUESTIONNAIRE RELIABILITY TEST
• Tests the consistency of the instrument: if the measurement is repeated, the results are consistent, trustworthy, and robust.
• Steps (SPSS): Analyze → Scale → Reliability Analysis → move all items (except the total) into the Items box → OK
12. 2) RELIABILITY TEST
• Example application using the research data above, namely variable Y (7 items).
• The results will be consistent if the measurement of Y is repeated.
• Reliability testing usually uses a Cronbach's alpha cutoff: values of 0.7 and above are acceptable.
• In the SPSS output, the Cronbach's Alpha obtained for variable Y is 0.924.
• Because the Cronbach's Alpha of the variable exceeds 0.7, the instrument measuring variable Y is declared reliable.
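Cronbach's alpha itself is computable directly from the item variances and the variance of the total score, α = k/(k−1)·(1 − Σsᵢ²/sₜ²). A minimal sketch, assuming a hypothetical respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 6 x 7 score matrix (not the study's data)
items = np.array([
    [4, 5, 3, 4, 4, 5, 4],
    [2, 3, 2, 3, 2, 2, 3],
    [5, 5, 4, 5, 5, 4, 5],
    [3, 3, 3, 2, 3, 3, 2],
    [4, 4, 5, 4, 4, 4, 5],
    [1, 2, 1, 2, 1, 2, 1],
], dtype=float)

alpha = cronbach_alpha(items)
print(round(alpha, 3))
# The slide's rule of thumb: alpha of 0.7 or above is acceptable (reliable).
```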
14. 3) DESCRIPTIVE STATISTICS
• Statistics is the science of collecting, presenting, analyzing, and interpreting data. Someone studying statistics usually works with numerical data in the form of counts or measurements, or with categorical data classified according to certain criteria. Every piece of information recorded and collected, whether numerical or categorical, is called an observation.
• Statistical methods are the procedures used in collecting, presenting, analyzing, and interpreting data. These methods fall into two broad groups:
1. Descriptive statistics
2. Inferential statistics
15. DESCRIPTIVE STATISTICS
• Descriptive statistics are methods concerned with collecting and presenting a set of data so as to provide useful information (Ronald E. Walpole).
• Descriptive statistics is a very simple method. It merely describes the state of the data in tables, diagrams, graphs, and other forms, presented in brief, limited narratives.
• Descriptive statistics provide information only about the data at hand and draw no conclusions whatsoever about them.
16. INFERENTIAL STATISTICS
• Inferential statistics is a method for analyzing a small subset of data drawn from its parent data set (a sample taken from a population), through to forecasting and drawing conclusions about the parent data set or population.
• Inferential statistics cover all methods concerned with analyzing part of the data and then arriving at forecasts or conclusions about the entire parent data set (population).
• Generalizations made through statistical inference are always uncertain, because they rest on partial information obtained from a subset of the data; what is obtained is therefore only a prediction.
17. AN EXAMPLE OF STATISTICAL INFERENCE
• Graduation records over the last five years at a state university in West Sumatra show that 72% of undergraduate (S1) students graduated with satisfactory grades.
• The numerical value 72% is a descriptive statistic.
• If, on this basis, an Industrial Engineering student concludes that his own chance of graduating with satisfactory grades is more than 70%, then the student has performed statistical inference, which of course carries uncertainty.
18. THE DIFFERENCE BETWEEN DESCRIPTIVE AND INFERENTIAL STATISTICS
• Descriptive statistics are limited to presenting data in tables, diagrams, graphs, and other summary measures.
• Inferential statistics, besides covering descriptive statistics, can also be used to estimate and draw conclusions about a population from its sample.
• To reach its conclusions, inferential statistics proceeds through hypothesis testing and statistical tests.
19. 3) DESCRIPTIVE STATISTICS
• Example: variables Y, X1, and X2 below.
• Steps (SPSS): Analyze → Descriptive Statistics → Descriptives → move all variables into the Variable(s) box → OK
• SPSS output:

Descriptive Statistics
                    N    Minimum   Maximum   Mean      Std. Deviation
Y                   25   37.00     80.00     56.1200   14.14013
X1                  25   29.00     75.00     46.6000   16.22498
X2                  25   40.00     88.00     63.6400   13.90108
Valid N (listwise)  25
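The same summary table (N, minimum, maximum, mean, standard deviation) can be produced with pandas; the observations below are hypothetical, not the 25 observations summarized in the SPSS output:

```python
import pandas as pd

# Hypothetical observations for Y, X1, X2 (the slide summarizes n = 25)
df = pd.DataFrame({
    "Y":  [37, 45, 56, 62, 80, 51],
    "X1": [29, 40, 46, 50, 75, 39],
    "X2": [40, 55, 63, 70, 88, 60],
})

# count/min/max/mean/std mirror the columns of the SPSS Descriptives table
summary = df.agg(["count", "min", "max", "mean", "std"]).T
summary.columns = ["N", "Minimum", "Maximum", "Mean", "Std. Deviation"]
print(summary)
```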
21. 4) CORRELATION
• Correlation analysis is used to determine the degree or strength of the linear relationship between one variable and another.
• A variable is said to be related to another variable if a change in the one is accompanied by a change in the other.
• The change can be in the same direction (positive correlation) or in the opposite direction (negative correlation).
• The correlation coefficient expresses the strength or degree of association of one variable with another; it does not distinguish between independent and dependent variables.
22. 4) CORRELATION
• The correlation coefficient describes the closeness of a relationship and ranges from negative one (-1) to one (1).
• If the correlation coefficient is -1 or close to -1, then the higher the value of X, the lower the value of Y.
• Conversely, if the correlation coefficient is close to 1, then the higher the value of X, the larger the value of Y.
• Methods used in correlation analysis:
a) Product moment (Pearson) correlation
b) Spearman rank correlation
c) Kendall rank correlation (Kendall's tau)
d) Contingency coefficient correlation
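Three of the methods listed above are available in SciPy's stats module; a small sketch on hypothetical paired data:

```python
from scipy.stats import kendalltau, pearsonr, spearmanr

# Hypothetical paired observations (interval-scale data)
x = [29, 40, 46, 50, 75, 39]
y = [37, 45, 56, 62, 80, 51]

r_p, p_p = pearsonr(x, y)     # product moment (Pearson)
r_s, p_s = spearmanr(x, y)    # Spearman rank correlation
r_k, p_k = kendalltau(x, y)   # Kendall's tau
print(f"Pearson  r   = {r_p:.3f} (p = {p_p:.4f})")
print(f"Spearman rho = {r_s:.3f} (p = {p_s:.4f})")
print(f"Kendall  tau = {r_k:.3f} (p = {p_k:.4f})")
```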
24. Product moment (Pearson) correlation
• Product moment correlation analysis is used to examine the relationship between variables measured on an interval or ratio scale.
• Product moment (Pearson Correlation) analysis shows how strongly two variables are related. Below is an example using research data and SPSS for product moment correlation analysis.
• Steps (SPSS): Analyze → Correlate → Bivariate → move variables X1, X2, and Y into the Variables box → under Correlation Coefficients leave Pearson selected → under Test of Significance leave Two-tailed selected for a two-sided test, or choose One-tailed if the direction of the correlation has been specified in advance → OK
• From the SPSS output: (a) the correlation coefficient between X1 and Y is 0.980 with Sig. (2-tailed) 0.000; (b) the correlation coefficient between X2 and Y is 0.985 with Sig. (2-tailed) 0.000.
25. Product moment (Pearson) correlation
• Sugiyono (2007) gives the following interpretation of the correlation coefficient:
• 0.00 – 0.199 : very low
• 0.20 – 0.399 : low
• 0.40 – 0.599 : moderate
• 0.60 – 0.799 : strong
• 0.80 – 1.000 : very strong
• Because the correlation coefficients (X1 = 0.980 and X2 = 0.985) exceed 0.80, the relationships are declared very strong. Furthermore, since the Sig. (2-tailed) of X1 and X2 against Y is 0.000 < 0.05 (5%), there is a significant relationship between the independent variables X1, X2 and Y.
27. Product-moment (Pearson) correlation
The product-moment correlation coefficient can be computed with the formula:

rxy = [ n∑XY – (∑X)(∑Y) ] / √[ { n∑X² – (∑X)² } { n∑Y² – (∑Y)² } ]

Where:
rxy = correlation coefficient
n = number of observations
∑X = sum of the X values
∑Y = sum of the Y values
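As an illustration, the summation formula above translates directly into code; a minimal Python sketch (the function name and sample data are invented for the example):

```python
import math

def pearson_r(x, y):
    # r_xy = [nΣXY - (ΣX)(ΣY)] / sqrt([nΣX² - (ΣX)²][nΣY² - (ΣY)²]),
    # the product-moment formula from the slide above.
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sx2 = sum(a * a for a in x)
    sy2 = sum(b * b for b in y)
    num = n * sxy - sx * sy
    den = math.sqrt((n * sx2 - sx * sx) * (n * sy2 - sy * sy))
    return num / den

# A perfectly linear pair of series gives r = 1.
print(pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))
```

In practice, scipy.stats.pearsonr returns the same coefficient together with its p-value.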
29. Reference: https://rumusrumus.com/korelasi-adalah/
Definition of correlation
Correlation, usually expressed as a correlation coefficient, is a value that indicates the
strength and direction of the linear relationship between two random variables.
Simple correlation is a statistical technique used to measure the strength of the
relationship between two variables and to determine the form of the relationship between
them, with a quantitative result.
Correlation formula
The simple correlation coefficient is commonly called the Pearson correlation coefficient
because the formula for computing it was proposed by Karl Pearson, a mathematician
from England. (The formula is also known as the Pearson product-moment formula.)
[correlation formula image]
Notation:
n is the number of paired X and Y observations
Σx is the sum of the X values
Σy is the sum of the Y values
Σx² is the sum of the squared X values
Σy² is the sum of the squared Y values
Σxy is the sum of the products of the paired
X and Y values
30. Forms of relationship between two variables
Positive linear correlation (+1)
A change in the value of one variable is followed by a regular change in the other
variable in the same direction. If the value of X rises, Y rises as well; if the
value of X falls, Y falls too.
If the correlation coefficient is close to +1 (positive one), the paired X and Y data
have a strong positive linear correlation.
Negative linear correlation (-1)
A change in the value of one variable is followed by a regular change in the other
variable, but in the opposite direction. If the value of X rises, Y falls; if the
value of X falls, Y rises.
If the correlation coefficient is close to -1, the paired X and Y data have a
strong (close) negative linear correlation.
No correlation (0)
An increase in one variable is sometimes followed by a decrease in the other and
sometimes by an increase. The direction of the relationship is irregular: sometimes
the same, sometimes opposite.
If the correlation coefficient is close to 0 (zero), the paired X and Y data have a
very weak correlation or may not be correlated at all.
32. Non-parametric correlation coefficients
The Pearson correlation coefficient is a parametric statistic, and it describes
correlation poorly when the underlying normality assumption of the data is violated.
Non-parametric correlation methods such as Spearman's ρ and Kendall's τ are useful
when the distribution is not normal.
Non-parametric correlation coefficients are less powerful than parametric methods
when the normality assumption is satisfied, but the parametric methods tend to give
distorted results when that assumption is not met.
33. Multiple correlation
Multiple correlation is a number that indicates the direction and strength of the joint
relationship between two or more variables and another variable.
Multiple correlation can be understood through the following diagram. The symbol for
multiple correlation is R.
Diagram legend:
X1 = Leadership
X2 = Office layout
Y = Job satisfaction
R = Multiple correlation
34. Diagram legend:
X1 = Employee welfare
X2 = Relationship with management
X3 = Supervision
Y = Work effectiveness
The example above shows that the multiple correlation R is not the sum of the simple
correlations of the individual variables (r1, r2, r3); that is, R ≠ (r1 + r2 + r3).
Multiple correlation is the joint relationship of X1, X2, …, Xn
with Y. In the first diagram, the multiple correlation is the joint relationship
of leadership and office layout with employee job satisfaction.
35. Copulas and correlation
Many people mistakenly assume that the information provided by a correlation
coefficient is enough to define the dependence structure between random variables.
To establish dependence between random variables, the copula between them must be
considered. The correlation coefficient defines the dependence structure only in
certain cases, for example for the cumulative distribution function of the
multivariate normal distribution.
Partial correlation
Partial correlation analysis is used to examine the relationship between two variables
while other variables thought to have an influence are held fixed or controlled as
control variables.
The correlation value (r) ranges from 1 to -1; the closer the value is to 1 or -1,
the stronger the relationship between the two variables, while a value close to 0
means the relationship between the two variables is weaker.
A positive value indicates a relationship in the same direction (as X rises, Y rises)
and a negative value indicates an inverse relationship (as X rises, Y falls). The data
used are usually on an interval or ratio scale.
Guidelines for interpreting the correlation coefficient are as follows:
0.00 – 0.199 = very weak
0.20 – 0.399 = weak
0.40 – 0.599 = moderate
0.60 – 0.799 = strong
0.80 – 1.000 = very strong
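For the first-order case described above (one control variable Z), the partial correlation can be computed from the three pairwise correlations; a minimal sketch (the function name is invented for the example):

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    # First-order partial correlation of X and Y, controlling for Z:
    # r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz²)(1 - r_yz²))
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# If Z is uncorrelated with both X and Y, controlling for it changes nothing.
print(partial_corr(0.5, 0.0, 0.0))
```

Conversely, when the X–Y correlation is entirely explained by Z (r_xy = r_xz · r_yz), the partial correlation drops to zero.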
52. 5) MULTICOLLINEARITY TEST
• Multicollinearity is the occurrence of a high, or nearly perfect, linear
correlation among the independent variables. Its consequence is that
the least-squares estimators cannot be determined (they are
indeterminate).
• Several methods can be used to detect multicollinearity
in a regression model:
a. Inspecting R² together with the t statistics
b. Pair-wise correlations between the independent variables
c. Eigenvalues and the Condition Index
d. Partial correlations
e. Tolerance (TOL) and the Variance Inflation Factor (VIF)
54. 5) MULTICOLLINEARITY TEST
• Multicollinearity test using Tolerance (TOL) and the Variance Inflation
Factor (VIF)
• If VIF < 10 (not more than 10) and Tolerance (TOL) > 0.10, the model
is declared free of multicollinearity symptoms
• Steps (SPSS): Analyze → Regression → Linear → Dependent →
Independent → Statistics → Collinearity Diagnostics → Continue →
OK
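With exactly two independent variables, as in the example here, TOL and VIF follow directly from the correlation between X1 and X2; a hedged sketch (the correlation value is illustrative, not from the slide's data):

```python
def vif_and_tol(r12):
    # With two predictors the auxiliary R² equals r12², so
    # TOL = 1 - r12² and VIF = 1 / TOL; VIF > 10 (TOL < 0.10)
    # signals multicollinearity, per the rule on the slide.
    tol = 1.0 - r12 ** 2
    return 1.0 / tol, tol

vif, tol = vif_and_tol(0.9)   # illustrative correlation between X1 and X2
print(vif > 10, tol < 0.10)   # flags for the multicollinearity rule
```

With more than two predictors, each VIF comes from regressing one predictor on all the others; statsmodels' variance_inflation_factor automates this.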
55. 5) MULTICOLLINEARITY TEST
The results appear in the Coefficients table. The VIF for variable X1 is 15.234 and
for X2 is 15.234, so the regression model is declared to show symptoms of
multicollinearity, because VIF > 10 and Tolerance (TOL) < 0.10.
56. 6) AUTOCORRELATION TEST
• Autocorrelation is the condition in which the residual of one observation
is correlated with the residual of another observation when the data are
ordered in time. The autocorrelation test is meant to determine whether
there is correlation among members of a series of observations arranged
over time (time series) or space (cross-section). The consequence of an
autocorrelation problem is that the t statistics and F statistics
cannot be trusted, because they become misleading.
• Several methods can be used to detect
autocorrelation (Gujarati, 1995):
a. Durbin-Watson test
b. Lagrange Multiplier method (LM test)
c. Breusch-Godfrey method (B-G test)
d. Run test
58. 6) AUTOCORRELATION TEST
Autocorrelation test using the Durbin-Watson method (Durbin-Watson test)
• The test was introduced by J. Durbin and G. S. Watson in 1951. The
formula used for the Durbin-Watson test is:

DW = ∑(et – et-1)² / ∑et²

• Compare DW with the DW table, and conclude:
(a) positive autocorrelation: DW < dL; (b) inconclusive:
DW between dL and dU; (c) no autocorrelation:
DW between dU and 4-dU; (d) inconclusive:
DW between 4-dU and 4-dL; (e) negative autocorrelation: DW > 4-dL
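The DW formula above can be sketched directly in Python (function name invented); the critical values dL and dU still come from the Durbin-Watson table:

```python
def durbin_watson(residuals):
    # DW = Σ(e_t - e_{t-1})² / Σ e_t², the formula above; values near 2
    # suggest no autocorrelation, near 0 positive, near 4 negative.
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e * e for e in residuals)
    return num / den

# Residuals that flip sign every period push DW toward 4 (negative autocorrelation).
print(durbin_watson([1, -1, 1, -1, 1, -1]))
```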
59. 6) AUTOCORRELATION TEST
• Steps (SPSS): Analyze → Regression → Linear → Dependent →
Independent → Statistics → Durbin-Watson → Continue → OK
• The SPSS output gives a computed DW = 1.446, while the DW table
with n = 25 and k = 2 gives dL = 1.206 and dU = 1.550, so the test
for the regression model is inconclusive regarding autocorrelation,
because DW lies between dL and dU
62. 7) HETEROSCEDASTICITY TEST
• Heteroscedasticity is the condition in which the variance of the
residuals in a regression model is not equal across observations.
• Heteroscedasticity means some variable in the regression model has a
variance that is not equal or constant.
• Conversely, homoscedasticity means the variances in the regression
model are equal or constant.
• Heteroscedasticity problems often occur in cross-section data.
The consequence of heteroscedasticity is that hypothesis tests
based on the t and F distributions cannot be trusted.
63. 7) HETEROSCEDASTICITY TEST
Several methods can be used to test for heteroscedasticity:
1. Graphical method
2. Glejser method
3. Park method
4. White method
5. Spearman rank method
6. Breusch-Pagan-Godfrey (BPG) method
65. 7) HETEROSCEDASTICITY TEST
Heteroscedasticity test using the Glejser method
• This method regresses all independent variables on the absolute value
of the residuals. If an independent variable has a significant effect
on the absolute residuals, the regression model has a
heteroscedasticity problem.
• The equation used to test heteroscedasticity with
the Glejser method is:
│µi│ = α + βXi + εi
where │µi│ is the absolute residual and Xi is an independent variable
66. 7) HETEROSCEDASTICITY TEST
Steps (SPSS):
a) Run the regression: Analyze → Regression → Linear → Dependent
→ Independent → Save → under Residuals → Unstandardized →
Continue → OK
b) Back in the data view there is a new column RES_1; next apply
the ABRESID transformation: Transform → Compute → in the
Target Variable box enter ABRESID → in the Numeric Expression box
enter ABS(RES_1) → OK
c) Back in the data view there is a new column ABRESID; continue
by regressing on ABRESID: Analyze → Regression
→ Linear → in Dependent enter ABRESID → in Independent enter
X1 and X2 → OK
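The same Glejser regression of the absolute residuals can be sketched without SPSS; a simplified single-regressor version (in the example above, X1 and X2 would each be tested; names and data are invented):

```python
import math

def glejser_t(x, residuals):
    # Regress |e| on x and return the slope's t statistic; a significant
    # slope (large |t|) signals heteroscedasticity, per the Glejser method.
    n = len(x)
    a = [abs(e) for e in residuals]          # absolute residuals
    xbar, abar = sum(x) / n, sum(a) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * (ai - abar) for xi, ai in zip(x, a)) / sxx
    b0 = abar - b1 * xbar
    sse = sum((ai - (b0 + b1 * xi)) ** 2 for xi, ai in zip(x, a))
    se_b1 = math.sqrt(sse / (n - 2) / sxx)   # slope standard error
    return b1 / se_b1
```

When the residual spread grows with x, the t statistic is large; compare it with the t table (or the SPSS Sig. value), as on the next slide.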
67. 7) HETEROSCEDASTICITY TEST
• As a guideline, when the probability value (sig) > the 5% alpha level, it is
concluded that heteroscedasticity does not occur.
• From the SPSS output, the Coefficients table gives Sig. levels of X1 = 0.758 and
X2 = 0.969, both > 0.05 (alpha), so the regression model is declared free of
heteroscedasticity symptoms. In other words, if t computed < t table, or
sig > 5% alpha, no heteroscedasticity symptoms occur
68. 8) NORMALITY TEST
• The normality test aims to determine whether the standardized residuals
of the regression model are normally distributed or not.
• The residuals are said to be normally distributed if most of the
standardized residuals lie close to their mean.
• Standardized residuals that are normally distributed form a bell-shaped
curve when plotted.
• Given this definition, the normality test here is not performed per
variable (univariate) but only on the standardized residuals
(multivariate).
• Normality usually fails because the distribution of the analyzed data is
not normal, owing to extreme values in the data, which can arise from
(a) sampling error, (b) data-entry errors, or (c) data whose character
genuinely lies far from the mean or is truly different from the rest.
69. 8) NORMALITY TEST
• Several methods can be used to detect whether the standardized
residuals are normally distributed:
a) Normality test using graphs
b) Normality test using the significance of skewness and kurtosis
c) Jarque-Bera test (JB test)
d) Kolmogorov-Smirnov test
e) Other normality tests
71. 8) NORMALITY TEST
Normality test using Kolmogorov-Smirnov
• Steps (SPSS):
a) Regress the independent variables on the dependent variable: Analyze →
Regression → Linear → Dependent → Independent → Save → under
Residuals click Standardized → Continue → OK
b) Continue with the test on the computed standardized residuals: Analyze →
Nonparametric Tests → Legacy Dialogs → 1-Sample K-S → in Variables
enter the standardized residuals → OK
• From the SPSS output, the asymptotic significance (2-tailed), Asymp.
Sig. (2-tailed), is 0.343 > 0.05 (5%), so H0 is accepted,
meaning the standardized residuals are declared
normally distributed.
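The one-sample K-S statistic against the standard normal can be sketched in plain Python (a simplified D statistic only; SPSS additionally reports the asymptotic p-value, and scipy.stats.kstest implements the full test):

```python
import math

def ks_normal_d(z):
    # D = max distance between the empirical CDF of the standardized
    # residuals and the standard normal CDF Phi(z) = 0.5*(1 + erf(z/√2)).
    z = sorted(z)
    n = len(z)
    d = 0.0
    for i, zi in enumerate(z, start=1):
        cdf = 0.5 * (1.0 + math.erf(zi / math.sqrt(2.0)))
        d = max(d, abs(i / n - cdf), abs(cdf - (i - 1) / n))
    return d
```

A small D (large p-value, like the 0.343 above) is consistent with normally distributed residuals.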
73. 8) NORMALITY TEST
Other normality tests
• These test whether the data are normally distributed or not. Parametric
analyses such as product-moment correlation require that the data be
normally distributed. Other normality tests are (a) the Lilliefors method
and (b) the Kolmogorov-Smirnov Z method.
a) Lilliefors method
• Steps (SPSS): Analyze → Descriptive Statistics → Explore → move variables
Y, X1, and X2 to the Dependent List box → Plots → check Normality plots with tests
→ Continue → OK
• The Sig. levels of the variables are Y = 0.198, X1 = 0.097, and X2 =
0.200, so the residuals are declared normally distributed, because the
Sig. levels exceed 0.05 (5%).
75. 8) NORMALITY TEST
b) Kolmogorov-Smirnov Z method
• Steps (SPSS): Analyze → Nonparametric Tests → Legacy Dialogs → 1-
Sample K-S → move variables Y, X1, and X2 to the Test Variable List box →
under Test Distribution leave Normal selected → OK
• The Asymp. Sig. values of the variables are Y = 0.684, X1 = 0.542, and X2 =
0.982.
• Because the Asymp. Sig. levels are greater than 0.05 (5%), the standardized
residuals are declared normally distributed
77. 9) LINEARITY TEST
Linearity test
Testing is needed to establish whether the model used is linear or not.
Several methods can be used to detect whether the model is better
specified as linear or not:
a) Linearity test using graphical analysis
b) Linearity test using the Durbin-Watson d statistic (The Durbin-Watson d
Statistic Test)
c) Linearity test using the MWD test (MacKinnon, White, and Davidson)
d) Linearity test using the Ramsey method
e) Linearity test using the Lagrange Multiplier method (LM test)
f) Other linearity tests, to determine whether two variables subjected to a
correlational statistical procedure show a linear relationship or not.
79. Linearity test using the Ramsey method
Steps (SPSS):
• Regress X1 and X2 on Y: Analyze → Regression →
Linear → in Dependent enter Y → in Independent enter X1, X2 → Save →
under Influence Statistics click DfFit → Continue → OK
• Regress the independent variables and DFF_1 on Y: Analyze →
Regression → Linear → Reset → in Dependent enter Y → in Independent
enter X1, X2, and DFF_1 → OK
80. • From the SPSS output, compute F and compare it with the F table;
the result is F computed (176) > F table (4.301). The formula used
to compute F is:

F computed = [(R²new – R²old) / m] / [(1 – R²new) / (n – k)]

F computed = [(0.998 – 0.982) / 1] / [(1 – 0.998) / (25 – 3)] = 176

where m is the number of newly added independent variables (DFF_1) and k is the number of parameters (k = 3)
• Because F computed (176) > F table (4.301), the regression model is declared linear,
where F table = 4.301 is obtained at 5% alpha with m = 1 and (n – k) = 25 – 3 = 22
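The arithmetic above can be reproduced directly; a minimal sketch using the slide's numbers:

```python
def ramsey_f(r2_new, r2_old, m, n, k):
    # F = [(R²new - R²old)/m] / [(1 - R²new)/(n - k)]
    return ((r2_new - r2_old) / m) / ((1.0 - r2_new) / (n - k))

# Values from the slide: R²new = 0.998, R²old = 0.982, m = 1, n = 25, k = 3.
print(ramsey_f(0.998, 0.982, 1, 25, 3))  # ≈ 176, which exceeds F table = 4.301
```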
82. Computing R²new:
Because F computed (176) > F table (4.301), the regression model is declared
linear, where F table = 4.301 is obtained at 5% alpha with m = 1 and
(n – k) = 25 – 3 = 22
84. 10) CONCEPTUAL FRAMEWORK
(Diagram: X1 and X2 → I → Y, with control variable C entering the model and M
moderating the I → Y path)
X1, X2 = independent variables
C = control variable
I = intervening variable
M = moderating variable
Y = dependent variable
85. 11) REGRESSION EQUATIONS
Regression equations:
I = β0 + β1X1 + β2X2 + β3C + e ……………………………………..(1)
Y = β0 + β1I + e ………………………………………………..…………(2)
Y = β0 + β1I + β2M + β3IM + e ………………………………..……(3)
Y = β0 + β1X1 + β2X2 + β3C + β4I + β5M + β6IM + e ……...(4)
Where: X1 and X2 = independent variables; C = control variable; I = intervening
variable; M = moderating variable; IM = interaction of I with M; β0 = constant;
β1 … β6 = regression coefficients; e = error
87. 11) REGRESSION EQUATIONS
Coefficients(a) (Dependent Variable: I)
Model          Unstandardized Coefficients   Standardized Coefficients
               B         Std. Error          Beta           t         Sig.
1 (Constant)   -5.356    4.299                              -1.246    .227
  X1           .861      .160                .835           5.390     .000
  X2           -.444     .165                -.369          -2.698    .013
  C            .542      .206                .518           2.630     .016
Regression equation:
I = β0 + β1X1 + β2X2 + β3C + e ……………………………………..(1)
I = -5.356 + 0.861 X1 – 0.444 X2 + 0.542 C
88. 11) REGRESSION EQUATIONS
Coefficients(a) (Dependent Variable: Y)
Model          Unstandardized Coefficients   Standardized Coefficients
               B         Std. Error          Beta           t         Sig.
1 (Constant)   30.673    1.685                              18.205    .000
  I            .814      .048                .963           17.052    .000
Regression equation:
Y = β0 + β1I + e ………………………………………………..…………(2)
Y = 30.673 + 0.814 I
89. 11) REGRESSION EQUATIONS
Coefficients(a) (Dependent Variable: Y)
Model          Unstandardized Coefficients   Standardized Coefficients
               B         Std. Error          Beta           t         Sig.
1 (Constant)   19.234    4.817                              3.993     .001
  I            .683      .394                .808           1.734     .098
  M            .655      .116                .722           5.639     .000
  IM           -.006     .005                -.538          -1.177    .252
Regression equation:
Y = β0 + β1I + β2M + β3IM + e ………………………………..……(3)
Y = 19.234 + 0.683 I + 0.655 M – 0.006 IM
90. 11) REGRESSION EQUATIONS
Coefficients(a) (Dependent Variable: Y)
Model          Unstandardized Coefficients   Standardized Coefficients
               B         Std. Error          Beta           t         Sig.
1 (Constant)   -1.613    1.152                              -1.400    .178
  X1           .223      .036                .255           6.251     .000
  X2           .320      .034                .315           9.307     .000
  C            1.186     .043                1.342          27.838    .000
  I            -.321     .072                -.380          -4.476    .000
  M            -.430     .033                -.474          -12.852   .000
  IM           -.001     .001                -.068          -1.086    .292
Regression equation:
Y = β0 + β1X1 + β2X2 + β3C + β4I + β5M + β6IM + e ……...(4)
Y = -1.613 + 0.223 X1 + 0.320 X2 + 1.186 C – 0.321 I – 0.430 M – 0.001 IM
91. 12) F-STATISTIC TEST
ANOVA(a) (Dependent Variable: Y; Predictors: (Constant), IM, X2, M, X1, C, I)
Model          Sum of Squares   df   Mean Square   F          Sig.
1 Regression   4797.105         6    799.517       9373.853   .000(b)
  Residual     1.535            18   .085
  Total        4798.640         24
ANOVA(a) (Dependent Variable: I; Predictors: (Constant), C, X2, X1)
Model          Sum of Squares   df   Mean Square   F          Sig.
1 Regression   6612.490         3    2204.163      434.420    .000(b)
  Residual     106.550          21   5.074
  Total        6719.040         24
92. 13) t-STATISTIC TEST
Coefficients(a) (Dependent Variable: Y)
Model          Unstandardized Coefficients   Standardized Coefficients
               B         Std. Error          Beta           t         Sig.
1 (Constant)   -1.613    1.152                              -1.400    .178
  X1           .223      .036                .255           6.251     .000
  X2           .320      .034                .315           9.307     .000
  C            1.186     .043                1.342          27.838    .000
  I            -.321     .072                -.380          -4.476    .000
  M            -.430     .033                -.474          -12.852   .000
  IM           -.001     .001                -.068          -1.086    .292
93. 14) COEFFICIENT OF DETERMINATION (ADJUSTED R²)
Model Summary (Predictors: (Constant), IM, M, I)
Model   R         R Square   Adjusted R Square   Std. Error of the Estimate
1       .986(a)   .971       .967                2.56348
Model Summary (Predictors: (Constant), IM, X2, M, X1, C, I)
Model   R         R Square   Adjusted R Square   Std. Error of the Estimate
1       1.000(a)  1.000      1.000               .29205
94. 15) PATH TEST (TEST OF THE INDIRECT EFFECT THROUGH THE INTERVENING
VARIABLE)
Coefficients(a) (Dependent Variable: I)
Model          Unstandardized Coefficients   Standardized Coefficients
               B         Std. Error          Beta           t         Sig.
1 (Constant)   -5.356    4.299                              -1.246    .227
  X1           .861      .160                .835           5.390     .000
  X2           -.444     .165                -.369          -2.698    .013
  C            .542      .206                .518           2.630     .016
(a) Effect of the independent variables on the intervening variable.
The indirect effect of the independent variables on the dependent variable through the
intervening variable is covered in point (b) below.
95. 15) PATH TEST (TEST OF THE INDIRECT EFFECT THROUGH THE INTERVENING
VARIABLE)
Coefficients(a) (Dependent Variable: Y)
Model          Unstandardized Coefficients   Standardized Coefficients
               B         Std. Error          Beta           t         Sig.
1 (Constant)   30.673    1.685                              18.205    .000
  I            .814      .048                .963           17.052    .000
(b) Effect of the intervening variable on the dependent variable.
The test of the indirect effect of the independent variables on the dependent
variable, through the intervening variable, uses computations (a) and (b) above in
the following way.
96. 15) PATH TEST (TEST OF THE INDIRECT EFFECT THROUGH THE INTERVENING
VARIABLE)
Intervening-variable test
The intervening-variable test can be carried out through path analysis, first
developed by Sewall Wright in 1934 (Sarwono, 2011). It tests the indirect
effect of an independent variable on the dependent variable with a t test,
computed in these steps:
a) Multiply the standardized regression coefficient of the independent variable
on the intervening variable by the standardized regression coefficient of the
intervening variable on the dependent variable;
b) Add the standard errors of the two regression equations and divide by
two;
c) Compute the t statistic as (a) divided by (b), then compare it with the t
table at alpha 0.05;
d) The indirect effect is significant if t computed > t table, and not
significant if t computed < t table.
97. 15) PATH TEST (TEST OF THE INDIRECT EFFECT THROUGH THE INTERVENING
VARIABLE)
Example:
a) The standardized coefficient of X1 on I (0.835) times that of I on Y (0.963),
i.e.: 0.835 x 0.963 = 0.804
b) The sum of the two coefficients' standard errors divided by two: (0.160 + 0.048) / 2
= 0.104
c) t statistic (a divided by b): 0.804 / 0.104 = 7.73 (t computed), while t table
(n-k-1 = 22, alpha 0.05) = 2.074
d) Because t computed (7.73) is greater than t table (2.074) at 5% alpha
(0.05), X1 is declared to have a significant indirect effect on Y
(through I)
e) Proceed in the same way to test the indirect effect of X2 on Y
through I
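The steps above can be sketched as follows, using the standardized coefficients and standard errors from the tables above (0.835 and 0.963, with standard errors 0.160 and 0.048); the function name is invented:

```python
def path_indirect_t(beta_xi, beta_iy, se_xi, se_iy):
    # Indirect effect = product of the two standardized coefficients;
    # t = indirect effect divided by the mean of the two standard errors,
    # per the recipe on slide 96.
    indirect = beta_xi * beta_iy
    se = (se_xi + se_iy) / 2.0
    return indirect, indirect / se

effect, t = path_indirect_t(0.835, 0.963, 0.160, 0.048)
print(effect, t)  # compare t with the t table value of 2.074
```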
101. PATH REGRESSION ANALYSIS MODEL
(PATH ANALYSIS)
Batam, 8 March 2019
Prof. Dr. H. Aminullah Assagaf, SE., MS., MM., M.Ak
Email: assagaf29@yahoo.com
Mobile: 08113543409
102. Computing standardized coefficients
• Constant = 0
• The regression coefficients are computed from the centered values (Xi – Xbar),
which sum to zero, in these steps:
a) Transform the X and Y data into x and y via (Xi – Xbar) and (Yi – Ybar),
whose sums equal zero
b) Divide x by the standard deviation of X and y by the standard deviation of Y;
the sums of x and y remain zero as in step a
c) Use x as the independent variable and y as the dependent variable,
with zero sums as in step a, which yields xbar = 0 and ybar = 0
d) From step c, the constant is b0 = ybar – xbar(b1) = 0
108. PATH ANALYSIS
Cara Uji Analisis Jalur [Path Analysis] dengan SPSS Lengkap ...
https://www.spssindonesia.com/.../cara-uji-analisis-jalur-path-analysis.html
A complete guide to running path analysis with SPSS, including how to run a
regression with an intervening variable in SPSS version 21.
109. Equation (1): Y = f(X1, X2)
Equation (2): Z = f(X1, X2, Y)
Direct effect of X1 on Z = 0.156 (Eq. 2)
Indirect effect of X1 on Z = 0.336 x 0.612 = 0.206, i.e. (X1 on Y = 0.336, Eq. 1)
times (Y on Z = 0.612, Eq. 2)
113. 16. SOBEL TEST (INTERVENING-variable test)
Intervening-variable test (Sobel test)
The intervening-variable test uses the Sobel test, which examines whether a relationship
running through a mediating or intervening variable is significantly mediated by that
variable. The Sobel test compares the computed Z with the Z table; when the computed Z is
larger than the table Z of 1.96 (obtained at 5% alpha, i.e. a 95% confidence level), the
intervening variable is declared able to mediate the relationship between the independent
(exogenous) variable and the dependent (endogenous) variable.
To compute Z in the Sobel test, following Solihin (2020), the formula below is used:

Z = ab / √(b²Sa² + a²Sb² + Sa²Sb²)

Where: a = regression coefficient of the independent variable on the intervening variable; b = regression
coefficient of the intervening variable on the dependent variable; Sa = standard error (from the SPSS
Coefficients table) of the effect of the independent variable on the intervening variable; Sb = standard error
(from the SPSS Coefficients table) of the effect of the intervening variable on the dependent variable.
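The Sobel Z formula translates directly into code; a minimal sketch (the sample values are the LEV coefficients from the regression tables later in the document):

```python
import math

def sobel_z(a, b, sa, sb):
    # Z = ab / sqrt(b²Sa² + a²Sb² + Sa²Sb²), the Sobel formula above.
    return (a * b) / math.sqrt(b * b * sa * sa + a * a * sb * sb
                               + sa * sa * sb * sb)

# LEV example: a = -3.614 (Sa = 0.641), b = 0.217 (Sb = 0.0142).
z = sobel_z(-3.614, 0.217, 0.641, 0.0142)
print(abs(z) > 1.96)  # |Z| > 1.96 → mediation is significant
```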
118. Reading the Z table: Example 1
The table contains probability values for z from 0 to 4.095. To find a required z value,
study the following examples:
Example 1
Say we want the z value for a two-sided test with a probability of 0.1; follow the steps
below:
1. Because the test is two-sided, find the z value for one side only, i.e. for a probability
of (0.5)(0.1) = 0.05
2. Look for 0.05 among the numbers in the table. If you cannot find exactly 0.05, take
the number closest to 0.05. (In the table the closest is 0.049985.)
3. From 0.049985, trace left to the numbers in the leftmost column and note the value.
In this case it is 1.6.
4. Then return to 0.049985 and trace up to the top row of the column and note the
value (0.045).
5. The z value sought is 1.6 + 0.045 = 1.645
119. Reading the Z table: Example 2
Example 2
Say we want the z value for a one-sided test with a probability of 0.025; follow the
steps below:
1. Look for 0.025 among the numbers in the table. If you cannot find exactly 0.025,
take the number closest to 0.025. (In the table the closest is 0.024998.)
2. From 0.024998, trace left to the numbers in the leftmost column and note the value.
In this case it is 1.9.
3. Then return to 0.024998 and trace up to the top row of the column and note the
value (0.060).
4. The z value sought is 1.9 + 0.060 = 1.960
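Instead of reading the printed table, the same critical values can be recovered numerically; a sketch that inverts the normal CDF by bisection (scipy.stats.norm.ppf would do the same):

```python
import math

def z_critical(tail_prob):
    # Find z such that 1 - Phi(z) = tail_prob, with
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2))), by bisection on [0, 6].
    lo, hi = 0.0, 6.0
    for _ in range(80):
        mid = (lo + hi) / 2.0
        tail = 0.5 * (1.0 - math.erf(mid / math.sqrt(2.0)))
        if tail > tail_prob:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(z_critical(0.05), z_critical(0.025))  # ≈ 1.645 and 1.960, as above
```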
120. EXAMPLE: SOBEL TEST
To test the intervening variable, this study uses the Sobel test following Solihin (2020),
comparing the computed Z with the table Z. When the computed Z is larger than the table Z, the
intervening variable is declared able to mediate the relationship between the independent and
dependent variables. Conversely, when the computed Z is smaller than the table Z, the intervening
variable is declared unable to mediate the relationship between the independent and dependent
variables. To compute Z, the formula below is used:

Z = ab / √(b²Sa² + a²Sb² + Sa²Sb²)

Where: a = regression coefficient of the effect of the independent variable on the intervening variable; b = regression coefficient of the effect of the intervening variable on the
dependent variable; Sa = standard error (from the SPSS Coefficients table) of the effect of the independent variable on the intervening variable; Sb = standard error (from the SPSS
Coefficients table) of the effect of the intervening variable on the dependent variable.
Based on this Z formula, the computed Z can be obtained using the two regression equations
shown in Tables 23 and 24: the regression of the independent variables on the intervening
variable, and the regression of the intervening variable on the dependent variable KPKU. The
resulting computed Z is shown in Table 25 below.
121. Table 23. Factors affecting the Excellent Performance Assessment Criteria (KPKU) of state-owned enterprises (BUMN)
Model (1): KPKUt = β0 + β1ZCLt + β2LEVt + β3CAPEXt + β4GROWTHt + β5TAEMt + β6RAEMt +
β7IKMt + β8CFOt + β9LIQt + β10TAXt + β11SIZEt + β12CMt + β13REVt + et
Coefficients (Dependent Variable: KPKU)
Model          Unstandardized Coefficients   Standardized Coefficients
               B         Std. Error          Beta           t          Sig.
1 (Constant)   -1.618    0.105                              -15.381    0.041
  LEV          0.108     0.077               0.031          1.403      0.394
  CAPEX        0.439     0.026               0.431          17.134     0.037
  TAEM         -0.001    0.000               -0.120         -4.734     0.133
  RAEM         -0.001    0.000               -0.254         -12.274    0.052
  IKM          0.492     0.003               0.937          159.367    0.004
  CFO          0.076     0.004               0.366          18.450     0.034
  SIZE         0.211     0.013               0.550          16.373     0.039
  CM           -0.110    0.005               -0.348         -22.480    0.028
  REV          -0.027    0.016               -0.015         -1.716     0.336
  ZCL          0.217     0.014               0.204          15.243     0.042
  GROWTH       0.053     0.003               0.235          16.030     0.040
  LIQ          -0.084    0.005               -0.171         -16.326    0.039
  TAX          -0.177    0.025               -0.182         -7.211     0.088
*Significant at the 0.10 (10%) level; **significant at the 0.05 (5%) level; ***significant
at the 0.01 (1%) level (t test)
122. Table 24. Factors affecting the cost leadership of state-owned enterprises (BUMN)
Model (2): ZCLt = β0 + β1LEVt + β2CAPEXt + β3TAEMt + β4RAEMt + β5IKMt + β6CFOt + β7SIZEt +
β8CMt + β9REVt + et
Coefficients (Dependent Variable: ZCL)
Model          Unstandardized Coefficients   Standardized Coefficients
               B         Std. Error          Beta           t          Sig.
1 (Constant)   5.666     1.081                              5.239      0.003
  LEV          -3.614    0.641               -1.119         -5.640     0.002
  CAPEX        -1.034    0.186               -1.080         -5.562     0.003
  TAEM         0.011     0.005               1.250          2.328      0.067
  RAEM         0.0002    0.001               0.060          0.136      0.897
  IKM          -0.051    0.046               -0.103         -1.111     0.317
  CFO          -0.076    0.026               -0.388         -2.896     0.034
  SIZE         -0.662    0.145               -1.834         -4.570     0.006
  CM           0.193     0.052               0.647          3.721      0.014
  REV          0.482     0.176               0.278          2.737      0.041
*Significant at the 0.10 (10%) level; **significant at the 0.05 (5%) level; ***significant
at the 0.01 (1%) level (t test)
123. Example
Table 25. Intervening-variable test with the Sobel test
Variable  a       b      ab       a²       b²      Sa     Sb      Sa²     Sb²     b²Sa²     a²Sb²     Sa²Sb²    b²Sa²+a²Sb²+Sa²Sb²
LEV       -3.614  0.217  -0.7824  13.0599  0.0469  0.641  0.0142  0.4105  0.0002  0.019243  0.002635  0.000083  0.021961
CAPEX     -1.034         -0.2239  1.0698           0.186          0.0346          0.001621  0.000216  0.000007  0.001844
TAEM      0.011          0.0023   0.0001           0.005          0.0000          0.000001  0.000000  0.000000  0.000001
RAEM      0.000          0.0000   0.0000           0.001          0.0000          0.000000  0.000000  0.000000  0.000000
IKM       -0.051         -0.0110  0.0026           0.046          0.0021          0.000098  0.000001  0.000000  0.000099
CFO       -0.076         -0.0165  0.0058           0.026          0.0007          0.000032  0.000001  0.000000  0.000034
SIZE      -0.662         -0.1434  0.4388           0.145          0.0210          0.000985  0.000089  0.000004  0.001078
CM        0.193          0.0417   0.0371           0.052          0.0027          0.000126  0.000007  0.000001  0.000134
REV       0.482          0.1044   0.2327           0.176          0.0311          0.001456  0.000047  0.000006  0.001509
Total     -4.752  0.217  -1.029   14.847   0.047   1.278  0.014   0.503   0.000   0.024     0.003     0.000101  0.0266596
Square root of the total: 0.16328; Z = ab / 0.16328 = -6.30
124. The Sobel test result, per the Z formula and the computation in Table 25, shows that
the Z value of 6.30 (in absolute value) is greater than the table Z of 1.96, so the
intervening variable is declared able to mediate the relationship between the independent
(exogenous) and dependent (endogenous) variables. The study thus finds that the
independent variables have a direct effect on SOE performance, and also finds that the
cost leadership variable is able to mediate the relationship between the independent
variables and the dependent variable, SOE performance.
125. 17_Dropping outliers (bias) when the data are not normally distributed
• Analyze → Descriptive Statistics → Explore → move all variables to the
Dependent List box → click Plots → check Histogram and
Normality plots with tests → Continue → OK
• Check each variable; for variable X2, say, pick the starred extreme values at
the top or bottom, right-click → OK → pick the starred value again,
right-click → OK → Go to case → right-click, Cut or Clear → the data set
shrinks by those observations → recompute, starting again with the
normality test, etc.