A multiple linear regression was calculated to predict weight based on height and sex. The regression equation was significant, and both height and sex were significant predictors of weight, together explaining 99.3% of the variance. Participants' predicted weight is equal to 47.138 + 2.101(height) - 39.133(sex), where height is measured in inches and sex is coded as 0 for male and 1 for female.
Reporting a multiple linear regression in APA (Ken Plummer)
A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants' predicted weight is equal to 47.138 + 2.101(height) - 39.133(sex), where height is measured in inches and sex is coded as 0 for male and 1 for female. Both height and sex were significant predictors of weight.
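The reported equation can be checked directly by plugging values into the coefficients. The snippet below is my own sketch, not part of the original slides; the example heights and the function name are illustrative:

```python
def predicted_weight(height_in: float, sex: int) -> float:
    """Predicted weight in pounds from the reported equation:
    47.138 + 2.101*(height in inches) - 39.133*(sex), sex: 0 = male, 1 = female."""
    return 47.138 + 2.101 * height_in - 39.133 * sex

print(round(predicted_weight(70, 0), 3))  # 70-inch male -> 194.208
print(round(predicted_weight(70, 1), 3))  # 70-inch female -> 155.075
```

The negative sex coefficient means that, at the same height, a participant coded 1 (female) is predicted to weigh 39.133 pounds less than one coded 0 (male).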
Reporting a single linear regression in APA (Ken Plummer)
The document provides a template for reporting the results of a simple linear regression analysis in APA format. It explains that a linear regression was conducted to predict weight based on height. The regression equation was found to be significant, F(1, 14) = 25.925, p < .001, with an R2 of .649. The predicted weight is equal to -234.681 + 5.434(height in inches) pounds.
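As with the multiple regression, the single-predictor equation is easy to evaluate by hand or in code. This is a minimal sketch (the function name and example height are my own, not from the slides):

```python
def predict_weight_from_height(height_in: float) -> float:
    """Predicted weight in pounds from the reported simple regression:
    -234.681 + 5.434*(height in inches)."""
    return -234.681 + 5.434 * height_in

print(round(predict_weight_from_height(70), 3))  # 70 inches -> 145.699
```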
This document provides guidance on reporting the results of a Phi-Coefficient test in APA style. It describes analyzing whether there is a non-random pattern between on-time graduation (no = 1 and yes = 2) and gender (male = 1 and female = 2). The general template is to state the main finding, include the Phi coefficient (φ), and report the p-value. For example, "Based on the results of the study, males are less likely to graduate on time than females, φ = .82, p < .05."
The document provides guidance on reporting the results of a one-way ANOVA in APA format. It recommends including that a one-way ANOVA was conducted to examine the effect of an independent variable on a dependent variable. It provides a template for reporting the F-statistic, degrees of freedom, and significance level based on the ANOVA output. Filling in the specifics of the independent variable, dependent variable, and ANOVA results completes the report.
This document discusses how to report the results of a Pearson correlation analysis in APA style. It provides an example of a problem investigating the relationship between broccoli extract consumption and well-being scores. The template shown reports that a strong positive correlation was found between broccoli extract consumption and well-being (r = .88, p < .05).
The document describes how to report a partial correlation in APA format. It provides a template for reporting that there is a significant positive partial correlation of .82 between intense fanaticism for a professional sports team and proximity to the city where the team resides when controlling for age, p < .001.
A two-way ANOVA was conducted to examine the effects of athlete type (football, basketball, soccer) and age (younger, older) on slices of pizza eaten. There were significant main effects of athlete type and an interaction between athlete type and age, but no main effect of age. Football players ate the most pizza, followed by basketball players and then soccer players.
A one-way ANOVA was conducted to compare the effect of type of athlete on the number of pizza slices eaten. The ANOVA results showed that the effect of type of athlete on number of pizza slices eaten was significant, F(2, 66) = 99.82, p < .001.
The document provides a template for reporting the results of an independent samples t-test in APA format. It demonstrates how to write a sentence summarizing that there was a significant/non-significant difference between two groups by including the group means, standard deviations, t-statistic, and p-value filled in from a sample SPSS output.
Reporting point-biserial correlation in APA (Ken Plummer)
This document provides guidance on reporting point-biserial correlations in APA style. It describes analyzing the relationship between preference for taking a fencing class on a scale of 1-10 and gender, coded as 1 for male and 2 for female. It recommends reporting the point-biserial correlation coefficient rpb, the statistical significance level p, and an interpretation of the relationship, such as "Females tend to prefer taking a fencing class more than males."
Reporting a one-way repeated measures ANOVA (Ken Plummer)
The document provides guidance on reporting the results of a one-way repeated measures ANOVA in APA style. It includes templates for reporting the main ANOVA results and any post-hoc pairwise comparisons between conditions. Key sections are highlighted to fill in values from an example SPSS output to generate a complete APA-style results section reporting a significant effect of time of season on pizza consumption.
The document describes how to report a partial correlation in APA format. It provides a template for reporting that when controlling for a covariate, the partial correlation between two variables is r = ___, p = ___. As an example, it states that when controlling for age, the partial correlation between intense fanaticism for a professional sports team and proximity to the city where the team resides is r = .82, p < .001.
Reporting Chi-Square Test of Independence in APA (Ken Plummer)
This document provides guidance on reporting the results of a chi-square test of independence in APA style. It presents an example problem investigating the relationship between heart disease and gender. It then shows the general template for how to report a chi-square test, including reporting the chi-square value, degrees of freedom, and statistical significance. The template example finds a significant relationship between heart disease and gender, with men more likely to have heart disease than women.
The document provides guidance on reporting the results of an ANCOVA analysis in APA format. It recommends including that a one-way ANCOVA was conducted to determine differences between levels of an independent variable on a dependent variable while controlling for a covariate. An example is given using athlete type as the independent variable, slices of pizza eaten as the dependent variable, and weight as the covariate. The document also provides a template for reporting the F-ratio, degrees of freedom, and significance level.
The document discusses how to report the results of a Pearson correlation analysis in APA style. It provides an example of a problem investigating the relationship between the amount of broccoli extract consumed and scores of well-being. It then shows the template for reporting the Pearson correlation, stating the correlation coefficient r and the p-value.
This document provides guidelines for writing up results sections based on APA style. It discusses reporting statistical tests, including describing test statistics, significance levels, means, standard deviations, and directions of effects. Examples are provided for how to report results from t-tests, ANOVAs, post hoc tests, chi-square tests, correlations, and regressions. Tables and figures can help report complex results. The guidelines emphasize identifying analyses and their relation to hypotheses, and assuming reader knowledge of statistics.
The document provides guidance on reporting paired sample t-test results in APA format. It includes an example of how to write the results in a sentence, explaining that there was a significant/not significant difference between the scores for condition 1 (providing the mean and standard deviation) and condition 2 (providing the mean and standard deviation). It also demonstrates how to fill in the t-statistic, degrees of freedom, and p-value using output from SPSS.
Null hypothesis for Pearson Correlation (independence) (Ken Plummer)
The document discusses writing null hypotheses for Pearson correlation tests. It provides examples of null hypotheses for two problems: 1) determining if student ACT scores and GPAs are independent, and 2) determining if depression scores and sense of belonging scores are independent. The null hypothesis template is "There is no statistically significant relationship between variable 1 and variable 2". For the first problem, the null hypothesis is "There is no statistically significant relationship between student ACT scores and grade point averages". For the second problem, the null hypothesis is "There is no statistically significant relationship between depression scores and sense of belonging scores".
Reporting an independent sample t-test (Amit Sharma)
An independent samples t-test was conducted to compare truck driver drowsiness scores for country music listening and no country music listening conditions. There was a significant difference in scores between country music listening (M = 4.2, SD = 1.3) and no country music listening (M = 2.2, SD = 0.84); t(8) = 2.89, p = .02.
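The reported t(8) = 2.89 can be recomputed from the descriptives alone. The sketch below assumes equal group sizes of n = 5, which is inferred from the degrees of freedom (df = n1 + n2 - 2 = 8) rather than stated in the source:

```python
import math

# Reported descriptives (n = 5 per group is an assumption inferred from df = 8)
m1, s1, n1 = 4.2, 1.3, 5    # country music listening
m2, s2, n2 = 2.2, 0.84, 5   # no country music listening

# Pooled-variance independent samples t statistic
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2

print(df, round(t, 2))  # 8 2.89, matching the reported t(8) = 2.89
```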
The document provides guidance on reporting the results of a paired sample t-test in APA format. It includes templates for reporting the study design, results, and statistical analysis. Key details include reporting the means, standard deviations, and standard errors for each condition. It also notes reporting the t-statistic, degrees of freedom, and significance level based on the t-test output.
This document discusses the null hypothesis for a one-way analysis of covariance (ANCOVA). It explains that a one-way ANCOVA compares the influence of an independent variable with at least two levels on a dependent variable, while controlling for the effect of a covariate. The document provides a template for writing the null hypothesis, which states that there is no significant effect of the independent variable on the dependent variable when controlling for the covariate. It gives two examples applying this template.
Reporting the Wilcoxon signed-ranks test (Ken Plummer)
A Wilcoxon Signed-Ranks Test was conducted to compare pre-test and post-test ranks. The results indicated that post-test ranks were statistically significantly higher than pre-test ranks (Z = 21, p = .027). The document provided guidance and examples for reporting the results of the Wilcoxon Signed-Ranks Test in APA style.
A repeated measures ANOVA is used to test whether a single group of people change over time by comparing distributions from the same group at different time periods, rather than comparing distributions from different groups. The overall F-ratio reveals if there are differences among time periods, and post hoc tests identify exactly where the differences occurred. In contrast, a one-way ANOVA compares distributions between two or more different groups to determine if there are statistical differences between them.
Null hypothesis for multiple linear regression (Ken Plummer)
The document discusses null hypotheses for multiple linear regression. It provides two templates for writing null hypotheses. Template 1 states there will be no significant prediction of the dependent variable (e.g. ACT scores) by the independent variables (e.g. hours of sleep, study time, gender, mother's education). Template 2 states that in the presence of other variables, there will be no significant prediction of the dependent variable by a specific independent variable. The document provides an example applying both templates to investigate the prediction of ACT scores by hours of sleep, study time, gender, and mother's education.
A pizza café owner conducted a study to determine which sport players (football, basketball, soccer) ate the most slices of pizza on average. After collecting the data, an analysis using the Kruskal-Wallis test was performed due to outliers among basketball players. The results showed a statistically significant difference between the number of slices eaten by different player types, with football players eating the most on average.
Null hypothesis for single linear regression (Ken Plummer)
The document discusses the null hypothesis for a single linear regression analysis. It explains that the null hypothesis states that there is no effect or relationship between the independent and dependent variables. As an example, if investigating the relationship between hours of sleep and ACT scores, the null hypothesis would be: "There will be no significant prediction of ACT scores by hours of sleep." The document provides a template for writing the null hypothesis in terms of the specific independent and dependent variables being analyzed.
Reporting Statistics in Psychology
This document provides guidelines for reporting statistics in psychology research. It outlines how to round numbers and report means, standard deviations, p-values, effect sizes, and results from t-tests, ANOVAs, and other statistical analyses. Key recommendations include reporting exact p-values to two or three decimal places, using abbreviations like M and SD consistently, and noting any violations of statistical assumptions.
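The p-value conventions summarized above (exact values to two or three decimals, and a floor for very small values) can be captured in a small formatting helper. This is my own sketch of the common convention, not code from the guidelines document; the function name is illustrative:

```python
def format_p(p: float, decimals: int = 3) -> str:
    """Format a p-value in the usual APA style: exact value with no
    leading zero, or 'p < .001' when it falls below that threshold."""
    if p < 0.001:
        return "p < .001"
    # Drop the leading zero: APA omits it for values that cannot exceed 1
    return f"p = {p:.{decimals}f}".replace("0.", ".", 1)

print(format_p(0.0004))   # p < .001
print(format_p(0.027))    # p = .027
print(format_p(0.3, 2))   # p = .30
```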
Reporting Pearson Correlation Test of Independence in APA (Ken Plummer)
A Pearson correlation test of independence was conducted to determine if student height and GPA were related. A weak, non-significant correlation was found between height and GPA (r = .217, p > .05), indicating that student height and GPA are independent of each other.
In preparation for the Geodetic Engineering Licensure Examination, BSGE students must memorize the fastest possible solution for the LEAST SQUARES ADJUSTMENT using a Casio fx-991 ES Plus calculator technique in order to save time during the examination. Note: for Lecture 2 and above, I did not include solutions so that my techniques cannot be copied. Just add me on Facebook so I can teach you the solutions, because my solutions are not found on Google, YouTube, or calculator-technique books, and are not taught in review centers either.
Week 7 - Linear Regression Exercises SPSS Output Simple.docx (cockekeshia)
Week 7 - Linear Regression Exercises SPSS Output
Simple Linear Regression SPSS Output
Descriptive Statistics
                                          Mean        Std. Deviation   N
  Family income prior month, all sources  $1,485.49   $950.496         378
  Hours worked per week in current job    33.52       12.359           378
Correlations
                                              Family income     Hours worked
                                              prior month,      per week in
                                              all sources       current job
  Pearson Correlation
    Family income prior month, all sources    1.000             .300
    Hours worked per week in current job      .300              1.000
  Sig. (1-tailed)
    Family income prior month, all sources    .                 .000
    Hours worked per week in current job      .000              .
  N
    Family income prior month, all sources    378               378
    Hours worked per week in current job      378               378
Model Summary
  Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
  1       .300a   .090       .088                $907.877
  a. Predictors: (Constant), Hours worked per week in current job
ANOVAb
  Model            Sum of Squares   df    Mean Square   F        Sig.
  1  Regression    3.068E7          1     3.068E7       37.226   .000a
     Residual      3.099E8          376   824241.002
     Total         3.406E8          377
  a. Predictors: (Constant), Hours worked per week in current job
  b. Dependent Variable: Family income prior month, all sources
Coefficientsa
                                             Unstandardized         Standardized                   95.0% CI for B
  Model                                      B         Std. Error   Beta           t       Sig.    Lower      Upper
  1  (Constant)                              711.651   135.155                     5.265   .000    445.896    977.405
     Hours worked per week in current job    23.083    3.783        .300           6.101   .000    15.644     30.523
  a. Dependent Variable: Family income prior month, all sources
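Reading the Coefficients table above, the fitted equation is: predicted monthly family income = 711.651 + 23.083 × (hours worked per week). A quick sketch (the function name and example inputs are my own) shows how a prediction is formed:

```python
def predicted_income(hours_per_week: float) -> float:
    """Predicted monthly family income from the Coefficients table:
    711.651 + 23.083 * (hours worked per week)."""
    return 711.651 + 23.083 * hours_per_week

print(round(predicted_income(40), 2))     # a 40-hour week -> 1634.97
print(round(predicted_income(33.52), 2))  # at the sample mean of hours
```

Evaluating at the sample mean of hours (33.52) returns approximately the sample mean income ($1,485.49, up to coefficient rounding), as a least-squares line must pass through the point of means.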
Part II: Multiple Regression SPSS Output
This part begins with an example that has been interpreted for you. Analyze the output provided and read the interpretation of the data so that you will understand what to do for the multiple regression assignment.
Descriptive Statistics
                           Mean      Std. Deviation   N
  CES-D Score              18.5231   11.90747         156
  CESD Score, Wave 1       17.6987   11.40935         156
  Number types of abuse    .83       1.203            156
Correlations
                                CES-D Score   CESD Score,   Number types
                                              Wave 1        of abuse
  Pearson Correlation
    CES-D Score                 1.000         .412          .347
    CESD Score, Wave 1          .412          1.000         .187
    Number types of abuse       .347          .187          1.000
  Sig. (1-tailed)
    CES-D Score                 .             .000          .000
    CESD Score, Wave 1          .000          .             .010
    Number types of abuse       .000          .010          .
  N
    CES-D Score                 156           156           156
    CESD Score, Wave 1          156           156           156
    Number types of abuse       156           156           156
Model Summary
                                          Std. Error of   Change Statistics
  Model  R      R Square  Adjusted R Sq.  the Estimate    R Square Change  F Change  df1  df2  Sig. F Change
  1      .412a  .170      .164            10.88446        .170             31.506    1    154  .000
  2      .496b  .246      .236            10.41016        .076             15.352    1    153  .000
  a. Predictors: (Constant), CESD Score, Wave 1
  b. Predictors: (Constant), CESD Score, Wave 1, Number types of abuse
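The Change Statistics column can be verified by hand: in a hierarchical regression, the R Square Change for Model 2 is simply its R Square minus Model 1's. A minimal check, using the values reported above:

```python
# R Square values reported in the Model Summary above
r2_model1 = 0.170  # predictor: CESD Score, Wave 1
r2_model2 = 0.246  # predictors: CESD Score, Wave 1 + Number types of abuse

# Incremental variance explained by adding "Number types of abuse"
r2_change = r2_model2 - r2_model1
print(round(r2_change, 3))  # 0.076, matching the reported R Square Change
```

So adding the abuse variable explains an additional 7.6% of the variance in CES-D scores beyond Wave 1 scores alone.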
ANOVAc
  Model            Sum of Squares   df    Mean Square   F        Sig.
  1  Regression    3732.507         1     3732.507      31.506   .000a
     Residual      18244.613        154   118.472
     Total         21977.1.
The document provides a template for reporting the results of an independent samples t-test in APA format. It demonstrates how to write a sentence summarizing that there was a significant/non-significant difference between two groups by including the group means, standard deviations, t-statistic, and p-value filled in from a sample SPSS output.
Reporting point biserial correlation in apaKen Plummer
This document provides guidance on reporting point-biserial correlations in APA style. It describes analyzing the relationship between preference for taking a fencing class on a scale of 1-10 and gender, coded as 1 for male and 2 for female. It recommends reporting the point-biserial correlation coefficient rpb, the statistical significance level p, and an interpretation of the relationship, such as "Females tend to prefer taking a fencing class more than males."
Reporting a one way repeated measures anovaKen Plummer
The document provides guidance on reporting the results of a one-way repeated measures ANOVA in APA style. It includes templates for reporting the main ANOVA results and any post-hoc pairwise comparisons between conditions. Key sections are highlighted to fill in values from an example SPSS output to generate a complete APA-style results section reporting a significant effect of time of season on pizza consumption.
The document describes how to report a partial correlation in APA format. It provides a template for reporting that when controlling for a covariate, the partial correlation between two variables is r = ___, p = ___. As an example, it states that when controlling for age, the partial correlation between intense fanaticism for a professional sports team and proximity to the city the team resides is r = .82, p = .000.
Reporting Chi Square Test of Independence in APAKen Plummer
This document provides guidance on reporting the results of a chi-square test of independence in APA style. It presents an example problem investigating the relationship between heart disease and gender. It then shows the general template for how to report a chi-square test, including reporting the chi-square value, degrees of freedom, and statistical significance. The template example finds a significant relationship between heart disease and gender, with men more likely to have heart disease than women.
The document provides guidance on reporting the results of an ANCOVA analysis in APA format. It recommends including that a one-way ANCOVA was conducted to determine differences between levels of an independent variable on a dependent variable while controlling for a covariate. An example is given using athlete type as the independent variable, slices of pizza eaten as the dependent variable, and weight as the covariate. The document also provides a template for reporting the F-ratio, degrees of freedom, and significance level.
The document discusses how to report the results of a Pearson correlation analysis in APA style. It provides an example of a problem investigating the relationship between the amount of broccoli extract consumed and scores of well-being. It then shows the template for reporting the Pearson correlation, stating the correlation coefficient r and the p-value.
This document provides guidelines for writing up results sections based on APA style. It discusses reporting statistical tests, including describing test statistics, significance levels, means, standard deviations, and directions of effects. Examples are provided for how to report results from t-tests, ANOVAs, post hoc tests, chi-square tests, correlations, and regressions. Tables and figures can help report complex results. The guidelines emphasize identifying analyses and their relation to hypotheses, and assuming reader knowledge of statistics.
The document provides guidance on reporting paired sample t-test results in APA format. It includes an example of how to write the results in a sentence, explaining that there was a significant/not significant difference between the scores for condition 1 (providing the mean and standard deviation) and condition 2 (providing the mean and standard deviation). It also demonstrates how to fill in the t-statistic, degrees of freedom, and p-value using output from SPSS.
Null hypothesis for Pearson Correlation (independence)Ken Plummer
The document discusses writing null hypotheses for Pearson correlation tests. It provides examples of null hypotheses for two problems: 1) determining if student ACT scores and GPAs are independent, and 2) determining if depression scores and sense of belonging scores are independent. The null hypothesis template is "There is no statistically significant relationship between variable 1 and variable 2". For the first problem, the null hypothesis is "There is no statistically significant relationship between student ACT scores and grade point averages". For the second problem, the null hypothesis is "There is no statistically significant relationship between depression scores and sense of belonging scores".
Reporting an independent sample t- testAmit Sharma
An independent samples t-test was conducted to compare truck driver drowsiness scores for country music listening and no country music listening conditions. There was a significant difference in scores for country music listening (M=4.2, SD=1.3) and no country music listening (M=2.2, SD=0.84); t(8)=2.89, p=0.02.
The document provides guidance on reporting the results of a paired sample t-test in APA format. It includes templates for reporting the study design, results, and statistical analysis. Key details include reporting the means, standard deviations, and standard errors for each condition. It also notes reporting the t-statistic, degrees of freedom, and significance level based on the t-test output.
This document discusses the null hypothesis for a one-way analysis of covariance (ANCOVA). It explains that a one-way ANCOVA compares the influence of an independent variable with at least two levels on a dependent variable, while controlling for the effect of a covariate. The document provides a template for writing the null hypothesis, which states that there is no significant effect of the independent variable on the dependent variable when controlling for the covariate. It gives two examples applying this template.
Reporting the wilcoxon signed ranks testKen Plummer
A Wilcoxon Signed-Ranks Test was conducted to compare pre-test and post-test ranks. The results indicated that post-test ranks were statistically significantly higher than pre-test ranks, with a Z score of 21 and p value less than .027. The document provided guidance and examples for reporting the results of the Wilcoxon Signed-Ranks Test in APA style.
A repeated measures ANOVA is used to test whether a single group of people change over time by comparing distributions from the same group at different time periods, rather than comparing distributions from different groups. The overall F-ratio reveals if there are differences among time periods, and post hoc tests identify exactly where the differences occurred. In contrast, a one-way ANOVA compares distributions between two or more different groups to determine if there are statistical differences between them.
Null hypothesis for multiple linear regressionKen Plummer
The document discusses null hypotheses for multiple linear regression. It provides two templates for writing null hypotheses. Template 1 states there will be no significant prediction of the dependent variable (e.g. ACT scores) by the independent variables (e.g. hours of sleep, study time, gender, mother's education). Template 2 states that in the presence of other variables, there will be no significant prediction of the dependent variable by a specific independent variable. The document provides an example applying both templates to investigate the prediction of ACT scores by hours of sleep, study time, gender, and mother's education.
A pizza café owner conducted a study to determine which sport players (football, basketball, soccer) ate the most slices of pizza on average. After collecting the data, an analysis using the Kruskal-Wallis test was performed due to outliers among basketball players. The results showed a statistically significant difference between the number of slices eaten by different player types, with football players eating the most on average.
Null hypothesis for single linear regressionKen Plummer
The document discusses the null hypothesis for a single linear regression analysis. It explains that the null hypothesis states that there is no effect or relationship between the independent and dependent variables. As an example, if investigating the relationship between hours of sleep and ACT scores, the null hypothesis would be: "There will be no significant prediction of ACT scores by hours of sleep." The document provides a template for writing the null hypothesis in terms of the specific independent and dependent variables being analyzed.
Reporting Statistics in Psychology
This document provides guidelines for reporting statistics in psychology research. It outlines how to round numbers and report means, standard deviations, p-values, effect sizes, and results from t-tests, ANOVAs, and other statistical analyses. Key recommendations include reporting exact p-values to two or three decimal places, using abbreviations like M and SD consistently, and noting any violations of statistical assumptions.
Reporting Pearson Correlation Test of Independence in APAKen Plummer
A Pearson correlation test of independence was conducted to determine if student height and GPA were related. A weak correlation was found between height and GPA (r = .217, p > .05), indicating that student height and GPA are independent of each other.
In the preparation for the Geodetic Engineering Licensure Examination, the BSGE students must memorized the fastest possible solution for the LEAST SQUARES ADJUSTMENT using casio fx-991 es plus calculator technique in order to save time during the said examination. note: lec 2 and above wala akong nilagay na solution para hindi makupya techniques ko. just add me on fb para ituro ko sa inyo solution. Kasi itong solution ko wala sa google, youtube, calc tech books at hindi rin itinuro sa review center.
Week 7 - Linear Regression Exercises SPSS Output Simple.docxcockekeshia
Week 7 - Linear Regression Exercises SPSS Output
Simple Linear Regression SPSS Output
Descriptive Statistics
Mean Std. Deviation N
Family income prior month,
all sources
$1,485.49 $950.496 378
Hours worked per week in
current job
33.52 12.359 378
Correlations
Family income
prior month, all
sources
Hours worked
per week in
current job
Pearson Correlation Family income prior month,
all sources
1.000 .300
Hours worked per week in
current job
.300 1.000
Sig. (1-tailed) Family income prior month,
all sources
. .000
Hours worked per week in
current job
.000 .
N Family income prior month,
all sources
378 378
Hours worked per week in
current job
378 378
Model Summary
Model
R R Square
Adjusted R
Square
Std. Error of the
Estimate
1 .300a .090 .088 $907.877
a. Predictors: (Constant), Hours worked per week in current job
ANOVAb
Model Sum of Squares df Mean Square F Sig.
1 Regression 3.068E7 1 3.068E7 37.226 .000a
Residual 3.099E8 376 824241.002
Total 3.406E8 377
a. Predictors: (Constant), Hours worked per week in current job
b. Dependent Variable: Family income prior month, all sources
Coefficientsa
Model Unstandardized
Coefficients
Standardized
Coefficients
t Sig.
95.0% Confidence Interval
for B
B Std. Error Beta Lower Bound Upper Bound
1 (Constant) 711.651 135.155 5.265 .000 445.896 977.405
Hours worked per week
in current job
23.083 3.783 .300 6.101 .000 15.644 30.523
a. Dependent Variable: Family income prior month, all sources
Part II: Multiple Regression SPSS Output
This part is going to begin with an example that has been interpreted for you. Analyze the output
provided and read the interpretation of the data so that you will have an understanding of what you
will do for the multiple regression assignment.
Descriptive Statistics
Mean Std. Deviation N
CES-D Score 18.5231 11.90747 156
CESD Score, Wave 1 17.6987 11.40935 156
Number types of abuse .83 1.203 156
Correlations
                                              CES-D Score   CESD Score, Wave 1   Number types of abuse
Pearson Correlation   CES-D Score              1.000          .412                 .347
                      CESD Score, Wave 1        .412         1.000                 .187
                      Number types of abuse     .347          .187                1.000
Sig. (1-tailed)       CES-D Score               .             .000                 .000
                      CESD Score, Wave 1        .000          .                    .010
                      Number types of abuse     .000          .010                 .
N                     CES-D Score              156            156                  156
                      CESD Score, Wave 1       156            156                  156
                      Number types of abuse    156            156                  156
Model Summary
                                Adjusted   Std. Error of   Change Statistics
Model   R       R Square   R Square   the Estimate   R Square Change   F Change   df1   df2   Sig. F Change
1       .412a   .170       .164       10.88446       .170              31.506     1     154   .000
2       .496b   .246       .236       10.41016       .076              15.352     1     153   .000
a. Predictors: (Constant), CESD Score, Wave 1
b. Predictors: (Constant), CESD Score, Wave 1, Number types of abuse
ANOVAc
Model           Sum of Squares   df    Mean Square   F        Sig.
1  Regression    3732.507          1   3732.507      31.506   .000a
   Residual     18244.613        154    118.472
   Total        21977.120        155
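The Change Statistics block above can be verified by hand: the F for the R2 increase from Model 1 to Model 2 depends only on the two R2 values and the degrees of freedom. A small sketch, with values read from the Model Summary:

```python
# F test for the R2 increase when df1 predictors are added to the model.
# Values below are read from the Model Summary above; the small gap vs.
# SPSS's 15.352 comes from R2 being rounded to three decimals there.
def f_change(r2_reduced, r2_full, df1, df2):
    return ((r2_full - r2_reduced) / df1) / ((1 - r2_full) / df2)

f = f_change(r2_reduced=0.170, r2_full=0.246, df1=1, df2=153)
print(round(f, 2))
```

This is the number reported as "F Change" for Model 2.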
Bba 3274 qm week 6 part 1 regression models - Stephen Ong
This document provides an overview and outline of regression models and forecasting techniques. It discusses simple and multiple linear regression analysis, how to measure the fit of regression models, assumptions of regression models, and testing models for significance. The goals are to help students understand relationships between variables, predict variable values, develop regression equations from sample data, and properly apply and interpret regression analysis.
This document provides instructions for performing multiple regression analysis in SPSS. It demonstrates entering variables, running the regression using the enter, stepwise, and backward methods, and interpreting the output including R-square values, F-tests, beta coefficients, and equations for predicting the dependent variable based on the independent variables. Age and education were identified as the best predictors of months of full-time employment using both the stepwise and backward regression methods.
This chapter discusses regression models, including simple and multiple linear regression. It covers developing regression equations from sample data, measuring the fit of regression models, and assumptions of regression analysis. Key aspects covered include using scatter plots to examine relationships between variables, calculating the slope, intercept, coefficient of determination, and correlation coefficient, and performing hypothesis tests to determine if regression models are statistically significant. The chapter objectives are to help students understand and appropriately apply simple, multiple, and nonlinear regression techniques.
A note on estimation of population mean in sample survey using auxiliary info... - Alexander Decker
1. The document proposes a class of estimators for estimating the population mean in two-phase sampling using auxiliary information.
2. Some common estimators like the ratio, product, and regression estimators are special cases within the proposed class. Expressions for bias and mean squared error of the estimators are obtained up to the first order of approximation.
3. Asymptotically optimum estimators are identified that have minimum mean squared error. The proposed class of estimators is found to perform better than usual ratio and other estimators for population mean estimation.
The document discusses factorial analysis of variance (ANOVA) and provides an example to illustrate the steps. It analyzes the flavor acceptability of luncheon meat from different sources. The null hypothesis is that there is no significant difference between the sources. The two-way ANOVA calculations show that the computed F-values are greater than the critical values, so the null hypothesis is rejected, indicating there are significant differences between the sources of luncheon meat.
The document discusses factorial analysis of variance (ANOVA) and provides an example to illustrate the steps. It analyzes the flavor acceptability of luncheon meat from different sources. The null hypothesis is that there is no significant difference between the sources. The two-way ANOVA calculations show that the computed F-values are greater than the critical values, so the null hypothesis is rejected, indicating there are significant differences between the sources of luncheon meat.
The document discusses factorial analysis of variance (ANOVA) and provides an example to illustrate the steps in a two-way ANOVA. Specifically, it presents a study on the flavor acceptability of luncheon meat from different sources. It provides the problem statement, hypotheses, assumptions, and 10 step-by-step computations to conduct a two-way ANOVA on the data. The results of the ANOVA show that the flavor acceptability significantly differs between the meat sources, leading to a rejection of the null hypothesis.
The classes used in this study (Class A and Class B) were held at the same campus in Wichita, KS, from June through September 2010, and were taught by the same instructor. Class A met on Thursday nights and Class B on Friday nights; Class A completed with 13 students and Class B with 16. Notably, one group had been overprotected through most of the classes leading up to the class in question, and their attitudes during this class reflected those earlier attitudes, while the other group consisted of first-term students. The first-term students were told up front what was expected of them, and little to no tolerance was given for late work submission (a rule also applied to the previously overprotected group).
(1) The predicted average test score is 395.85 and the predicted change in average test score is a decrease of 23.28 points based on the regression.
(2) Using data from a sample of 200 individuals, the regression equation predicts weights based on heights of 70, 65, and 74 inches.
(3) Converting the regression to use centimeters and kilograms, the coefficients are -0.092 and 0.7036 kg/cm with the same R-squared value but a standard error of 4.6267 kg.
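Point (3) can be checked directly: under a change of units, an OLS slope rescales by the ratio of the unit-conversion factors and R2 is unchanged. A sketch, where the lb/in inputs are back-calculated assumptions consistent with the quoted kg/cm coefficients (the original lb/in values are not shown above):

```python
# How OLS coefficients rescale under a change of units: if weight (lb) and
# height (in) become kg and cm, the slope scales by (kg per lb)/(cm per in)
# and the intercept by (kg per lb); R2 is unit-free and unchanged.
LB_TO_KG = 0.453592
IN_TO_CM = 2.54

def convert_lb_in_to_kg_cm(intercept_lb, slope_lb_per_in):
    return intercept_lb * LB_TO_KG, slope_lb_per_in * LB_TO_KG / IN_TO_CM

# Assumed illustrative inputs: -0.203 lb and 3.94 lb/in are back-calculated
# to be consistent with the -0.092 and 0.7036 kg/cm quoted above.
b0_kg, b1_kg_cm = convert_lb_in_to_kg_cm(-0.203, 3.94)
print(round(b0_kg, 3), round(b1_kg_cm, 4))
```

Only the standard error changes its printed value, because it carries the units of the response.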
Kano GIS Day 2014 - The Application of Multivariate Geostatistical analyses i... - eHealth Africa
We are excited to be holding our own GIS Day event on November 19th, 2014!
GIS Day is a global grassroots educational event that enables Geographic Information Systems (GIS) users and vendors to showcase real-world applications of GIS to schools, businesses, and the general public. Organizations that utilize GIS around the world participate by holding or sponsoring an event of their own.
The first formal GIS Day took place in 1999. In 2005, more than 700 GIS Day events were held in 74 countries around the globe. Esri president and co-founder Jack Dangermond credits Ralph Nader with inspiring the creation of GIS Day. He saw GIS Day as providing an opportunity for the world to learn about the uses of GIS in mapping geography, and what that mapping technology could provide. He wanted GIS Day to be a grassroots effort and open to everyone to participate.
Recognizing the power that GIS technology could provide for healthcare, eHealth Africa as an NGO organization stepped to the forefront of using GIS applications to track polio in Nigeria. Using GIS technology, eHealth is able to map out areas previously unreached during immunization campaigns. Once the area is mapped, much-needed polio vaccinations are able to be distributed and the polio epidemic is brought another step closer to being controlled and eliminated.
The theme of GIS Day is “Discovering the world through GIS.” GIS Day provides an international forum for users of GIS technology to demonstrate real-world applications that are making a difference in our society and around the world.
We are excited to take part in GIS Day 2014 on November 19th. We look forward to joining with our community partners in discussing GIS usage, and to take a close look at the exciting contributions GIS provides around our world.
The document covers standard deviation as a measure of dispersion, defining it as the positive square root of the arithmetic mean of the squared deviations
Exploring Support Vector Regression - Signals and Systems Project - Surya Chandra
Our team competed in a Kaggle competition to predict the bike share use as a part of their capital bike share program in Washington DC using a powerful function approximation technique called support vector regression.
This document summarizes an analysis of using Support Vector Regression (SVR) to predict bike rental data from a bike sharing program in Washington D.C. It begins with an introduction to SVR and the bike rental prediction competition. It then shows that linear regression performs poorly on this non-linear problem. The document explains how SVR maps data into higher dimensions using kernel functions to allow for non-linear fits. It concludes by outlining the derivation of the SVR method using kernel functions to simplify calculations for the regression.
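A toy illustration of the kernel point the summary makes (assuming scikit-learn; this is not the competition code): an RBF-kernel SVR can fit a curve on which a straight line is useless.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

# A symmetric curve: the best straight line is flat, so linear R2 is ~0,
# while an RBF-kernel SVR can follow the bend.
X = np.linspace(0.0, 6.0, 80).reshape(-1, 1)
y = (X.ravel() - 3.0) ** 2

linear_r2 = LinearRegression().fit(X, y).score(X, y)
svr_r2 = SVR(kernel="rbf", C=100.0).fit(X, y).score(X, y)
print(f"linear R2 = {linear_r2:.3f}, RBF-SVR R2 = {svr_r2:.3f}")
```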
This document discusses multiple regression analysis. It begins by introducing multiple regression as an extension of simple linear regression that allows for modeling relationships between a response variable and multiple explanatory variables. It then covers topics such as examining variable distributions, building regression models, estimating model parameters, and assessing overall model fit and significance of individual predictors. An example demonstrates using multiple regression to build a model for predicting cable television subscribers based on advertising rates, station power, number of local families, and number of competing stations.
Instructions - View CAAE Stormwater video Too Big for Our Ditches.docx - dirkrplav
Instructions:
View CAAE Stormwater video "Too Big for Our Ditches"
http://www.ncsu.edu/wq/videos/stormwater%20video/SWvideo.html
Explain how impermeable surfaces in the urban environment impact the stream network in a river basin. Why is watershed management an important consideration in urban planning? Upload your essay (200-400 words).
Neal.LarryBUS457A7.docx
Question 1
Problem:
The relationship between age, Y, and systolic blood pressure (SBP) is uncertain.
Goal:
To establish the relationship between age, Y, as a function of systolic blood pressure (SBP).
Finding/Conclusion:
Based on the available data, the relationship is obtained and shown below:
Regression Analysis: Age versus SBP
Analysis of Variance
Source DF Adj SS Adj MS F-Value P-Value
Regression 1 2933 2933.1 21.33 0.000
SBP 1 2933 2933.1 21.33 0.000
Error 28 3850 137.5
Lack-of-Fit 21 2849 135.7 0.95 0.575
Pure Error 7 1002 143.1
Total 29 6783
Model Summary
S R-sq R-sq(adj) R-sq(pred)
11.7265 43.24% 41.21% 3.85%
Coefficients
Term Coef SE Coef T-Value P-Value VIF
Constant -18.3 13.9 -1.32 0.198
SBP 0.4454 0.0964 4.62 0.000 1.00
Regression Equation
Age = -18.3 + 0.4454 SBP
An outlier is found in the dataset that significantly affects the regression equation. As a result, the outlier is removed and the regression analysis is run again.
Regression Analysis: Age versus SBP
Analysis of Variance
Source DF Adj SS Adj MS F-Value P-Value
Regression 1 4828.5 4828.47 66.81 0.000
SBP 1 4828.5 4828.47 66.81 0.000
Error 27 1951.4 72.27
Lack-of-Fit 20 949.9 47.49 0.33 0.975
Pure Error 7 1001.5 143.07
Total 28 6779.9
Model Summary
S R-sq R-sq(adj) R-sq(pred)
8.50139 71.22% 70.15% 66.89%
Coefficients
Term Coef SE Coef T-Value P-Value VIF
Constant -59.9 12.9 -4.63 0.000
SBP 0.7502 0.0918 8.17 0.000 1.00
Regression Equation
Age = -59.9 + 0.7502 SBP
The p-value for the model is 0.000, which implies that the model is significant in the prediction of Age. The R-square of the model is 70.2%, which implies that 70.2% of the variation in Age can be explained by the model.
Recommendation:
The regression model Age = -59.9 + 0.7502 SBP can be used to predict Age; over 70% of the variation in Age is explained by the model.
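For illustration, the recommended equation can be applied directly; this is just the fitted line from the second Minitab run above, not a new analysis:

```python
# Applying the recommended (outlier-removed) model from the Minitab run above.
def predict_age(sbp):
    return -59.9 + 0.7502 * sbp

print(round(predict_age(160), 1))  # a systolic BP of 160 maps to roughly 60 years
```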
Question 2
Problem:
It is unclear whether the factors X1 to X4, which represent four different success factors, have any influence on the annual savings resulting from CRM implementation.
Goal:
To determine which of the success factors are most significant in the prediction of a successful CRM program, and develop the corresponding model for the prediction of CRM savings.
Finding/Conclusion:
Based on the available da.
Similar to Reporting a multiple linear regression in APA
This document provides an overview of key concepts in hypothesis testing including:
- The null and alternative hypotheses, where the null hypothesis is what we aim to reject or fail to reject.
- The level of significance and critical region, which define the threshold for rejecting the null hypothesis.
- Type I and type II errors, where we aim to minimize both by choosing an appropriate significance level and critical region.
- Common test statistics like z, t, and chi-squared that are used to evaluate hypotheses based on samples.
- The process of hypothesis testing, which involves defining hypotheses, choosing a test statistic and significance level, and making a decision to reject or fail to reject the null based on the critical region.
This document introduces the concept of data classification and levels of measurement in statistics. It explains that data can be either qualitative or quantitative. Qualitative data consists of attributes and labels while quantitative data involves numerical measurements. The document also outlines the four levels of measurement - nominal, ordinal, interval, and ratio - from lowest to highest. Each level allows for different types of statistical calculations, with the ratio level permitting the most complex calculations like ratios of two values.
- A hypothesis is a tentative statement about the relationship between two or more variables that is tested through collecting sample data. The null hypothesis states there is no relationship and the alternative hypothesis proposes an alternative relationship.
- Type I error occurs when a true null hypothesis is rejected. Type II error is failing to reject a false null hypothesis. Choosing a significance level balances these two errors, with a higher level increasing Type I errors and a lower level increasing Type II errors.
- In medical testing, it is better to make a Type II error and accept a null hypothesis of no drug difference when there actually is a difference, to avoid releasing an ineffective drug. So a lower significance level that increases Type II errors would be chosen.
This document discusses analyzing research data through descriptive and analytical statistics. Descriptive statistics summarize variables one by one through measures like frequency, percentage, mean, median and standard deviation depending on the variable level. Analytical statistics examine relationships between two or more variables. The document demonstrates analyzing a hypertension study dataset in SPSS, including checking normality distribution through histograms, Shapiro-Wilk test and Q-Q plots to determine appropriate tests. Frequency is used to describe categorical gender variable while numerical age is described through mean, standard deviation and histogram with normal curve fitting.
This document provides guidance on writing and reporting clinical case studies. It discusses the key components of a clinical case study such as structure, data collection, variables, and analytical tools. Clinical case studies should analyze a real patient situation to identify problems, suggest solutions, and recommend the best solution. The document also differentiates between a clinical case study and clinical case report, noting that reports are shorter summaries of an individual patient case. It emphasizes writing for the target journal and audience when composing a case study.
The document discusses reporting the results of a split-plot ANOVA in APA style. It provides an example results section that reports the main effects of gender and time as significant but the interaction effect as not significant. It then breaks down each part of the example, explaining what each value represents, such as the F-ratio, degrees of freedom, mean square error, and p-values.
The document provides instructions for conducting an independent samples t-test in SPSS. It explains how to specify the grouping and test variables, define the groups being compared, and set options. It also demonstrates running a t-test to compare mile times between athletes and non-athletes using sample data, and interpreting the output, which includes Levene's test for equal variances and the t-test results.
The document provides instructions for conducting an independent samples t-test in SPSS. It explains how to specify the grouping and test variables, define the groups being compared, and set options. It also demonstrates running a t-test to compare mile times between athletes and non-athletes, checking assumptions, and interpreting the output, including Levene's test for equal variances and the t-test results.
The document describes how to conduct and interpret a paired samples t-test in SPSS. It explains that a paired samples t-test is used to compare the means of two related variables measured on the same subjects. It provides an example using reaction time data collected from participants before and after drinking a beer. It outlines the steps to check assumptions, run the t-test in SPSS, and interpret the output, finding that participants had significantly slower reaction times after consuming alcohol.
Reporting a multiple linear regression in APA - Amit Sharma
A multiple linear regression was calculated to predict weight based on height and sex. The regression equation was significant and height and sex were significant predictors of weight, explaining 99.3% of the variance. Participants' predicted weight is equal to 47.138 - 39.133 (sex) + 2.101 (height), where height is measured in inches and sex is coded as 0 for female and 1 for male.
Reporting a single sample t-test revised - Amit Sharma
The document provides instructions for reporting the results of a single sample t-test in APA format. It includes an example result comparing the mean IQ scores of persons who eat broccoli regularly (M=120, SD=12.2) to the general population. The t-test found a statistically significant difference between the samples, t(22)=7.86, p=0.000.
Null hypothesis for single linear regression - Amit Sharma
The document discusses the null hypothesis for a single linear regression model. It explains that a null hypothesis states that there is no effect or relationship between the independent and dependent variables. For a regression predicting ACT scores from hours of sleep, the null hypothesis would be: "There will be no significant prediction of ACT scores by hours of sleep." The document provides a template for writing the null hypothesis and works through an example applying the template to the relationship between hours of sleep and ACT scores.
Reporting a multiple linear regression in APA
1. Reporting a Multiple Linear
Regression in APA Format
Amit Sharma
Associate Professor
Dept. of Pharmacy Practice
ISF COLLEGE OF PHARMACY
Ghal Kalan, Ferozpur GT Road, MOGA, 142001, Punjab
Mobile: 09646755140, 09418783145
Phone No.: 01636-650150, 650151
Website: - www.isfcp.org
2. Note – the examples in this presentation come from:
Cronk, B. C. (2012). How to Use SPSS Statistics: A Step-by-Step Guide to Analysis and Interpretation. Pyrczak Pub.
5. DV = Dependent Variable
IV = Independent Variable
A multiple linear regression was calculated to predict
[DV] based on [IV1] and [IV2]. A significant regression
equation was found (F(_,__) = ___.___, p < .___), with
an R2 of .___. Participants’ predicted [DV] is equal to
__.___ – __.___ (IV1) + _.___ (IV2), where [IV1] is coded
or measured as _____________, and [IV2] is coded or
measured as __________. Object of measurement
increased _.__ [DV unit of measure] for each [IV1 unit
of measure] and _.__ for each [IV2 unit of measure].
Both [IV1] and [IV2] were significant predictors of [DV].
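The fill-in-the-blank template above can even be scripted. A hedged sketch (the function name and argument list are mine, not part of the deck); note that APA style reports vanishingly small p-values as p < .001, whereas the slides later copy SPSS's ".000" verbatim:

```python
# Hypothetical helper (names are mine, not the deck's) that fills the APA
# template from regression output. APA reports tiny p-values as p < .001,
# not the "p < .000" that later slides copy verbatim from SPSS.
def apa_mlr_sentence(dv, iv1, iv2, df1, df2, f, r2, b0, b1, b2):
    r2_txt = f"{r2:.3f}".lstrip("0")                 # APA drops the leading zero
    sign = "-" if b2 < 0 else "+"
    eq = f"{b0:.3f} + {b1:.3f}({iv1}) {sign} {abs(b2):.3f}({iv2})"
    return (f"A multiple linear regression was calculated to predict {dv} "
            f"based on {iv1} and {iv2}. A significant regression equation was "
            f"found (F({df1},{df2}) = {f:.3f}, p < .001), with an R2 of {r2_txt}. "
            f"Participants' predicted {dv} is equal to {eq}.")

print(apa_mlr_sentence("weight", "height", "sex",
                       2, 13, 981.202, 0.993, 47.138, 2.101, -39.133))
```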
6. Wow, that’s a lot. Let’s break it down using the
following example:
7. Wow, that’s a lot. Let’s break it down using the
following example:
You have been asked to investigate the degree to which
height and sex predict weight.
8. Wow, that’s a lot. Let’s break it down using the
following example:
You have been asked to investigate the degree to which
height and sex predict weight.
9. Wow, that’s a lot. Let’s break it down using the
following example:
You have been asked to investigate the degree to which
height and sex predict weight.
10. Wow, that’s a lot. Let’s break it down using the
following example:
You have been asked to investigate the degree to which
height and sex predict weight.
12. A multiple linear regression was calculated to predict
[DV] based on their [IV1] and [IV2].
13. A multiple linear regression was calculated to predict
[DV] based on their [IV1] and [IV2].
You have been asked to investigate the degree to which
height and sex predict weight.
14. A multiple linear regression was calculated to predict
weight based on their [IV1] and [IV2].
You have been asked to investigate the degree to which
height and sex predict weight.
15. A multiple linear regression was calculated to predict
weight based on their height and [IV2].
You have been asked to investigate the degree to which
height and sex predict weight.
16. A multiple linear regression was calculated to predict
weight based on their height and sex.
You have been asked to investigate the degree to which
height and sex predict weight.
18. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(_,__) = __.___, p < .___), with an R2 of .____.
19. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(_,__) = ___.___, p < .___), with an R2 of .___.
20. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(_,__) = ___.___, p < .___), with an R2 of .___.
Here’s the output:
21. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(_,__) = ___.___, p < .___), with an R2 of .___.
Model Summary
Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
1       .997a   .993       .992                2.29571

ANOVAa
Model           Sum of Squares   df   Mean Square   F         Sig.
1  Regression   10342.424         2   5171.212      981.202   .000a
   Residual        68.514        13      5.270
   Total        10410.938        15

Coefficientsa
Model            B         Std. Error   Beta    t         Sig.
1  (Constant)    47.138    14.843               3.176     .007
   Height         2.101      .198       .312    10.588    .000
   Sex          -39.133     1.501      -.767   -25.071    .000
22. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(2,__) = ___.___, p < .___), with an R2 of .___.
23. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(2, 13) = ___.___, p < .___), with an R2 of .___.
24. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .___), with an R2 of .___.
25. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .000), with an R2 of .___.
26. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .000), with an R2 of .993.
27. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .000), with an R2 of .993.
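Both numbers just filled in can be recovered from the ANOVA table itself: F is the ratio of the regression and residual mean squares, and R2 is the regression share of the total sum of squares. A quick check:

```python
# Recomputing the filled-in values from the ANOVA table above.
ss_reg, ss_res, ss_tot = 10342.424, 68.514, 10410.938
df_reg, df_res = 2, 13

f_stat = (ss_reg / df_reg) / (ss_res / df_res)   # ratio of mean squares
r2 = ss_reg / ss_tot                             # share of total variance
print(f"F = {f_stat:.1f}, R2 = {r2:.3f}")
```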
Now for the next part of the template:
28. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .000), with an R2 of .993.
Participants’ predicted [DV] is equal to __.___ + __.___ (IV2) +
_.___ (IV1), where [IV2] is coded or measured as _____________,
and [IV1] is coded or measured __________.
Independent Variable1: Height
Independent Variable2: Sex
Dependent Variable: Weight
31. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to __.___ + __.___ (IV2) + _.___ (IV1), where [IV2] is coded or measured as _____________, and [IV1] is coded or measured __________.
32. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 + __.___ (IV2) + _.___ (IV1), where [IV2] is coded or measured as _____________, and [IV1] is coded or measured __________.
33. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (IV2) + _.___ (IV1), where [IV2] is coded or measured as _____________, and [IV1] is coded or measured __________.
34. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + _.___ (IV1), where [IV2] is coded or measured as _____________, and [IV1] is coded or measured __________.
35. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (IV1), where [IV2] is coded or measured as _____________, and [IV1] is coded or measured __________.
36. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where [IV2] is coded or measured as _____________, and [IV1] is coded or measured __________.
37. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded or measured as _____________, and [IV1] is coded or measured __________.
38. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and [IV1] is coded or measured __________.
39. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and height is coded or measured __________.
40. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and height is measured in inches.
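With the equation fully filled in, the prediction statement can be checked by plugging values into it. A minimal sketch (the 70-inch male is an illustrative input, not a case from the slides):

```python
def predicted_weight(height_in, sex):
    """Predicted weight in pounds from the slides' fitted equation.

    height_in: height in inches; sex: 1 = Male, 2 = Female (the slides' coding).
    """
    return 47.138 - 39.133 * sex + 2.101 * height_in

# e.g. a 70-inch male: 47.138 - 39.133 + 2.101 * 70 ≈ 155.1 pounds
print(round(predicted_weight(70, 1), 1))
```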
42. Now for the second to last portion of the template:
43. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and height is measured in inches.
44. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and height is measured in inches. Object of measurement increased _.__ [DV unit of measure] for each [IV1 unit of measure] and _.__ for each [IV2 unit of measure].
46. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and height is measured in inches. Participants’ weight increased _.__ [DV unit of measure] for each [IV1 unit of measure] and _.__ for each [IV2 unit of measure].
47. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and height is measured in inches. Participants’ weight increased 2.101 [DV unit of measure] for each [IV1 unit of measure] and _.__ for each [IV2 unit of measure].
48. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and height is measured in inches. Participants’ weight increased 2.101 pounds for each [IV1 unit of measure] and _.__ for each [IV2 unit of measure].
49. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and height is measured in inches. Participants’ weight increased 2.101 pounds for each inch of height and _.__ for each [IV2 unit of measure].
50. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and height is measured in inches. Participants’ weight increased 2.101 pounds for each inch of height and males weighed 39.133 pounds more than females.
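Both interpretation sentences follow directly from the coefficients: with sex held fixed, each additional inch changes the prediction by the height coefficient, and with height held fixed, moving from female (2) to male (1) raises it by 39.133. A quick sketch of that check:

```python
def predicted_weight(height_in, sex):
    # The slides' fitted equation; sex coded 1 = Male, 2 = Female.
    return 47.138 - 39.133 * sex + 2.101 * height_in

# One extra inch of height, sex held fixed -> +2.101 pounds:
print(predicted_weight(69, 1) - predicted_weight(68, 1))   # ~2.101
# Same height, male (1) vs female (2) -> males predicted 39.133 pounds heavier:
print(predicted_weight(68, 1) - predicted_weight(68, 2))   # ~39.133
```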
53. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and height is measured in inches. Participants’ weight increased 2.101 pounds for each inch of height and males weighed 39.133 pounds more than females. Both [IV1] and [IV2] were significant predictors of [DV].
55. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and height is measured in inches. Participants’ weight increased 2.101 pounds for each inch of height and males weighed 39.133 pounds more than females. Both height and [IV2] were significant predictors of [DV].
56. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and height is measured in inches. Participants’ weight increased 2.101 pounds for each inch of height and males weighed 39.133 pounds more than females. Both height and sex were significant predictors of [DV].
58. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and height is measured in inches. Participants’ weight increased 2.101 pounds for each inch of height and males weighed 39.133 pounds more than females. Both height and sex were significant predictors of weight.
60. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and height is measured in inches. Object of measurement increased 2.101 pounds for each inch of height and males weighed 39.133 pounds more than females. Both height and sex were significant predictors.
63. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and height is measured in inches. Participants’ weight increased 2.101 pounds for each inch of height and males weighed 39.133 pounds more than females. Both height and sex were significant predictors.
64. A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2, 13) = 981.202, p < .001), with an R2 of .993. Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and height is measured in inches. Participants’ weight increased 2.101 pounds for each inch of height and males weighed 39.133 pounds more than females. Both height and sex were significant predictors of weight.
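To produce every number this template asks for from your own raw data, fit the regression and derive the same quantities the SPSS tables report. A self-contained NumPy sketch on made-up illustrative data (not the dataset behind these slides), using the same 1 = Male, 2 = Female coding:

```python
import numpy as np

# Made-up illustrative data, NOT the slides' dataset.
height = np.array([62, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74], dtype=float)
sex = np.array([1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2], dtype=float)  # 1=Male, 2=Female
weight = (47.0 - 39.0 * sex + 2.1 * height
          + np.array([1.2, -0.8, 0.5, -1.1, 0.9, -0.3, 0.7, -1.4, 0.2, 1.0, -0.6, -0.3]))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones_like(height), height, sex])
coef, *_ = np.linalg.lstsq(X, weight, rcond=None)
intercept, b_height, b_sex = coef

# The quantities the Model Summary and ANOVA tables report.
fitted = X @ coef
ss_residual = float(np.sum((weight - fitted) ** 2))
ss_total = float(np.sum((weight - weight.mean()) ** 2))
ss_regression = ss_total - ss_residual
df_regression = 2                 # number of predictors
df_residual = len(weight) - 3     # n - predictors - 1
f_statistic = (ss_regression / df_regression) / (ss_residual / df_residual)
r_squared = ss_regression / ss_total

# These fill the template: "F(2, 9) = ..., with an R2 of ..."
print(intercept, b_height, b_sex)
print(round(f_statistic, 3), round(r_squared, 3))
```

The p-value for the F test would come from an F distribution with (df_regression, df_residual) degrees of freedom, e.g. via scipy.stats.f.sf(f_statistic, 2, 9), and would then be reported per the template as p < .001 when appropriate.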