A multiple linear regression was calculated to predict weight based on height and sex. The regression equation was significant, and both height and sex were significant predictors of weight, together explaining 99.3% of the variance. Participants' predicted weight is equal to 47.138 + 2.101(height) - 39.133(sex), where height is measured in inches and sex is coded as 0 for male and 1 for female.
This document presents a hybrid framework for facial expression recognition that uses SVD, PCA, and SURF. It extracts features using PCA with SVD, classifies expressions with an SVM classifier, and performs emotion detection with regression and SURF features. The framework achieves 98.79% accuracy and 67.79% average recognition on a database of 50 images with 5 expressions. It provides a concise facial expression recognition system using a combination of dimensionality reduction, classification, and feature detection techniques.
This document discusses using machine learning models to classify breast cancer tumors as benign or malignant based on cell nucleus characteristics, without a biopsy. It summarizes loading and preprocessing the Wisconsin Breast Cancer dataset, performing exploratory data analysis to identify important features, engineering features, training classifiers (including an SVC), and evaluating the models. SHAP and permutation feature importance analysis identified concave-point characteristics as most important for classification. The top-performing SVC classifier achieved over 99% accuracy, allowing diagnosis without biopsy. Future work could apply these methods to other cancers where biopsy is difficult.
Using Inductive or Deductive Reasoning in an Argument discusses inductive and deductive reasoning. It provides examples to distinguish between the two types of reasoning. Students are given activities where they must identify whether each example uses inductive or deductive reasoning. The document seeks to help students understand and differentiate between inductive and deductive reasoning when evaluating arguments.
This document defines and provides examples of arithmetic sequences. An arithmetic sequence is a sequence where each term after the first is obtained by adding a constant value, called the common difference, to the preceding term. The document provides the arithmetic sequence formula and examples of using it to find terms and sums of arithmetic sequences. Various activities are presented for students to practice identifying, describing, and working with arithmetic sequences.
This document explains the distance formula and how to use it to calculate the distance between points on a Cartesian plane. It shows how the Pythagorean theorem yields the distance formula: square the difference between the x-coordinates, add the square of the difference between the y-coordinates, and take the square root of the result. It then works through an example of applying the distance formula to the points (3,2) and (8,7), giving a distance of √50 = 5√2 ≈ 7.07 units. The document concludes with practice problems and assignments applying the distance formula.
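As a quick check of the derivation described above, here is a minimal sketch of the distance formula in Python (the function name is illustrative, not from the source document):

```python
import math

def distance(p, q):
    """Euclidean distance between points p and q on the Cartesian plane:
    d = sqrt((x2 - x1)**2 + (y2 - y1)**2)."""
    return math.sqrt((q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)

# The points quoted in the summary, (3, 2) and (8, 7):
print(distance((3, 2), (8, 7)))  # sqrt(50) = 5*sqrt(2) ≈ 7.0711
```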
This document summarizes a presentation on detecting digital image forgery using salient keypoints. It introduces common types of image forgery and clues that reveal forgery. A framework is proposed that selects salient keypoints using distinctiveness, detectability, and repeatability to reduce keypoints and detect copy-move forgery. The approach uses SIFT and KAZE features and achieves promising results on standard datasets, outperforming other methods with lower false positive rates and higher precision and F1 scores. Future work could detect other forgery types and develop more robust detection algorithms.
The document asks if the reader has ever given directions to tourists asking about places or landmarks in their barangay or province. It asks if the reader was able to provide the correct directions and distances. The document suggests that the next time someone asks for the same information, the reader should be able to provide the right details.
An arithmetic sequence is a sequence of numbers where the difference between consecutive terms is constant. The formula for the nth term of an arithmetic sequence is an = a1 + (n-1)d, where a1 is the first term, n is the term number, and d is the common difference. This document provides examples of using the arithmetic sequence formula to find terms, common differences, and formulas for arithmetic sequences given various terms or conditions.
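The nth-term formula quoted above can be sketched directly in code (the example sequence is illustrative):

```python
def nth_term(a1, d, n):
    """nth term of an arithmetic sequence: a_n = a1 + (n - 1) * d."""
    return a1 + (n - 1) * d

# Example: 2, 5, 8, 11, ... (first term 2, common difference 3)
print(nth_term(2, 3, 4))  # 11
```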
This document discusses multivariate analysis and the relationship between smoking and lung cancer. It provides several key studies that established this relationship:
- A 1950 case-control study that associated lung cancer with smoking.
- An 1898 study finding elevated rates of lung tumors in tobacco workers exposed to tobacco dust.
- Later studies in the 1930s-1950s further strengthened the relationship by showing higher rates of lung cancer in heavy smokers.
FACE EXPRESSION RECOGNITION USING CONVOLUTION NEURAL NETWORK (CNN) MODELS (ijgca)
This paper proposes the design of a Facial Expression Recognition (FER) system based on deep convolutional neural networks using three models. In this work, a simple solution for facial expression recognition that combines algorithms for face detection, feature extraction, and classification is discussed. The proposed method uses CNN models with an SVM classifier and evaluates them; the models are AlexNet, VGG-16, and ResNet. Experiments are carried out on the Extended Cohn-Kanade (CK+) dataset to determine the recognition accuracy of the proposed FER system. The study compares the accuracy of the AlexNet model with the VGG-16 and ResNet models; the results show that AlexNet achieved the best accuracy (88.2%) compared to the other models.
Polynomial Function and Synthetic Division (AleczQ1414)
This file is about Polynomial Function and Synthetic Division. A project passed to Mrs. Marissa De Ocampo. Submitted by Group 6 of Grade 10-Galilei of Caloocan National Science and Technology High School '15-'16
This document discusses arithmetic and geometric sequences. An arithmetic sequence is one where the difference between consecutive terms is constant, while a geometric sequence is one where the ratio between consecutive terms is constant. Formulas are provided to calculate individual terms and the sum of terms for both arithmetic and geometric sequences. Examples are worked through demonstrating how to identify sequences and apply the formulas to problems involving cash prizes, savings accounts, and other real-world scenarios.
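The term-sum formulas for the two sequence types mentioned above can be sketched as follows (example values are illustrative):

```python
def arithmetic_sum(a1, d, n):
    """Sum of the first n terms of an arithmetic sequence:
    S_n = n/2 * (2*a1 + (n - 1)*d)."""
    return n * (2 * a1 + (n - 1) * d) / 2

def geometric_sum(a1, r, n):
    """Sum of the first n terms of a geometric sequence (r != 1):
    S_n = a1 * (1 - r**n) / (1 - r)."""
    return a1 * (1 - r ** n) / (1 - r)

print(arithmetic_sum(1, 1, 100))  # 5050.0 (the classic 1 + 2 + ... + 100)
print(geometric_sum(1, 2, 10))    # 1023.0 (1 + 2 + 4 + ... + 512)
```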
Attendance system based on face recognition using python by Raihan Sikdar (raihansikdar)
The document discusses face recognition technology for use in an automatic attendance system. It first defines biometrics and face recognition, explaining that face recognition identifies individuals using facial features. It then covers how face recognition systems work by detecting nodal points on faces to create unique face prints. The document proposes using such a system to take student attendance in online classes during the pandemic, noting advantages like ease of use, increased security, and cost effectiveness. It provides examples of how the system would capture images, analyze features, and recognize enrolled students to record attendance automatically.
Face recognition is a biometric technology that goes beyond just detecting human faces in an image or video. It goes a bit further to determine whose face it is. A face recognition system works by taking an image of a face and predicting whether the face matches another face stored in a dataset (or whether a face in one image matches a face in another). Created By Suman Ahemed Saikan
This document summarizes a student project to design software that can detect human faces in images. The project's objectives are outlined, including converting images to grayscale and using a Haar cascade classifier to detect faces. Implementation examples like Picasa and Facebook are provided. The procedure involves preprocessing the image, converting it to grayscale, loading face properties, and applying a detection algorithm to find faces. Limitations around orientation are noted, with plans to expand capabilities.
Reporting a multiple linear regression in APA (Ken Plummer)
A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2,13)=981.202, p<.000), with an R2 of .993. Participants' predicted weight is equal to 47.138 + 2.101(height) - 39.133(sex), where height is measured in inches and sex is coded as 0 for male and 1 for female. Both height and sex were significant predictors of weight.
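Plugging the reported coefficients into code makes the equation easy to sanity-check (the height value below is illustrative, not from the source):

```python
def predicted_weight(height_in, sex):
    """Predicted weight (lb) from the reported equation:
    weight = 47.138 + 2.101*height - 39.133*sex,
    with height in inches and sex coded 0 = male, 1 = female."""
    return 47.138 + 2.101 * height_in - 39.133 * sex

print(round(predicted_weight(70, 0), 3))  # 194.208 lb for a 70-inch male
```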
Reporting a single linear regression in APA (Ken Plummer)
The document provides a template for reporting the results of a simple linear regression analysis in APA format. It explains that a linear regression was conducted to predict weight based on height. The regression equation was found to be significant, F(1,14)=25.925, p<.000, with an R2 of .649. The predicted weight is equal to -234.681 + 5.434 (height in inches) pounds.
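The simple-regression equation reported above can likewise be sanity-checked in code (the height value is illustrative):

```python
def predicted_weight_simple(height_in):
    """Predicted weight (lb) from the reported simple regression:
    weight = -234.681 + 5.434 * height (height in inches)."""
    return -234.681 + 5.434 * height_in

print(round(predicted_weight_simple(70), 3))  # 145.699 lb
```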
In preparation for the Geodetic Engineering Licensure Examination, BSGE students must memorize the fastest possible solutions for LEAST SQUARES ADJUSTMENT using Casio fx-991ES Plus calculator techniques in order to save time during the examination. Note: for Lecture 2 and above I did not include solutions, so that my techniques cannot be copied; just add me on Facebook so I can teach you the solutions, because these solutions are not found on Google, YouTube, or calculator-technique books, and are not taught in review centers either.
Week 7 - Linear Regression Exercises SPSS Output Simple.docx (cockekeshia)
Week 7 - Linear Regression Exercises SPSS Output

Simple Linear Regression SPSS Output

Descriptive Statistics
  Variable                                 Mean       Std. Deviation   N
  Family income prior month, all sources   $1,485.49  $950.496         378
  Hours worked per week in current job     33.52      12.359           378

Correlations
  (Income = family income prior month, all sources; Hours = hours worked per week in current job)
                               Income   Hours
  Pearson Correlation  Income  1.000    .300
                       Hours   .300     1.000
  Sig. (1-tailed)      Income  .        .000
                       Hours   .000     .
  N                    Income  378      378
                       Hours   378      378

Model Summary
  Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
  1       .300a   .090       .088                $907.877
  a. Predictors: (Constant), Hours worked per week in current job

ANOVAb
  Model           Sum of Squares   df    Mean Square   F        Sig.
  1   Regression  3.068E7          1     3.068E7       37.226   .000a
      Residual    3.099E8          376   824241.002
      Total       3.406E8          377
  a. Predictors: (Constant), Hours worked per week in current job
  b. Dependent Variable: Family income prior month, all sources

Coefficientsa
  Model                                     B         Std. Error   Beta   t       Sig.   95.0% CI for B (Lower, Upper)
  1   (Constant)                            711.651   135.155             5.265   .000   (445.896, 977.405)
      Hours worked per week in current job  23.083    3.783        .300   6.101   .000   (15.644, 30.523)
  a. Dependent Variable: Family income prior month, all sources
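The fitted equation from the Coefficients table above, income = 711.651 + 23.083 * hours, can be sanity-checked against the descriptive statistics: evaluated at the sample mean of 33.52 hours, the prediction should land near the mean income of $1,485.49.

```python
def predicted_income(hours_per_week):
    """Predicted family income from the Coefficients table:
    income = 711.651 + 23.083 * (hours worked per week)."""
    return 711.651 + 23.083 * hours_per_week

print(round(predicted_income(33.52), 2))  # 1485.39, close to the sample mean
```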
Part II: Multiple Regression SPSS Output

This part begins with an example that has been interpreted for you. Analyze the output provided and read the interpretation of the data so that you will have an understanding of what you will do for the multiple regression assignment.

Descriptive Statistics
  Variable                Mean      Std. Deviation   N
  CES-D Score             18.5231   11.90747         156
  CESD Score, Wave 1      17.6987   11.40935         156
  Number types of abuse   .83       1.203            156

Correlations
                                               CES-D Score   CESD Score, Wave 1   Number types of abuse
  Pearson Correlation  CES-D Score             1.000         .412                 .347
                       CESD Score, Wave 1      .412          1.000                .187
                       Number types of abuse   .347          .187                 1.000
  Sig. (1-tailed)      CES-D Score             .             .000                 .000
                       CESD Score, Wave 1      .000          .                    .010
                       Number types of abuse   .000          .010                 .
  N                    all variables           156           156                  156

Model Summary
  Model   R       R Square   Adjusted R Square   Std. Error of the Estimate   R Square Change   F Change   df1   df2   Sig. F Change
  1       .412a   .170       .164                10.88446                     .170              31.506     1     154   .000
  2       .496b   .246       .236                10.41016                     .076              15.352     1     153   .000
  a. Predictors: (Constant), CESD Score, Wave 1
  b. Predictors: (Constant), CESD Score, Wave 1, Number types of abuse

ANOVAc
  Model           Sum of Squares   df    Mean Square   F        Sig.
  1   Regression  3732.507         1     3732.507      31.506   .000a
      Residual    18244.613        154   118.472
      Total       21977.120        155
Bba 3274 qm week 6 part 1 regression models (Stephen Ong)
This document provides an overview and outline of regression models and forecasting techniques. It discusses simple and multiple linear regression analysis, how to measure the fit of regression models, assumptions of regression models, and testing models for significance. The goals are to help students understand relationships between variables, predict variable values, develop regression equations from sample data, and properly apply and interpret regression analysis.
This document provides instructions for performing multiple regression analysis in SPSS. It demonstrates entering variables, running the regression using the enter, stepwise, and backward methods, and interpreting the output including R-square values, F-tests, beta coefficients, and equations for predicting the dependent variable based on the independent variables. Age and education were identified as the best predictors of months of full-time employment using both the stepwise and backward regression methods.
This chapter discusses regression models, including simple and multiple linear regression. It covers developing regression equations from sample data, measuring the fit of regression models, and assumptions of regression analysis. Key aspects covered include using scatter plots to examine relationships between variables, calculating the slope, intercept, coefficient of determination, and correlation coefficient, and performing hypothesis tests to determine if regression models are statistically significant. The chapter objectives are to help students understand and appropriately apply simple, multiple, and nonlinear regression techniques.
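The slope, intercept, and correlation coefficient calculations described in the chapter can be sketched in a few lines (the function name and example data are illustrative):

```python
def fit_line(xs, ys):
    """Least-squares slope b1, intercept b0, and correlation r:
    b1 = Sxy / Sxx,  b0 = mean(y) - b1 * mean(x),  r = Sxy / sqrt(Sxx * Syy)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    b1 = sxy / sxx
    b0 = my - b1 * mx
    r = sxy / (sxx * syy) ** 0.5
    return b0, b1, r

# Perfectly linear data: y = 2x, so intercept 0, slope 2, correlation 1.
print(fit_line([1, 2, 3, 4], [2, 4, 6, 8]))  # (0.0, 2.0, 1.0)
```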
A note on estimation of population mean in sample survey using auxiliary info... (Alexander Decker)
1. The document proposes a class of estimators for estimating the population mean in two-phase sampling using auxiliary information.
2. Some common estimators like the ratio, product, and regression estimators are special cases within the proposed class. Expressions for bias and mean squared error of the estimators are obtained up to the first order of approximation.
3. Asymptotically optimum estimators are identified that have minimum mean squared error. The proposed class of estimators is found to perform better than usual ratio and other estimators for population mean estimation.
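A minimal simulation sketch of the classical ratio and regression estimators that the note treats as special cases of its proposed class. The synthetic population and variable names are illustrative assumptions, not from the paper; this is single-phase sampling for simplicity, whereas the paper works in a two-phase setting.

```python
import random

random.seed(0)

# Hypothetical population where the study variable y correlates with auxiliary x.
N = 10_000
x = [random.uniform(10, 50) for _ in range(N)]
y = [2.0 * xi + random.gauss(0, 5) for xi in x]
X_bar = sum(x) / N                              # known population mean of x

# Simple random sample of n = 200 units.
idx = random.sample(range(N), 200)
xs = [x[i] for i in idx]
ys = [y[i] for i in idx]
n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

# Ratio estimator of the population mean of y: y_bar * (X_bar / x_bar).
ratio_est = y_bar * X_bar / x_bar

# Regression estimator: y_bar + b * (X_bar - x_bar), b = sample slope of y on x.
b = (sum(xi * yi for xi, yi in zip(xs, ys)) - n * x_bar * y_bar) / \
    (sum(xi * xi for xi in xs) - n * x_bar ** 2)
regression_est = y_bar + b * (X_bar - x_bar)

print(ratio_est, regression_est)  # both should sit near the true mean of y
```

Because x explains most of the variation in y here, both estimators have a much smaller spread than the plain sample mean, which is the point the paper's mean-squared-error comparisons formalize.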
The document discusses factorial analysis of variance (ANOVA) and provides an example to illustrate the steps. It analyzes the flavor acceptability of luncheon meat from different sources. The null hypothesis is that there is no significant difference between the sources. The two-way ANOVA calculations show that the computed F-values are greater than the critical values, so the null hypothesis is rejected, indicating there are significant differences between the sources of luncheon meat.
The classes used in this study (Class A and Class B) were held at the same campus in Wichita, KS, from June through September 2010, taught by the same instructor. Class A met on Thursday nights and Class B on Friday nights; Class A finished with 13 students and Class B with 16. The most interesting thing about these two groups is that one had been overprotected through most of the classes leading up to the class in question, and its general attitudes during this class mirrored its attitudes beforehand, while the other group consisted of first-term students. The first-term students were told up front what was expected of them, and little to no tolerance was given for late work submission (a rule that was also applied to the previously overprotected group).
(1) The predicted average test score is 395.85 and the predicted change in average test score is a decrease of 23.28 points based on the regression.
(2) Using data from a sample of 200 individuals, the regression equation predicts weights based on heights of 70, 65, and 74 inches.
(3) Converting the regression to use centimeters and kilograms, the coefficients are -0.092 and 0.7036 kg/cm with the same R-squared value but a standard error of 4.6267 kg.
Kano GIS Day 2014 - The Application of Multivariate Geostatistical analyses i... (eHealth Africa)
We are excited to be holding our own GIS Day event on November 19th, 2014!
GIS Day is a global grassroots educational event that enables Geographic Information Systems (GIS) users and vendors to showcase real-world applications of GIS to schools, businesses, and the general public. Organizations that utilize GIS around the world participate by holding or sponsoring an event of their own.
The first formal GIS Day took place in 1999. In 2005, more than 700 GIS Day events were held in 74 countries around the globe. Esri president and co-founder Jack Dangermond credits Ralph Nader with inspiring the creation of GIS Day. He saw GIS Day as providing an opportunity for the world to learn about the uses of GIS in mapping geography, and what that mapping technology could provide. He wanted GIS Day to be a grassroots effort and open to everyone to participate.
Recognizing the power that GIS technology could provide for healthcare, eHealth Africa as an NGO organization stepped to the forefront of using GIS applications to track polio in Nigeria. Using GIS technology, eHealth is able to map out areas previously unreached during immunization campaigns. Once the area is mapped, much-needed polio vaccinations are able to be distributed and the polio epidemic is brought another step closer to being controlled and eliminated.
The theme of GIS Day is “Discovering the world through GIS.” GIS Day provides an international forum for users of GIS technology to demonstrate real-world applications that are making a difference in our society and around the world.
We are excited to take part in GIS Day 2014 on November 19th. We look forward to joining with our community partners in discussing GIS usage, and to take a close look at the exciting contributions GIS provides around our world.
This document discusses multivariate analysis and the relationship between smoking and lung cancer. It provides several key studies that established this relationship:
- A 1950 case-control study that associated lung cancer with smoking.
- A 1898 study finding elevated lung tumors in tobacco workers exposed to tobacco dust.
- Later studies in the 1930s-1950s further strengthened the relationship by showing higher rates of lung cancer in heavy smokers.
FACE EXPRESSION RECOGNITION USING CONVOLUTION NEURAL NETWORK (CNN) MODELS ijgca
This paper proposes the design of a Facial Expression Recognition (FER) system based on deep
convolutional neural network by using three model. In this work, a simple solution for facial expression
recognition that uses a combination of algorithms for face detection, feature extraction and classification
is discussed. The proposed method uses CNN models with SVM classifier and evaluates them, these models
are Alex-net model, VGG-16 model and Res-Net model. Experiments are carried out on the Extended
Cohn-Kanada (CK+) datasets to determine the recognition accuracy for the proposed FER system. In this
study the accuracy of AlexNet model compared with Vgg16 model and ResNet model. The result show that
AlexNet model achieved the best accuracy (88.2%) compared to other models.
Polynomial Function and Synthetic DivisionAleczQ1414
This file is about Polynomial Function and Synthetic Division. A project passed to Mrs. Marissa De Ocampo. Submitted by Group 6 of Grade 10-Galilei of Caloocan National Science and Technology High School '15-'16
This document discusses arithmetic and geometric sequences. An arithmetic sequence is one where the difference between consecutive terms is constant, while a geometric sequence is one where the ratio between consecutive terms is constant. Formulas are provided to calculate individual terms and the sum of terms for both arithmetic and geometric sequences. Examples are worked through demonstrating how to identify sequences and apply the formulas to problems involving cash prizes, savings accounts, and other real-world scenarios.
Attendance system based on face recognition using python by Raihan Sikdarraihansikdar
The document discusses face recognition technology for use in an automatic attendance system. It first defines biometrics and face recognition, explaining that face recognition identifies individuals using facial features. It then covers how face recognition systems work by detecting nodal points on faces to create unique face prints. The document proposes using such a system to take student attendance in online classes during the pandemic, noting advantages like ease of use, increased security, and cost effectiveness. It provides examples of how the system would capture images, analyze features, and recognize enrolled students to record attendance automatically.
Face recognition is a biometric technology that goes beyond just detecting human faces in an image or video. It goes a bit further to determine whose face it is. A face recognition system works by taking an image of a face and predicting whether the face matches another face stored in a dataset (or whether a face in one image matches a face in another). Created By Suman Ahemed Saikan
This document summarizes a student project to design software that can detect human faces in images. The project's objectives are outlined, including converting images to grayscale and using a Haar cascade classifier to detect faces. Implementation examples like Picasa and Facebook are provided. The procedure involves preprocessing the image, converting it to grayscale, loading face properties, and applying a detection algorithm to find faces. Limitations around orientation are noted, with plans to expand capabilities.
Reporting a multiple linear regression in apaKen Plummer
A multiple linear regression was calculated to predict weight based on height and sex. A significant regression equation was found (F(2,13)=981.202, p<.000), with an R2 of .993. Participants' predicted weight is equal to 47.138 + 2.101(height) - 39.133(sex), where height is measured in inches and sex is coded as 0 for male and 1 for female. Both height and sex were significant predictors of weight.
Reporting a single linear regression in apaKen Plummer
The document provides a template for reporting the results of a simple linear regression analysis in APA format. It explains that a linear regression was conducted to predict weight based on height. The regression equation was found to be significant, F(1,14)=25.925, p<.000, with an R2 of .649. The predicted weight is equal to -234.681 + 5.434 (height in inches) pounds.
In the preparation for the Geodetic Engineering Licensure Examination, the BSGE students must memorized the fastest possible solution for the LEAST SQUARES ADJUSTMENT using casio fx-991 es plus calculator technique in order to save time during the said examination. note: lec 2 and above wala akong nilagay na solution para hindi makupya techniques ko. just add me on fb para ituro ko sa inyo solution. Kasi itong solution ko wala sa google, youtube, calc tech books at hindi rin itinuro sa review center.
Week 7 - Linear Regression Exercises SPSS Output Simple.docxcockekeshia
Week 7 - Linear Regression Exercises SPSS Output
Simple Linear Regression SPSS Output
Descriptive Statistics
Mean Std. Deviation N
Family income prior month,
all sources
$1,485.49 $950.496 378
Hours worked per week in
current job
33.52 12.359 378
Correlations
Family income
prior month, all
sources
Hours worked
per week in
current job
Pearson Correlation Family income prior month,
all sources
1.000 .300
Hours worked per week in
current job
.300 1.000
Sig. (1-tailed) Family income prior month,
all sources
. .000
Hours worked per week in
current job
.000 .
N Family income prior month,
all sources
378 378
Hours worked per week in
current job
378 378
Model Summary
Model
R R Square
Adjusted R
Square
Std. Error of the
Estimate
1 .300a .090 .088 $907.877
a. Predictors: (Constant), Hours worked per week in current job
ANOVAb
Model Sum of Squares df Mean Square F Sig.
1 Regression 3.068E7 1 3.068E7 37.226 .000a
Residual 3.099E8 376 824241.002
Total 3.406E8 377
a. Predictors: (Constant), Hours worked per week in current job
b. Dependent Variable: Family income prior month, all sources
Coefficientsa
Model Unstandardized
Coefficients
Standardized
Coefficients
t Sig.
95.0% Confidence Interval
for B
B Std. Error Beta Lower Bound Upper Bound
1 (Constant) 711.651 135.155 5.265 .000 445.896 977.405
Hours worked per week
in current job
23.083 3.783 .300 6.101 .000 15.644 30.523
a. Dependent Variable: Family income prior month, all sources
Part II: Multiple Regression SPSS Output
This part is going to begin with an example that has been interpreted for you. Analyze the output
provided and read the interpretation of the data so that you will have an understanding of what you
will do for the multiple regression assignment.
Descriptive Statistics
Mean Std. Deviation N
CES-D Score 18.5231 11.90747 156
CESD Score, Wave 1 17.6987 11.40935 156
Number types of abuse .83 1.203 156
Correlations
CES-D Score
CESD Score,
Wave 1
Number types
of abuse
Pearson Correlation CES-D Score 1.000 .412 .347
CESD Score, Wave 1 .412 1.000 .187
Number types of abuse .347 .187 1.000
Sig. (1-tailed) CES-D Score . .000 .000
CESD Score, Wave 1 .000 . .010
Number types of abuse .000 .010 .
N CES-D Score 156 156 156
CESD Score, Wave 1 156 156 156
Number types of abuse 156 156 156
Model Summary
Model
R R Square
Adjusted R
Square
Std. Error of
the Estimate
Change Statistics
R Square
Change F Change df1 df2 Sig. F Change
1 .412a .170 .164 10.88446 .170 31.506 1 154 .000
2 .496b .246 .236 10.41016 .076 15.352 1 153 .000
a. Predictors: (Constant), CESD Score, Wave 1
b. Predictors: (Constant), CESD Score, Wave 1, Number types of abuse
ANOVAc
Model Sum of Squares df Mean Square F Sig.
1 Regression 3732.507 1 3732.507 31.506 .000a
Residual 18244.613 154 118.472
Total 21977.1.
Bba 3274 qm week 6 part 1 regression modelsStephen Ong
This document provides an overview and outline of regression models and forecasting techniques. It discusses simple and multiple linear regression analysis, how to measure the fit of regression models, assumptions of regression models, and testing models for significance. The goals are to help students understand relationships between variables, predict variable values, develop regression equations from sample data, and properly apply and interpret regression analysis.
This document provides instructions for performing multiple regression analysis in SPSS. It demonstrates entering variables, running the regression using the enter, stepwise, and backward methods, and interpreting the output including R-square values, F-tests, beta coefficients, and equations for predicting the dependent variable based on the independent variables. Age and education were identified as the best predictors of months of full-time employment using both the stepwise and backward regression methods.
This chapter discusses regression models, including simple and multiple linear regression. It covers developing regression equations from sample data, measuring the fit of regression models, and assumptions of regression analysis. Key aspects covered include using scatter plots to examine relationships between variables, calculating the slope, intercept, coefficient of determination, and correlation coefficient, and performing hypothesis tests to determine if regression models are statistically significant. The chapter objectives are to help students understand and appropriately apply simple, multiple, and nonlinear regression techniques.
A note on estimation of population mean in sample survey using auxiliary info... - Alexander Decker
1. The document proposes a class of estimators for estimating the population mean in two-phase sampling using auxiliary information.
2. Some common estimators like the ratio, product, and regression estimators are special cases within the proposed class. Expressions for bias and mean squared error of the estimators are obtained up to the first order of approximation.
3. Asymptotically optimum estimators are identified that have minimum mean squared error. The proposed class of estimators is found to perform better than usual ratio and other estimators for population mean estimation.
The document discusses factorial analysis of variance (ANOVA) and provides an example to illustrate the steps. It analyzes the flavor acceptability of luncheon meat from different sources. The null hypothesis is that there is no significant difference between the sources. The two-way ANOVA calculations show that the computed F-values are greater than the critical values, so the null hypothesis is rejected, indicating there are significant differences between the sources of luncheon meat.
The document discusses factorial analysis of variance (ANOVA) and provides an example to illustrate the steps in a two-way ANOVA. Specifically, it presents a study on the flavor acceptability of luncheon meat from different sources. It provides the problem statement, hypotheses, assumptions, and 10 step-by-step computations to conduct a two-way ANOVA on the data. The results of the ANOVA show that the flavor acceptability significantly differs between the meat sources, leading to a rejection of the null hypothesis.
The classes used in this study (Class A and Class B) were held at the same campus in Wichita, KS from June through September 2010 by the same instructor. Class A met on Thursday nights and Class B on Friday nights; Class A finished with 13 students and Class B with 16. Most interestingly, one group had been overprotected through most of the classes leading up to the class in question, and their attitudes during this class reflected those earlier attitudes, while the other group consisted of first-term students. The first-term students were told up front what was expected of them, and little to no tolerance was given for late work submission (a rule also applied to the previously overprotected group).
(1) The predicted average test score is 395.85 and the predicted change in average test score is a decrease of 23.28 points based on the regression.
(2) Using data from a sample of 200 individuals, the regression equation predicts weights based on heights of 70, 65, and 74 inches.
(3) Converting the regression to use centimeters and kilograms, the coefficients are -0.092 and 0.7036 kg/cm with the same R-squared value but a standard error of 4.6267 kg.
Kano GIS Day 2014 - The Application of Multivariate Geostatistical analyses i... - eHealth Africa
We are excited to be holding our own GIS Day event on November 19th, 2014!
GIS Day is a global grassroots educational event that enables Geographic Information Systems (GIS) users and vendors to showcase real-world applications of GIS to schools, businesses, and the general public. Organizations that utilize GIS around the world participate by holding or sponsoring an event of their own.
The first formal GIS Day took place in 1999. In 2005, more than 700 GIS Day events were held in 74 countries around the globe. Esri president and co-founder Jack Dangermond credits Ralph Nader with inspiring the creation of GIS Day. He saw GIS Day as providing an opportunity for the world to learn about the uses of GIS in mapping geography, and what that mapping technology could provide. He wanted GIS Day to be a grassroots effort and open to everyone to participate.
Recognizing the power that GIS technology could provide for healthcare, eHealth Africa as an NGO organization stepped to the forefront of using GIS applications to track polio in Nigeria. Using GIS technology, eHealth is able to map out areas previously unreached during immunization campaigns. Once the area is mapped, much-needed polio vaccinations are able to be distributed and the polio epidemic is brought another step closer to being controlled and eliminated.
The theme of GIS Day is “Discovering the world through GIS.” GIS Day provides an international forum for users of GIS technology to demonstrate real-world applications that are making a difference in our society and around the world.
We are excited to take part in GIS Day 2014 on November 19th. We look forward to joining with our community partners in discussing GIS usage, and to take a close look at the exciting contributions GIS provides around our world.
The document covers standard deviation as a measure of dispersion, defining it as the positive square root of the arithmetic mean of the squared deviations from the mean.
Exploring Support Vector Regression - Signals and Systems Project - Surya Chandra
Our team competed in a Kaggle competition to predict bike share usage in Washington, DC's Capital Bikeshare program, using a powerful function approximation technique called support vector regression.
This document summarizes an analysis of using Support Vector Regression (SVR) to predict bike rental data from a bike sharing program in Washington D.C. It begins with an introduction to SVR and the bike rental prediction competition. It then shows that linear regression performs poorly on this non-linear problem. The document explains how SVR maps data into higher dimensions using kernel functions to allow for non-linear fits. It concludes by outlining the derivation of the SVR method using kernel functions to simplify calculations for the regression.
This document discusses multiple regression analysis. It begins by introducing multiple regression as an extension of simple linear regression that allows for modeling relationships between a response variable and multiple explanatory variables. It then covers topics such as examining variable distributions, building regression models, estimating model parameters, and assessing overall model fit and significance of individual predictors. An example demonstrates using multiple regression to build a model for predicting cable television subscribers based on advertising rates, station power, number of local families, and number of competing stations.
Similar to Reporting a Multiple Linear Regression in APA
This document provides an overview of key concepts in hypothesis testing including:
- The null and alternative hypotheses, where the null hypothesis is what we aim to reject or fail to reject.
- The level of significance and critical region, which define the threshold for rejecting the null hypothesis.
- Type I and type II errors, where we aim to minimize both by choosing an appropriate significance level and critical region.
- Common test statistics like z, t, and chi-squared that are used to evaluate hypotheses based on samples.
- The process of hypothesis testing, which involves defining hypotheses, choosing a test statistic and significance level, and making a decision to reject or fail to reject the null based on the critical region.
This document introduces the concept of data classification and levels of measurement in statistics. It explains that data can be either qualitative or quantitative. Qualitative data consists of attributes and labels while quantitative data involves numerical measurements. The document also outlines the four levels of measurement - nominal, ordinal, interval, and ratio - from lowest to highest. Each level allows for different types of statistical calculations, with the ratio level permitting the most complex calculations like ratios of two values.
- A hypothesis is a tentative statement about the relationship between two or more variables that is tested through collecting sample data. The null hypothesis states there is no relationship and the alternative hypothesis proposes an alternative relationship.
- Type I error occurs when a true null hypothesis is rejected. Type II error is failing to reject a false null hypothesis. Choosing a significance level balances these two errors, with a higher level increasing Type I errors and a lower level increasing Type II errors.
- In medical testing, it is better to make a Type II error and accept a null hypothesis of no drug difference when there actually is a difference, to avoid releasing an ineffective drug. So a lower significance level that increases Type II errors would be chosen.
This document discusses analyzing research data through descriptive and analytical statistics. Descriptive statistics summarize variables one by one through measures like frequency, percentage, mean, median and standard deviation depending on the variable level. Analytical statistics examine relationships between two or more variables. The document demonstrates analyzing a hypertension study dataset in SPSS, including checking normality distribution through histograms, Shapiro-Wilk test and Q-Q plots to determine appropriate tests. Frequency is used to describe categorical gender variable while numerical age is described through mean, standard deviation and histogram with normal curve fitting.
This document provides guidance on writing and reporting clinical case studies. It discusses the key components of a clinical case study such as structure, data collection, variables, and analytical tools. Clinical case studies should analyze a real patient situation to identify problems, suggest solutions, and recommend the best solution. The document also differentiates between a clinical case study and clinical case report, noting that reports are shorter summaries of an individual patient case. It emphasizes writing for the target journal and audience when composing a case study.
The document discusses reporting the results of a split-plot ANOVA in APA style. It provides an example results section that reports the main effects of gender and time as significant but the interaction effect as not significant. It then breaks down each part of the example, explaining what each value represents, such as the F-ratio, degrees of freedom, mean square error, and p-values.
The document provides instructions for conducting an independent samples t-test in SPSS. It explains how to specify the grouping and test variables, define the groups being compared, and set options. It also demonstrates running a t-test to compare mile times between athletes and non-athletes using sample data, and interpreting the output, which includes Levene's test for equal variances and the t-test results.
The document provides instructions for conducting an independent samples t-test in SPSS. It explains how to specify the grouping and test variables, define the groups being compared, and set options. It also demonstrates running a t-test to compare mile times between athletes and non-athletes, checking assumptions, and interpreting the output, including Levene's test for equal variances and the t-test results.
The document describes how to conduct and interpret a paired samples t-test in SPSS. It explains that a paired samples t-test is used to compare the means of two related variables measured on the same subjects. It provides an example using reaction time data collected from participants before and after drinking a beer. It outlines the steps to check assumptions, run the t-test in SPSS, and interpret the output, finding that participants had significantly slower reaction times after consuming alcohol.
The document discusses how to report the results of a Pearson correlation analysis in APA style. It provides an example of a problem investigating the relationship between the amount of broccoli extract consumed and scores of well-being. It then shows the template for reporting the Pearson correlation, stating the correlation coefficient r and the p-value.
A One-way ANOVA was conducted to compare the effect of type of athlete on the number of pizza slices eaten. The ANOVA results showed that the effect of type of athlete on number of pizza slices eaten was significant, F(2, 66) = 99.82, p < .001.
The document provides guidance on reporting paired sample t-test results in APA format. It includes an example of how to write the results in a sentence, explaining that there was a significant/not significant difference between the scores for condition 1 (providing the mean and standard deviation) and condition 2 (providing the mean and standard deviation). It also demonstrates how to fill in the t-statistic, degrees of freedom, and p-value using output from SPSS.
Reporting a single sample t-test (revised) - Amit Sharma
The document provides instructions for reporting the results of a single sample t-test in APA format. It includes an example comparing the mean IQ score of persons who eat broccoli regularly (M = 120, SD = 12.2) to the general population. The t-test found a statistically significant difference between the sample and the population, t(22) = 7.86, p < .001.
Reporting an independent samples t-test - Amit Sharma
An independent samples t-test was conducted to compare truck driver drowsiness scores for country music listening and no country music listening conditions. There was a significant difference in scores for country music listening (M=4.2, SD=1.3) and no country music listening (M=2.2, SD=0.84); t(8)=2.89, p=0.02.
Null hypothesis for single linear regression - Amit Sharma
The document discusses the null hypothesis for a single linear regression model. It explains that a null hypothesis states that there is no effect or relationship between the independent and dependent variables. For a regression predicting ACT scores from hours of sleep, the null hypothesis would be: "There will be no significant prediction of ACT scores by hours of sleep." The document provides a template for writing the null hypothesis and works through an example applying the template to the relationship between hours of sleep and ACT scores.
Reporting a multiple linear regression in APA - Amit Sharma
A multiple linear regression was calculated to predict weight based on height and sex. The regression equation was significant and height and sex were significant predictors of weight, explaining 99.3% of the variance. Participants' predicted weight is equal to 47.138 - 39.133 (sex) + 2.101 (height), where height is measured in inches and sex is coded as 0 for female and 1 for male.
1. Reporting a Multiple Linear Regression in APA Format
Amit Sharma
Associate Professor
Dept. of Pharmacy Practice
ISF COLLEGE OF PHARMACY
Ghal Kalan, Ferozpur GT Road, MOGA, 142001, Punjab
Mobile: 09646755140, 09418783145
Phone: 01636-650150, 650151
Website: www.isfcp.org
2. Note: the examples in this presentation come from
Cronk, B. C. (2012). How to Use SPSS Statistics: A
Step-by-Step Guide to Analysis and Interpretation.
Pyrczak Pub.
5. DV = Dependent Variable
IV = Independent Variable
A multiple linear regression was calculated to predict
[DV] based on [IV1] and [IV2]. A significant regression
equation was found (F(_,__) = ___.___, p < .___), with
an R2 of .___. Participants’ predicted [DV] is equal to
__.___ – __.___ (IV1) + _.___ (IV2), where [IV1] is coded
or measured as _____________, and [IV2] is coded or
measured as __________. Participants' predicted [DV] increased _.__
[DV unit of measure] for each [IV1 unit of measure] and _.__ for
each [IV2 unit of measure].
Both [IV1] and [IV2] were significant predictors of [DV].
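To see where the template's numbers come from, here is a minimal self-contained sketch on synthetic data (the variable names, ranges, and sample size are assumptions for illustration, not the deck's dataset), computing the overall F, its degrees of freedom, R2, and the coefficients from an ordinary least-squares fit:

```python
import numpy as np

# Synthetic data shaped roughly like the deck's example (assumed values).
rng = np.random.default_rng(0)
n = 50
height = rng.normal(66, 4, n)                 # inches
sex = rng.integers(0, 2, n).astype(float)     # 0 = female, 1 = male
weight = 47 + 2.1 * height - 39 * sex + rng.normal(0, 2.3, n)

# Fit weight ~ constant + height + sex by least squares.
X = np.column_stack([np.ones(n), height, sex])
b, *_ = np.linalg.lstsq(X, weight, rcond=None)

# R-square and the overall F test that go into the APA sentence.
resid = weight - X @ b
ss_res = resid @ resid
ss_tot = ((weight - weight.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
k = 2                                          # number of predictors
f = (r2 / k) / ((1 - r2) / (n - k - 1))
print(f"F({k}, {n - k - 1}) = {f:.1f}, R2 = {r2:.3f}")
print("b (constant, height, sex) =", np.round(b, 2))
```

With these fill-ins, every blank in the template maps to one printed quantity: the two F degrees of freedom, the F value, R2, and the three unstandardized coefficients.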
6. Wow, that’s a lot. Let’s break it down using the
following example:
You have been asked to investigate the degree to which
height and sex predict weight.
12. A multiple linear regression was calculated to predict
[DV] based on their [IV1] and [IV2].
13. A multiple linear regression was calculated to predict
[DV] based on their [IV1] and [IV2].
You have been asked to investigate the degree to which
height and sex predict weight.
14. A multiple linear regression was calculated to predict
weight based on their [IV1] and [IV2].
You have been asked to investigate the degree to which
height and sex predict weight.
15. A multiple linear regression was calculated to predict
weight based on their height and [IV2].
You have been asked to investigate the degree to which
height and sex predict weight.
16. A multiple linear regression was calculated to predict
weight based on their height and sex.
You have been asked to investigate the degree to which
height and sex predict weight.
18. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(_,__) = __.___, p < .___), with an R2 of .____.
20. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(_,__) = ___.___, p < .___), with an R2 of .___.
Here’s the output:
21. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(_,__) = ___.___, p < .___), with an R2 of .___.
Model Summary

Model  R      R Square  Adjusted R Square  Std. Error of the Estimate
1      .997a  .993      .992               2.29571

ANOVAa

Model 1      Sum of Squares   df   Mean Square   F         Sig.
Regression   10342.424        2    5171.212      981.202   .000a
Residual     68.514           13   5.270
Total        10410.938        15

Coefficientsa

                  Unstandardized Coefficients   Standardized Coefficients
Model             B         Std. Error          Beta                        t         Sig.
1  (Constant)     47.138    14.843                                          3.176     .007
   Height         2.101     .198                .312                        10.588    .000
   Sex            -39.133   1.501               -.767                       -25.071   .000
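The F ratio and R2 in this output can be verified by hand from the sums of squares alone; a short sketch of that arithmetic:

```python
# Mean squares are sums of squares divided by their df; F is their ratio,
# and R-square is the regression share of the total sum of squares.
ss_reg, ss_res = 10342.424, 68.514
df_reg, df_res = 2, 13
ms_reg = ss_reg / df_reg                      # 5171.212, as in the table
ms_res = ss_res / df_res                      # about 5.270
print(round(ms_reg / ms_res, 1))              # about 981.2, the reported F
print(round(ss_reg / (ss_reg + ss_res), 3))   # 0.993, the reported R Square
```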
22. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(2,__) = ___.___, p < .___), with an R2 of .___.
23. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(2, 13) = ___.___, p < .___), with an R2 of .___.
24. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .___), with an R2 of .___.
25. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .001), with an R2 of .___.
26. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .001), with an R2 of .993.
27. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .001), with an R2 of .993.
Now for the next part of the template:
28. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .001), with an R2 of .993.
Participants’ predicted [DV] is equal to __.___ + __.___ (IV2) +
_.___ (IV1), where [IV2] is coded or measured as _____________,
and [IV1] is coded or measured as __________.
30. A multiple linear regression was calculated to predict weight
based on their height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .001), with an R2 of .993.
Participants’ predicted [DV] is equal to __.___ + __.___ (IV2) +
_.___ (IV1), where [IV2] is coded or measured as _____________,
and [IV1] is coded or measured as __________.
Independent Variable1: Height
Independent Variable2: Sex
Dependent Variable: Weight
31. A multiple linear regression was calculated to predict weight
based on height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .001), with an R² of .993.
Participants’ predicted weight is equal to __.___ + __.___ (IV2) +
_.___ (IV1), where [IV2] is coded or measured as _____________,
and [IV1] is coded or measured __________.
32. A multiple linear regression was calculated to predict weight
based on height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .001), with an R² of .993.
Participants’ predicted weight is equal to 47.138 + __.___ (IV2) +
_.___ (IV1), where [IV2] is coded or measured as _____________,
and [IV1] is coded or measured __________.
33. A multiple linear regression was calculated to predict weight
based on height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .001), with an R² of .993.
Participants’ predicted weight is equal to 47.138 – 39.133 (IV2) +
_.___ (IV1), where [IV2] is coded or measured as _____________,
and [IV1] is coded or measured __________.
34. A multiple linear regression was calculated to predict weight
based on height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .001), with an R² of .993.
Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) +
_.___ (IV1), where [IV2] is coded or measured as _____________,
and [IV1] is coded or measured __________.
35. A multiple linear regression was calculated to predict weight
based on height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .001), with an R² of .993.
Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) +
2.101 (IV1), where [IV2] is coded or measured as _____________,
and [IV1] is coded or measured __________.
36. A multiple linear regression was calculated to predict weight
based on height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .001), with an R² of .993.
Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) +
2.101 (HEIGHT), where [IV2] is coded or measured as
_____________, and [IV1] is coded or measured __________.
37. A multiple linear regression was calculated to predict weight
based on height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .001), with an R² of .993.
Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) +
2.101 (HEIGHT), where sex is coded or measured as
_____________, and [IV1] is coded or measured __________.
38. A multiple linear regression was calculated to predict weight
based on height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .001), with an R² of .993.
Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) +
2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and
[IV1] is coded or measured __________.
39. A multiple linear regression was calculated to predict weight
based on height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .001), with an R² of .993.
Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) +
2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and
height is coded or measured __________.
40. A multiple linear regression was calculated to predict weight
based on height and sex. A significant regression equation
was found (F(2, 13) = 981.202, p < .001), with an R² of .993.
Participants’ predicted weight is equal to 47.138 – 39.133 (SEX) +
2.101 (HEIGHT), where sex is coded as 1 = Male, 2 = Female, and
height is measured in inches.
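Once every blank in the equation is filled, the write-up can be sanity-checked by turning it into a small prediction function. A minimal sketch (the 70- and 71-inch example heights are illustrative, not from the dataset):

```python
# Predicted weight from the completed regression equation.
# Sex is coded as on the slide (1 = Male, 2 = Female); height is in inches.
def predicted_weight(sex, height_in):
    return 47.138 - 39.133 * sex + 2.101 * height_in

male = predicted_weight(sex=1, height_in=70)
female = predicted_weight(sex=2, height_in=70)
print(round(male - female, 3))       # 39.133: males predicted heavier at equal height

taller_male = predicted_weight(sex=1, height_in=71)
print(round(taller_male - male, 3))  # 2.101: each extra inch adds 2.101 pounds
```

These two differences are exactly the slope interpretations the later slides fill into the template.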
42. Now for the second to last portion of the template:
43. A multiple linear regression was calculated to predict
weight based on height and sex. A significant regression
equation was found (F(2, 13) = 981.202, p < .001), with an
R² of .993. Participants’ predicted weight is equal to
47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded
as 1 = Male, 2 = Female, and height is measured in inches.
44. A multiple linear regression was calculated to predict
weight based on height and sex. A significant regression
equation was found (F(2, 13) = 981.202, p < .001), with an
R² of .993. Participants’ predicted weight is equal to
47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded
as 1 = Male, 2 = Female, and height is measured in inches.
Object of measurement increased _.__ [DV unit of measure]
for each [IV1 unit of measure] and _.__ for each [IV2 unit
of measure].
46. A multiple linear regression was calculated to predict
weight based on height and sex. A significant regression
equation was found (F(2, 13) = 981.202, p < .001), with an
R² of .993. Participants’ predicted weight is equal to
47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded
as 1 = Male, 2 = Female, and height is measured in inches.
Participants’ weight increased _.__ [DV unit of measure]
for each [IV1 unit of measure] and _.__ for each [IV2 unit
of measure].
47. A multiple linear regression was calculated to predict
weight based on height and sex. A significant regression
equation was found (F(2, 13) = 981.202, p < .001), with an
R² of .993. Participants’ predicted weight is equal to
47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded
as 1 = Male, 2 = Female, and height is measured in inches.
Participants’ weight increased 2.101 [DV unit of measure]
for each [IV1 unit of measure] and _.__ for each [IV2 unit
of measure].
48. A multiple linear regression was calculated to predict
weight based on height and sex. A significant regression
equation was found (F(2, 13) = 981.202, p < .001), with an
R² of .993. Participants’ predicted weight is equal to
47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded
as 1 = Male, 2 = Female, and height is measured in inches.
Participants’ weight increased 2.101 pounds for each [IV1
unit of measure] and _.__ for each [IV2 unit of measure].
49. A multiple linear regression was calculated to predict
weight based on height and sex. A significant regression
equation was found (F(2, 13) = 981.202, p < .001), with an
R² of .993. Participants’ predicted weight is equal to
47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded
as 1 = Male, 2 = Female, and height is measured in inches.
Participants’ weight increased 2.101 pounds for each inch
of height and _.__ for each [IV2 unit of measure].
50. A multiple linear regression was calculated to predict
weight based on height and sex. A significant regression
equation was found (F(2, 13) = 981.202, p < .001), with an
R² of .993. Participants’ predicted weight is equal to
47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded
as 1 = Male, 2 = Female, and height is measured in inches.
Participants’ weight increased 2.101 pounds for each inch
of height and males weighed 39.133 pounds more than
females.
53. A multiple linear regression was calculated to predict
weight based on height and sex. A significant regression
equation was found (F(2, 13) = 981.202, p < .001), with an
R² of .993. Participants’ predicted weight is equal to
47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded
as 1 = Male, 2 = Female, and height is measured in inches.
Participants’ weight increased 2.101 pounds for each inch
of height and males weighed 39.133 pounds more than
females. Both [IV1] and [IV2] were significant predictors
of [DV].
55. A multiple linear regression was calculated to predict
weight based on height and sex. A significant regression
equation was found (F(2, 13) = 981.202, p < .001), with an
R² of .993. Participants’ predicted weight is equal to
47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded
as 1 = Male, 2 = Female, and height is measured in inches.
Participants’ weight increased 2.101 pounds for each inch
of height and males weighed 39.133 pounds more than
females. Both height and [IV2] were significant predictors
of [DV].
56. A multiple linear regression was calculated to predict
weight based on height and sex. A significant regression
equation was found (F(2, 13) = 981.202, p < .001), with an
R² of .993. Participants’ predicted weight is equal to
47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded
as 1 = Male, 2 = Female, and height is measured in inches.
Participants’ weight increased 2.101 pounds for each inch
of height and males weighed 39.133 pounds more than
females. Both height and sex were significant predictors
of [DV].
58. A multiple linear regression was calculated to predict
weight based on height and sex. A significant regression
equation was found (F(2, 13) = 981.202, p < .001), with an
R² of .993. Participants’ predicted weight is equal to
47.138 – 39.133 (SEX) + 2.101 (HEIGHT), where sex is coded
as 1 = Male, 2 = Female, and height is measured in inches.
Participants’ weight increased 2.101 pounds for each inch
of height and males weighed 39.133 pounds more than
females. Both height and sex were significant predictors
of weight.
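The significance of each predictor comes from its t value, which is simply B divided by its standard error. A quick check against the Coefficients table (recomputed values can differ a little from the printed t column, because the printed B and standard errors are rounded):

```python
# t = B / Std. Error for each row of the Coefficients table
# (B and standard errors copied from the SPSS output).
rows = {
    "(Constant)": (47.138, 14.843),
    "Height":     (2.101, 0.198),
    "Sex":        (-39.133, 1.501),
}
for name, (b, se) in rows.items():
    print(f"{name}: t = {b / se:.3f}")
```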
60. A multiple linear regression was calculated to predict
weight based on height and sex. A significant
regression equation was found (F(2, 13) = 981.202, p <
.001), with an R² of .993. Participants’ predicted weight
is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT),
where sex is coded as 1 = Male, 2 = Female, and height
is measured in inches. Object of measurement
increased 2.101 pounds for each inch of height and
males weighed 39.133 pounds more than females.
Both height and sex were significant predictors.
63. A multiple linear regression was calculated to predict
weight based on height and sex. A significant
regression equation was found (F(2, 13) = 981.202, p <
.001), with an R² of .993. Participants’ predicted weight
is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT),
where sex is coded as 1 = Male, 2 = Female, and height
is measured in inches. Participants’ weight increased
2.101 pounds for each inch of height and males
weighed 39.133 pounds more than females. Both
height and sex were significant predictors.
64. A multiple linear regression was calculated to predict
weight based on height and sex. A significant
regression equation was found (F(2, 13) = 981.202, p <
.001), with an R² of .993. Participants’ predicted weight
is equal to 47.138 – 39.133 (SEX) + 2.101 (HEIGHT),
where sex is coded as 1 = Male, 2 = Female, and height
is measured in inches. Participants’ weight increased
2.101 pounds for each inch of height and males
weighed 39.133 pounds more than females. Both
height and sex were significant predictors of weight.