The document describes conducting a factor analysis in SPSS to measure different aspects of student anxiety towards learning SPSS. A 23-item questionnaire was administered to 2,571 students. Initial screening of the correlation matrix found no problems with multicollinearity. The document then provides instructions for running the factor analysis in SPSS, including extracting factors, rotating them, and interpreting the output.
C8057 (Research Methods II): Factor Analysis on SPSS
Dr. Andy Field Page 1 10/12/2005
Factor Analysis Using SPSS
The theory of factor analysis was described in your lecture; see also Field (2005), Chapter 15.
Example
Factor analysis is frequently used to develop questionnaires: after all if you want to measure
an ability or trait, you need to ensure that the questions asked relate to the construct that you
intend to measure. I have noticed that a lot of students become very stressed about SPSS.
Therefore I wanted to design a questionnaire to measure a trait that I termed ‘SPSS anxiety’. I
decided to devise a questionnaire to measure various aspects of students’ anxiety towards
learning SPSS. I generated questions based on interviews with anxious and non-anxious
students and came up with 23 possible questions to include. Each question was a statement
followed by a five-point Likert scale ranging from 'strongly disagree' through 'neither agree nor
disagree' to 'strongly agree'. The questionnaire is printed in Field (2005, p. 639).
The questionnaire was designed to predict how anxious a given individual would be about
learning how to use SPSS. What’s more, I wanted to know whether anxiety about SPSS could
be broken down into specific forms of anxiety. So, in other words, are there other traits that
might contribute to anxiety about SPSS? With a little help from a few lecturer friends I
collected 2571 completed questionnaires (at this point it should become apparent that this
example is fictitious!). The data are stored in the file SAQ.sav.
Questionnaires are made up of multiple items each of which elicits a
response from the same person. As such, it is a repeated measures design.
Since repeated measures go in different columns, each question on a questionnaire
should have its own column in SPSS.
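As a sketch of that layout (with hypothetical item names, not the actual SAQ variables), the wide format looks like this:

```python
import pandas as pd

# Wide ("repeated measures") layout: one row per respondent,
# one column per questionnaire item (hypothetical item names).
responses = pd.DataFrame({
    "Q01": [4, 2, 5],   # 1 = strongly disagree ... 5 = strongly agree
    "Q02": [3, 1, 4],
    "Q03": [5, 2, 5],
})

print(responses.shape)  # → (3, 3): 3 respondents, 3 items
```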
Initial Considerations
Sample Size
Correlation coefficients fluctuate from sample to sample, much more so in small samples than
in large. Therefore, the reliability of factor analysis is also dependent on sample size. Field
(2005) reviews many suggestions about the sample size necessary for factor analysis and
concludes that it depends on many things. In general, over 300 cases is probably adequate, but
communalities after extraction should probably be above 0.5 (see Field, 2005).
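A communality is the proportion of a variable's variance explained by the extracted factors: the sum of its squared loadings. A minimal sketch of the 0.5 rule of thumb, using hypothetical loadings rather than the SAQ results:

```python
import numpy as np

# Hypothetical loadings for 4 items on 2 extracted factors
# (rows = items, columns = factors; not the SAQ data).
loadings = np.array([
    [0.80, 0.10],
    [0.75, 0.20],
    [0.15, 0.70],
    [0.30, 0.40],
])

# Communality = sum of squared loadings across factors, per item.
communalities = (loadings ** 2).sum(axis=1)
print(communalities)  # → [0.65 0.6025 0.5125 0.25]

# Item 4 falls below the 0.5 rule of thumb.
low = communalities < 0.5
```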
Data Screening
SPSS will nearly always find a factor solution to a set of variables. However, the solution is
unlikely to have any real meaning if the variables analysed are not sensible. The first thing to
do when conducting a factor analysis is to look at the inter-correlation between variables. If
our test questions measure the same underlying dimension (or dimensions) then we would
expect them to correlate with each other (because they are measuring the same thing). If we
find any variables that do not correlate with any other variables (or with very few) then we should
consider excluding these variables before the factor analysis is run. The correlations between
variables can be checked using the correlate procedure (see Chapter 4) to create a correlation
matrix of all variables. This matrix can also be created as part of the main factor analysis.
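The screening step above can be sketched outside SPSS with numpy, using synthetic data (the item construction below is an assumption for illustration, not the SAQ data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Synthetic questionnaire data: items 0-3 share a common factor;
# item 4 is pure noise, so it should barely correlate with anything.
factor = rng.normal(size=n)
related = [factor + rng.normal(size=n) for _ in range(4)]
noise = [rng.normal(size=n)]
items = np.column_stack(related + noise)

R = np.corrcoef(items, rowvar=False)          # the R-matrix

# Flag items whose largest correlation with any other item is small.
off_diag = np.abs(R - np.eye(R.shape[0]))
weakly_related = np.where(off_diag.max(axis=0) < 0.3)[0]
print(weakly_related)  # → [4]: a candidate to exclude before factoring
```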
The opposite problem is when variables correlate too highly. Although mild multicollinearity is
not a problem for factor analysis it is important to avoid extreme multicollinearity (i.e.
variables that are very highly correlated) and singularity (variables that are perfectly
correlated). As with regression, singularity causes problems in factor analysis because it
becomes impossible to determine the unique contribution to a factor of the variables that are
highly correlated (as was the case for multiple regression). Therefore, at this early stage we
look to eliminate any variables that don’t correlate with any other variables or that correlate
very highly with other variables (R > .9). Multicollinearity can be detected by looking at the
determinant of the R-matrix (see next section).
As well as looking for interrelations, you should ensure that variables have roughly normal
distributions and are measured at an interval level (which Likert scales are, perhaps wrongly,
assumed to be!). The assumption of normality is important only if you wish to generalize the
results of your analysis beyond the sample collected.
Running the Analysis
Access the main dialog box (Figure 1) by using the Analyze⇒Data Reduction⇒Factor …
menu path. Simply select the variables you want to include in the analysis (remember to
exclude any variables that were identified as problematic during the data screening) and
transfer them to the box labelled Variables by clicking on the arrow button.
Figure 1: Main dialog box for factor analysis
There are several options available, the first of which can be accessed by clicking on
Descriptives to access the dialog box in Figure 2. The Coefficients option produces the R-matrix, and the
Significance levels option will produce a matrix indicating the significance value of each
correlation in the R-matrix. You can also ask for the Determinant of this matrix and this option
is vital for testing for multicollinearity or singularity. The determinant of the R-matrix should
be greater than 0.00001; if it is less than this value then look through the correlation matrix
for variables that correlate very highly (R > .8) and consider eliminating one of the variables
(or more depending on the extent of the problem) before proceeding. The choice of which of
the two variables to eliminate will be fairly arbitrary and finding multicollinearity in the data
should raise questions about the choice of items within your questionnaire.
Figure 2: Descriptives in factor analysis
C8057 (Research Methods II): Factor Analysis on SPSS
Dr. Andy Field, 10/12/2005
KMO and Bartlett’s test of sphericity produces the Kaiser-Meyer-Olkin measure of sampling
adequacy and Bartlett’s test (see Field, 2005, Chapters 11 & 12). The value of KMO should be
greater than 0.5 if the sample is adequate.
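If you want to see what the KMO statistic actually computes, here is a minimal numpy sketch. It follows the standard definition: the sum of squared correlations compared with the sum of squared partial correlations, where the partials come from the inverse of the R-matrix. The toy correlation matrix is illustrative only, not the SAQ data.

```python
import numpy as np

def kmo(R):
    """Kaiser-Meyer-Olkin measure of sampling adequacy for a correlation matrix R.

    Partial correlations are derived from the inverse of R; KMO is the ratio of
    the sum of squared correlations to the sum of squared correlations plus the
    sum of squared partial correlations (off-diagonal elements only).
    """
    R = np.asarray(R, dtype=float)
    inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                        # matrix of partial correlations
    mask = ~np.eye(R.shape[0], dtype=bool)    # off-diagonal entries only
    r2 = (R[mask] ** 2).sum()
    p2 = (partial[mask] ** 2).sum()
    return r2 / (r2 + p2)

# Toy 3-variable correlation matrix (illustrative values, not the SAQ data)
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
print(round(kmo(R), 3))   # comfortably above the 0.5 adequacy threshold
```

When the partial correlations are small relative to the raw correlations the ratio approaches 1, which is exactly the "compact pattern of correlations" that makes factor analysis worthwhile.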
Factor Extraction on SPSS
Click on Extraction to access the extraction dialog box (Figure 3). There are several ways to
conduct factor analysis and the choice of method depends on many things (see Field, 2005).
For our purposes we will use principal component analysis, which strictly speaking isn’t factor
analysis; however, the two procedures often yield similar results (see Field, 2005, 15.3.3).
The Display box has two options: to display the Unrotated factor solution and a Scree plot. The
scree plot was described earlier and is a useful way of establishing how many factors should be
retained in an analysis. The unrotated factor solution is useful in assessing the improvement of
interpretation due to rotation. If the rotated solution is little better than the unrotated solution
then it is possible that an inappropriate (or less optimal) rotation method has been used.
Figure 3: Dialog box for factor extraction
The Extract box provides options pertaining to the retention of factors. You have the choice of
either selecting factors with eigenvalues greater than a user-specified value or retaining a fixed
number of factors. For the Eigenvalues over option the default is Kaiser’s recommendation of
eigenvalues over 1. It is probably best to run a primary analysis with the Eigenvalues over 1
option selected, select a scree plot, and compare the results. If the scree plot and the
eigenvalues-over-1 criterion lead you to retain the same number of factors then continue with the
analysis and be happy. If the two criteria give different results then examine the
communalities and decide for yourself which of the two criteria to believe. If you decide to use
the scree plot then you may want to redo the analysis specifying the number of factors to
extract. The number of factors to be extracted can be specified by selecting Number of factors
and then typing the appropriate number in the space provided (e.g. 4).
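The arithmetic behind Kaiser's criterion is easy to sketch: the eigenvalues of a correlation matrix sum to the number of variables, so each eigenvalue divided by that total is the proportion of variance a component explains, and components with eigenvalues above 1 are retained. The toy matrix below (two correlated blocks of items) is invented for illustration.

```python
import numpy as np

# Toy correlation matrix: variables 1-2 form one block, variables 3-4 another
R = np.array([[1.0, 0.7, 0.1, 0.1],
              [0.7, 1.0, 0.1, 0.1],
              [0.1, 0.1, 1.0, 0.6],
              [0.1, 0.1, 0.6, 1.0]])

eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]   # eigenvalues, descending
pct = 100 * eigvals / eigvals.sum()              # % of total variance explained
n_retain = int((eigvals > 1).sum())              # Kaiser's criterion

print(np.round(eigvals, 3))
print(np.round(pct, 1))
print("components retained:", n_retain)
```

For this matrix two eigenvalues exceed 1 (one per block of correlated items), so Kaiser's criterion retains two components, which matches the structure we built into the data.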
Rotation
The interpretability of factors can be improved through rotation. Rotation maximizes the
loading of each variable on one of the extracted factors whilst minimizing the loading on all
other factors. Rotation works through changing the absolute values of the variables whilst
keeping their differential values constant. Click on Rotation to access the dialog box in Figure 4.
Varimax, quartimax and equamax are orthogonal rotations whereas direct oblimin and promax
are oblique rotations (see Field 2005). The exact choice of rotation depends largely on whether
or not you think that the underlying factors should be related. If you expect the factors to be
independent then you should choose one of the orthogonal rotations (I recommend varimax).
If, however, there are theoretical grounds for supposing that your factors might correlate then
direct oblimin should be selected. For this example, choose an orthogonal rotation.
The dialog box also has options for displaying the Rotated solution. The rotated solution is
displayed by default and is essential for interpreting the final rotated analysis.
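Varimax itself is not mysterious: it searches for the orthogonal rotation that maximizes the variance of the squared loadings within each factor. The snippet below is a simplified numpy sketch of the classic SVD-based algorithm (without Kaiser normalization), applied to a made-up loading matrix.

```python
import numpy as np

def varimax(L, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a loading matrix L (variables x factors).

    Iteratively updates an orthogonal rotation matrix T so that the variance of
    the squared loadings of L @ T is maximized (classic SVD-based algorithm).
    """
    L = np.asarray(L, dtype=float)
    p, k = L.shape
    T = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        rotated = L @ T
        u, s, vt = np.linalg.svd(
            L.T @ (rotated ** 3
                   - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        )
        T = u @ vt                     # best orthogonal rotation so far
        var_new = s.sum()
        if var_new - var_old < tol:    # stop when the criterion stops improving
            break
        var_old = var_new
    return L @ T, T

# Toy unrotated loadings (illustrative): two factors with cross-loadings
L = np.array([[0.8, 0.3], [0.7, 0.4], [0.3, 0.8], [0.2, 0.7]])
rotL, T = varimax(L)
print(np.round(rotL, 2))
```

Because the rotation is orthogonal, each variable's communality (its row sum of squared loadings) is unchanged; only how that variance is shared between the factors changes, which is why rotation aids interpretation without altering the overall fit.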
Figure 4: Factor analysis: rotation dialog box
Scores
The factor scores dialog box can be accessed by clicking on Scores in the main dialog box. This
option allows you to save factor scores for each subject in the data editor. SPSS creates a new
column for each factor extracted and then places the factor score for each subject within that
column. These scores can then be used for further analysis, or simply to identify groups of
subjects who score highly on particular factors. There are three methods of obtaining these
scores, all of which were described in sections 15.2.3. and 15.5.3. of Field (2005).
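Of those methods, the regression method is the easiest to sketch: the score coefficient matrix is the inverse of the R-matrix multiplied by the loading matrix, and each person's scores are their standardized responses multiplied by those coefficients. The data and loading matrix below are invented for illustration.

```python
import numpy as np

# Hypothetical raw responses: 50 cases x 4 variables
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize each variable
R = np.corrcoef(Z, rowvar=False)              # correlation matrix

# Hypothetical rotated loading matrix: 4 variables x 2 factors
A = np.array([[0.8, 0.1],
              [0.7, 0.2],
              [0.1, 0.8],
              [0.2, 0.7]])

W = np.linalg.solve(R, A)   # regression-method score coefficients: R^-1 A
scores = Z @ W              # one row per case, one column per factor
print(scores.shape)
```

This mirrors what SPSS does when you save factor scores: one new column per extracted factor, one score per case, which you can then carry into further analyses.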
Figure 5: Factor analysis: factor scores dialog box
Options
Click on Options in the main dialog box. By default SPSS will list variables in the order in which
they are entered into the data editor. Although this format is often convenient, when
interpreting factors it can be useful to list variables by size. By selecting Sorted by size, SPSS
will order the variables by their factor loadings. There is also the option to Suppress absolute
values less than a specified value (by default 0.1). This option ensures that factor loadings
within ±0.1 are not displayed in the output. This option is useful for assisting in interpretation;
however, it can be helpful to increase the default value of 0.1 to either 0.4 or a value reflecting
the expected value of a significant factor loading given the sample size (see Field section
15.3.6.2.). For this example set the value at 0.4.
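The effect of these two Options settings is easy to mimic outside SPSS. The pandas sketch below (with invented loading values) sorts items by the factor they load on most strongly and blanks out loadings below 0.4, which is exactly the display the handout recommends.

```python
import numpy as np
import pandas as pd

# Hypothetical rotated loadings for six items on two factors
loadings = pd.DataFrame(
    [[0.72, 0.15], [0.65, 0.05], [0.45, 0.38],
     [0.10, 0.81], [0.22, 0.66], [0.35, 0.52]],
    index=[f"Q{i}" for i in range(1, 7)],
    columns=["Factor 1", "Factor 2"],
)

# 'Sorted by size': group items by the factor they load on most strongly,
# then sort within each group by descending absolute loading.
tmp = loadings.assign(
    _factor=loadings.abs().values.argmax(axis=1),
    _size=loadings.abs().max(axis=1),
)
ordered = (tmp.sort_values(["_factor", "_size"], ascending=[True, False])
              .drop(columns=["_factor", "_size"]))

# 'Suppress absolute values less than' 0.4: blank out the small loadings
display = ordered.where(ordered.abs() >= 0.4)
print(display.round(2))
```

The resulting table shows at a glance which items belong to which factor, with trivial loadings hidden, just like the rotated component matrix later in this handout.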
Figure 6: Factor analysis: options dialog box
Interpreting Output from SPSS
Select the same options as I have in the screen diagrams and run a factor analysis with
orthogonal rotation. To save space each variable is referred to only by its label on the data
editor (e.g. Q12). On the output you obtain, you should find that SPSS uses the value label
(the question itself) in all of the output. When using the output in this chapter just remember
that Q1 represents question 1, Q2 represents question 2 and Q17 represents question 17.
Preliminary Analysis
SPSS Output 1 shows an abridged version of the R-matrix. The top half of this table contains
the Pearson correlation coefficient between all pairs of questions whereas the bottom half
contains the one-tailed significance of these coefficients. We can use this correlation matrix to
check the pattern of relationships. First, scan the significance values and look for any variable
for which the majority of values are greater than 0.05. Then scan the correlation coefficients
themselves and look for any greater than 0.9. If any are found then you should be aware that
a problem could arise because of singularity in the data: check the determinant of the
correlation matrix and, if necessary, eliminate one of the two variables causing the problem.
The determinant is listed at the bottom of the matrix (blink and you’ll miss it). For these data
its value is 5.271E−04 (which is 0.0005271) which is greater than the necessary value of
0.00001. Therefore, multicollinearity is not a problem for these data. To sum up, all questions
in the SAQ correlate fairly well and none of the correlation coefficients are particularly large;
therefore, there is no need to consider eliminating any questions at this stage.
SPSS Output 1: Correlation Matrix (abridged; only columns Q01-Q05 and Q19-Q23 shown)

        Q01     Q02     Q03     Q04     Q05     Q19     Q20     Q21     Q22     Q23
Q01   1.000   -.099   -.337    .436    .402   -.189    .214    .329   -.104   -.004
Q02   -.099   1.000    .318   -.112   -.119    .203   -.202   -.205    .231    .100
Q03   -.337    .318   1.000   -.380   -.310    .342   -.325   -.417    .204    .150
Q04    .436   -.112   -.380   1.000    .401   -.186    .243    .410   -.098   -.034
Q05    .402   -.119   -.310    .401   1.000   -.165    .200    .335   -.133   -.042
Q06    .217   -.074   -.227    .278    .257   -.167    .101    .272   -.165   -.069
Q07    .305   -.159   -.382    .409    .339   -.269    .221    .483   -.168   -.070
Q08    .331   -.050   -.259    .349    .269   -.159    .175    .296   -.079   -.050
Q09   -.092    .315    .300   -.125   -.096    .249   -.159   -.136    .257    .171
Q10    .214   -.084   -.193    .216    .258   -.127    .084    .193   -.131   -.062
Q11    .357   -.144   -.351    .369    .298   -.200    .255    .346   -.162   -.086
Q12    .345   -.195   -.410    .442    .347   -.267    .298    .441   -.167   -.046
Q13    .355   -.143   -.318    .344    .302   -.227    .204    .374   -.195   -.053
Q14    .338   -.165   -.371    .351    .315   -.254    .226    .399   -.170   -.048
Q15    .246   -.165   -.312    .334    .261   -.210    .206    .300   -.168   -.062
Q16    .499   -.168   -.419    .416    .395   -.267    .265    .421   -.156   -.082
Q17    .371   -.087   -.327    .383    .310   -.163    .205    .363   -.126   -.092
Q18    .347   -.164   -.375    .382    .322   -.257    .235    .430   -.160   -.080
Q19   -.189    .203    .342   -.186   -.165   1.000   -.249   -.275    .234    .122
Q20    .214   -.202   -.325    .243    .200   -.249   1.000    .468   -.100   -.035
Q21    .329   -.205   -.417    .410    .335   -.275    .468   1.000   -.129   -.068
Q22   -.104    .231    .204   -.098   -.133    .234   -.100   -.129   1.000    .230
Q23   -.004    .100    .150   -.034   -.042    .122   -.035   -.068    .230   1.000

Determinant = 5.271E-04

(The lower half of the original SPSS table lists the one-tailed significance of each
coefficient; nearly all are .000, the largest being .410 for Q01 with Q23.)
SPSS Output 2 shows several very important parts of the output: the Kaiser-Meyer-Olkin
measure of sampling adequacy and Bartlett's test of sphericity. The KMO statistic varies
between 0 and 1. A value of 0 indicates that the sum of partial correlations is large relative
to the sum of correlations, indicating diffusion in the pattern of correlations (hence, factor
analysis is likely to be inappropriate). A value close to 1 indicates that patterns of
correlations are relatively compact and so factor analysis should yield distinct and reliable
factors. Kaiser (1974) recommends values greater than 0.5 as barely acceptable (values below
this should lead you to either collect more data or rethink which variables to include).
Furthermore, values between 0.5 and 0.7 are mediocre, values between 0.7 and 0.8 are good,
values between 0.8 and 0.9 are great and values above 0.9 are superb (see Hutcheson and
Sofroniou, 1999, pp. 224-225 for more detail). For these data the value is 0.93, which falls
into the range of being superb: so, we should be confident that factor analysis is appropriate
for these data.
Bartlett's measure tests the null hypothesis that the original correlation matrix is an identity
matrix. For factor analysis to work we need some relationships between variables and if the R-
matrix were an identity matrix then all correlation coefficients would be zero. Therefore, we
want this test to be significant (i.e. have a significance value less than 0.05). A significant test
tells us that the R-matrix is not an identity matrix; therefore, there are some relationships
between the variables we hope to include in the analysis. For these data, Bartlett's test is
highly significant (p < 0.001), and therefore factor analysis is appropriate.
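Bartlett's test itself has a simple closed form, so it is worth seeing the calculation: the statistic is based on the log of the determinant of the R-matrix (an identity matrix has determinant 1, giving a statistic of 0). The sketch below uses a toy correlation matrix, not the SAQ data.

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(R, n):
    """Bartlett's test that correlation matrix R (p x p) is an identity matrix.

    statistic = -(n - 1 - (2p + 5)/6) * ln|R|, with p(p - 1)/2 degrees of
    freedom, where n is the sample size and p the number of variables.
    """
    R = np.asarray(R, dtype=float)
    p = R.shape[0]
    statistic = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return statistic, df, chi2.sf(statistic, df)

# Toy correlation matrix with clear structure (illustrative, not the SAQ data)
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
stat, df, pval = bartlett_sphericity(R, n=200)
print(round(stat, 2), df, pval)
```

The further the determinant falls below 1 (i.e. the stronger the correlations), the larger the statistic and the smaller the p-value, which is why substantial inter-correlations make the test significant.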
Factor Extraction
SPSS Output 3 lists the eigenvalues associated with each linear component (factor) before
extraction, after extraction and after rotation. Before extraction, SPSS has identified 23
linear components within the data set (we know that there should be as many eigenvectors as
there are variables and so there will be as many factors as variables). The eigenvalues
associated with each factor represent the variance explained by that particular linear
component and SPSS also displays the eigenvalue in terms of the percentage of variance
explained (so, factor 1 explains 31.696% of total variance). It should be clear that the first
few factors explain relatively large amounts of variance (especially factor 1) whereas
subsequent factors explain only small amounts of variance. SPSS then extracts all factors with
eigenvalues greater than 1, which leaves us with four factors. The eigenvalues associated with
these factors are again displayed (and the percentage of variance explained) in the columns
labelled Extraction Sums of Squared Loadings. The values in this part of the table are the
same as the values before extraction, except that the values for the discarded factors are
ignored (hence, the table is blank after the fourth factor). In the final part of the table
(labelled
SPSS Output 2: KMO and Bartlett's Test

Kaiser-Meyer-Olkin Measure of Sampling Adequacy        .930
Bartlett's Test of Sphericity   Approx. Chi-Square     19334.492
                                df                     253
                                Sig.                   .000
SPSS Output 3: Total Variance Explained

           Initial Eigenvalues          Extraction Sums of          Rotation Sums of
                                        Squared Loadings            Squared Loadings
Component  Total   % of Var  Cum. %     Total   % of Var  Cum. %    Total   % of Var  Cum. %
 1         7.290   31.696    31.696     7.290   31.696    31.696    3.730   16.219    16.219
 2         1.739    7.560    39.256     1.739    7.560    39.256    3.340   14.523    30.742
 3         1.317    5.725    44.981     1.317    5.725    44.981    2.553   11.099    41.842
 4         1.227    5.336    50.317     1.227    5.336    50.317    1.949    8.475    50.317
 5          .988    4.295    54.612
 6          .895    3.893    58.504
 7          .806    3.502    62.007
 8          .783    3.404    65.410
 9          .751    3.265    68.676
10          .717    3.117    71.793
11          .684    2.972    74.765
12          .670    2.911    77.676
13          .612    2.661    80.337
14          .578    2.512    82.849
15          .549    2.388    85.236
16          .523    2.275    87.511
17          .508    2.210    89.721
18          .456    1.982    91.704
19          .424    1.843    93.546
20          .408    1.773    95.319
21          .379    1.650    96.969
22          .364    1.583    98.552
23          .333    1.448   100.000

Extraction Method: Principal Component Analysis.
Rotation Sums of Squared Loadings), the eigenvalues of the factors after rotation are
displayed. Rotation has the effect of optimizing the factor structure and one consequence for
these data is that the relative importance of the four factors is equalized. Before rotation,
factor 1 accounted for considerably more variance than the remaining three (31.696%
compared to 7.560, 5.725, and 5.336%); however, after rotation it accounts for only
16.219% of variance (compared to 14.523, 11.099 and 8.475% respectively).
SPSS Output 4 shows the table of communalities before and after extraction. Principal
component analysis works on the initial assumption that all variance is common; therefore,
before extraction the communalities are all 1. The communalities in the column labelled
Extraction reflect the common variance in the data structure. So, for example, we can say that
43.5% of the variance associated with question 1 is common, or shared, variance. Another
way to look at these communalities is in terms of the proportion of variance explained by the
underlying factors. After extraction some of the factors are discarded and so some information
is lost. The amount of variance in each variable that can be explained by the retained factors is
represented by the communalities after extraction.
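The communality calculation is worth making concrete: after extraction, a variable's communality is simply the sum of its squared loadings across the retained factors. The sketch below uses a small hypothetical loading matrix, not the SAQ output.

```python
import numpy as np

# Hypothetical loading matrix: 4 variables x 2 retained factors
A = np.array([[0.70, 0.20],
              [0.65, 0.10],
              [0.15, 0.80],
              [0.25, 0.60]])

# Communality = sum of squared loadings per variable: the proportion of that
# variable's variance reproduced by the retained factors (at most 1).
communalities = (A ** 2).sum(axis=1)
print(np.round(communalities, 3))

# Average communality, as used later when checking Kaiser's criterion
print(round(float(communalities.mean()), 3))
```

The same arithmetic, applied to the 23 SAQ communalities, gives the average of 11.573/23 = 0.503 quoted later in this handout.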
SPSS Output 4

Communalities (Extraction Method: Principal Component Analysis)

      Initial  Extraction
Q01   1.000    .435
Q02   1.000    .414
Q03   1.000    .530
Q04   1.000    .469
Q05   1.000    .343
Q06   1.000    .654
Q07   1.000    .545
Q08   1.000    .739
Q09   1.000    .484
Q10   1.000    .335
Q11   1.000    .690
Q12   1.000    .513
Q13   1.000    .536
Q14   1.000    .488
Q15   1.000    .378
Q16   1.000    .487
Q17   1.000    .683
Q18   1.000    .597
Q19   1.000    .343
Q20   1.000    .484
Q21   1.000    .550
Q22   1.000    .464
Q23   1.000    .412

Component Matrix (4 components extracted; loadings with absolute value below .4 suppressed)

      Displayed loadings (components 1-4)
Q18    .701
Q07    .685
Q16    .679
Q13    .673
Q12    .669
Q21    .658
Q14    .656
Q11    .652   -.400
Q17    .643
Q04    .634
Q03   -.629
Q15    .593
Q01    .586
Q05    .556
Q08    .549    .401   -.417
Q10    .437
Q20    .436   -.404
Q19   -.427
Q09    .627
Q02    .548
Q22    .465
Q06    .562    .571
Q23    .507

Extraction Method: Principal Component Analysis.
This output also shows the component matrix before rotation. This matrix contains the
loadings of each variable onto each factor. By default SPSS displays all loadings; however, we
requested that all loadings less than 0.4 be suppressed in the output and so there are blank
spaces for many of the loadings. This matrix is not particularly important for interpretation.
At this stage SPSS has extracted four factors. Factor analysis is an exploratory tool and so it
should be used to guide the researcher to make various decisions: you shouldn't leave the
computer to make them. One important decision is the number of factors to extract. By
Kaiser's criterion we should extract four factors and this is what SPSS has done. However, this
criterion is accurate only when there are fewer than 30 variables and communalities after
extraction are greater than 0.7, or when the sample size exceeds 250 and the average communality is
greater than 0.6. The communalities are shown in SPSS Output 4, and none exceed 0.7. The
average of the communalities can be found by adding them up and dividing by the number of
communalities (11.573/23 = 0.503). So, on both grounds Kaiser's rule may not be accurate.
However, you should consider the huge sample that we have, because the research into
Kaiser's criterion gives recommendations for much smaller samples. We can also use the scree
plot, which we asked SPSS to produce. The scree plot is shown below with a thunderbolt
indicating the point of inflexion on the curve. This curve is difficult to interpret because the
curve begins to tail off after three factors, but there is another drop after four factors before a
stable plateau is reached. Therefore, we could probably justify retaining either two or four
factors. Given the large sample, it is probably safe to assume Kaiser's criterion; however, you
could rerun the analysis specifying that SPSS extract only two factors and compare the results.
[Scree plot omitted: eigenvalue (y-axis, 0 to 8) plotted against component number (x-axis, 1
to 23); the curve shows a point of inflexion around the third component.]
SPSS Output 5
If there are fewer than 30 variables and communalities after extraction are
greater than 0.7, or if the sample size exceeds 250 and the average
communality is greater than 0.6, then retain all factors with eigenvalues
above 1 (Kaiser’s criterion).
If none of the above apply, a Scree Plot can be used when the sample size
is large (around 300 or more cases).
Factor Rotation
The first analysis I asked you to run was using an orthogonal rotation. SPSS Output 6 shows
the rotated component matrix (also called the rotated factor matrix in factor analysis) which is
a matrix of the factor loadings for each variable onto each factor. This matrix contains the
same information as the component matrix in SPSS Output 4 except that it is calculated after
rotation. There are several things to consider about the format of this matrix. First, factor
loadings less than 0.4 have not been displayed because we asked for these loadings to be
suppressed. If you didn't select this option, or didn't adjust the criterion value to 0.4, then
your output will differ. Second, the variables are listed in the order of size of their factor
loadings because we asked for the output to be Sorted by size. If this option was not selected
your output will look different. Finally, for all other parts of the output I suppressed the
variable labels (for reasons of space) but for this matrix I have allowed the variable labels to
be printed to aid interpretation.
Compare this matrix with the unrotated solution. Before rotation, most variables loaded highly
onto the first factor and the remaining factors didn't really get a look in. However, the rotation
of the factor structure has clarified things considerably: there are four factors and variables
load very highly onto only one factor (with the exception of one question). The suppression of
loadings less than 0.4 and ordering variables by loading size also makes interpretation
considerably easier (because you don't have to scan the matrix to identify substantive
loadings).
SPSS Output 6: Rotated Component Matrix
(Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser
Normalization. Rotation converged in 9 iterations. Loadings with absolute value below .4
suppressed.)

Item                                                                   1      2      3      4
I have little experience of computers                                .800
SPSS always crashes when I try to use it                             .684
I worry that I will cause irreparable damage because of my
  incompetence with computers                                        .647
All computers hate me                                                .638
Computers have minds of their own and deliberately go wrong
  whenever I use them                                                .579
Computers are useful only for playing games                          .550
Computers are out to get me                                          .459
I can't sleep for thoughts of eigen vectors                                 .677
I wake up under my duvet thinking that I am trapped under a
  normal distribution                                                       .661
Standard deviations excite me                                              -.567
People try to tell you that SPSS makes statistics easier to
  understand but it doesn't                                          .473   .523
I dream that Pearson is attacking me with correlation coefficients          .516
I weep openly at the mention of central tendency                            .514
Statistics makes me cry                                                     .496
I don't understand statistics                                               .429
I have never been good at mathematics                                              .833
I slip into a coma whenever I see an equation                                      .747
I did badly at mathematics at school                                               .747
My friends are better at statistics than me                                               .648
My friends are better at SPSS than I am                                                   .645
If I'm good at statistics my friends will think I'm a nerd                                .586
My friends will think I'm stupid for not being able to cope
  with SPSS                                                                               .543
Everybody looks at me when I use SPSS                                                     .427
Use orthogonal rotation when you believe your factors should be theoretically
independent (unrelated to each other).
Use oblique rotation when you believe factors should be related to each
other.
Interpretation
The next step is to look at the content of questions that load onto the same factor to try to
identify common themes. If the mathematical factor produced by the analysis represents some
real-world construct then common themes among highly loading questions can help us identify
what the construct might be. The questions that load highly on factor 1 seem to all relate to
using computers or SPSS. Therefore we might label this factor fear of computers. The
questions that load highly on factor 2 all seem to relate to different aspects of statistics;
therefore, we might label this factor fear of statistics. The three questions that load highly on
factor 3 all seem to relate to mathematics; therefore, we might label this factor fear of
mathematics. Finally, the questions that load highly on factor 4 all contain some component of
social evaluation from friends; therefore, we might label this factor peer evaluation. This
analysis seems to reveal that the initial questionnaire, in reality, is composed of four sub-
scales: fear of computers, fear of statistics, fear of maths, and fear of negative peer
evaluation. There are two possibilities here. The first is that the SAQ failed to measure what it
set out to (namely SPSS anxiety) but does measure some related constructs. The second is
that these four constructs are sub-components of SPSS anxiety; however, the factor analysis
does not indicate which of these possibilities is true.
Guided Example
The University of Sussex is constantly seeking to employ the best people possible as lecturers
(no, really, it is). Anyway, they wanted to revise a questionnaire based on Bland’s theory of
research methods lecturers. This theory predicts that good research methods lecturers should
have four characteristics: (1) a profound love of statistics; (2) an enthusiasm for experimental
design; (3) a love of teaching; and (4) a complete absence of normal interpersonal skills.
These characteristics should be related (i.e. correlated). The ‘Teaching Of Statistics for
Scientific Experiments’ (TOSSE) already existed, but the university revised this questionnaire
and it became the ‘Teaching Of Statistics for Scientific Experiments — Revised’ (TOSSE—R).
They gave this questionnaire to 239 research methods lecturers around the world to see if it
supported Bland’s theory.
The questionnaire is below. Each item is rated on a five-point scale:
SD = Strongly Disagree, D = Disagree, N = Neither, A = Agree, SA = Strongly Agree

1  I once woke up in the middle of a vegetable patch hugging a turnip that I'd mistakenly dug
   up thinking it was Roy's largest root
2  If I had a big gun I'd shoot all the students I have to teach
3  I memorize probability values for the F-distribution
4  I worship at the shrine of Pearson
5  I still live with my mother and have little personal hygiene
6  Teaching others makes me want to swallow a large bottle of bleach because the pain of my
   burning oesophagus would be light relief in comparison
7  Helping others to understand Sums of Squares is a great feeling
8  I like control conditions
9  I calculate 3 ANOVAs in my head before getting out of bed every morning
10 I could spend all day explaining statistics to people
11 I like it when people tell me I've helped them to understand factor rotation
12 People fall asleep as soon as I open my mouth to speak
13 Designing experiments is fun
14 I'd rather think about appropriate dependent variables than go to the pub
15 I soil my pants with excitement at the mere mention of Factor Analysis
16 Thinking about whether to use repeated or independent measures thrills me
17 I enjoy sitting in the park contemplating whether to use participant observation in my
   next experiment
18 Standing in front of 300 people in no way makes me lose control of my bowels
19 I like to help students
20 Passing on knowledge is the greatest gift you can bestow an individual
21 Thinking about Bonferroni corrections gives me a tingly feeling in my groin
22 I quiver with excitement when thinking about designing my next experiment
23 I often spend my spare time talking to the pigeons ... and even they die of boredom
24 I tried to build myself a time machine so that I could go back to the 1930s and follow
   Fisher around on my hands and knees licking the floor on which he'd just trodden
25 I love teaching
26 I spend lots of time helping students
27 I love teaching because students have to pretend to like me or they'll get bad marks
28 My cat is my only friend

The Teaching of Statistics for Scientific Experiments — Revised (TOSSE-R)
Load the data in the file TOSSE-R.sav
Conduct a factor analysis (with appropriate rotation) to see the factor
structure of the data.
Would you exclude any items on the questionnaire on the basis of
multicollinearity or singularity? (Quote relevant statistics.)
Your Answer:
Is the sample size adequate? Explain your answer quoting any relevant
statistics.
Your Answer:
How many factors should be retained? Explain your answer quoting any
relevant statistics.
Your Answer:
What method of rotation have you used and why?
Your Answer:
Which items load onto which factors? Do these factors make psychological
sense (i.e. can you name them based on the items that load onto them?)
Your Answer:
Unguided Example
Re-run the SAQ analysis using oblique rotation (Use Field, 2005 to help you).
Compare the results to the current analysis. Also, look over Field (2005) and
find out about Factor Scores and how to interpret them.
Multiple Choice Questions
Go to http://www.sagepub.co.uk/field/multiplechoice.html and test yourself
on the multiple choice questions for Chapter 15. If you get any wrong, re-
read this handout (or Field, 2005, Chapter 15) and do them again until you
get them all correct.
This handout is an abridged version of Chapter 15 of Field (2005) and so is copyright
protected.
Field, A. P. (2005). Discovering statistics using SPSS (2nd edition). London: Sage.