This document provides an overview of various multidimensional measurement and factor analysis techniques, including elementary linkage analysis, factor analysis, cluster analysis, multidimensional scaling, structural equation modeling, and multilevel modeling. It discusses the key stages and considerations for conducting factor analysis and interpreting the results, and provides examples of interpreting outputs from SPSS.
Multivariate Data Analysis: Regression, Cluster and Factor Analysis on SPSS (Aditya Banerjee)
Using multiple techniques to analyse data in SPSS, a package that makes it straightforward to run the numbers: multivariate data analysis covers regression models, factor analyses and clustering models, among many others.
Factor analysis is a technique used to reduce a large number of variables to a smaller number of factors. Its basic assumption is that, for a collection of observed variables, there is a smaller set of underlying variables, called factors, that can explain the interrelationships among the observed variables.
2. STRUCTURE OF THE CHAPTER
• Elementary linkage analysis
• Factor analysis
• What to look for in factor analysis output
• Cluster analysis
• Examples of studies using multidimensional scaling and cluster analysis
• Multi-dimensional data: some words on notation
• A note on structural equation modelling
• A note on multilevel modelling
3. ELEMENTARY LINKAGE ANALYSIS
• A way of exploring the relationship between personal constructs, of assessing the dimensionality of the judgements that are made.
• It seeks to identify and define the clusterings of certain variables within a set of variables.
• Like factor analysis, elementary linkage analysis searches for interrelated groups of correlation coefficients. The objective of the search is to identify ‘types’.
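The procedure (McQuitty's elementary linkage analysis) can be sketched in a few lines: each variable is linked to its highest correlate, reciprocal pairs seed the clusters, and the remaining variables join the cluster of their highest correlate. The variable names and correlation matrix below are made up for illustration:

```python
# Elementary linkage analysis: a minimal sketch (pure Python).

def elementary_linkage(names, R):
    """Cluster variables by linking each one to its highest correlate."""
    n = len(names)
    # Index of the highest absolute correlation for each variable (ignore the diagonal)
    best = [max((j for j in range(n) if j != i), key=lambda j: abs(R[i][j]))
            for i in range(n)]
    clusters, assigned = [], {}
    # Reciprocal pairs (i's best is j AND j's best is i) seed the clusters
    for i in range(n):
        j = best[i]
        if best[j] == i and i < j:
            assigned[i] = assigned[j] = len(clusters)
            clusters.append([i, j])
    # Remaining variables join the cluster of their highest correlate
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i not in assigned and best[i] in assigned:
                assigned[i] = assigned[best[i]]
                clusters[assigned[i]].append(i)
                changed = True
    return [[names[i] for i in c] for c in clusters]

# Hypothetical correlations among four test items
names = ["anxiety", "worry", "effort", "persistence"]
R = [[1.00, 0.72, 0.10, 0.15],
     [0.72, 1.00, 0.05, 0.20],
     [0.10, 0.05, 1.00, 0.65],
     [0.15, 0.20, 0.65, 1.00]]
print(elementary_linkage(names, R))
```

With these made-up correlations, two ‘types’ emerge: an anxiety/worry group and an effort/persistence group.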
4. WHAT IS FACTOR ANALYSIS?
• A method of grouping together variables which have something in common.
• It enables the researcher to take a set of variables and reduce them to a smaller number of underlying factors (latent variables) which account for as many variables as possible.
• It detects structures and commonalities in the relationships between variables. Researchers can identify where different variables are in fact addressing the same underlying concept.
• It detects latent (unobservable) factors.
5. WHAT IS FACTOR ANALYSIS?
• Factor analysis can take two main forms:
– Exploratory factor analysis: the use of factor analysis (principal components analysis in particular) to explore previously unknown groupings of variables, to seek underlying patterns, clusterings and groups.
– Confirmatory factor analysis is more stringent, testing a found set of factors against a hypothesized model of groupings and relationships.
6. STAGE ONE IN FACTOR ANALYSIS
1. Check that the data are suitable for factor analysis:
(a) Sample size (recommendations vary in the literature, from a minimum of 30 to a minimum of 300); if the sample size is small then the factor loadings should be high for variables to be included;
(b) Number of variables;
(c) Ratio of sample size to number of variables (different ratios are given in the literature, from 5:1 to 30:1);
(d) Strength of intercorrelations should be no less than .3;
(e) Bartlett’s test of sphericity should be statistically significant (p < .05);
(f) The Kaiser-Meyer-Olkin measure of sampling adequacy should be .6 or higher (the maximum is 1).
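Checks (e) and (f) can be computed directly from a correlation matrix. Below is a minimal numpy sketch, not SPSS output: the 4×4 matrix is the selection shown later in the transcript (slide 17), and n = 300 is an assumed sample size, so the numbers will not match the slides' full nine-variable analysis.

```python
import numpy as np

def bartlett_sphericity(R, n):
    """Chi-square statistic and df for Bartlett's test of sphericity
    (n = sample size, p = number of variables)."""
    p = R.shape[0]
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, df

def kmo(R):
    """Kaiser-Meyer-Olkin measure of sampling adequacy (0 to 1; want >= .6)."""
    inv = np.linalg.inv(R)
    # Anti-image (partial) correlations come from the inverse correlation matrix
    d = np.sqrt(np.diag(inv))
    partial = -inv / np.outer(d, d)
    off = ~np.eye(R.shape[0], dtype=bool)          # mask for off-diagonal cells
    r2, a2 = (R[off] ** 2).sum(), (partial[off] ** 2).sum()
    return r2 / (r2 + a2)

R = np.array([[1.000, 0.554, 0.507, 0.461],
              [0.554, 1.000, 0.580, 0.518],
              [0.507, 0.580, 1.000, 0.646],
              [0.461, 0.518, 0.646, 1.000]])      # selection from slide 17
chi2, df = bartlett_sphericity(R, n=300)          # n = 300 is assumed
print(round(kmo(R), 3), round(chi2, 1), df)
```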
7. STAGE TWO IN FACTOR ANALYSIS
2. Decide which form of extraction method to use:
(a) Principal components analysis is widely used;
(b) Set the Kaiser criterion (Eigenvalues set at greater than 1); the Eigenvalue of a factor indicates the amount of the total variance explained by that factor – if it is less than 1.00 then it does not have any additional explanatory value and should be ignored (SPSS does this automatically);
(c) Unrotated factor solution to be set;
(d) Scree plot to be set.
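The Kaiser criterion is easy to sketch: take the eigenvalues of the correlation matrix and keep components with an eigenvalue above 1, which is what SPSS does automatically. The matrix below is only the four-variable selection from slide 17, so just one component passes the criterion here, unlike the full nine-variable analysis in the slides.

```python
import numpy as np

R = np.array([[1.000, 0.554, 0.507, 0.461],
              [0.554, 1.000, 0.580, 0.518],
              [0.507, 0.580, 1.000, 0.646],
              [0.461, 0.518, 0.646, 1.000]])

eigvals = np.linalg.eigvalsh(R)[::-1]        # eigenvalues, largest first
retained = eigvals[eigvals > 1.0]            # Kaiser criterion
explained = eigvals / eigvals.sum() * 100    # % of total variance per component
print(retained.size, np.round(explained, 1))
```

Plotting `eigvals` against component number gives exactly the scree plot shown on the next slide.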
8. SCREE PLOT IN SPSS
[Scree plot: Eigenvalue (y-axis, 0 to 10) plotted against component number (x-axis, 1 to 23).]
9. STAGE THREE IN FACTOR ANALYSIS
3. Conduct the factor rotation:
(a) Decide which of the two main approaches to use:
i. Oblique (related factors): Direct Oblimin;
ii. Orthogonal (unrelated factors): Varimax;
(b) People often use the varimax solution when it should not be used, as it is sometimes easier to use than other kinds;
(c) Check that the rotated solution is set.
10. ROTATION
Rotation keeps together those items that are closely related and separates them clearly from other items, i.e. it includes and excludes (keeps together a group of homogeneous items and keeps them apart from other groups).
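For the orthogonal case, the varimax rotation itself is short enough to sketch. This is a minimal numpy version of the standard SVD-based varimax algorithm, applied to a hypothetical unrotated loading matrix (the values are made up; SPSS's implementation differs in details such as Kaiser normalization):

```python
import numpy as np

def varimax(Phi, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonally rotate loading matrix Phi towards simple structure."""
    p, k = Phi.shape
    R = np.eye(k)                     # accumulated rotation matrix
    d = 0.0
    for _ in range(max_iter):
        L = Phi @ R
        # Gradient of the varimax criterion, solved via SVD (standard algorithm)
        B = Phi.T @ (L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0)))
        u, s, vh = np.linalg.svd(B)
        R = u @ vh
        d_old, d = d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return Phi @ R

# Hypothetical unrotated loadings: two pairs of variables at roughly 45 degrees
L = np.array([[0.70,  0.50],
              [0.65,  0.55],
              [0.70, -0.50],
              [0.60, -0.55]])
rotated = varimax(L)
print(np.round(rotated, 2))
```

After rotation each variable loads highly on just one component, while the communalities (row sums of squared loadings) are unchanged, because the rotation is orthogonal.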
11. EXAMPLE OF FACTOR ANALYSIS USING SPSS
• Factor analysis for an oblique rotation.
• Direct Oblimin rotation.
12. Analyze → Dimension Reduction → Factor → Move the variables to be included to the ‘Variables’ box →
13. Click on ‘Descriptives’ → Click on ‘KMO and Bartlett’s test of sphericity’ → Click on ‘Coefficients’ → Click ‘Continue’ →
14. Click on ‘Extraction’ → Click on ‘Principal components’ → Click on ‘Correlation matrix’ → Click on ‘Unrotated factor solution’ → Click on ‘Scree plot’ → Click on ‘Based on Eigenvalue’ → Click ‘Continue’ →
15. Click on ‘Rotation’ → Click on ‘Direct Oblimin’ or ‘Varimax’ (depending on whether the rotation is oblique or orthogonal) → Click ‘Continue’ → return to the main screen and click ‘OK’
16. ANALYSIS OF THE EXAMPLE FROM SPSS
• SPSS produces many tables for factor analysis. Be selective, but fair to the data.
17. Check correlation coefficients (most should be over .3) (selection only reproduced here, not the full table)

Correlation                                  Strain of    Emotionally  Job hardening  Frustration
                                             working with drained by   you            felt in
                                             colleagues   your work    emotionally    your job
How much do you feel that working with
colleagues all day is really a strain
for you?                                       1.000         .554          .507          .461
How much do you feel emotionally drained
by your work?                                   .554        1.000          .580          .518
How much do you worry that your job is
hardening you emotionally?                      .507         .580         1.000          .646
How much frustration do you feel in
your job?                                       .461         .518          .646         1.000
18. SUITABILITY FOR FACTOR ANALYSIS

KMO and Bartlett's Test
Kaiser-Meyer-Olkin Measure of Sampling Adequacy:  .845
Bartlett's Test of Sphericity:  Approx. Chi-Square 5460.475;  df 36;  Sig. .000

KMO > .6
Bartlett’s test Sig.: p < .05
∴ The data are suitable for factor analysis
19. How much of each item's variance is explained (lower than .3 and the item is a poor fit)

Communalities                                                      Initial   Extraction
How hard do you feel you are working in your job?                   1.000       .779
How much do you feel exhausted by the end of the workday?           1.000       .818
How much do you feel that you cannot cope with your job
any longer?                                                         1.000       .578
How much do you feel that you treat colleagues as impersonal
objects?                                                            1.000       .578
How much do you feel that working with colleagues all day is
really a strain for you?                                            1.000       .602
How much do you feel emotionally drained by your work?              1.000       .629
How tired do you feel in the morning, having to face another
school day?                                                         1.000       .595
How much do you worry that your job is hardening you
emotionally?                                                        1.000       .661
How much frustration do you feel in your job?                       1.000       .595
Extraction Method: Principal Component Analysis.
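For an orthogonal solution, the ‘Extraction’ communality of an item is simply the sum of its squared loadings on the retained components (under an oblique rotation such as Direct Oblimin this simple sum no longer applies). A tiny sketch with illustrative loadings on two components:

```python
# Communality sketch: illustrative loadings of one item on two orthogonal
# components (hypothetical values, not taken from an oblique solution).
loadings = [0.774, 0.096]
communality = sum(l ** 2 for l in loadings)
print(round(communality, 3))  # 0.608
```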
20. Two factors found: factor one explains 45.985 per cent of the total variance; factor two explains 18.851 per cent of the total variance.

Total Variance Explained
            Initial Eigenvalues                   Extraction Sums of Squared Loadings   Rotation Sums of Squared
Component   Total   % of Variance  Cumulative %   Total   % of Variance  Cumulative %   Loadings(a): Total
1           4.139   45.985          45.985        4.139   45.985          45.985        4.028
2           1.697   18.851          64.836        1.697   18.851          64.836        1.991
3            .661    7.342          72.178
4            .542    6.023          78.202
5            .531    5.900          84.102
6            .451    5.006          89.107
7            .395    4.390          93.497
8            .323    3.593          97.090
9            .262    2.910         100.000
Extraction Method: Principal Component Analysis.
a. When components are correlated, sums of squared loadings cannot be added to obtain a total variance.
21. Pattern Matrix(a)
                                                                      Component
                                                                      1        2
How hard do you feel you are working in your job?                     .005     .882
How much do you feel exhausted by the end of the workday?             .252     .834
How much do you feel that you cannot cope with your job any longer?   .691     .234
How much do you feel that you treat colleagues as impersonal
objects?                                                              .674    -.459
How much do you feel that working with colleagues all day is
really a strain for you?                                              .782    -.158
How much do you feel emotionally drained by your work?                .774     .096
How tired do you feel in the morning, having to face another
school day?                                                           .697     .247
How much do you worry that your job is hardening you emotionally?     .814    -.008
How much frustration do you feel in your job?                         .752     .097
Extraction Method: Principal Component Analysis.
Rotation Method: Oblimin with Kaiser Normalization.
a. Rotation converged in 6 iterations.

Decide the cut-off points and which variables to include.
22. WHICH VARIABLES TO INCLUDE IN A FACTOR
For each factor:
1. Include the highest scoring variables;
2. Omit the low scoring variables;
3. Look for where there is a clear scoring distance between those included and those excluded;
4. Review your selection to check that no lower scoring variables have been excluded which are conceptually close to those included;
5. Review your selection to check whether some higher scoring variables should be excluded if they are not sufficiently conceptually close to the others that have been included;
6. Review your final selection to see that the variables are conceptually similar.
N.B. Inclusion and exclusion are an art, not a science; there is no simple formula, so you have to use your judgement.
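The judgement above can still be supported by a simple first pass: assign each variable to the component on which it loads most highly, dropping it if that loading falls below a chosen cut-off (.4 here, an assumed choice). The loadings are those of the pattern matrix in slide 21, with shortened item names for readability:

```python
# First-pass variable assignment by loading cut-off (pure Python).
pattern = {
    "working hard":        (0.005, 0.882),
    "exhausted":           (0.252, 0.834),
    "cannot cope":         (0.691, 0.234),
    "impersonal objects":  (0.674, -0.459),
    "strain":              (0.782, -0.158),
    "emotionally drained": (0.774, 0.096),
    "tired in morning":    (0.697, 0.247),
    "hardening":           (0.814, -0.008),
    "frustration":         (0.752, 0.097),
}
CUTOFF = 0.4                              # assumed cut-off; a judgement call
factors = {1: [], 2: []}
for item, loads in pattern.items():
    best = max((1, 2), key=lambda f: abs(loads[f - 1]))   # highest |loading|
    if abs(loads[best - 1]) >= CUTOFF:
        factors[best].append(item)
print(factors)
```

This reproduces the two-factor split in the slides (seven items on factor one, two on factor two), but the conceptual review in steps 4 to 6 above still has to be done by hand.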
23. WHAT TO REPORT
1. Method of factor analysis used (Principal
components; Direct Oblimin); KMO and Bartlett test
of sphericity; Eigenvalues greater than 1; scree
test; rotated solution).
2. How many factors were extracted with Eigenvalues
greater than 1.
3. How many factors were included as a result of the
scree test.
4. Give a name/title to each of the factors.
5. Indicate how much of the total variance was
explained by each factor.
6. Report the cut-off point for the variables included in
each factor.
7. Indicate the factor loadings of each variable in the
factor.
8. What the results tell us.
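Report item 5 (variance explained per factor) follows directly from the eigenvalues. A brief sketch, using invented eigenvalues for nine standardized items (so total variance = 9), not values from the slides:

```python
# Proportion of total variance explained by each retained factor.
# With principal components on standardized variables, total variance equals
# the number of variables. These eigenvalues are illustrative only.
eigenvalues = [4.1, 1.6, 0.8, 0.7, 0.6, 0.5, 0.3, 0.25, 0.15]

n_variables = len(eigenvalues)                       # total variance = 9.0
retained = [ev for ev in eigenvalues if ev > 1.0]    # Kaiser criterion
pct = [100 * ev / n_variables for ev in retained]

for i, (ev, p) in enumerate(zip(retained, pct), 1):
    print(f"Factor {i}: eigenvalue {ev:.2f}, {p:.1f}% of variance")
print(f"Cumulative variance explained: {sum(pct):.1f}%")
```

This gives the figures for report items 2 and 5 in one pass; the scree test (item 3) is read off a plot of the same eigenvalues.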
24. CLUSTER ANALYSIS
• Factor analysis and elementary linkage
analysis enable the researcher to group
together factors and variables, but cluster
analysis enables the researcher to group
together similar, homogeneous
sub-samples of people.
• SPSS produces a dendrogram that clusters
people into groups.
26. INTERPRETING THE DENDROGRAM
• There are two main clusters:
– Cluster One: Persons 19, 20, 2, 13, 15, 9,
11, 18, 14, 16, 1, 10, 12, 5, 17
– Cluster Two: Persons 7, 8, 4, 3, 6
• If one wishes to have smaller clusters, then
three clusters can be identified:
– Cluster One: Persons 19, 20, 2, 13, 15, 9,
11, 18
– Cluster Two: Persons 14, 16, 1, 10, 12, 5,
17
– Cluster Three: Persons 7, 8, 4, 3, 6
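The dendrogram readings above (two clusters at the highest split, three at a lower cut) correspond to cutting an agglomerative clustering tree at different heights. A hedged sketch using scipy rather than SPSS, on invented two-dimensional scores:

```python
# A minimal sketch of the agglomerative clustering behind a dendrogram.
# The data are simulated; person numbering is illustrative, not the slides'.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=0.5, size=(15, 2))  # e.g. persons 1-15
group_b = rng.normal(loc=5.0, scale=0.5, size=(5, 2))   # e.g. persons 16-20
scores = np.vstack([group_a, group_b])

# Ward linkage is a common default; SPSS offers several linkage methods.
tree = linkage(scores, method="ward")

# Cutting the tree into two clusters mirrors reading the dendrogram at the
# highest split; asking for three clusters mirrors cutting lower down.
labels_2 = fcluster(tree, t=2, criterion="maxclust")
labels_3 = fcluster(tree, t=3, criterion="maxclust")
print(sorted(set(labels_2)), sorted(set(labels_3)))
```

The choice of how many clusters to keep is, like the factor cut-off, a judgement call informed by the dendrogram's structure.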
27. STRUCTURAL EQUATION MODELLING
• The name given to a group of techniques that
enable researchers to construct models of
putative causal relations, and to test those
models against data.
• It is designed to enable researchers to
confirm, modify and test their models of
causal relations between variables.
• It is based on multiple regression and factor
analysis.
28. STRUCTURAL EQUATION MODELLING
• It works with both observed variables and
unobserved (latent) variables, combining the
latent factors of factor analysis with regression
paths between variables.
• It is a particular kind of multiple regression
analysis that enables the researcher to see the
relative weightings of observed independent
variables on each other and on a dependent
variable, to establish pathways of causation,
and to determine the direct and indirect effects
of independent variables on a dependent
variable.
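The regression machinery underlying such a path model can be sketched without AMOS. This is a hedged illustration on simulated data: AMOS estimates all paths simultaneously, whereas the code below fits a simplified *recursive* version of the model (no reciprocal paths) as a chain of ordinary least-squares regressions; all variable names and coefficients are invented.

```python
# A sketch of path analysis: direct and indirect effects via chained OLS.
# Simulated data; the true path weights used to generate it are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
ses = rng.normal(size=n)                             # socio-economic status
ptw = -0.3 * ses + rng.normal(size=n)                # part-time work
motiv = 0.5 * ses - 0.4 * ptw + rng.normal(size=n)   # motivation for study
degree = 0.2 * ses + 0.1 * motiv - 0.05 * ptw + rng.normal(size=n)

def ols(y, *xs):
    """Least-squares slopes of y on the predictors xs (plus an intercept)."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

b_ses_ptw, = ols(ptw, ses)                        # SES -> part-time work
b_ses_mot, b_ptw_mot = ols(motiv, ses, ptw)       # paths into motivation
b_ses_deg, b_mot_deg, b_ptw_deg = ols(degree, ses, motiv, ptw)

# An indirect effect is the product of the path weights along its route.
indirect_via_motivation = b_ses_mot * b_mot_deg   # SES -> motiv -> degree
print(round(b_ses_deg, 2), round(indirect_via_motivation, 2))
```

The recovered slopes sit close to the generating weights, and the product rule for indirect effects is the same one used when reading an AMOS diagram.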
29. A CAUSAL MODEL (USING AMOS WITH SPSS)
[Path diagram linking four variables: socio-economic
status, part-time work, level of motivation for academic
study, and class of degree; e1–e3 are error terms.]
30. THE CAUSAL MODEL WITH CALCULATIONS ADDED
[Path diagram with standardized path coefficients:]
Socio-economic status → class of degree: .18
Socio-economic status → level of motivation for academic study: .52
Socio-economic status → part-time work: -.21
Part-time work → level of motivation for academic study: 1.37
Level of motivation for academic study → part-time work: -1.45
Level of motivation for academic study → class of degree: .04
Part-time work → class of degree: -.01
(e1–e3 are error terms.)
31. INTERPRETING THE CAUSAL MODEL
– ‘Socio-economic status’ exerts a direct powerful
influence on ‘class of degree’ (.18), which is higher
than the direct influence of either ‘part-time work’
(-.01) or ‘level of motivation for academic study’ (.04);
– ‘Socio-economic status’ exerts a powerful direct
influence on ‘level of motivation for academic study’
(.52), which is higher than the influence of
‘socio-economic status’ on ‘class of degree’ (.18);
– ‘Socio-economic status’ exerts a powerful direct and
negative influence on ‘part-time work’ (–.21), i.e. the
higher the socio-economic status, the lesser is the
amount of part-time work undertaken;
32. INTERPRETING THE CAUSAL MODEL
– ‘Part-time work’ exerts a powerful direct influence on
‘level of motivation for academic study’ (1.37), and this is
higher than the influence of ‘socio-economic status’ on
‘level of motivation for academic study’ (.52);
– ‘Level of motivation for academic study’ exerts a
powerful negative direct influence on ‘part-time work’
(-1.45), i.e. the higher the level of motivation for
academic study, the lesser the amount of part-time
work undertaken;
– ‘Level of motivation for academic study’ exerts a slightly
more powerful influence on ‘class of degree’ (.04) than
does ‘part-time work’ (–.01);
– ‘Part-time work’ exerts a negative influence on the class
of degree (–.01), i.e. the more one works part-time, the
lower is the class of degree obtained.
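The direct and indirect effects read off the diagram combine by multiplying coefficients along a path and adding across paths. The reciprocal paths between part-time work and motivation make the full model nonrecursive, so this sketch traces only the simple one-step indirect routes from socio-economic status to class of degree, using the coefficients quoted above:

```python
# Combining the slide's path coefficients: indirect effect = product of the
# weights along the route; effects across routes add. One-step routes only,
# since the part-time work <-> motivation loop makes the model nonrecursive.
direct = 0.18                   # SES -> degree
via_motivation = 0.52 * 0.04    # SES -> motivation -> degree
via_part_time = -0.21 * -0.01   # SES -> part-time work -> degree

total = direct + via_motivation + via_part_time
print(f"indirect via motivation: {via_motivation:.4f}")
print(f"indirect via part-time work: {via_part_time:.4f}")
print(f"total (direct + simple indirect): {total:.4f}")
```

Both indirect routes are small relative to the direct .18 path, which is why the interpretation above singles out socio-economic status as the dominant influence on class of degree.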
33. A STRUCTURAL EQUATION
MODEL (USING AMOS IN SPSS)
Ovals = factors
Rectangles = variables
for each factor
E = Error factor
34. A NOTE ON MULTILEVEL MODELLING
• Data and variables exist at individual and group
levels, e.g.:
– between students over all groups
– between groups
– between students within groups
– individual
– group
– class
– school
– local
– regional
– national
– international
35. A NOTE ON MULTILEVEL MODELLING
• Data are ‘nested’, i.e. individual-level data are nested
within group, class, school, regional etc. levels.
• A dependent variable is affected by independent
variables at different levels, i.e. data are hierarchical.
• Multilevel modelling extends regression analysis to
such hierarchical data, estimating effects at each
level simultaneously.
• Multilevel modelling enables the researcher to
calculate the relative impact on a dependent variable
of one or more independent variables at each level of
the hierarchy, and thereby to identify the factors at
each level that account for that level's share of the
variation.
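The starting point of such an analysis is asking how much of the outcome's variance lies between groups rather than within them: the intraclass correlation (ICC). A minimal sketch on simulated pupil scores nested in schools; all names and numbers are invented:

```python
# Variance decomposition behind multilevel modelling: the ANOVA-style
# intraclass correlation (ICC) estimates the between-school share of a
# pupil-level outcome's variance. Simulated, balanced data.
import random

random.seed(0)
n_schools, pupils_per_school = 30, 25
scores, school_means = [], []
for _ in range(n_schools):
    school_effect = random.gauss(0, 2)             # between-school variation
    pupils = [50 + school_effect + random.gauss(0, 4)
              for _ in range(pupils_per_school)]   # within-school variation
    scores.append(pupils)
    school_means.append(sum(pupils) / len(pupils))

grand_mean = sum(school_means) / n_schools
between = sum((m - grand_mean) ** 2 for m in school_means) / (n_schools - 1)
within = sum((x - m) ** 2 for pupils, m in zip(scores, school_means)
             for x in pupils) / (n_schools * (pupils_per_school - 1))

# One-way ANOVA estimator: subtract the within-school noise that leaks
# into the variance of the school means.
sigma_b = max(between - within / pupils_per_school, 0.0)
icc = sigma_b / (sigma_b + within)
print(f"estimated ICC = {icc:.2f}")  # generating values imply 4/(4+16) = 0.20
```

A non-trivial ICC is the usual signal that a single-level regression would mis-state standard errors and that a multilevel model is warranted.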