Advanced Scaling and Factor Analysis
Edgardo Donovan
RES 610 – Dr. Joshua Shackman
Module 2 – Case Analysis
Monday, August 8, 2011
Rather than build upon existing research involving learner satisfaction, Wang attempted to delineate new constructs, treating electronic learning as a distinct discipline that requires a new analytical model with related hypotheses. Arguably, his work is more valuable as a textbook attempt to conduct exploratory and confirmatory factor analysis than as a seminal work on the dynamics of learner and electronic-learner satisfaction. When it is necessary to delineate new constructs involving human behavior and attitudes, rather than to seek single variables, exploratory and confirmatory factor analysis can be very useful. In studies measuring electronic learning system satisfaction (Wang 75), attempts are made to validate the aggregation of specific measurements in specific ways in order to construct more useful complex measures. The challenge normally associated with this procedure is the difficulty of gauging data-to-framework fit. Due primarily to the simplicity of the hypothetical model and its obvious correlations, Wang's work achieves a good data-to-framework fit but does not deliver compelling new knowledge to the learning studies field that would justify the employment of new constructs in the first place.

Wang contends that, by virtue of its different delivery and interaction method, measuring electronic learning satisfaction cannot incorporate past constructs. Traditionally, both students' evaluation of teaching effectiveness (SETE) and user satisfaction (US) scales have been used to assess teaching quality or user satisfaction with information systems (IS). However, measures of US and SETE developed for the organizational IS or classroom teaching context may no longer be appropriate for the e-learning context, because the role of an e-learner is different from that of a traditional end
user or student (Wang 76). To assess the extent and specific nature of e-learner satisfaction, different dimensions of ELS must be theoretically and operationally defined (Wang 76).

Research often requires formulating hypotheses that relate multiple independent (predictor) variables to a dependent (criterion) variable, with the goal of showing a positive or negative, ideally causal, correlational relationship between them. In this case, however, Wang requires exploratory factor analysis to reduce a large number of variables to a smaller number of factors for modeling purposes, because the large number of variables precludes modeling all the measures individually. This is common in the social sciences, where researchers use factor analysis in an exploratory fashion on survey data to categorize questions into constructs representative of certain behavioral phenomena.

It is a researcher's duty to bring new knowledge to the academic and practitioner community by observing phenomena in a chosen specialty area, relating them to previous theoretical literature, and isolating them into as few categories as possible, so as to have the best chance of operationalizing them into constructs during the exploratory phase and possibly into operational variables in the confirmatory process. Constructs are latent variables: latent in the sense that they cannot be measured directly, but only through measurable indicator variables. While there are many ways to combine indicators to achieve a measure of a construct, all methods assume that what is being measured is a single entity, even if it is an abstraction like "efficiency" or "happiness" (Garson 1).
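The variable-reduction step described above can be illustrated concretely. The sketch below is not Wang's analysis; it runs exploratory factor analysis with scikit-learn on invented survey responses, assuming two latent factors drive eight observed items.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Invented stand-in for survey data: 200 respondents answering 8 items,
# generated from 2 latent factors (dimensions are illustrative, not Wang's).
n_respondents, n_items, n_factors = 200, 8, 2
latent = rng.normal(size=(n_respondents, n_factors))
loadings = rng.normal(size=(n_factors, n_items))
responses = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))

# Exploratory factor analysis: reduce the 8 observed items to 2 factors.
fa = FactorAnalysis(n_components=n_factors, random_state=0)
scores = fa.fit_transform(responses)

print(scores.shape)          # (200, 2): one score per respondent per factor
print(fa.components_.shape)  # (2, 8): loading of each item on each factor
```

The fitted loading matrix plays the role of the construct definitions: items that load heavily on the same factor are treated as indicators of one latent variable.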
During the literature review portion of his research, Wang must have surmised that existing constructs related to learning satisfaction could not be applied effectively in some form of experimental or quasi-experimental research involving electronic learning satisfaction.
Because perceived service quality and customer satisfaction are now regarded as two distinct constructs, they were measured using different instruments (Wang 76). To ensure that important aspects of satisfaction were not omitted, Wang chose to employ exploratory and, later, confirmatory factor analysis to articulate phenomena involving electronic learning satisfaction. The first steps of this process involved experience surveys and personal interviews on e-learning satisfaction with 2 professionals, 4 college teachers, and 10 e-learners (Wang 78). This is very ambitious: regardless of how well the gathered data fit the eventual model and hypotheses, this is a very small and targeted data source from which to derive new constructs intended to apply universally, not only to electronic learning satisfaction but eventually to learning as a whole, as electronic technology becomes ever more ubiquitous and transparent to users and blurs the line between classroom and electronic learning. An exploratory ELS instrument comprising 26 items was used with a seven-point Likert-type scale, with anchors ranging from "strongly disagree" to "strongly agree" (Wang 78). Scales are ordinal indexes that are thought to measure a latent variable. If scales are defined as sets of items that stand in an ordinal relationship to each other, then Guttman and Mokken scales meet this test of ordinality between items; Likert scales do not meet this narrower definition (Garson 1). This is problematic because we have no way of knowing how the survey questions were formulated, and the respondent pool is already very limited and possibly too small or too narrowly focused.
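The ordinality point can be made concrete with a minimal sketch. Only the endpoint anchors ("strongly disagree", "strongly agree") appear in the source; the intermediate labels and the responses below are assumed for illustration.

```python
from statistics import median

# Seven-point Likert coding; intermediate anchor labels are assumed,
# not taken from Wang's instrument.
ANCHORS = {
    "strongly disagree": 1,
    "disagree": 2,
    "somewhat disagree": 3,
    "neutral": 4,
    "somewhat agree": 5,
    "agree": 6,
    "strongly agree": 7,
}

# Invented responses to a single survey item.
responses = ["agree", "strongly agree", "neutral", "agree", "somewhat agree"]
codes = [ANCHORS[r] for r in responses]

# Because Likert codes are only ordinal, the median is the safer summary;
# taking a mean implicitly treats the codes as interval data.
print(median(codes))  # 6
```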
Coupled with the fact that it is very easy to construct survey questions badly, for example by making multiple responses to the same question correct or by loading multiple questions into one, this makes Wang's attempt to fashion a new paradigm for electronic learning and learning less credible.
H1. A positive relationship exists between ELS score and the reuse intention of the e-learning system.

H2. A negative relationship exists between ELS score and the extent of post-usage complaint behavior.

Figure 1. Electronic Learning Model with Related Hypotheses (Wang 77)

One of the strong points of Wang's study is that he constructs a very simple model with quasi-obvious hypotheses. A total of 17 variables are constructed and measured through a survey, streaming at a ratio of roughly 4:1 into four major grouping variables: personalization, learner community, user interface, and content. The hypotheses essentially state that if the user experience in these areas is positive, it will lead to a positive ELS score, which in turn leads to a higher probability that the system will be reused and to fewer complaints from past users. It
can be questioned whether these hypotheses actually bring us any new knowledge. Support from the learner community, course-structure personalization, and engaging content, with the arguable exception of system usability, have always been the positively correlated factors that learners cite when expressing satisfaction with a structured learning experience.

Wang's work is of value independently of the quality of its contributions to learning studies, because it shows a methodical attempt to carry out confirmatory factor analysis after the exploratory phase led to a new model with associated hypotheses. Confirmatory factor analysis can establish that multiple tests measure the same factor, thereby justifying the administration of fewer tests. It can be used to validate a scale or index by demonstrating that its constituent items load on the same factor, and to drop proposed scale items that cross-load on more than one factor (Garson 1). In confirmatory factor analysis, researchers seek to determine whether the number of factors and the loadings of measured (indicator) variables on them conform to what is expected on the basis of pre-established theory (Garson 1). In Wang's study, the first step in confirmatory factor analysis was purifying the instrument by calculating the coefficient alpha and the item-to-total correlations that would be used to delete garbage items (Wang 78). The results suggested that the intercorrelation matrix contained sufficient common variance to make factor analysis worthwhile.
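The purification step can be sketched numerically. The implementation below uses the standard textbook formulas for coefficient (Cronbach's) alpha and corrected item-total correlations; the data are invented, with one deliberately uncorrelated "garbage" item, and are not Wang's.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total of the remaining items."""
    totals = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], totals - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

rng = np.random.default_rng(1)
# Synthetic scale: 5 coherent items plus 1 pure-noise "garbage" item.
signal = rng.normal(size=(300, 1))
good = signal + rng.normal(scale=0.7, size=(300, 5))
garbage = rng.normal(size=(300, 1))
items = np.hstack([good, garbage])

alpha = cronbach_alpha(items)
r_it = corrected_item_total(items)
# The garbage item shows a far lower item-total correlation,
# flagging it for deletion during instrument purification.
print(round(alpha, 2), r_it.round(2))
```

Deleting the flagged item and recomputing alpha would complete one purification pass.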
To improve the unidimensionality, convergent validity, and discriminant validity of the instrument through exploratory factor analysis, four commonly employed decision rules were applied to identify the factors underlying the ELS construct: using a minimum eigenvalue of 1 as the cutoff value for extraction; deleting items with factor loadings less than 0.5 on all factors or greater than 0.5 on two or more factors; requiring a simple factor structure; and excluding single-item factors on grounds of parsimony (Wang 79). A large class of omnibus
tests exists for assessing how well a model matches observed data. The χ² statistic is a classic goodness-of-fit measure for determining overall model fit. The null hypothesis is that the implied or predicted covariance matrix Σ is equivalent to the observed sample covariance matrix S, that is, Σ = S. A large χ² and rejection of the null hypothesis mean that the model estimates do not sufficiently reproduce the sample covariance; the model does not fit the data well. By contrast, a small χ² and failure to reject the null hypothesis are signs of good model fit (Albright 6).

Although factor analysis can be one of several useful tools in academic research, it cannot be relied upon by itself to produce meaningful research, however well it is applied, given its inherent limitations. Factor analysis was invented nearly 100 years ago by the psychologist Charles Spearman, who hypothesized that the enormous variety of tests of mental ability (measures of mathematical skill, vocabulary, other verbal skills, artistic skills, logical reasoning ability, and so on) could all be explained by one underlying "factor" of general intelligence that he called g. It was an interesting idea, but it turned out to be wrong. Suppose each of 500 people, all familiar with different kinds of automobiles, rates each of 20 automobile models on the question, "How much would you like to own that kind of automobile?" We could usefully ask about the number of dimensions on which the ratings differ. A one-factor theory would posit that people simply give the highest ratings to the most expensive models. A two-factor theory would posit that some people are most attracted to sporty models while others are most attracted to luxurious models. Three-factor and four-factor theories might add safety and reliability. Instead of automobiles, one might study attitudes concerning foods, political policies, political candidates, or many other kinds of objects (Darlington 1).
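The χ² comparison of Σ and S can be sketched with the standard maximum-likelihood fit function used in structural equation modeling, χ² = (N − 1)·F, where F = ln|Σ| − ln|S| + tr(SΣ⁻¹) − p and p is the number of observed variables. The data below are invented, and the identity matrix stands in for a misspecified model.

```python
import numpy as np

def chi_square_fit(S: np.ndarray, Sigma: np.ndarray, n_obs: int) -> float:
    """Likelihood-ratio chi-square comparing the sample covariance S with
    the model-implied covariance Sigma (standard ML fit function in SEM)."""
    p = S.shape[0]
    _, logdet_S = np.linalg.slogdet(S)
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    F = logdet_Sigma - logdet_S + np.trace(S @ np.linalg.inv(Sigma)) - p
    return (n_obs - 1) * F

rng = np.random.default_rng(3)
data = rng.normal(size=(300, 4))
S = np.cov(data, rowvar=False)

# A "model" that reproduces S exactly yields a chi-square of (numerically) zero,
# i.e. perfect fit; any other implied covariance yields a positive chi-square.
print(abs(chi_square_fit(S, S, 300)) < 1e-6)      # True
print(chi_square_fit(S, np.eye(4), 300) > 0)      # True
```

In practice the statistic is referred to a χ² distribution whose degrees of freedom depend on the number of free model parameters; this sketch shows only the fit function itself.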
In conclusion, rather than build upon existing research involving learner satisfaction, Wang attempted to delineate new constructs, treating electronic learning as a distinct discipline requiring a new analytical model with related hypotheses. His work is arguably more valuable as a textbook attempt to conduct exploratory and confirmatory factor analysis than as a seminal work on the dynamics of learner and electronic-learner satisfaction. Exploratory and confirmatory factor analysis can be very useful when it is necessary to delineate new constructs involving human behavior and attitudes rather than single variables, and studies measuring electronic learning system satisfaction (Wang 75) attempt to validate the aggregation of specific measurements in specific ways into more useful complex measures. The challenge normally associated with this procedure is the difficulty of gauging data-to-framework fit. Due primarily to the simplicity of the hypothetical model and its obvious correlations, Wang's work achieves a good data-to-framework fit, but it does not deliver compelling new knowledge to the learning studies field that would justify the employment of new constructs in the first place.
Bibliography

Albright, Jeremy (2006). Confirmatory factor analysis using Amos, LISREL, Mplus, and SAS/STAT CALIS. Indiana University.

Author Unknown (1982). Review of Corporate Crime by Marshall B. Clinard and Peter C. Yeager. Survey of Books Relating to the Law, Michigan Law Review, 80(4), 978-980.

Darlington, R. B. (n.d.). Factor analysis. Cornell University. Retrieved March 2, 2008, from http://www.psych.cornell.edu/Darlington/factor.htm

Garson, D. (2008). Cluster analysis. StatNotes. North Carolina State University. Retrieved March 2, 2008, from http://www2.chass.ncsu.edu/garson/PA765/cluster.htm

Garson, D. (2008). Factor analysis. StatNotes. North Carolina State University. Retrieved March 2, 2008, from http://www2.chass.ncsu.edu/garson/PA765/factor.htm

Garson, D. (2008). Scales and standard measures. StatNotes. North Carolina State University. Retrieved March 2, 2008, from http://www2.chass.ncsu.edu/garson/PA765/standard.htm

Wang, YI (2003). Assessment of learner satisfaction with asynchronous electronic learning systems. Information and Management, 41, 75-86. Retrieved May 16, 2010, from http://www.sciencedirect.com.lb-proxy6.touro.edu/science?_ob=MImg&_imagekey=B6VD0-48CFV9D-1-10&_cdi=5968&_user=3546441&_pii=S0378720603000284&_orig=browse&_coverDate=10%2F31%2F2003&_sk=999589998&view=c&wchp=dGLbVzW-zSkWb&md5=9cc1f6d91f63a15b5c6c1e0942b8c69c&ie=/sdarticle.pdf

Williams, Larry (2008). Measurement models for linking latent variables and indicators: A review of alternatives for organizational researchers. Virginia Commonwealth University.