Measurement and Reliability Test (updated in March 2011)

Measurement, Reliability and Case Study: one of a series of six presentations introducing scientific research in cross-cultural areas using a quantitative approach.

Presentation Transcript

  • Quantitative Research Methodologies (3/6): Reliability and Measurement. Prof. Dr. Hora Tjitra & Dr. He Quan, www.SinauOnline.com
  • The levels of measurement. Nominal Scale: a naming scale. Ordinal Scale: a scale indicating rank ordering. Equal Interval Scale: a scale on which the distance between attributes has meaning. Ratio Scale: a scale that fits the number system very well.
  • Nominal Scales • Adjacent values have no inherent relationship • Nominal scales name values on a scale – sex, eye color, region of country, teacher ID • Numbers are often substituted for names – sex: male=1, female=2 • Mathematical operations cannot be performed • Frequency is the only calculation possible
  • Ordinal Scales • Order things from lowest to highest, best to worst, etc. • first, second, third; good, better, best • Some ordinal scales use numbers to represent order • class rank: 1, 2, 3, ... • Adjacent numbers on an ordinal scale form some kind of a continuum • Adjacent numbers are not equidistant
  • Equal Interval Scales • The distance between attributes does have meaning – in temperature (Fahrenheit), the difference between 30 and 40 equals the difference between 50 and 60 • Scales without an absolute zero – temperature in C or F • Measurement starts at a predetermined point along a continuum – intelligence, summations of rating scales
  • Ratio Scales • Have all the characteristics of Ordinal Scales, but have two additional characteristics: – the magnitude of the distance between adjacent points on the scale is the same – they have an absolute and logical zero (e.g., sales, age, weight) • All mathematical operations can be performed
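
A minimal sketch of how the four measurement levels above map onto data handling in practice, assuming pandas is available; the column names and values are illustrative and not taken from the slides.

```python
# Illustrative data for the four measurement levels (names/values are assumptions).
import pandas as pd

df = pd.DataFrame({
    "eye_color": pd.Categorical(["blue", "brown", "brown", "green"]),   # nominal
    "class_rank": pd.Categorical([1, 2, 3, 4], ordered=True),           # ordinal
    "temp_f": [30.0, 40.0, 50.0, 60.0],                                 # equal interval
    "weight_kg": [55.0, 60.0, 75.0, 90.0],                              # ratio (true zero)
})

print(df["eye_color"].value_counts())             # nominal: frequency is the only calculation
print(df["class_rank"].max())                     # ordinal: order comparisons are meaningful
print(df["temp_f"].diff())                        # interval: differences are meaningful, ratios are not
print(df["weight_kg"] / df["weight_kg"].iloc[0])  # ratio: ratios are meaningful
```
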
  • Variables in Research: any characteristic or phenomenon that can vary across organisms, situations, or environments. Types of variables: continuous, categorical, independent, dependent, moderator, mediator.
  • Continuous and Categorical Variables. Continuous variables: ordinal scales, equal interval scales, ratio scales. Categorical variables: nominal scales (always), ordinal scales (may be used).
  • Dependent and Independent Variables. Independent variable: the presumed or possible cause. Dependent variable: the presumed result.
  • Confounding Variables. A mediator variable is the generative mechanism through which the independent variable influences the dependent variable (X → Z → Y). A moderator variable influences the strength and/or direction of the relation between the independent and dependent variables (Z changes the X → Y relation). Example: the relation between social class and frequency of health self-exams; mediator: education; moderator: age. How to examine mediator or moderator effects? See the regression sketch below.
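
As referenced in the slide above, a minimal regression sketch of how moderator and mediator effects might be probed, using statsmodels formulas on simulated data; the variable names (x, y, z_mod, z_med) are illustrative assumptions, not from the slides.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)
z_mod = rng.normal(size=n)
z_med = 0.6 * x + rng.normal(scale=0.8, size=n)            # mediator driven by x
y = 0.5 * x + 0.4 * z_med + 0.3 * x * z_mod + rng.normal(size=n)
df = pd.DataFrame({"x": x, "y": y, "z_mod": z_mod, "z_med": z_med})

# Moderation: a significant x:z_mod interaction means z_mod changes the
# strength/direction of the x -> y relation.
print(smf.ols("y ~ x * z_mod", data=df).fit().summary().tables[1])

# Mediation (Baron & Kenny style): x predicts z_med, and the x coefficient
# shrinks once z_med is added to the model for y.
print(smf.ols("z_med ~ x", data=df).fit().params)
print(smf.ols("y ~ x", data=df).fit().params["x"],
      smf.ols("y ~ x + z_med", data=df).fit().params["x"])
```
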
  • Relationship of Variables. Independent variable: the presumed or possible cause; identified and selected by the researcher for experimental manipulation or classification (identified and classified by the researcher). Dependent variable: the presumed result; identified and selected by the researcher for measurement of change (directly measured). Confounding variables: identified/acknowledged by the researcher as a possible influence on the dependent variable; controlled for by the research design.
  • Tour guide communication competence: French, German and American tourists’ perceptions. Denis Leclerc, Judith N. Martin, Department of Recreation Management and Tourism, Arizona State University. Best Practices.
  • Summary • Researchers should continue examining ICC in a variety of contexts. This study: • investigates the applicability and relevance of communication competency research in a cross-cultural tourism setting • focuses on how American, French and German travellers perceived the importance of tour guide communication competence.
  • Research Methods. 1. Participants: tourists from America (234), France (72), and Germany (135). 2. Procedure: participants completed a survey rating the importance of four dimensions of the tour guide's CC behaviour; data collection was done toward the end of travel. 3. Instrument: adapted from the results of a factor analysis of Martin's behavioral expectations model.
  • Item Bias. Item bias is said to have occurred when some items in a test are found to function differently for a specific subgroup of the general group being tested (Plake and Hoover, 1979; Westers and Kelderman, 1991). The analysis of cross-cultural data involved an initial probing for item bias occurrence and data quality. Following Van de Vijver & Leung's (1997) suggestions, the first step in the detection of item bias is to create a score distribution for the combined sample, identifying cutoff points for forming equal-size groups in each culture. Eighteen items were determined to be unbiased, indicating that the distribution of these items does not differ in any systematic way among the three cultural groups (French, German and American). The other 16 items are biased, discriminating better in one group than in the others; these biased items do, however, provide important clues about cross-cultural differences.
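
A minimal sketch of the score-level probing described above: form equal-size score groups from the combined sample, then compare item means across cultures within each score level; a large within-level spread hints at item bias. The column names and simulated data are assumptions for illustration, not the study's actual data or analysis code.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
items = [f"item_{i:02d}" for i in range(1, 35)]            # 34 items, as in the study
df = pd.DataFrame(rng.integers(1, 8, size=(300, 34)), columns=items)
df["culture"] = rng.choice(["American", "French", "German"], size=300)

df["total"] = df[items].sum(axis=1)
df["score_level"] = pd.qcut(df["total"], q=4, labels=False)  # cutoffs -> equal-size groups

# For each item, the spread of culture means within a score level hints at bias.
for item in items[:3]:                                       # first few items for illustration
    means = df.groupby(["score_level", "culture"])[item].mean().unstack("culture")
    spread = (means.max(axis=1) - means.min(axis=1)).mean()
    print(item, round(spread, 2))
```
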
  • Reliability: the quality of measurement; how consistent the results are across different items for the same construct within the measure. Cronbach's alpha for the nonverbal dimensions, by group (American / French / German):
    1. Approachability (smile, laugh, pleasant facial expression): .83* / .64 / .84
    2. Poise (nice appearance, appropriate distance, appropriate posture): .66 / .76 / .68
    3. Attentiveness (maintain direct eye contact, pay close attention, use gestures, nod head, lean toward other person, listen): .72 / .66 / .83
    4. Touch (shake hands, touch other person, talk loudly but not too loudly): .56 / .66 / .67
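
A minimal sketch of Cronbach's alpha, the coefficient reported in the table above, assuming item responses arrive as a respondents-by-items DataFrame; the demo data and column names are illustrative.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one scale: rows are respondents, columns are items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(2)
demo = pd.DataFrame(rng.integers(1, 8, size=(100, 3)),
                    columns=["smile", "laugh", "pleasant_expression"])
print(round(cronbach_alpha(demo), 2))
```
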
  • Multivariate analysis of variance (MANOVA) for the behavioural communication competence scales: compares groups formed by categorical independent variables on differences in a set of interval dependent variables. Mean importance ratings by group (American n=190 / French n=54 / German n=107); the slide also reported an F value for each scale:
    Approachability: 5.56 / 4.86 / 4.49
    Poise: 5.08 / 4.45 / 3.77
    Attentiveness: 5.17 / 4.28 / 4.68
    Touch: 4.46 / 3.84 / 3.31
    Language adaptability: 5.89 / 5.03 / 4.90
    Interpersonal inclusion: 4.40 / 3.43 / 3.33
    Assertiveness: 3.52 / 3.33 / 3.15
    Traits: 5.98 / 6.05 / 5.00
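
A minimal sketch of a one-way MANOVA along the lines of the analysis above, using statsmodels; the DataFrame layout (one row per tourist with scale scores and a culture factor) and the simulated values are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(3)
n = 150
df = pd.DataFrame({
    "culture": rng.choice(["American", "French", "German"], size=n),
    "approachability": rng.normal(5.0, 1, size=n),
    "poise": rng.normal(4.5, 1, size=n),
    "attentiveness": rng.normal(4.7, 1, size=n),
    "touch": rng.normal(3.9, 1, size=n),
})

manova = MANOVA.from_formula(
    "approachability + poise + attentiveness + touch ~ culture", data=df)
print(manova.mv_test())   # Wilks' lambda, Pillai's trace, etc. per effect
```
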
  • Results. The American, French and German tourists held statistically different communication competence perceptions. The French and German tourists tended to perceive the different CCs at the same level, while the American group consistently perceived the CC skills as being more important than the European groups did. One exception is the set of traits, which the French perceived as more important.
  • Measuring the Reliability: reliability refers to the consistency, stability, or repeatability of results. Classical test theory: X = T + e, so Var(X) = Var(T + e) = Var(T) + Var(e), and Reliability = Var(T)/Var(X) = 1 - [Var(e)/Var(X)]. If there were no error, reliability would be 1; if there were only error, reliability would be 0.
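
A small numeric illustration of the classical test theory identity above: simulate true scores T and independent error e, then reliability is Var(T)/Var(X).

```python
import numpy as np

rng = np.random.default_rng(4)
T = rng.normal(loc=50, scale=10, size=10_000)   # true scores
e = rng.normal(loc=0, scale=5, size=10_000)     # measurement error, independent of T
X = T + e                                       # observed scores

reliability = T.var(ddof=1) / X.var(ddof=1)
print(round(reliability, 2))                        # approx 10^2 / (10^2 + 5^2) = 0.80
print(round(1 - e.var(ddof=1) / X.var(ddof=1), 2))  # same value via 1 - Var(e)/Var(X)
```
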
  • The Types of Reliability. Inter-rater reliability: the degree of agreement between two independent raters. Test-retest reliability: consistency of results on repeated assessment. Parallel-forms reliability: consistency of results obtained from equivalent forms of a test. Internal consistency reliability: the degree to which the items of a measure are in agreement.
  • Inter-Rater or Inter-Observer Reliability: how do we determine whether two observers are being consistent in their observations? 1. If your measurement consists of categories – the raters are checking off which category each observation falls in – you can calculate the percent of agreement between the raters. 2. If the measure is a continuous one, all you need to do is calculate the correlation between the ratings of the two observers.
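
A minimal sketch of the two cases above: percent agreement for categorical ratings and a correlation for continuous ratings; the rater data are simulated for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)

# Categorical ratings: percent agreement between the two raters.
categories = ["on-task", "off-task", "disruptive"]
rater_a = rng.choice(categories, size=50)
rater_b = np.where(rng.random(50) < 0.8, rater_a, rng.choice(categories, size=50))
print("percent agreement:", (rater_a == rater_b).mean())

# Continuous ratings: correlation between the two raters' scores.
scores_a = rng.normal(5, 1, size=50)
scores_b = scores_a + rng.normal(0, 0.5, size=50)
r, _ = pearsonr(scores_a, scores_b)
print("inter-rater correlation:", round(r, 2))
```
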
  • Test-Retest Reliability • We estimate test-retest reliability when we administer the same test to the same (or a similar) sample on two different occasions. This approach assumes that there is no substantial change in the construct being measured between the two occasions. The amount of time allowed between measures is critical.
  • Parallel-Forms Reliability • Administer both instruments to the same sample of people; the correlation between the two parallel forms is the estimate of reliability. One major problem with this approach is that you have to be able to generate lots of items that reflect the same construct. A correlation sketch covering both this and the test-retest estimate follows below.
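
Both the test-retest and parallel-forms estimates above reduce to a correlation between two score vectors from the same sample; only what the two vectors represent differs (two occasions vs. two forms). A minimal sketch on simulated scores:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(6)
true_score = rng.normal(50, 10, size=200)

time1 = true_score + rng.normal(0, 5, size=200)   # same test, occasion 1
time2 = true_score + rng.normal(0, 5, size=200)   # same test, occasion 2
form_a = true_score + rng.normal(0, 6, size=200)  # parallel form A
form_b = true_score + rng.normal(0, 6, size=200)  # parallel form B

print("test-retest reliability:", round(pearsonr(time1, time2)[0], 2))
print("parallel-forms reliability:", round(pearsonr(form_a, form_b)[0], 2))
```
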
  • Internal Consistency Reliability: average inter-item correlation, average item-total correlation, split-half reliability, Cronbach's alpha (α).
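
A minimal sketch of one of the estimates listed above, split-half reliability with the Spearman-Brown correction; the respondents-by-items layout and simulated responses are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
latent = rng.normal(size=(200, 1))
items = pd.DataFrame(latent + rng.normal(scale=1.0, size=(200, 8)),
                     columns=[f"item_{i}" for i in range(8)])

odd_half = items.iloc[:, 0::2].sum(axis=1)    # items 1, 3, 5, 7
even_half = items.iloc[:, 1::2].sum(axis=1)   # items 2, 4, 6, 8
r_half, _ = pearsonr(odd_half, even_half)

split_half_reliability = 2 * r_half / (1 + r_half)   # Spearman-Brown correction
print(round(split_half_reliability, 2))
```
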
  • Reliability: Facets of Generalizability, Traditional Definitions of Reliability Coefficients, and Estimation Procedures.
    Times – sources of error: change of the participant's responses over time, change in the testing situation; coefficient: retest (or stability); procedure: test participants at different times with the same form; analysis: Pearson or intraclass correlation.
    Forms – source of error: differences in content sampling across “parallel” forms; coefficient: equivalence; procedure: test participants at one time with two forms covering the same content; analysis: Pearson or intraclass correlation.
    Items – source of error: content heterogeneity and low content saturation in the items; coefficients: split-half, internal consistency; procedure: test participants with multiple items at one time; analysis: correlation between test halves (Spearman-Brown corrected), coefficient alpha.
    Judges or observers – source of error: disagreement among judges; coefficient: internal consistency; procedure: obtain ratings from multiple judges on one form and occasion; analysis: pairwise interjudge correlation, coefficient alpha, intraclass correlation.
  • Comparison of Reliability Estimators • Inter-rater reliability: one of the best ways to estimate reliability when your measure is an observation; however, it requires multiple raters or observers. • Test-retest reliability: you could have a single rater code the same videos on two different occasions; this is especially feasible in most experimental and quasi-experimental designs that use a no-treatment control group. • Parallel-forms reliability: typically only used in situations where you intend to use the two forms as alternate measures of the same thing. • Cronbach's alpha: tends to be the most frequently used estimate of internal consistency.
  • How large should alpha be? There is no particular level of alpha that is necessary, adequate, or even desirable in all contexts. Alpha depends on two parameters – the inter-item correlation and the scale length – and these two parameters should fit the nature and definition of the construct to be measured. Related issues: internal consistency, attenuation, the validity paradox. A small illustration of how the two parameters drive alpha follows below.
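
A small illustration of how the two parameters above drive alpha: for standardized items, alpha = k * r_bar / (1 + (k - 1) * r_bar), so alpha rises with both the average inter-item correlation r_bar and the number of items k.

```python
def standardized_alpha(k: int, r_bar: float) -> float:
    """Standardized Cronbach's alpha from scale length k and mean inter-item correlation r_bar."""
    return k * r_bar / (1 + (k - 1) * r_bar)

for r_bar in (0.2, 0.4):
    for k in (5, 10, 20):
        print(f"r_bar={r_bar}, k={k}: alpha={standardized_alpha(k, r_bar):.2f}")
```
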
  • Correcting for attenuation. Researchers should be concerned about reliability because the reliability of a measure constrains how strongly that measure may correlate with another variable (e.g., an external criterion). The length of the scale should be considered in planning one's research. Sometimes reliability indices (typically alpha) are used to correct observed correlations between two measures for attenuation due to unreliability.
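
A minimal sketch of the classical correction for attenuation mentioned above: the observed correlation is divided by the square root of the product of the two measures' reliabilities; the numeric values are illustrative.

```python
def correct_for_attenuation(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Disattenuated correlation given the observed r and the two reliabilities."""
    return r_xy / (rel_x * rel_y) ** 0.5

print(round(correct_for_attenuation(r_xy=0.40, rel_x=0.70, rel_y=0.80), 2))  # ~0.53
```
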
  • Thank you. Any comments & questions are welcome. Contact me at hora_t@sianuonline.com, www.SinauOnline.com. © Tjitra, 2010.