Measurement and Scale


1. MEASUREMENT AND SCALE. Presented by: Jaspreet Kaur, Dept. of Food Science & Nutrition, ASPEE College of Home Science, Sardar Krushinagar Dantiwada Agricultural University.
2. MEASUREMENT
• Measurement is the process of mapping aspects of a domain onto aspects of a range according to some rule of correspondence.
• According to Stevens (1946), measurement is the assignment of numbers to objects or events.
• The purpose of measurement is to put information in a form in which variables can be related to each other.
3. VARIABLES: Nominal, Ordinal, Interval, Ratio. The "levels of measurement" are expressions that refer to the theory of scale types developed by the psychologist Stanley Smith Stevens. Stevens claimed that all measurement in science is conducted using four different types of scales, which he called "nominal", "ordinal", "interval" and "ratio".
4. NOMINAL SCALE
• Nominal measurement is a system of assigning number symbols to events in order to label them.
• No quantitative information is conveyed by nominal data, and no ordering of the items is implied.
• Nominal scales are used to measure QUALITATIVE variables only; they simply describe differences between things by assigning them to categories.
• Nominal scales are very useful and widely used in surveys and other ex-post-facto research when data are classified by major sub-groups of the population.
• For example, the numbers assigned to basketball players in order to identify them. Other examples: religious preference, race, and gender.
5. ORDINAL SCALE
• The lowest level of ordered scale that is commonly used is the ordinal scale, which places events in order.
• The intervals between the numbers are not necessarily equal.
• It allows us to rank items in terms of "which has less?" and "which has more?", but we cannot say "how much more?"
• Rank orders represent ordinal scales and are frequently used in research related to qualitative phenomena. A student's rank in his graduation class involves the use of an ordinal scale.
• Example: if Ram's position in his class is 10th and Mohan's position is 40th, it cannot be said that Ram's position is four times as good as Mohan's. All that can be said is that one person is higher or lower on the scale than another; precise comparisons cannot be made.
• Other examples: socio-economic status of families, level of education, and Gold, Silver and Bronze at the Olympics.
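The Ram/Mohan point can be sketched in a few lines of Python; the dictionary below just encodes the positions given in the slide's example.

```python
# Ordinal data: order comparisons are meaningful, ratios are not.
positions = {"Ram": 10, "Mohan": 40}  # class positions from the slide

# Valid ordinal statement: Ram is placed higher than Mohan.
ram_is_higher = positions["Ram"] < positions["Mohan"]  # True

# The arithmetic ratio of the ranks can be computed, but on an ordinal
# scale it carries no meaning: Ram is NOT "four times as good" as Mohan.
rank_ratio = positions["Mohan"] / positions["Ram"]  # 4.0, not interpretable
```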
6. INTERVAL SCALE
• In an interval scale, the intervals are adjusted in terms of some rule that has been established as a basis for making the units equal.
• The units are equal only insofar as one accepts the assumptions on which the rule is based.
• An interval scale can have an arbitrary zero, but it is not possible to determine for it what may be called an absolute zero or unique origin.
• The primary limitation of the interval scale is the lack of a true zero; it does not have the capacity to measure the complete absence of a trait or characteristic.
7. An interval scale allows us not only to rank order the items that are measured, but also to quantify and compare the sizes of the differences between them. For example, temperature as measured in degrees Fahrenheit or Celsius constitutes an interval scale: equal differences on the scale represent equal differences in temperature, but a temperature of 30 degrees is not twice as warm as one of 15 degrees.
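The temperature point can be verified numerically: converting the same readings from Celsius to Fahrenheit preserves equal differences but destroys the apparent "twice as warm" ratio, which is exactly why ratios are not meaningful on an interval scale. A minimal sketch:

```python
def c_to_f(c):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

t1, t2, t3 = 15, 30, 45  # degrees Celsius

# Equal differences in Celsius remain equal in Fahrenheit...
diff_c = (t2 - t1, t3 - t2)                                   # (15, 15)
diff_f = (c_to_f(t2) - c_to_f(t1), c_to_f(t3) - c_to_f(t2))   # (27.0, 27.0)

# ...but the "twice as warm" ratio does not survive a change of units:
ratio_c = t2 / t1                   # 2.0 in Celsius
ratio_f = c_to_f(t2) / c_to_f(t1)   # 86 / 59, about 1.46 in Fahrenheit
```

Because the zero point is arbitrary, the ratio depends on the unit chosen, so it cannot be a property of the temperature itself.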
8. RATIO SCALE
• The ratio scale is very similar to the interval scale; in addition to all the properties of interval variables, it features an identifiable absolute zero ("0") point.
• For example, the zero point on a centimetre scale indicates the complete absence of length or height.
• With ratio scales one can make statements like "Jyoti's typing performance was twice as good as Reetu's."
• The ratio involved does have significance and facilitates a kind of comparison which is not possible with an interval scale.
9. • The ratio scale represents the actual amount of a variable.
• Measures of physical dimensions such as weight, height, physical distance, etc. are examples.
10. Example: the same exam results expressed at all four levels of measurement.

Student | Mark out of 100 (Ratio) | Mark relative to 40% pass mark (Interval) | Position (Ordinal) | Result (Nominal)
Ahmed   | 56 |  16 |  6 | Pass
Ali     | 48 |   8 |  7 | Pass
Comara  | 65 |  25 |  3 | Pass
Dawod   | 73 |  33 |  2 | Pass
Elias   | 62 |  22 |  4 | Pass
Fatima  | 35 |  -5 |  9 | Fail
Sayyed  | 20 | -20 | 10 | Fail
Hana    | 38 |  -2 |  8 | Fail
Nurul   | 58 |  18 |  5 | Pass
Zaleha  | 82 |  42 |  1 | Pass
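The table's Interval, Ordinal and Nominal columns can all be derived from the single Ratio column (the raw marks), which makes the hierarchy of scale types concrete. A sketch using the names and marks from the table:

```python
marks = {
    "Ahmed": 56, "Ali": 48, "Comara": 65, "Dawod": 73, "Elias": 62,
    "Fatima": 35, "Sayyed": 20, "Hana": 38, "Nurul": 58, "Zaleha": 82,
}
PASS_MARK = 40

# Interval: mark relative to the 40% pass mark (an arbitrary zero point).
relative = {name: m - PASS_MARK for name, m in marks.items()}

# Ordinal: class position, 1 = highest mark.
ordered = sorted(marks, key=marks.get, reverse=True)
position = {name: rank for rank, name in enumerate(ordered, start=1)}

# Nominal: pass/fail category.
result = {name: ("Pass" if m >= PASS_MARK else "Fail") for name, m in marks.items()}
```

Information is lost at each step down: the positions cannot recover the marks, and the pass/fail labels cannot recover the positions.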
11. SOURCES OF ERROR IN MEASUREMENT
Possible sources of error:
• Respondent
• Situation
• Measurer
• Instrument
12. RESPONDENT
• At times the respondent may be reluctant to express strong negative feelings.
• Transient factors like fatigue, boredom, anxiety, etc. may limit the ability of the respondent to respond accurately and fully.
13. SITUATION
• Any condition which places a strain on the interview can have serious effects on the interviewer-respondent rapport.
• For instance, if someone else is present, he/she can distort responses by joining in or merely by being present.
14. MEASURER
• The interviewer can distort responses by rewording and reordering questions.
• Careless mechanical processing can distort the findings: incorrect coding, faulty tabulation or statistical miscalculation, particularly in the data-analysis stage.
15. INSTRUMENT
Error may arise because of a defective measuring instrument, such as:
• Use of complex words beyond the comprehension of the respondent
• Ambiguous meaning
• Poor printing
• Inadequate space for replies
• Omission of response choices, etc.
• Another type of instrument deficiency is poor sampling of the universe of items of concern.
16. TESTS OF SOUND MEASUREMENT
• Sound measurement must meet the tests of validity, reliability and practicality.
• These are the three major considerations one should use in evaluating a measurement tool.
17. TEST OF VALIDITY
• Validity means truthfulness.
• Validity refers to the extent to which a test measures what we actually wish to measure.
• Lindquist (1951) defined the validity of a test as "the accuracy with which it measures that which it is intended to measure".
• For example, a test to measure farmers' knowledge about plant protection is valid if it measures that dimension and nothing else.
18. TYPES OF VALIDITY
• Content validity
• Construct validity
• Predictive validity
• Concurrent validity
19. CONTENT VALIDITY
• It is the degree to which a test measures an intended content area.
• It essentially involves the systematic examination of the test content to determine whether it covers a representative sample of the behaviour domain to be measured.
• It is established in two ways: by experts' judgement and by statistical analysis.
20. • For example, the items to be measured are sent to expert judges, with two categories, 'agree' and 'disagree', against each item. In the final selection, the items on which at least 80% of the judges agreed are retained; this indicates the validity of the scale content.
• Similarly, statistical methods are also applied. For example, to assess the content validity of a Hindi spelling test, the teacher can correlate scores on that test with scores on another, similar Hindi spelling test. A high correlation coefficient provides an index of content validity (Singh, 1997).
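The expert-judgement screen described above can be sketched in a few lines: keep only the items on which at least 80% of the judges marked "agree". The item names and votes below are invented illustration data, not from the slide.

```python
AGREEMENT_CUTOFF = 0.80  # the 80% criterion from the slide

# votes[item] -> list of judge responses, "agree" or "disagree" (hypothetical)
votes = {
    "item_1": ["agree", "agree", "agree", "agree", "disagree"],     # 80% agree
    "item_2": ["agree", "disagree", "disagree", "agree", "agree"],  # 60% agree
    "item_3": ["agree"] * 5,                                        # 100% agree
}

def agreement(responses):
    """Fraction of judges who marked the item 'agree'."""
    return responses.count("agree") / len(responses)

retained = [item for item, r in votes.items()
            if agreement(r) >= AGREEMENT_CUTOFF]
# retained -> ["item_1", "item_2" is dropped, "item_3" is kept]
```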
21. CONSTRUCT VALIDITY
• It is defined as the extent to which the test may be said to measure a theoretical construct or trait (Anastasi, 1968).
• It is a more complex and difficult process. Hence, a decision to compute construct validity is taken only when the researcher is satisfied that no valid and reliable criterion for defining the quality of the test is available.
• For example, consider the attitude of farmers towards the use of nitrogenous fertilisers. The construct for this purpose was 'the more favourable the attitude of a respondent towards an improved farming innovation, the greater the adoption of that innovation by the respondent'. This construct was tested by calculating the correlation coefficient between the adoption scores for nitrogenous fertilisers of 50 respondents and their attitude scores obtained from the study's attitude scale. The correlation coefficient was found to be positive and highly significant.
22. PREDICTIVE VALIDITY
• It is defined as the degree to which a measure predicts a second, future measure (Sproull, 1988).
• Test scores are obtained; then a gap of months or years is allowed to elapse, after which the criterion scores are obtained. The test scores and the criterion scores are then correlated, and the obtained correlation becomes the index of predictive validity.
• For example, an investigator may administer an intelligence test to students at the time of their admission to a college and thus obtain a set of scores. After two years, the marks obtained in the final examination are noted; these constitute the criterion scores. A product-moment correlation may be computed between the intelligence scores at admission and the marks obtained two years later.
23. • If the correlation is positive and significant, it can be said that scores on intelligence at the time of admission directly predict the students' future performance in college. This correlation becomes the validity coefficient.
• Predictive validity is needed for tests which involve long-range forecasts of academic achievement, industrial management, etc.
24. CONCURRENT VALIDITY
• In this method a test is correlated with a criterion which is available at the present time. For example, scores on a newly constructed intelligence test may be correlated with scores obtained on an already standardised test of intelligence. The resulting coefficient of correlation is the indicator of concurrent validity (Singh, 1997).
25. RELIABILITY & VALIDITY OF MEASUREMENT
• The key indicators of the quality of a measuring instrument are the reliability and validity of its measures. The process of developing and validating an instrument is in large part focused on reducing error in the measurement process.
26. RELIABILITY
• Reliability refers to the consistency of scores obtained by the same individuals when re-examined with the test on different occasions, with different sets of equivalent items, or under variable examining conditions (Anastasi, 1968).
• For example, if an individual receives a score of 60 on an achievement test and is assigned a rank, the person should receive approximately the same rank when the test is administered on a second occasion.
27. TYPES OF RELIABILITY
• Inter-rater reliability
• Test-retest reliability
• Inter-method reliability
• Internal consistency reliability
28. • Inter-rater reliability assesses the degree to which test scores are consistent when measurements are taken by different people using the same methods.
• Test-retest reliability assesses the degree to which test scores are consistent from one test administration to the next. Measurements are gathered from a single rater who uses the same methods or instruments under the same testing conditions. This includes intra-rater reliability.
29. • Inter-method reliability assesses the degree to which test scores are consistent when there is variation in the methods or instruments used. This allows inter-rater reliability to be ruled out.
• Internal consistency reliability assesses the consistency of results across items within a test.
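Internal consistency is commonly summarised with Cronbach's alpha, a statistic not named on the slide but standard for this purpose: alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores). A sketch; the 4-item, 5-respondent score matrix is invented for illustration:

```python
def variance(xs):
    """Population variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(scores):
    """scores: one list of per-item scores per respondent."""
    k = len(scores[0])                      # number of items
    items = list(zip(*scores))              # one tuple of scores per item
    totals = [sum(row) for row in scores]   # each respondent's total score
    item_var = sum(variance(list(it)) for it in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

responses = [            # hypothetical 1-5 ratings on four test items
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
]
alpha = cronbach_alpha(responses)  # values near 1 indicate high consistency
```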
30. TEST OF PRACTICALITY
• Practicality is concerned with a wide range of factors of economy, convenience and interpretability.
• From the operational point of view, the measuring instrument ought to be practical, i.e. it should be economical, convenient and interpretable.
• The economy consideration suggests that some trade-off is needed between the ideal research project and what the budget can afford.
• The convenience test suggests that the measuring instrument should be easy to administer; for this purpose one should give due attention to its proper layout.
• For instance, a questionnaire with clear instructions is certainly more effective and easier to complete than one which lacks these features.
31. • The interpretability consideration is especially important when persons other than the designers of the test are to interpret the results.
• The measuring instrument, in order to be interpretable, must be supplemented by:
a) detailed instructions for administering the test;
b) scoring keys;
c) evidence about reliability; and
d) guides for using the test and for interpreting results.
32. TECHNIQUE OF DEVELOPING MEASUREMENT TOOLS
The technique of developing measurement tools involves a four-stage process, consisting of the following:
• Concept development;
• Specification of concept dimensions;
• Selection of indicators; and
• Formation of an index.
33. STEP ONE: CONCEPT DEVELOPMENT
• This is the first and foremost step; it means that the researcher should arrive at an understanding of the major concepts pertaining to his/her study.
• This step is more apparent in theoretical studies than in more pragmatic research, where the fundamental concepts are often already established.
34. STEP TWO: SPECIFICATION OF CONCEPT DIMENSIONS
• This step requires the researcher to specify the dimensions of the concepts developed in the first stage.
• This task may be accomplished either by deduction, i.e. by adopting a more or less intuitive approach, or by empirical correlation of the individual dimensions with the total concept and/or other concepts.
• For instance, one may think of several dimensions such as product reputation, customer treatment, corporate leadership, concern for individuals, sense of social responsibility and so forth when thinking about the image of a certain company.
35. STEP THREE: SELECTION OF INDICATORS
• After specifying the dimensions of the concept, the researcher must develop indicators for measuring each concept element.
• Indicators are specific questions, scales or other devices by which the respondent's knowledge, opinions, expectations, etc. are measured.
• As there is seldom a perfect measure of a concept, the researcher should consider several alternatives for the purpose.
• The use of more than one indicator lends stability to the scores and also improves their validity.
36. STEP FOUR: FORMATION OF AN INDEX
• When we have several dimensions of a concept, or different measurements of a dimension, we may need to combine them into a single index.
• One simple way of obtaining an overall index is to assign scale values to the responses and then sum the corresponding scores.
• Such an overall index is a better measurement tool than a single indicator, because an individual indicator has only a probabilistic relation to what we really want to know.
• In this way we obtain an overall index for the various concepts.
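Step four can be sketched directly: assign scale values to the responses and sum them. The Likert-style mapping and the three company-image indicators (borrowed from the step-two example) are assumptions for illustration only.

```python
# Hypothetical 5-point scale values assigned to each response category.
SCALE = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

def index_score(responses):
    """Sum the scale values of one respondent's answers across indicators."""
    return sum(SCALE[answer] for answer in responses.values())

# One respondent's answers on three company-image indicators.
respondent = {
    "product_reputation": "agree",            # 4
    "customer_treatment": "strongly agree",   # 5
    "corporate_leadership": "neutral",        # 3
}
overall = index_score(respondent)  # 4 + 5 + 3 = 12
```

Summing assumes the indicators are scored in the same direction and on the same scale; in practice, negatively worded items are reverse-scored before summation.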
37. CONCLUSION