MEASURING: Presentation Transcript

• MASTER OF SOCIAL WORK, II YEAR: PRESENTATION ON MEASURING INTERVENTION & CHANGE.
• SYNOPSIS: Introduction; Measurement of OD Interventions (Selecting a Variable, Designing a Good Measure); Reliability; Validity; Research Design; OD Changes.
• INTRODUCTION: Assessing OD interventions involves judgments about whether an intervention has been implemented as intended and, if so, whether it is having the desired results. Managers who invest resources in OD efforts are increasingly held accountable for results and asked to justify them in terms of bottom-line outcomes. Measurement of organizational interventions supports the development of useful implementation and evaluation feedback.
• MEASUREMENT OF OD INTERVENTIONS: Selecting an appropriate variable. Designing a good measure.
• SELECTING AN APPROPRIATE VARIABLE: Ideally, the variables measured in OD evaluation should derive from the theory or conceptual model underlying the intervention. The model should incorporate the key features of the intervention as well as its expected results. For example, the job-level diagnostic model proposes several major features of work: task variety, feedback, and autonomy. The theory argues that high levels of these elements can be expected to result in high levels of work quality and satisfaction. Whether the intervention is being implemented could be assessed by determining how many job descriptions have been rewritten to include more responsibility or how many organization members have received cross-training in other job skills. Again, these measures would likely be included in the initial diagnosis, when the company's problems or areas for improvement are discovered.
• OPERATIONAL DEFINITION FOR DESIGNING A GOOD MEASURE: A good measure is operationally defined; that is, it specifies the empirical data needed, how they will be collected and, most important, how they will be converted from data to information. Such measures consist of specific computational rules that can be used to construct a measure for each behaviour. They provide precise guidelines about what characteristics of the situation are to be observed and how they are to be used.
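
A minimal sketch of what such a computational rule can look like in practice; the two variables (an absenteeism rate and a job-autonomy score) and the 1-5 rating scale are illustrative assumptions, not taken from the presentation:

```python
# Illustrative operational definitions expressed as computational rules.
# The variables and the 1-5 scale are assumptions for this sketch.

def absenteeism_rate(days_absent: int, scheduled_days: int) -> float:
    """Operational definition: absenteeism = days absent / days scheduled."""
    if scheduled_days <= 0:
        raise ValueError("scheduled_days must be positive")
    return days_absent / scheduled_days

def job_autonomy_score(item_ratings: list[int]) -> float:
    """Operational definition: autonomy = mean of the 1-5 questionnaire
    items asking about freedom in scheduling and choosing work methods."""
    if not all(1 <= r <= 5 for r in item_ratings):
        raise ValueError("ratings must be on the 1-5 scale")
    return sum(item_ratings) / len(item_ratings)

print(absenteeism_rate(days_absent=3, scheduled_days=20))  # 0.15
print(job_autonomy_score([4, 5, 3, 4]))                    # 4.0
```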
• RELIABILITY: Reliability concerns the extent to which a measure represents the "true" value of a variable; that is, how accurately the operational definition translates data into information. Reliability can be strengthened in four ways. First, rigorously and operationally define the chosen variables: clearly specified operational definitions contribute to reliability by explicitly describing how collected data will be converted into information about a variable. Second, use multiple methods to measure a particular variable, such as questionnaires, interviews, and observation. Third, use multiple items to measure the same variable on a questionnaire. Fourth, use standardized questionnaires.
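
A minimal sketch of the third point above, checking whether multiple questionnaire items measuring the same variable hang together, using Cronbach's alpha as the internal-consistency estimate; the response matrix is fabricated for illustration:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency of a respondents-by-items score matrix,
    where all items are intended to measure the same variable."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Fabricated data: five respondents answering four items on one variable.
responses = np.array([
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")  # high -> consistent items
```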
• VALIDITY: Validity concerns the extent to which a measure actually reflects the variable it is intended to reflect. On a measure of employee happiness, for example, the test would be said to have face validity if it appeared to actually measure levels of happiness; in other words, a test has face validity if it "looks like" it measures what it is supposed to measure. If experts agree that the measure appears valid, it has content validity. If measures of similar variables correlate highly with each other, this is criterion/convergent validity. If measures of dissimilar variables show no association, this is discriminant validity.
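
A minimal sketch of the convergent/discriminant logic above: two measures of the same variable should correlate highly, while a measure of an unrelated variable should not. All data here are simulated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
happiness_survey = rng.normal(size=100)
# A second measure of the same variable (e.g., interview ratings): noisy copy.
happiness_interview = happiness_survey + rng.normal(scale=0.3, size=100)
# A measure of an unrelated variable.
typing_speed = rng.normal(size=100)

convergent = np.corrcoef(happiness_survey, happiness_interview)[0, 1]
discriminant = np.corrcoef(happiness_survey, typing_speed)[0, 1]
print(f"convergent r  = {convergent:.2f}")   # expect high (near 1)
print(f"discriminant r = {discriminant:.2f}")  # expect near 0
```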
• RESEARCH DESIGN: In assessing OD interventions, practitioners have turned to quasi-experimental research designs with the following features. Longitudinal measurement: measuring results repeatedly over relatively long time periods; ideally, data collection should start before the change program is implemented and continue for a period considered reasonable for producing the expected results. Comparison unit: it is always desirable to compare results in the intervention situation with those in another situation where no such change has taken place; although it is never possible to get a matching group identical to the intervention group, most organizations include a number of similar work units that can be used for comparison purposes. Statistical analysis: whenever possible, statistical methods should be used to rule out the possibility that the results are caused by random error or chance; various statistical techniques are applicable to quasi-experimental designs, and OD practitioners should apply these methods or seek help from those who can.
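
A minimal sketch of the statistical-analysis step under these design features: comparing pre-to-post change in an intervention unit against a comparison unit with a two-sample t-test. The gain scores are simulated; a real evaluation would draw on the repeated longitudinal measures described above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Fabricated post-minus-pre gain scores per member in each unit.
intervention_gain = rng.normal(loc=0.8, scale=1.0, size=30)
comparison_gain = rng.normal(loc=0.0, scale=1.0, size=30)

t_stat, p_value = stats.ttest_ind(intervention_gain, comparison_gain)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the observed change is unlikely to be
# random error or chance alone.
```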
• ASSESSING OD CHANGES: The use of multiple measures is also important in assessing perceptual changes resulting from an intervention. Considerable research has identified three types of change: alpha, beta, and gamma change.
• OD CHANGES: Alpha change concerns a difference that occurs along some relatively stable dimension of reality. For example, comparative measures of perceived employee discretion might show an increase after a job enrichment program; if this increase represents alpha change, it can be assumed that the job enrichment program actually increased employee perceptions of discretion. Beta change refers to a recalibration of the units of measure on a stable dimension. Gamma change involves a fundamental redefinition of the dimension itself.