SAMPLE LITERATURE REVIEW
Mgmt 430 / MMMS 530 Research Paper on a Selected Aspect of Management
This review has been made available with permission for learning purposes only. Do not quote. Deborah.jones@vuw.ac.nz

Literature review

Counting what Counts: Key Performance Indicators for Adult and Community Education

Introduction

How do you evaluate organisations if they do not measure what they do? This is the question faced by voluntary Adult and Community Education (ACE) organisations [1]. In order to "justify public expenditure", education policy has become "increasingly driven by the need to measure outcomes" (Brindley, 2001, p. 138). In ACE providers, two major factors conspire to make evaluation troublesome. The outcomes of ACE fall into two broad areas. Educational outcomes, such as the ability to speak English, can be measured through extant tests [2]. But for some social outcomes, specific tests do not exist. Jackson offers an Australian perspective on Adult Migrant Education Programmes (AMEP), stating that "non-content outcomes... [are] rarely a matter of serious debate, the recording of such outcomes... [is] even viewed with suspicion" (1994, p. 59). Improved civic engagement (for example, knowing how to access services, such as taking one's children to the doctor) has repercussions in society and in the next generation which ACE providers do not measure.

[1] A group of education providers wholly or partly funded by the Tertiary Education Commission, working in five 'priority' areas: targeting learners whose initial learning was not successful, raising foundation skills, encouraging lifelong learning, strengthening communities by meeting identified community learning needs, and strengthening social cohesion.
[2] The communicative competence test is one basis for measurement here (Savignon, 2000).

The already difficult challenge of developing the capability (suitable processes, a trained workforce) to measure these complex results is exacerbated by a second factor: volunteering.
Many New Zealand organisations facilitate the willingness of people in communities to help their fellow citizens for no pay. This is made possible by a shared belief in the benefits of the work (Zappalá, 2000; Derby, 2001). So can the expense and voluntary time of developing and implementing complicated evaluation procedures be justified, in the context of low-budget [3], high-value-added [4] activities such as voluntary community education?

[3] The entire ACE funding pool for New Zealand is $24.9 million annually.
[4] For a full discussion of the value added by volunteering, see the VAVA (Value Added by Voluntary Agencies) Report (PricewaterhouseCoopers, 2004).

What are the requirements of a set of key performance indicators (KPIs) that meet the needs of voluntary ACE?

This paper will survey the management and education management literature for the characteristics of a set of KPIs suitable for voluntary ACE. It does not attempt to recommend specific KPIs for ACE; rather, it unpacks the philosophical underpinnings of why KPIs have evolved and why different KPIs are chosen in organisations. The movement towards more balanced measures in management (Kaplan and Norton, 1992, 1996a, 1996b) is argued to be of increasing interest to the ACE sector, although many specific evaluation tools remain inapplicable to the case of volunteering. The service industry will warrant attention, as it stands considerably closer in core business to education than industrial models do. The Balanced Scorecard (ibid.), already in use in some school settings, will also be reviewed.

This paper will argue that KPIs for voluntary ACE must strike the correct balance between measuring financial and other management, and evaluating outcomes for learners. I will propose that a set of KPIs must focus on quality, not as a 'determinant of outcomes' (Ballantine and Brignall, 1994), but as an outcome in itself. The measures chosen should be simple, and should affect members of the organisation in ways suited to the realities of voluntary bodies. The work uncovers questions for further research.

The case for balanced measures in management
From the industrial age on, the way organisations rated performance was to consult the bookkeeper. The bottom line summarised achievement (Kaplan and Norton, 1992). As long as this was the case, theories of management provided little possibility for cross-application to measurement in providers of ACE.

During the 19th century, however, the 'cooperative movement' and 'guild socialism' (Bonner, 1961) were forces not specifically driven by organisational management practices, but they encapsulated ideas that gave rise to such organisations as the Workers' Educational Association, founded in New Zealand in 1915 and still providing voluntary ACE today (The WEA, 2005).

But fundamental changes occurred. Technology started developing at a rate that necessitated greater and ongoing investment in equipment. Globalisation required industries to be more adaptable. Growing environmental concerns became the problem of business, increasing pressure on corporations. Even customers had changed (Neely, 1998).

So companies adapted. One reaction was to become 'learning organisations' (Kochan and Useem, 1992). They created corporate missions with a specific focus on the customer (Kaplan and Norton, 1992). Top-down management was challenged by a call for 'a much greater participation in the management of enterprises by all the workforce' (Heywood, 1989, p. xii). In the name of sustainability, many began conducting social audits that considered the societal and environmental outcomes of their operations.

The financials were no longer enough. Their 'documented inadequacies' included their 'backward-looking focus' and their 'inability to reflect... value-creating activities' (Kaplan and Norton, 1992, p. 73). For corporations to remain competitive, looking at the monthly balance sheet no longer provided incentives for the necessary investments in technology, community, environment or innovation (Neely, 1998). The stage was set for a more balanced style of evaluation to enter, increasing the possibility of cross-application in broader contexts.

Can management tools help education questions?
Strategic management tools sprang up in response to the new needs. Wang Corporation's 'SMART' model (Strategic Measurement and Analysis Reporting Technique) shapes a company's strategies into a pyramid, and then translates them into actions for each member of the organisation (Lynch and Cross, 1991). This signalled that the success of an organisation requires the involvement and good performance of all the staff (ibid.). This is in line with a customer (learner) focus more suitable to our case.

Its areas of measurement, however, have a strong production orientation that would not be applicable to voluntary ACE. The model was designed with the production industry in mind, and its assumption of countable outputs is not immediately transferable to softer outcomes.

After an overview of similar strategic measuring tools in industry (Fitzgerald et al., 1991; Brown, 1996), it became apparent that the priorities of industry make these models inapplicable for our purposes. Answers were next sought in the service industry, with its stronger people orientation.

The service industry

In organisations where individual consumer acquisition and retention are central to success, the inadequacies of financial measures were further exaggerated (Fitzgerald et al., 1991), suggesting potential cross-application to our question.

In light of the service industry's requirements, Warwick University created the Results/Determinants matrix (Ballantine and Brignall, 1994). This distinguishes between measuring outcomes and measuring what determines the outcomes. These two categories break down into six 'dimensions'. Notably, however, 'quality of service' is defined here as determining results in the financial and competitive arenas. The assumption that quality of service is not an output in itself, but a determinant of competitive and financial results, restricts the matrix's suitability to profit-oriented companies. It would not offer appropriate solutions for not-for-profit organisations, where service provision is the major outcome.

A model is needed that focuses on quality.
The EFQM (European Foundation for Quality Management) model also distinguishes between 'results' and 'enablers', but includes 'society' in the results and 'leadership' as an enabler. This answers our previous question but throws up another. Where the Results/Determinants matrix has six dimensions, EFQM has nine areas in which to select and set several measures each. The time spent collecting data in order to achieve meaningful information in all these evaluations makes the model complicated 'beyond probable pay-off' (Neely and Adams, 2000). This view was substantiated by Worthen, Sanders and Fitzpatrick:

    One can hardly oppose using objectives and assessing their attainment, but the use of dozens, or even hundreds of objectives for each area of endeavour... amount[s] to a monopolization of staff time and skills for a relatively small payoff. (1997, p. 90)

The Higher Education Quality Council in London issued an even stronger warning against over-evaluation: that 'constant external review potentially diminishes quality' (1995, p. 29).

In order to increase the probability of finding a simpler and more appropriate tool, techniques already in practice in the education setting were sought.

What they do at schools

Schools are a group of organisations that have always focused on outcomes other than financial ones. Educational institutions have long known that 'the real test of good teaching is its effect on students' (Higher Education Quality Council, 1995, p. 100). For these organisations, rating performance has never consisted of financial measures alone, and has usually been approximated by student achievement (Education Review Office, 2003) as seen on standardised and other tests.

In comparison with the contemporary management tools already outlined, just measuring (approximated) outcomes does not represent a balanced set of measures. A school with incredibly successful students could still be inefficient if, for example, it is mismanaging monies or staff. Measurements need to encompass both outcome and process indicators, and such tools are indeed used by education organisations.

The Balanced Scorecard
The Balanced Scorecard has been applied, initially by private teaching institutions more closely connected with the business sector (e.g. Berlitz Language Services Worldwide in 2000, personal experience) and later by members of state-run systems (for example, Charlotte-Mecklenburg Schools, 2005). Pioneered by Kaplan and Norton in the early nineties (Kaplan and Norton, 1992) in a corporate context, it was later proposed for schools by Somerset (1998) and others. It balances financial evaluation with three other 'perspectives': customer (or learner), internal processes, and (organisational) learning and growth.

Although originally developed for the business sector, its advocates see 'no reason why it shouldn't be used for charities or public sector organisations' (Bourne and Bourne, 2000, p. 17). Its purported advantages over traditional systems lie in three major areas: its simplicity (as already outlined), its applications to organisational communication, and its ability to influence behaviour (Somerset, 1998).

To assess their appropriateness to our question, the latter two areas will be examined briefly in the light of some psychology literature.

The Balanced Scorecard and communication

Somerset asserted that 'building a performance measurement system is not about control; it is about communication' (Somerset, 1998, p. 2). If high-level strategy is 'decompose[d]' into measures for actions at local levels (Kaplan and Norton, 1992, p. 75), each member of an organisation is informed about how to contribute to achieving the overall mission. The literature on communication, however, alerts us that being informed is not equivalent to successful communication. To ensure that the message has been understood would require communication in more than one direction, including mechanisms such as 'perception checking' and 'feedback' (Gudykunst, 1994).

An ACE organisation with a mission to provide English support to migrants might inform a volunteer English tutor that his number of visits to the English resource library is being tracked in an attempt to measure overall performance. Without two-way communication about why this is happening, there are risks of deterring the volunteer, making his voluntary experience less satisfactory, or in fact pressuring him to do more than he is willing.
The Balanced Scorecard and behaviour

When setting up measures, managers are warned of the strong effects the process will have on the behaviour of employees (Kaplan and Norton, 1992). Some traditional measurement systems "specify the particular actions they want employees to take and then measure to see whether the employees have in fact taken those actions. In that way, the systems try to control behaviour" (ibid., p. 79). Within power relations, however (such as between a target setter and the worker charged with achieving the target), behaviour can better be influenced through 'competent authority' than through this form of 'coercive authority' (Wrong, 1995). Under Wrong's model, the employee's behaviour is best influenced by a belief in the authority's superior competence (in this case, to interpret strategy and set measures for actions accordingly). Under the Balanced Scorecard regime, targets are not set to dictate actions, but to influence behaviour towards achieving the corporate vision simply by 'focusing attention on key areas' (Bourne and Bourne, 2000, p. 10). Some organisations will find competent authority preferable to coercive authority, and will favour the Balanced Scorecard for this reason.

But the concept of setting targets in order to influence behaviour is fundamentally problematic in our case. It suggests that the measures taken will evaluate progress against goals and influence behaviour towards achieving the vision of the organisation, rather than unobtrusively gaining a picture of the actual achievement and value produced by the organisation.

Our case necessitates avoiding interference in the work of volunteers, rather than influencing their behaviour towards more or different kinds of work. KPIs that reflect 'actual' work done (as opposed to measures that aim to inspire achievement of the vision and drive performance) must be as simple and easy to measure as possible, and must not adversely influence volunteer behaviour.

A distinction arises between evaluation that aims to reflect the status quo, and measurement as an 'instrument to pursue goals' (Worthen et al., 1997, p. 22). While the Balanced Scorecard may be a useful tool for ACE organisations in working towards their missions, it will not provide KPIs that indicate the current value of their work.
Management, education, and back again

The waves of change in organisational management have both informed and been informed by the education literature. In his important work 'Learning, Adaptability and Change: The Challenge for Education and Industry' (1989), John Heywood applies the cognitive theory of how children learn to draw lessons for organisations in becoming more adaptable. He then relates his theories on learning organisations back to the school setting, with recommendations for educationalists. His work is a strong example of how management and education thinking have grown towards each other: schools place an increasing emphasis on effective management, while organisations try to learn.

Conclusions and further research

The applications of various performance measurement systems to voluntary ACE seem reasonable efforts to satisfy public demands for accountability of funds. However, research is still required on the adaptations necessary to fit the requirements of the sector in question, and on measuring non-educational outcomes.

References

Ballantine, J. and Brignall, S. (1994). A Taxonomy of Performance Measurement Frameworks. Warwick: Warwick Business School Research Paper.

Bonner, Arnold (1961). British Co-operation. Manchester.

Bourne, Mike and Bourne, Pippa (2000). Understanding the Balanced Scorecard in a Week. London: Hodder and Stoughton.

Brindley, Geoff (2001). Assessment. In R. Carter and D. Nunan (Eds.), The Cambridge Guide to Teaching English to Speakers of Other Languages (pp. 137-143). Cambridge: Cambridge University Press.

Brown, M. G. (1996). Keeping Score: Using the Right Metrics to Drive World-Class Performance. New York: Quality Resources.
Charlotte-Mecklenburg Schools (2005). District Level Balanced Scorecard, public document, accessed 14 July 2005, http://www.cms.k12.nc.us/discover/goals/BSC.pdf

Derby, Mark (2001). Good Work and No Pay. Wellington: Steele Roberts.

Education Review Office (2003). Evaluation Indicators for Education Reviews in Schools, paper on the internet, accessed 10 July 2005, http://www.ero.govt.nz/EdRevInfo/Schedrevs/SchoolEvaluationIndicators.htm

Fitzgerald, L., Johnston, R., Brignall, T. J., Silvestro, R. and Voss, C. (1991). Performance Measurement in Service Businesses. London: The Chartered Institute of Management Accountants.

Gudykunst, William B. (1994). Bridging Differences: Effective Intergroup Communication. California: Sage Publications.

Heywood, John (1989). Learning, Adaptability and Change. London: Paul Chapman Publishing.

Higher Education Quality Council of Britain (1995). Managing for Quality: Stories and Strategies. London: Chameleon Press.

Jackson, Elaine (1994). Non-language Outcomes in the Adult Migrant English Population. Sydney: NCELTR.

Kaplan, R. S. and Norton, D. P. (1992). The Balanced Scorecard: Measures that Drive Performance. Harvard Business Review, Vol. 70, No. 1, January/February, pp. 71-79.

Kaplan, R. S. and Norton, D. P. (1996a). The Balanced Scorecard: Translating Strategy into Action. Boston, MA: Harvard Business School Press.

Kaplan, R. S. and Norton, D. P. (1996b). Linking the Balanced Scorecard to Strategy. California Management Review, Vol. 39, No. 1, pp. 53-79.
Kochan, T. and Useem, M. (1992). Transforming Organizations. New York: Oxford University Press.

Lynch, R. L. and Cross, K. F. (1991). Measure Up: The Essential Guide to Measuring Business Performance. London: Mandarin.

Morrison, K. (1998). Management Theories for Educational Change. California: Paul Chapman Publishing.

Murphy, Joseph (1996). The Privatisation of Schooling: Problems and Possibilities. California: Corwin Press.

Neely, A. D. (1998). Performance Measurement: Why, What and How. London: Economist Books.

Neely, A. D. and Adams, C. A. (2000). Perspectives on Performance: The Performance Prism. Cranfield: Cranfield School of Management.

PricewaterhouseCoopers (2004). VAVA (Value Added by Voluntary Agencies) Report, report on the internet, New Zealand Federation of Voluntary Welfare Organisations Inc., accessed May 2005, http://www.nzfvwo.org.nz/files/file/VAVAO_overview_report.pdf

Savignon, S. (2000). Communicative language teaching. In M. Byram (Ed.), Routledge Encyclopedia of Language Teaching and Learning (pp. 124-129). London and New York: Routledge.

Somerset, John (1998). Creating a Balanced Performance Measurement System for a School, report on the internet. Hall Chadwick, accessed 13 June 2005, http://www.hallchadwick.com.au/05_publications/ra_creating.pdf

The WEA (2005). Telling Our Stories. Wellington: The WEA.
Worthen, Blaine R., Sanders, James R. and Fitzpatrick, Jody L. (1997). Programme Evaluation: Alternative Approaches and Practical Guidelines (2nd Edition). New York: Longman.

Wrong, Dennis H. (1995). Power: Its Forms, Bases and Uses. New Jersey: Transaction Publishers.

Zappalá, Gianni (2000). How Many People Volunteer in Australia and Why Do They Do It? Briefing paper on the internet. The Smith Family, Research and Advocacy Briefing 4, accessed 17 July 2005, http://www.smithfamily.com.au/documents/Briefing_Paper_4_DA10F.pdf
