
What stakeholders really think of eLearning. Quality through self-evaluation.


Presentation given at Online Educa Berlin, Dec 3rd 2010



What stakeholders really think of eLearning. Quality through self-evaluation.
Deborah Arnold, Vidéoscop-Université Nancy 2, France
This project has been funded with support from the European Commission. This communication reflects the views only of the author, and the Commission cannot be held responsible for any use which may be made of the information contained therein.
What are we going to cover?
  • Theoretical background
  • How to design sound self-evaluation questionnaires
  • Interpreting results
  • SEVAQ+ user testing and survey
  • Scenarios for implementing SEVAQ+
What is SEVAQ+?
  • A combined tool and approach for the self-evaluation of quality in technology-enhanced learning
  • A Lifelong Learning Programme KA4 project (2009-2010): “Dissemination and exploitation of results”
  • Support for improving the quality and attractiveness of Vocational Education and Training (VET)
  • Support for the HE modernisation agenda: governance, curricular reform

What can we do with SEVAQ+?
  • Create questionnaires based on recognised quality approaches
  • Analyse results with easy-to-interpret data
  • Improve the quality of our courses
  • Teachers and trainers can gather feedback on what learners really think of their learning experience
  • Training managers can get the full picture by comparing responses from the different stakeholders involved
  • Organisations can use the results of SEVAQ+ to benchmark against others
  • Learners too have a voice in the process

1st conceptual model: the Kirkpatrick levels
  • Level 5: ROI (financial incidence of the training on global costs and incentives)
  • Level 4: Business results (indicators to be determined: productivity increase, sales increase, drop in turnover…)
  • Level 3: Transfer of learning outcomes in the workplace (measuring, weeks afterwards, what has been changed by learning: surveys of learners, peers, line managers… witness groups)
  • Level 2: Learning assessment (by tests and exams, measuring just after the course what has been achieved through learning: skills, knowledge and/or competences)
  • Level 1: Reactions (satisfaction survey completed by the learner)
2nd conceptual model: the EFQM model
A management model for total quality (all processes, all actors). Each domain acts in interaction with all the others.
© EFQM European Foundation for Quality Management
The main stakeholder in SEVAQ v1.0: the learner
Partially EFQM and partially Kirkpatrick. Three domains of evaluation:
  • The resources used by the learner during the learning experience
  • The processes (activities) proposed to the learner during the delivery of the course
  • The results: learning objectives achieved, effects of the experience on the learner, some measure of the transfer in the workplace
SEVAQ+: extending the approach
  • A wider scope of processes: the learning processes + all the design processes of the provider
  • A wider target of users: the learner + the other stakeholders interested in the delivery of the offer
  • Two distinct contexts: e-learning in VET (as before) + e-learning in HE
[Diagram: Object (e-learning design, learning process) × User (teacher/trainer, manager, TGs) × Context (VET, HE)]
The extension of SEVAQ+
+ stakeholders, + contexts (HE and VET), + EFQM domains, + Kirkpatrick levels
[Diagram: Learners, Trainers, Managers, Providers]
The structure of the SEVAQ+ questionnaires
Each questionnaire is organised into three domains (Resources, Activities, Results); domains contain criteria, criteria break down into sub-criteria (a, b, c, d, e), and each sub-criterion contains the statements respondents rate.
  • SEVAQ 1: 17 criteria, 38 sub-criteria, 208 statements
  • SEVAQ+: 22 criteria, 99 sub-criteria, 723 statements
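The hierarchy described on this slide can be sketched as nested data. A minimal illustration in Python, assuming nothing about the real SEVAQ+ tool: the domain, criterion and statement labels below are hypothetical placeholders, not actual questionnaire content.

```python
# Hypothetical sketch of the questionnaire hierarchy on the slide:
# domain -> criteria -> sub-criteria -> statements.
questionnaire = {
    "Resources": {                      # one of the three domains
        "Criterion 1": {
            "Sub-criterion a": [
                "Statement 1",          # items rated by respondents
                "Statement 2",
            ],
            "Sub-criterion b": ["Statement 3"],
        },
    },
    "Activities": {},
    "Results": {},
}

# Counting statements across the whole hierarchy (this is how the
# 208-vs-723 totals on the slide would be tallied):
n_statements = sum(
    len(statements)
    for domain in questionnaire.values()
    for criterion in domain.values()
    for statements in criterion.values()
)
```

With the placeholder data above, `n_statements` is 3; the same traversal over the full SEVAQ+ structure would yield the slide's totals.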
EFQM & Kirkpatrick applied
[Diagrams mapping EFQM domains to Kirkpatrick levels 1 & 2, and to Kirkpatrick level 3]
Testing SEVAQ+: Step 1 - getting started
SEVAQ+ demos
Step 2: design and implement questionnaires
Demos 1, 2a, 2b, 3 and 4
Interpreting results: histograms
Question 247: “You knew every week (or day) what you were expected to do.”
Interpreting results: radar graphs
Mean value for the evaluation = 2.7
Improvement needed (sub-criteria with results under the mean):
  • 37: Time management [2.49]
  • 41: Blended approach [2.59]
  • 79: Levels of overall knowledge outcomes [2.62]
  • 86: Learning management [2.41]
  • 88: Self-motivation [2.55]
Improvement less or not needed (sub-criteria with results above the mean):
  • 18: Pedagogical aspects of learning content [2.76]
  • 20: Technical requirements [3.04]
  • 22: Instructional design [2.77]
  • 38: Navigation and resource options [2.84]
  • 84: Awareness of learning preferences [2.72]
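The below-/above-the-mean split shown on this slide is easy to reproduce. A sketch in Python, using the sub-criterion scores from the slide; the variable and dictionary names are illustrative, not part of the SEVAQ+ tool itself.

```python
# Sub-criterion scores taken from the radar-graph slide.
scores = {
    "37: Time management": 2.49,
    "41: Blended approach": 2.59,
    "79: Levels of overall knowledge outcomes": 2.62,
    "86: Learning management": 2.41,
    "88: Self-motivation": 2.55,
    "18: Pedagogical aspects of learning content": 2.76,
    "20: Technical requirements": 3.04,
    "22: Instructional design": 2.77,
    "38: Navigation and resource options": 2.84,
    "84: Awareness of learning preferences": 2.72,
}

# Mean over all evaluated sub-criteria (the slide reports ~2.7).
mean = sum(scores.values()) / len(scores)

# Partition into "improvement needed" (below the mean) and the rest.
needs_improvement = {k: v for k, v in scores.items() if v < mean}
above_mean = {k: v for k, v in scores.items() if v >= mean}
```

Sorting `needs_improvement` by score then gives the priority order for quality improvement actions.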
Interpreting results: table
A summary table lists critical questions, giving the percentage of respondents who rate a question as important yet give it a bad evaluation. A question is considered critical when its score is above 40%, and very critical when its score is 60% or above.
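The criticality rule on this slide amounts to simple thresholding. A minimal sketch: the 40% and 60% thresholds come from the slide, while the function name and example scores are invented for illustration.

```python
def criticality(score_pct: float) -> str:
    """Classify a question by the percentage of respondents who rate it
    as important yet give it a bad evaluation (thresholds per the slide:
    above 40% = critical, 60% and above = very critical)."""
    if score_pct >= 60:
        return "very critical"
    if score_pct > 40:
        return "critical"
    return "not critical"

# Invented example scores, for illustration only:
assert criticality(72.0) == "very critical"
assert criticality(45.0) == "critical"
assert criticality(33.0) == "not critical"
```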
SEVAQ+ validation process
SEVAQ+ user testing (key figures)
  • 136 direct responses to the call for testers
  • Leading to over 16,000 potential testers (58% HE / 42% VET)
  • 7 local and national seminars in FR, PL, LT and IT
  • 3 international workshops (Valencia, Oeiras & Budapest) with EDEN & EFQUEL
  • 1,500 current testers (more expected at the end of January with end-of-semester evaluations): 300 designers, 1,200 respondents
  • 3,094 questionnaires distributed, 2,158 completed (response rate = 70%)

European-wide user survey (key figures)
User feedback

User feedback
Scenarios of use
  • Tell us how YOU want to use SEVAQ+!
Join us
  • Facebook: SEVAQ+
  • LinkedIn: Evaluation of training and courses
Join us
  • YouTube channel: Quality in eLearning
  • Test the tool and have your say!
  • User evaluation: ongoing until the end of January 2011
  • Join the Delphi expert group: March – June 2011
  • Invitation to the validation workshop, June 2011

Thank you for listening!
With the indispensable contribution of:
  • Anne-Marie Husson (CCIP-Le Préau)
  • Dr Ulf-Daniel Ehlers (EFQUEL)
  • Rolf Reinhardt (EFQUEL)
And all the SEVAQ+ partners!