Presentation in Incheon, South Korea, 3rd November 2010 by Prof John Ure, Director, Telecommunications Research Project, University of Hong Kong
Director, TRPC Pte Ltd Singapore www.trpc.biz
APCICT M&E Framework Initiative (Part 2)
1. APCICT M&E Framework Initiative
Incheon, 3rd November 2010
Presented by Prof John Ure
Director, Telecommunications Research Project,
University of Hong Kong
Director, TRPC Pte Ltd, Singapore
http://www.trpc.com.hk
3. Why M&E?
Short term: test for …
• course improvements (delivery, organization, etc.)
• relevance of course content to stakeholders (sponsors, clients, participants, others?)
Long term: sustainability
• results/course outcomes (practical measures)
• returns on investment or expectations (value for money for the sponsor or client)
5. Kirkpatrick & Kirkpatrick note …
“Trainers must begin with desired results and then determine what behavior is needed to accomplish them…” (Don Kirkpatrick)
It is unfortunate that the message above has been missed by many learning professionals. For decades, practitioners have attempted to apply the four levels after a program has been developed and delivered. It is difficult, if not impossible, to create significant training value that way. (Jim & Wendy Kayser Kirkpatrick)
6. Revisions to TKM
Level 5: value-for-money for the sponsor/client ⇒ sustainability
• return on investment (Phillips, 1997)
• return on expectations (Kirkpatrick & Kirkpatrick, 2009)
Level 0: planning, monitoring & evaluation (PME), including baseline data for the next cycle
• added for this Report (to take Kirkpatrick & Kirkpatrick into account)
• links to Level 5
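Taken together, the revisions give a six-level model running from Level 0 to Level 5. A minimal sketch of that extended model as a Python mapping (the one-line descriptions are paraphrased from the slides that follow; this is a reader's aid, not APCICT's notation):

```python
# Hedged sketch: the six-level model described in these slides,
# written out as a simple mapping for reference.
EXTENDED_KIRKPATRICK_LEVELS = {
    0: "Planning, monitoring & evaluation (PME), incl. baseline data",
    1: "Reaction (monitoring: the 'happiness test')",
    2: "Learning (E1: end-of-course evaluation)",
    3: "Behaviour (E2: post-course, on-the-job evaluation)",
    4: "Results (long-term outcomes of the learning process)",
    5: "Returns (return on investment / return on expectations)",
}
```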
8. M&E Frameworks, Methodologies and Assessment Tools
M&E Framework
• offers a systematic approach to M&E
• does not specify methodologies
M&E Methodologies
• offer consistent ways of measuring
• offer consistent things to measure
M&E Tools
• offer specific ways to measure
• offer alternative ways to measure
11. Level 0: Framework
PME to create value for the sponsor/client
• links to Level 5 and the sponsor/client
• identify principal stakeholder needs / design a course accordingly
• identify learning targets and indicators
• design the M&E mechanism
Baseline data
• get baseline data from the sponsor/client, or…
• design a method for collecting baseline data
12. Level 0: Methods
Select an approach that fits
• directly measurable inputs-outputs-outcomes favour LFA (the Logical Framework Approach) and its derivatives (Cost-Benefit Analysis, Results-Based Monitoring, etc.)
• boundary conditions favour OM (Outcome Mapping) and its derivatives (Utilization-Focused Evaluation, Enterprise Variables, etc.)
If needing to target a group, choose a method
• GEM (Gender Evaluation Methodology) for gender monitoring and evaluation
• Communications-4-Development for M&E of individual ICT benefits
13. Level 0: Tools
Plan to use tools that are resource-effective
• questionnaires are easy to use, but not so easy to design so that they yield useful information
• interviews may be a good way to collect baseline data
• problem-solving workshops, role-play scenarios, etc. can be planned into the design of the course so that tutoring, learning and assessment happen simultaneously
Distinguish where possible between M and E
• think of M (L1) + E1 (L2) + E2 (L3)
14. Monitoring vs. Evaluation
Monitoring
• how participants react… but reaction to what?
• reaction to learning? (ask participants whether they have learned something as they go along! Is the course working?)
Evaluation
• how relevant was the learning?
• E1 = end-of-course evaluation of relevant learning outcomes at L2
• E2 = post-course evaluation of on-the-job (OTJ) behavioural outcomes at L3
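A minimal sketch of how the M (L1) + E1 (L2) + E2 (L3) schedule from the last two slides could be written down, in Python; the field names and wording are illustrative, not APCICT's:

```python
# Hedged sketch: one way to represent the monitoring/evaluation
# schedule these slides describe. Names are illustrative.
from dataclasses import dataclass

@dataclass
class Checkpoint:
    label: str      # M, E1 or E2
    level: int      # the level it feeds
    timing: str     # when it happens
    question: str   # what it is trying to answer

SCHEDULE = [
    Checkpoint("M",  1, "during the course", "Is the course working?"),
    Checkpoint("E1", 2, "end of course",     "How relevant was the learning?"),
    Checkpoint("E2", 3, "post-course (OTJ)", "Did behaviour change on the job?"),
]
```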
15–17. Examples of M&E tools: for any level, but the questions they are designed to answer will vary with the level
[Slides 15–17 show tables of example M&E tools, not reproduced in this transcript.]
18. Level 1: Reaction (1)
The “happiness test”
• usually an end-of-course questionnaire
• but happy with what? Too many boundary conditions? (Happy to have time off to study?)
• ideally, measure happiness with the learning process and capture a sense of what has been learned so far
• compare with baseline data to check learning momentum
• mostly subjective Qs, but objective Qs are helpful
19. Level 1: Reaction (2)
Is it M or E? Ideally, M comes before E
• course design can build in practical workshops, role play, simulations, etc., each providing evidence of learning – this requires an experienced tutor/assessor
• M&E can be done simultaneously
The questionnaire needs to distinguish in its Qs between
• satisfaction with the learning process
• the relevance of what was learned
Does APCICT practice succeed?
21. Level 2: Learning (1)
Evaluation (E1) of the learning that has taken place and its relevance to the participants/clients/sponsors
Typically end-of-course by questionnaire, but there are other means for E1:
• FGDs (focus group discussions) involving all participants or a random selection
• case-study presentations by teams, perhaps using their own workplace examples
• examinations, tests, demonstrations, etc.
• individual initiatives
Building in time at the end of the course to discuss the feedback with participants can also yield additional insights.
23. Level 2: Learning (2)
Baseline data comparisons
• this is what it is all about – measuring learning and assessing progress (see the sketch below)
• more objective than relying only on participants’ subjective end-of-course opinions
• meaningful comparisons depend on the quality of the design and administration of the assessment tool
Making analytical sense of evaluation
• data reveals more through analysis – training in how to analyze data is important
• component analysis can be used on qualitative data
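A minimal sketch of such a baseline comparison in Python, assuming each participant has paired pre-course (baseline) and end-of-course scores on the same assessment; the scale and names are invented for illustration:

```python
# Hedged sketch: comparing end-of-course scores against baseline data.
# Assumes paired scores per participant on the same 0-100 assessment.
def learning_gains(baseline: dict[str, float],
                   end_of_course: dict[str, float]) -> dict[str, float]:
    """Per-participant gain = end-of-course score minus baseline score."""
    return {p: end_of_course[p] - baseline[p]
            for p in baseline if p in end_of_course}

baseline = {"participant_01": 42.0, "participant_02": 55.0}
end_of_course = {"participant_01": 71.0, "participant_02": 60.0}
gains = learning_gains(baseline, end_of_course)
mean_gain = sum(gains.values()) / len(gains)  # average learning gain
```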
24. Component Analysis: Q24, Module 7
“Do you have any comments on how this module could be improved?”
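A minimal sketch of component analysis on free-text answers to a question like Q24, assuming the comments have first been hand-coded into components; the component labels are invented for illustration:

```python
# Hedged sketch: tallying hand-coded components in free-text comments.
from collections import Counter

# Each comment has been coded (by a reader) into one or more components;
# the labels here are illustrative, not APCICT's actual coding scheme.
coded_comments = [
    ["more case studies"],
    ["more time", "more case studies"],
    ["clearer slides"],
    ["more time"],
]

component_counts = Counter(c for comment in coded_comments for c in comment)
for component, count in component_counts.most_common():
    print(f"{component}: {count}")
```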
25. Level 3: Behavioural (1)
Evaluating (E2) behavioural changes in the workplace or other post-course environment
• this is where evaluation usually stops!
How to identify behavioural changes?
• objective indicators = productivity, costs, quality of work, new OTJ capabilities, etc.
• subjective indicators = post-course personal experience/feelings of confidence, knowledge, awareness, etc.
26. Level 3: Behavioural (2)
How to attribute them to learning outcomes?
• objective = trace specific skill sets, knowledge, etc. directly through to related OTJ responsibilities/performance
• subjective = personal experience
What tools to use?
• objective = tests, performance reviews, pay and promotions, etc.
• subjective = storytelling (GEM), personal assessment (OM), FGDs, individual initiative, etc.
Who does the evaluation?
• trainers, the client/sponsor, or a collaboration? (See PME)
28. Level 4: Results (1)
Results are the long-term outcomes of the learning process, to which the training itself may have contributed only 10% (a figure cited by Kirkpatrick & Kirkpatrick)
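A worked illustration of that attribution point, with invented figures: if only a fraction of an observed result can be credited to the training, the training's share is the observed benefit multiplied by the attribution fraction.

```python
# Hedged sketch with invented figures: isolating the training's share
# of an observed long-term result when it contributed only ~10%.
observed_benefit = 200_000.0    # measured gain, e.g. in dollars (invented)
attribution_to_training = 0.10  # fraction credited to the training
training_share = observed_benefit * attribution_to_training  # 20_000.0
```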
29. Level 4: Results (2)
How are Results estimated?
• if inputs-outputs-outcomes are directly attributable, then, using LFA methodologies, Results can in theory be quantified
• if not, then another helpful technique is the counterfactual = what might the situation have been without the training?
Who does the estimating?
• probably not training institutions
• development agencies running projects probably do
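A minimal sketch of the counterfactual technique, assuming the no-training outcome can be estimated from, say, a comparison group or trend extrapolation; all figures are invented:

```python
# Hedged sketch: estimating a Result against a counterfactual.
# The counterfactual outcome would come from e.g. a comparison group
# or trend extrapolation; the numbers here are invented.
outcome_with_training = 130.0   # observed post-training performance index
counterfactual_outcome = 115.0  # estimated index without the training
estimated_result = outcome_with_training - counterfactual_outcome  # 15.0
```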
32. Level 5: Returns
Returns
• if monetized = return on investment (Phillips) – e.g. cost savings from a new software package / cost of the skills training (see the sketch below)
• if not = return on expectations (K&K) – e.g. were the enhanced staff competencies worth paying for?
Who calculates?
• almost certainly the client/sponsor (though it doesn’t always follow; training budgets are sometimes there to be spent!)
• often the development agency (= sponsor?)
Sustainability
• if returns are positive = more likely to fund the next cycle
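A minimal sketch of the monetized case, using the usual net-benefits-over-costs ROI formula associated with Phillips; the figures are invented for illustration:

```python
# Hedged sketch with invented figures: return on investment for
# skills training, as net benefits over costs, expressed as a percentage.
cost_savings = 50_000.0   # e.g. annual savings from the new software package
training_cost = 20_000.0  # cost of the skills training
net_benefit = cost_savings - training_cost
roi_percent = (net_benefit / training_cost) * 100  # 150.0%
```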