Scenario-Based Validation of the
Online Tool for Assessing Teachers’
Digital Competences
Mart Laanpere, Kai Pata (Tallinn University)
Piret Luik, Liina Lepp (University of Tartu)
Context
• Estonian National Strategy for Lifelong Learning 2020:
Digital Turn towards 1:1 computing, BYOD, new pedagogy
• Teachers’ digital competence is key, but hard to assess
• Teachers’ professional qualification standard refers to
digital competence model based on ISTE standard
• Three competing approaches to digital competence:
– digital competence as generic key competence (European
Parliament, 2006)
– digital competences as a minimal set of universal technical
skills (ECDL, DigComp)
– digital competence as a subset of professional skills that are
highly dependent on the specific professional context (ISTE)
Assessment rubric
• Five competence domains, four competences
in each (see cnets.iste.org)
• Five-point scale, detailed descriptions of
performance for each level of competence
(inspired by Bloom’s taxonomy)
• Seven pages in small print
• Used for self-assessment (PDF) and
implemented in DigiMina, an online self- and peer-
assessment tool (Põldoja et al., 2011); the rubric’s
structure is sketched below
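To make the rubric’s structure concrete, the sketch below models it in TypeScript. The type and field names are illustrative assumptions, not the actual DigiMina schema; only the shape (five domains of four competences each, a five-point scale, evidence and comments) comes from the slides above.

  // Minimal sketch of the rubric's data model; names are illustrative
  // assumptions, not the actual DigiMina schema.

  type PerformanceLevel = 1 | 2 | 3 | 4 | 5;  // the rubric's five-point scale

  interface Competence {
    id: string;                   // e.g. a domain.competence code (hypothetical)
    indicator: string;            // the performance indicator statement
    levelDescriptions: [string, string, string, string, string]; // one per level
  }

  interface CompetenceDomain {
    title: string;                // one of the five ISTE-based domains
    competences: [Competence, Competence, Competence, Competence]; // four per domain
  }

  interface Rubric {
    domains: CompetenceDomain[];  // five domains in total
  }

  // A single self-assessment entry, with the evidence and written
  // comments respondents were asked to add before the interviews.
  interface SelfAssessment {
    competenceId: string;
    level: PerformanceLevel;
    evidence?: string;
    comment?: string;
  }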
DigiMina
[slide shows a screenshot of the DigiMina tool]
Research problem
• Both DigiMina and the underlying assessment
rubric were “too heavy”: teachers’ workload was
too high (in both self- and peer-assessment)
• The Estonian adaptation of the rubric was confusing,
disconnected from teachers’ everyday life and the
vocabulary they use (= low ecological validity)
• How to validate/improve the rubric and define
the requirements for the next online
assessment tool?
Research questions
• Which performance indicators are difficult to
understand or irrelevant?
• What are the main factors affecting teachers’ workload
when self-assessing their digital competence with this
rubric, and how can that workload be reduced?
• How to increase the ecological validity of the rubric,
self-assessment tool and its application scenarios?
• How suitable is the 5-point scale used in the rubric,
and are there better alternatives (e.g. Levels of Use)?
• Which changes in the rubric, tool and procedure would
improve their wide-scale uptake?
• Which incentives would motivate the majority of
teachers to use the rubric and self-assessment tool?
Scenario-based participatory design
• Personas (typical users): teacher with low self-
efficacy, experienced teacher, student teacher,
school principal, teacher trainer, qualification
authority
• Scenarios:
– Self-assessment (teacher 1, principal)
– School’s digital strategy (teachers 1&2, principal)
– Accreditation (teacher 2, authority)
– School practice (student teacher)
– Grouping for teacher training (teacher 1, trainer)
Data collection
• Quota sample: 2 groups (Tallinn & Tartu) of
6 respondents each, corresponding to the 6 personas
• A few days prior to the interviews: individual self-
assessment based on the online rubric, adding
evidence and written comments
• Four 120-minute focus group interviews
• Audio was recorded, transcribed and analysed
Results
• Level 5 often looks “easier” than level 3, level 4
often stays untouched, and the taxonomy was not clear
• Respondents saw no need to change the scale
• Comments: some statements in the rubric were
difficult to understand
• Evidence provided by respondents showed that
they sometimes misinterpreted the statements in
the rubric
• Workload too high, motivation low, no incentives
Discussion
• There is a difference between what the
respondents WANT and what they actually
NEED
• Unfamiliar concepts: definitions vs examples
• Scale: 5-point contextualised vs 3-point
• Scenario-based approach was helpful
• Not enough input for requirements for
software tool
Conclusions
• Based on the suggestions from this study, the
work group shortened and simplified the
statements of the rubric
• Switch to a 3-point scale inspired by LoU/CBAM
(see the sketch after this list):
– “I know what it is and have tried it”
– “I practice it on a regular basis”
– “I am an expert in this, leading others”
• Suggestions regarding requirements for online
tool development
• Unexpectedly, the ministry shifted its
preference to the MENTEP tool and rubric
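As a concrete illustration, the revised scale can be modelled as a small enumeration. The sketch below is in TypeScript; the enum names and the 5-point-to-3-point mapping are illustrative assumptions, not part of the actual tool. Only the three level descriptions come from the slide above.

  // Sketch of the revised 3-point scale inspired by LoU/CBAM;
  // enum names and the mapping below are illustrative assumptions.

  enum UsageLevel {
    Tried = 1,    // "I know what it is and have tried it"
    Regular = 2,  // "I practice it on a regular basis"
    Leading = 3,  // "I am an expert in this, leading others"
  }

  // One plausible way to fold legacy 5-point answers onto the new
  // scale, purely for illustration (no such mapping is given above).
  function toUsageLevel(fivePointScore: 1 | 2 | 3 | 4 | 5): UsageLevel {
    if (fivePointScore <= 2) return UsageLevel.Tried;
    if (fivePointScore <= 4) return UsageLevel.Regular;
    return UsageLevel.Leading;
  }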
