090511 Appleby Magna Overview Presentation
Slides used as an introduction to E-Learning Resources: Evaluation course at Appleby Magna on 11 May 2009 run by Martin Bazley on behalf of Renaissance East Midlands


Presentation Transcript

  • User testing and evaluation: why, how and when to do it
    Appleby Magna Centre, 11 May 2009
    Martin Bazley, Martin Bazley & Associates – www.martinbazley.com
  • Intro: Martin Bazley
    • Consultancy/websites/training/user testing – ICT4Learning.com (10+ yrs)
    • Chair of E-Learning Group for Museums
    • Previously:
      • E-Learning Officer, MLA South East (3 yrs)
      • Science Museum, London, Internet Projects (7 yrs)
      • Taught science in secondary schools (8 yrs)
  • Why evaluate websites?
    • Why do evaluation and user testing?
    • Isn’t it really expensive and time-consuming?
    • Save money – avoid substantial, hurried redevelopment later in project
    • Audience feedback improves resource in various ways – new activity ideas, etc
    • Demonstrate involvement of key stakeholders throughout project
  • Making websites effective
    • 3 key success factors:
    • Understanding your audience
    • Learning experience and learning outcomes – right for the audience and clearly stated
    • Evaluation – especially in the classroom or at home (observe in the ‘natural habitat’ wherever possible)
  • Who for what for ...
    • Who for? (audience)
      • Need to be clear from the start, e.g. ‘for teachers of yr 5/6 in the local area with whiteboards’
    • What ‘real-world’ outcomes? (learning outcomes)
      • What will they learn or do as a result? e.g. plan a visit to museum, learn that Romans wore funny clothes, discover that they enjoy using a digital camera…
    • How will they use it? (learning experiences)
      • What do they actually do with the site? e.g. work online or need to print it? - in pairs or alone? - with or without teacher help?
    • Where, when and why will they use it?
      • context is important
  • Website evaluation and testing
    • Need to think ahead a bit:
      • what are you trying to find out?
      • how do you intend to test it?
      • why? what will you do as a result?
      • The Why? should drive this process
  • Test early
    • Testing one user early on in the project…
    • … is better than testing 50 near the end
  • When to evaluate or test and why
    • Before funding approval – project planning
    • Post-funding - project development
    • Post-project – summative evaluation
  • Testing is an iterative process
    • Testing isn’t something you do once
    • Make something
    • => test it
    • => refine it
    • => test it again
  • Before funding – project planning
    • *Evaluation of other websites
      • Who for? What for? How use it? etc
      • awareness raising: issues, opportunities
      • contributes to market research
      • possible elements, graphic feel etc
    • *Concept testing
      • check idea makes sense with audience
      • reshape project based on user feedback
    Methods: focus group, research
  • Post-funding - project development
    • *Concept testing
      • refine project outcomes based on feedback from intended users
    • Refine website structure
      • does it work for users?
    • *Evaluate initial look and feel
      • graphics, navigation, etc
    Methods: focus group, focus group, one-to-one tasks
  • Post-funding - project development 2
    • *Full evaluation of a draft working version
      • usability AND content: do activities work, how engaging is it, what else could be offered, etc
    Observation of actual use of the website by intended users, using it for the intended purpose, in the intended context – classroom, workplace, library, home, etc
    • Video clip: Moving Here – key ideas, not lesson plans
  • Post-funding - project development 3
    • Acceptance testing of ‘finished’ website
      • last minute check, minor corrections only
      • often offered by web developers
    • Summative evaluation
      • report for funders, etc
      • learn lessons at project level for next time
  • Two usability testing techniques
    • “Get it” testing
      • do they understand the purpose, how it works, etc?
    • Key task testing
      • ask the user to do something, and watch how well they do it
    • Ideally, do a bit of each, in that order
  • User testing – who should do it?
    • The worst person to conduct (or interpret) user testing of your own site is…
      • you!
    • Beware of hearing what you want to hear…
    • Useful to have an external viewpoint
    • The first 5 minutes in a genuine setting tells you 80% of what’s wrong with the site
    • etc
  • User testing – more info
    • User testing can be done cheaply – tips on how to do it available (MLA SE guide): www.ICT4Learning.com/onlineguide
    • Strengths and weaknesses of different data gathering techniques
  • Data gathering techniques
    • User testing - early in development and again near end
    • Online questionnaires – emailed to people or linked from website
    • Focus groups - best near beginning of project, or at redevelopment stage
    • Visitor surveys - link online and real visits
    • Web stats - useful for long term trends /events etc
    • Need to distinguish between:
      • Diagnostics – making a project or service better
      • Reporting – to funders, or for advocacy
  • Online questionnaires
    • (+) once set up they gather numerical and qualitative data with no further effort – given time can build up large datasets
    • (+) the datasets can be easily exported and manipulated, can be sampled at various times, and structured queries can yield useful results
    • (–) respondents are self-selected and this will skew results – best to compare with similar data from other sources, like visitor surveys
    • (–) the number and nature of responses may depend on how the online questionnaire is displayed and promoted on the website
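The slide notes that exported questionnaire datasets can be manipulated and queried. As an illustration only (the column names and sample responses below are invented, not from the deck), a minimal Python sketch of summarising such an export using the standard library:

```python
import csv
import io
from collections import Counter

# Hypothetical questionnaire export: one row per response, with an
# audience group, a 1-5 rating, and a free-text comment column.
# Field names are illustrative only.
SAMPLE_EXPORT = """\
date,audience,rating,comment
2009-03-02,teacher,4,Useful whiteboard activities
2009-03-05,teacher,5,
2009-03-09,parent,2,Hard to navigate
2009-03-11,teacher,4,Printed the worksheets
"""

def summarise(csv_text):
    """Aggregate an exported questionnaire dataset: count responses
    per audience group and compute the mean rating."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    by_audience = Counter(r["audience"] for r in rows)
    mean_rating = sum(int(r["rating"]) for r in rows) / len(rows)
    return by_audience, mean_rating

groups, mean = summarise(SAMPLE_EXPORT)
print(groups)          # Counter({'teacher': 3, 'parent': 1})
print(round(mean, 2))  # 3.75
```

Note the self-selection caveat from the slide still applies: a breakdown like this describes who chose to respond, not the audience as a whole.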
  • Focus groups
    • (+) can explore specific issues in more depth, yielding rich feedback
    • (+) possible to control participant composition to ensure a representative sample
    • (–) comparatively time-consuming (expensive) to organise and analyse
    • (–) yield qualitative data only - small numbers mean numerical comparisons are unreliable
  • Visitor surveys
    • (+) possible to control participant composition to ensure a representative sample
    • (–) comparatively time-consuming (expensive) to organise and analyse
    • (–) responses can be affected by various factors including interviewer, weather on the day, day of the week, etc, reducing validity of numerical comparisons between museums
  • Web stats
    • (+) Easy to gather data – can decide what to do with it later
    • (+) Person-independent data generated - it is the interpretation, rather than the data themselves, which is subjective. This means others can review the same data and verify or amend initial conclusions reached
  • Web stats
    • (–) Different systems generate different data for the same web activity – for example, the number of unique visits measured via Google Analytics is generally lower than that derived from server log files
    • (–) Metrics are complicated and require specialist knowledge to appreciate them fully
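One reason log-derived figures disagree with analytics packages is that log analysis typically equates a visitor with a client IP address, while tools like Google Analytics use cookies. A minimal sketch (the log lines are invented, in Apache Common Log Format) of the naive IP-based count:

```python
import re
from collections import defaultdict

# Illustrative lines in Apache Common Log Format (invented data).
SAMPLE_LOG = """\
81.2.69.160 - - [11/May/2009:10:15:32 +0100] "GET /index.html HTTP/1.1" 200 1043
81.2.69.160 - - [11/May/2009:10:16:01 +0100] "GET /visit.html HTTP/1.1" 200 2210
86.3.14.25 - - [11/May/2009:11:02:47 +0100] "GET /index.html HTTP/1.1" 200 1043
"""

# Capture the client address and the dd/Mon/yyyy date of each request.
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[(\d{2}/\w{3}/\d{4})')

def unique_visits_per_day(log_text):
    """Naive 'unique visits': distinct client IPs per calendar day.
    Shared IPs (schools, offices) undercount real visitors and bots
    overcount them - part of why log-file figures differ from
    cookie-based analytics."""
    per_day = defaultdict(set)
    for line in log_text.splitlines():
        m = LOG_LINE.match(line)
        if m:
            ip, day = m.groups()
            per_day[day].add(ip)
    return {day: len(ips) for day, ips in per_day.items()}

print(unique_visits_per_day(SAMPLE_LOG))  # {'11/May/2009': 2}
```

Two requests from the same IP collapse to one "visitor" here, whereas a cookie-based tool might split or merge them differently – the metrics measure different things.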
  • Web stats
    • (–) As the amount of off-website web activity increases (e.g. Web 2.0 style interactions) the validity of website stats decreases, especially for reporting purposes, but also for diagnostics
    • (–) Agreeing a common format for presentation of data and analysis requires collaborative working to be meaningful
  • Who for what for ...
    • Who for? (audience)
      • Need to be clear from the start, e.g. ‘for teachers of yr 5/6 in the local area with whiteboards’
    • What ‘real-world’ outcomes? (learning outcomes)
      • What will they learn or do as a result? e.g. plan a visit to museum, learn that Romans wore funny clothes, discover that they enjoy using a digital camera…
    • How will they use it? (learning experiences)
      • What do they actually do with the site? e.g. work online or need to print it? - in pairs or alone? - with or without teacher help?
    • How can you ensure you do get these right?
      • Build questions into the planning process
      • Evaluate/test regularly
      • Get informal feedback whenever possible – and act on it
    Who for, what for...
    • Who is it for?
    • What are the real-world outcomes?
    • How will they use it?
    • Also: when, where, why?
    More information
    • Martin Bazley
    • 0780 3580 737
    • www.martinbazley.com