Understanding online audiences: how to evaluate your website and why
World wide wonder: Museums on the web, V&A Sackler Centre, 10 June 2009
Martin Bazley, online experience consultant, Martin Bazley & Associates

Martin Bazley
- Science Museum, London, Internet Projects (7 yrs)
- E-Learning Officer, MLA South East (3 yrs)
- Chair of the E-Learning Group for Museums
- Consultancy, websites, training, user testing, evaluation… Martin Bazley & Associates (notes in pack for afterwards)

www.martinbazley.com

Why do we need to research our online audiences?

Because otherwise users don't 'get' what we are offering:
- A real-world example (10 Jun 09: 43% of non-users don't want the web at home, even for free)
- On the value and status of Wikipedia: motivation first, then information. Anyone involved in any serious research will discover errors very quickly; try to get links into museum content via Wikipedia and other channels

Even a slight difference in viewpoints… can cause real problems for users.

In a conflict between visual affordance… and written instructions, visual affordance almost always wins.

Another example

"Just push the big green button by the gate."
Hmm… the button is really small… and it's not green… you can't push it in…

Huge green button

- How can we get a sense of who our online visitors are and what they do with our online content?
- How do we gather data to help us improve what we do?
- How do we measure success from the user's point of view, and against our own objectives and constraints?
- For example, how do we justify investment in social networks, etc.?

The audience research cycle: define audience research goal → plan methodology → collect data → analyse data → use results to guide changes (and back to goals).

Reasons for doing audience research: Evaluation
- Did your project/product/service do what you wanted it to do?
- Provide information for stakeholders
- Gauge audience satisfaction

Reasons for doing audience research: Promotion
- Improve your offer for your target audiences
- Increase usage
- Widen access

Reasons for doing audience research: Planning
- Inform development of a new product/service
- Inform business planning
- Prove interest in a related activity (e.g. an exhibit)

Tools available
- Qualitative: focus groups, "free text" questions in surveys, interviews
- Quantitative: web statistics, "multiple choice" questions in surveys, visitor tracking
- Observational: user testing, ethnographic methods

When to evaluate or test, and why
- Before funding approval: project planning
- Post-funding: project development
- Post-project: summative evaluation

Testing is an iterative process
- Testing isn't something you do once
- Make something => test it => refine it => test it again

Before funding: project planning
- Evaluation of other websites (research): Who is it for? What is it for? How do people use it? Raises awareness of issues and opportunities; contributes to market research; suggests possible elements, graphic feel, etc.
- Concept testing (focus group): check the idea makes sense with the audience; reshape the project based on user feedback

Post-funding: project development
- Concept testing (focus group): refine project outcomes based on feedback from intended users
- Refine website structure (focus group): does it work for users?
- Evaluate initial look and feel (one-to-one tasks): graphics, navigation, etc.

Post-funding: project development 2
- Full evaluation of a draft working version: usability AND content. Do the activities work? How engaging is it? What else could be offered?
- Observation of actual use of the website by intended users, using it for its intended purpose, in its intended context: workplace, classroom, library, home, etc.

Video clip: Moving Here ('key ideas, not lesson plans')

Post-funding: project development 3
- Acceptance testing of the 'finished' website: last-minute check, minor corrections only; often offered by web developers
- Summative evaluation: report for funders, etc.; learn lessons at project level for next time

Website evaluation and testing
- Need to think ahead a bit:
  - What are you trying to find out?
  - How do you intend to test it?
  - Why? What will you do as a result?
- The "Why?" should drive this process

User test early
- Testing one user early on in the project… is better than testing 50 near the end

Two usability testing techniques
- "Get it" testing: do they understand the purpose, how it works, etc.?
- Key task testing: ask the user to do something, watch how well they do
- Ideally, do a bit of each, in that order

User testing: who should do it?
- The worst person to conduct (or interpret) user testing of your own site is… you!
- Beware of hearing what you want to hear
- Useful to have an external viewpoint
- The first 5 minutes in a genuine setting tells you 80% of what's wrong with the site

Extracts from London Museums Hub online audience research

Data used for different things: Reporting and diagnostics
- Diagnostics: aimed at improvement of resources, identifying new areas for development, etc.
- Reporting: to DCMS and other funders, internal, etc.

- Google Analytics: the most popular web-based web stats tool
- But also Piwik, Mint, Clicky, etc.
- Also AWstats, HitStats, etc.

Stats solutions
- Exclusion of bots / internal traffic: most people are aware of the need to do so (a minimal filtering sketch follows below)
- But there is also pressure to provide higher figures… so maybe leave them in?

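For concreteness, here is a minimal sketch of what that exclusion can look like when counting from raw server logs. It assumes a combined-format Apache/NGINX access log; the file name, internal IP prefix and bot keywords are illustrative placeholders, not a definitive list.

```python
# Minimal sketch: excluding bots and internal traffic from a raw server
# log before counting requests. File name, internal IP prefix and bot
# keywords are illustrative placeholders, not a definitive list.
import re

BOT_KEYWORDS = ("bot", "crawler", "spider", "slurp")  # common crawler markers
INTERNAL_PREFIX = "192.168."                          # e.g. the office network

# Apache/NGINX "combined" format: ip - user [date] "request" status size "referer" "agent"
LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

kept = dropped = 0
with open("access.log") as log:
    for line in log:
        m = LINE_RE.match(line)
        if not m:
            continue  # malformed line
        ip, agent = m.group(1), m.group(2).lower()
        if ip.startswith(INTERNAL_PREFIX) or any(k in agent for k in BOT_KEYWORDS):
            dropped += 1  # bot or internal: keep it out of the figures
        else:
            kept += 1

print(f"Counted {kept} requests; excluded {dropped} bot/internal requests")
```

Note that page-tagging tools such as Google Analytics see far fewer bots in the first place, because most crawlers do not execute JavaScript.
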
Issues
- Browser-based statistics applications (like GA) sometimes cannot be installed across the whole of a site because:
  a) there is no access to the code of the site, or
  b) there are just too many legacy pages to change

Issues
- Barriers to effective usage of web stats
- Subsections of websites or microsites have different statistics packages => separate analysis for each part
- Perhaps an outside supplier controls access to the log files or stats package and is unwilling to add Google Analytics, etc.

Issues
- Off-website activity is not reflected in server stats
- More of users' interaction with museum web content is happening away from the museum websites themselves: via blogging, Flickr, YouTube, wikis, etc.
- Activity via RSS feeds, such as blogs or podcasts, can be measured… (a rough counting sketch follows)

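As a rough illustration of that measurement, the sketch below counts requests for a feed URL in the same kind of access log, and treats distinct IP/user-agent pairs as a crude proxy for subscribers. The feed path is a hypothetical example.

```python
# Rough sketch: gauging feed/podcast activity from the same access log.
# Counts polls of a feed URL, and distinct (IP, user-agent) pairs as a
# crude proxy for subscribers. The feed path is a hypothetical example.
import re

FEED_PATH = "/podcast/feed.xml"  # placeholder feed URL on the museum site
LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "GET ([^ "]+)[^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

polls = 0
clients = set()
with open("access.log") as log:
    for line in log:
        m = LINE_RE.match(line)
        if m and m.group(2) == FEED_PATH:
            polls += 1
            clients.add((m.group(1), m.group(3)))  # IP + user-agent pair

# One aggregator IP may serve many readers, and one reader may poll from
# several IPs, so treat this as an order-of-magnitude figure only.
print(f"{polls} feed requests from ~{len(clients)} distinct clients")
```
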
Issues
- As museums engage further with web 2.0 services, their online identities become fuzzier
- => It may soon not be feasible for museums to measure impact in absolute numbers

Issues
- Ways to ameliorate: inbound RSS feeds from those services could be embedded into the museum sites (a minimal embedding sketch follows), and strategies to gather statistics from external services can be developed.
- But a line needs to be drawn to avoid wasting too much effort, following the law of diminishing returns. There is also the copyright-related question of to what extent such content is yours, given that it is generated 'out there' rather than on your site.
- Therefore, the precise strategy to adopt should be determined by the ways in which the information will be used. This is one reason for adopting a more universal approach to data collection and interpretation, as outlined below.

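Embedding an inbound feed can be very little work. A minimal sketch using the feedparser Python library (a real, widely used package); the Flickr feed ID is a placeholder, not a real account:

```python
# Pull the latest items from an external web 2.0 feed so they can be
# re-presented on the museum's own site. Requires the feedparser
# package (pip install feedparser); the feed ID is a placeholder.
import feedparser

FEED_URL = "https://www.flickr.com/services/feeds/photos_public.gne?id=EXAMPLE"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:5]:  # latest five items, e.g. for a sidebar
    print(entry.title, entry.link)
```
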
Concerns
- Benchmarks: people want to know what is "good" and what is not when it comes to web statistics (re Google benchmarks)
- People don't always know what to do with the results

Geffrye Museum: web stats linked with events, etc.

Key points
- What we did: online questionnaire, visitor survey, focus groups
- What we found out: some key findings
- Recommended next steps

Online questionnaire
- SurveyMonkey: questions developed with the museums, then refined following trialling
- Links to the questionnaire added to museum websites
- Run for several months; thousands of responses collected

EM Online collections research by MHM

Visitor survey
- Gathered data for comparison with the online questionnaires
- Recruitment for focus groups

Focus groups
- 18-24 / families / adults 25+

What we found out
- Link between websites and physical visits:
  - The majority of web use is for planning visits
  - Only about half use the website before visiting the museum
  - Some websites are failing to create appropriate impressions of the physical museums, and so failing to attract undecided visitors
  - Confusion about the difference between stored and displayed collections

What we found out
- Very little use of websites other than for planning the visit, except:
  - Teachers: looking for teaching materials and ideas
  - Families: looking for online games, homework material, etc.
- Comment re online collection browsing: researching/searching, following, browsing (MHM)

What we found out
- Very little interest in 'user generated content', but…
- … this may be because most do not yet understand the possibilities, partly because trends are moving so quickly. More research is required, based on specific proposals

What we found out
- Demographic data
- Motivations for using the website
- Expectations of the website
- etc.

Online users

Most commonly requested web content relates to visiting
- The top 4 overall are:
  1. information on what the galleries are like
  2. information on the museum's objects
  3. details of events and exhibitions
  4. information about collections in store
- What most online users want is more information about the museum visit.

Value = Reach x Quality

Metrics
- Net Promoter may not be appropriate as a Quality metric for cultural sector websites (a sketch of the standard calculation follows, for comparison)
- Could a single question (or a handful of questions) be agreed as standard, and used to assess 'satisfaction' via surveys?
- How could the results be used? Reporting? Internal investment planning?

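For reference, Net Promoter is conventionally computed from a single 0-10 "would you recommend us?" question: the percentage of promoters (9-10) minus the percentage of detractors (0-6). A sketch with invented responses, alongside the kind of plain mean-satisfaction score a standard survey question might feed instead:

```python
# Sketch of the standard Net Promoter calculation alongside a plain mean
# satisfaction score, using invented 0-10 survey responses.
responses = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10, 3, 8]

promoters = sum(1 for r in responses if r >= 9)   # scored 9-10
detractors = sum(1 for r in responses if r <= 6)  # scored 0-6
nps = 100 * (promoters - detractors) / len(responses)

mean_satisfaction = sum(responses) / len(responses)

print(f"Net Promoter Score: {nps:+.0f}")                 # +17 for this sample
print(f"Mean satisfaction: {mean_satisfaction:.1f}/10")  # 7.6/10 for this sample
```
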
Defining market share
- 'Reach' may be very small for niche topics / audiences: does this matter?
- Could we use % of total audience instead? (a worked sketch follows)
- This would mean:
  a) estimating the total audience (often part of the project plan / funding application anyway)
  b) using unique visitor stats or other means to estimate reach

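A worked sketch of that idea, tying it back to the Value = Reach x Quality formula above; every number here is invented for illustration:

```python
# Worked sketch of reach as a share of the estimated total audience,
# fed into Value = Reach x Quality. All numbers are invented.
estimated_total_audience = 50_000  # from the project plan / funding bid
unique_visitors = 6_200            # from web stats (unique visitors)

reach = unique_visitors / estimated_total_audience  # 0.124, i.e. 12.4%
quality = 7.6 / 10                 # e.g. mean satisfaction rescaled to 0-1
value = reach * quality

print(f"Reach: {reach:.1%}  Quality: {quality:.2f}  Value: {value:.3f}")
```
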
Metrics for reporting
- Currently 'visits' is the default metric for reporting
- Not very meaningful, but does this matter, given that there is no single metric?
- What should we be reporting on? Audience penetration, audience satisfaction, audience engagement, breadth of coverage…?

Metrics for development
- Use a variety of metrics in combination, based on the nature of the project's issues, its future objectives and current sector priorities
- No magic bullets on the horizon: justifying investment is still a 'creative' decision
- Need a balance between flexibility between projects and shareability of data, for successful analysis and skills development
- 'Top tips' in the notes in the pack, 'Understanding online audiences'
