Slides used by Martin Bazley as part of UX Oxford day 18 March 2013 organised by the Bodleian Libraries

  • The Moving Here site, launched in 2003, is the product of collaboration between 30 local, regional and national archives, museums and libraries across the UK, headed by the National Archives. The site explores, records and illustrates why people came to England over the last 200 years, and what their experiences were and continue to be. It holds a database of on-line versions of 200,000 original documents and images recording migration history, all free to access for personal and educational use. The documents include photographs, personal papers, government documents, maps, and images of art objects, as well as a collection of sound recordings and video clips, all accessible through a search facility. The site also includes a Migration Histories section focusing on four communities – Caribbean, Irish, Jewish and South Asian – as well as a gallery of selected images from the collection, a section about tracing family roots, and a Stories section allowing users to submit stories and photographs about their own experiences of migration to England. The site was funded by the BLF (Big Lottery Fund). The Moving Here Schools site is a subsection of the greater Moving Here site, and was designed during a second phase (2005-07) of the Moving Here project. One aim of this phase is to ensure that stories of migration history are passed down to younger generations through schools. The Schools section therefore focuses on History, Citizenship and Geography for Key Stages 2 and 3 of the National Curriculum (ages 8 to 14), and includes four modules: The Victorians, Britain Since 1948, The Holocaust, and People and Places. Designed for use with an interactive whiteboard, the resources on Moving Here Schools include images and documents, audio and video clips, downloadable activity sheets, on-line interactive activities, a gallery of images, and links to stories of immigration experiences that have been collected by the Community Partnerships strand of the project. Funding for the Schools section is provided by HLF (Heritage Lottery Fund). The Schools site launches in March 2007 as a new section of the Moving Here site. Most on-line resource testing involves a potential user being observed in an environment chosen and closely controlled by an evaluator who guides the user through pre-set questions and assigned actions. Although this method of assessment can be useful in addressing top level issues and can give insight into some of the design, navigation and content changes that may need to be made, it does not replicate the conditions under which the Web site would normally be used. In this presentation we contend that the primary aim of testing Web sites for use in schools should be to capture feedback, not only on the usability, overall design, content and other aspects of the Web site itself, but also on the ways in which the Web site supports, or hinders, enjoyment and learning on the part of teachers and students in a real classroom environment. A range of issues, including group dynamics in the classroom, teachers' prior experience of using an electronic whiteboard, their background knowledge of the subject, and other issues, can all have an impact on the overall experience. The feedback generated through 'real life testing' – or 'habitat testing' – is rich and highly informative for refining the site to improve the end user experience – and it need not be expensive or overly time consuming. 
In this presentation we will discuss the expectations, challenges and opportunities that arose during 25 in-class testing sessions undertaken as part of our evaluation programme. We also make suggestions about how this type of testing can enhance Web site evaluation programmes, not only with regard to gaining feedback on specific resources, but also as an awareness-raising experience for museum practitioners.
  • A comprehensive evaluation for Moving Here Schools was built into the project from the beginning, and was allocated £20,000 of the entire project's budget, which came to approximately 6.7 % of the £300,000 budget that was allotted to the Schools portion of the project (or 1.7 % of the total project budget of £1.2 million). The evaluation programme included two distinct phases: a period of preliminary testing sessions, during which teachers participated in conventional user-testing, and a period of in-class testing, during which teachers used the Moving Here Schools site directly with their pupils in their own classrooms. In planning the evaluation process, the team felt that a combination of methodologies might produce more fruitful results than just one methodology alone. The same concept is suggested in Haley Goldman and Bendoly's study of heuristic evaluation used in tandem with other types of evaluation to test museum Web sites (2003). Six teachers from the London area formed the evaluation team, selected and partially supported by the LGFL (London Grid for Learning). Four of them participated in both the preliminary testing sessions and the in-class sessions. Two preliminary testing sessions were held at the National Archives in February and June 2006 to review the first and second drafts of the Schools site. The Education Resources Manager led the sessions and two other members of the Moving Here team participated, mainly recording information. The user environment (the ICT suite at the National Archives), as well as the methodology, was based on conventional user testing, bringing the testers into a closed environment and observing them as they interacted with the draft site. For each of the two observation sessions, the teachers spent a full day looking at the draft versions of the module they had been assigned and commenting on navigation, design, subject coverage, style, tone and other elements of the site, observed by three Moving Here team members. They also participated in a group discussion at the end of the day. Their feedback was written into reports and used to make improvements to the site. These preliminary testing sessions proved invaluable to improving the quality and usability of the site. Between the February session and the June session, major changes were made as a direct result of teacher feedback, substantially improving the site. The main changes made were to reorganize the materials into shorter, blockier lessons rather than longer, linear lessons; to merge two related lessons into one; to shorten most lessons and lesson pages; and to redevelop some of the interactive activity specifications. In the June session teachers noted that their input had been followed up and commented favourably upon the fact that their feedback had been used to improve the site (one teacher said that even though she had been asked for her opinion about Web sites before, she had never seen her suggestions put into practice before this particular project). After the June session, changes were again made to the site, including design modifications, changes to interactive activities that had already been programmed, and a few more changes (including additions) to content. Although this amount of testing could be considered enough – especially since it yielded such fruitful results – Moving Here also included in-class testing as part of its programme. 
This approach incorporates the advantages of 'ordinary' user testing, but builds on it by taking account of the social dynamics and practical problems that influence the use of the site, so as to ensure the site is usable in the classroom – as opposed to ensuring that it is usable in a controlled user testing environment. This unique addition to the evaluation programme proved even more useful than the conventional user testing. We were hired as the evaluation team to carry out the in-class testing programme. In classroom testing sessions, teachers were observed, at their schools, using the Moving Here site with their students. The schools included two primary schools in the borough of Newham, London, a secondary school in Bethnal Green, London, and a secondary school in Peckham, London – all neighbourhoods with culturally diverse communities, a high proportion of immigrants and a large number of people whose first language is not English. Four of the original six teachers were involved, and the evaluation team went into their classrooms 25 times between October and December 2006 – 5 times per teacher, with one teacher doing a double load and testing two modules instead of a single one, for a total of 10 sessions. The teachers were paid £300 each for five sessions of in-class testing, working out at £60 per session. This closely follows the standard amount of between $50 and $100 US for one session, as suggested by Steve Krug in his seminal work on user testing, 'Don't Make Me Think!'(2000). The teacher who tested two modules received double payment. The total spent on teachers was £1500. The evaluation team was paid approximately £18,500 for the 5 sets of observations and session reports, plus a final report. The evaluation team met with each of the participating teachers in advance to agree which lessons to use for the in-classroom evaluation, to brief the teachers on what was required during the session and to agree on the procedure for arriving during the school day. In consultation with the Education Resources Manager, the evaluators produced an evaluation plan with a set of questions to ask each teacher to make sure they had covered all areas and issues, plus an in-class observation checklist. The questions covered learning outcomes, tone, length of lesson, order of pages, images, design, navigation, accessibility, activity sheet issues, issues with interactives, children's engagement, and other improvements that teachers thought might be useful. The teachers submitted lesson plans with intended learning outcomes for each session according to the National Curriculum. Immediately after each session a written session report was sent to the Education Resources Manager, who used the findings in each report to implement changes to the site while the series of observations were still going on. In some cases, changes requested by a teacher were in place in time for the next observation.
  • Besides the elements that originated from the site and its contents, the environment had a significant impact on how the site was used. For example, various disturbances in the classroom (excess noise, students coming in late, interruptions at the door, etc), as well as logistical issues (time taken to turn on laptops and log in, time taken to log in at the ICT suite, time taken to find the correct Web site and the page within the Web site, difficulties with saving documents to students' folders and printing them out) all affected how well students worked with the Web site. As an example, a class of year 7 students became overly excited when visiting the ICT suite for the first time with the teacher, could not concentrate on the activity because there were other classes in the ICT suite, and was unable to access a video because the school's firewall blocked it. This resulted in a relatively uneven learning experience, but also was instrumental in indicating to the evaluators which of the activities that had been attempted during that class were the most engaging, and would therefore be the most likely to hold students' attention during periods of high disturbance.

    Another important element that the evaluators were able to capture only in a classroom setting was how students worked with each other. The class dynamic within the different groups contributed to how much the students learned, and while this issue will affect not only the group's learning from a specific Web site but also how well they will learn in all of their classroom endeavours, it is important to note how it affected the testing. For example, some groups worked extremely well together on an activity sheet, but this may have been due not only to the intrinsic interest they took in the activity but also to external issues (threats of detention if they talked too much, possibility of bad marks if they did not complete the activity, incentive of a free lunch to the group that handed in the best activity sheet).

    Interestingly, the evaluators' presence did not seem to distract students. Initially, the evaluators thought that their presence, even sitting discreetly at the back of the room, might cause students to react differently than they might normally have done. However, during the in-class testing sessions, evaluators found that their presence was either ignored or considered normal by the children. Reasons for this might be that students are accustomed to having more than one adult in the classroom at a time – teaching assistants might be a constant presence, other teachers might interrupt the class, and OFSTED inspectors (the Office for Standards in Education, the UK's official school inspection system) might visit classes to conduct inspections.
  • The in-class testing was useful in picking up elements that were not, and would not have been, flagged in the conventional user testing scenario. Even though the testers were the same teachers who had participated in the two preliminary user testing sessions, they did not pick up on some of the elements that needed to be fixed or changed until they were actually using the site in the classroom. It was only when practical implications became apparent that they noticed these items needed to be changed. All of the following issues had been present during the conventional user testing sessions but had not been singled out as needing modification until the in-class testing sessions:
    – Content: when they read the text out loud or asked students to read the text on the pages, teachers realized that the tone of some of the text was too difficult or complex, even though it had seemed fine when it was read on the screen.
    – Images: teachers realized that some of the images they had seen on lesson pages were not actually useful or pertinent to their teaching, and so should be removed or moved to different pages.
    – Activity sheets: activity sheets did not have spaces for students to put their names, which caused confusion when they were printing out their work – something teachers hadn't noticed when looking at the content of the sheets.
    – Interactive activities: although they took up a fairly large amount of screen space when they were being viewed by a single user on a single screen, interactive activities were too small for some children to see from the back of the class and needed to be expandable to full-screen size.
    – Navigation: the breadcrumb trail needed to go down one more level in order for teachers and students to immediately recognize where they were within the site.
    – Navigation: the previous/next buttons and the page numbers only appeared at the bottom of the screen, sometimes after the 'fold line', which made it difficult for users to know how to get from one page to the others.
    In fact, sometimes the issues that came up during classroom testing directly opposed what teachers had told us in initial user testing sessions. For example, during user testing teachers had said that the breadcrumb trails were easy to use and helped with navigation, but when they began using the site in the classroom, they found that this was not always the case. Teachers needed to try out the site with their pupils to see what really worked.
  • In other words, if you are observing in a specific class with, say, low ability or poor levels of English, equipment not working, behaviour issues, and so on, how can you be sure your results are as reliable as those obtained in a 'neutral' environment? First, there is no such thing as a neutral environment, and any test will be subjective, not least because of the particular interests and abilities of the subjects themselves. Secondly, and more importantly, the overall aim of this testing is to ensure the Web site works well in classrooms, and this means seeing the effect that factors like those mentioned above have on the way the Web site is used. Although ideally one would test in more than one classroom (as in this project), just one session in one classroom, however unique the setting might be, reveals more about the required changes than one session in a neutral environment, because the social dynamics and educational imperatives are simply not there to be observed in neutral surroundings.
  • If you are using an external Web developer, it is probably best to avoid getting them to do the user testing, as there is an inherent conflict of interest leading to a likelihood of minimising the changes required, and also because they are likely to focus more on the technical aspects of the site than on its effect on the teacher and pupils themselves. For the same reason, it would be preferable to get an external or unrelated evaluator for the project rather than visit a classroom yourself if you are the producer or writer of the content. It is always difficult to take criticism and a neutral party will not have any issues surrounding ownership of the material.
  • It's true that it can cost money to conduct user testing in a classroom – but then again, it need cost no more than conventional user testing. In conventional user testing, an acceptably budget-conscious way of conducting the testing is to have (preferably neutral) staff administer it, hand-writing the notes or using in-house recording equipment to record the user's experience, and to give the tester a small token of appreciation such as a gift voucher. In an in-class testing scenario, one person could attend a one-hour class session in a school, giving the teacher the same small token payment and taking notes or using recording equipment (taking care not to breach rules about the recording of children) to make notes of the issues uncovered. The Moving Here Schools evaluation programme was built into the project plan, but still only used 6.7% of the total spending on the Schools site. The team would recommend spending between 5 and 10% of your total project budget on user testing – especially a combination of conventional and 'real habitat' testing – and planning it into your project from the start. When taking into account the cost of not conducting effective user testing, then the cost of user testing is usually worth every penny. If you can only afford one test, do one. Krug makes the point best when he says 'Testing one user is 100 percent better than testing none. Testing always works. Even the wrong test with the wrong user will show you things you can do to improve your site' (2000).

    1. Understanding online audiences. Planning and implementing research into online audiences. UX day Oxford, 18 March 2013. Martin Bazley, Online experience consultant, Martin Bazley & Associates
    2. Martin Bazley – Previously: • Teaching (7 yrs) • Science Museum, London, Internet Projects (7 yrs) • E-Learning Officer, MLA South East (3 yrs) • Founder: Digital Learning Network (DLNET)
    3. Martin Bazley – Current: • Developing online resources, websites, user testing, evaluation, training, consultancy… Martin Bazley & Associates, www.martinbazley.com. Slides and notes available afterwards
    4. Note to self: check stats tomorrow to see if anyone looked up the website www.martinbazley.com
    5. How can we get a sense of who our online visitors are and what they do with our online content? How do we gather data to help us improve what we do? How do we measure success from the users' point of view, and/or against our own objectives and constraints? For example, how do we justify investment (or lack of it) in social networks etc?
    6. Reasons for doing audience research: Evaluation • Did your project/product/service do what you wanted it to do? • Provide information for stakeholders • Gauge audience satisfaction
    7. Reasons for doing audience research: Promotion • Improve your offer for your target audiences • Increase usage • Widen access
    8. Reasons for doing audience research: Planning • Inform development of a new product/service • Inform business planning • Prove interest in a related activity
    9. Data gathering tools • Qualitative: focus groups, “free text” questions in surveys, interviews • Quantitative: web statistics, “multiple choice” questions in surveys, visitor tracking • Observational: user testing, ethnographic
    10.–15. The audience research cycle (diagram, repeated across slides 10–15): Define audience research goal → Plan methodology → Collect data → Analyse data → Use results to guide changes
    16. Strengths and weaknesses of different data gathering techniques
    17. Data gathering techniques • User testing – early in development and again near the end • Online questionnaires – emailed to people or linked from the website • Focus groups – best near the beginning of a project, or at the redevelopment stage • Visitor surveys – link online and real visits • Web stats – useful for long-term trends, events etc
    18. Need to distinguish between: Diagnostics – making a project or service better; Reporting – to funders, or for advocacy
    19. Online questionnaires (+) once set up they gather numerical and qualitative data with no further effort – given time can build up large datasets (+) the datasets can be easily exported and manipulated, can be sampled at various times, and structured queries can yield useful results (–) respondents are self-selected and this will skew results – best to compare with similar data from other sources, like visitor surveys (–) the number and nature of responses may depend on how the online questionnaire is displayed and promoted on the website
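To illustrate the point above about exporting and manipulating questionnaire datasets, here is a minimal Python sketch using pandas. The file name and column headings are hypothetical placeholders for whatever your survey tool (e.g. SurveyMonkey) actually exports.

```python
import pandas as pd

# Load a CSV export from an online survey tool (file name and columns are
# hypothetical - adjust to match your own export).
responses = pd.read_csv("survey_export.csv", parse_dates=["date_submitted"])

# Quantitative: tabulate a multiple-choice question as proportions.
print(responses["how_did_you_find_us"].value_counts(normalize=True))

# Sample the dataset at different times, e.g. number of responses per month.
print(responses.set_index("date_submitted").resample("M").size())

# Qualitative: pull out free-text comments that mention a topic of interest.
comments = responses["further_comments"].dropna()
print(comments[comments.str.contains("search", case=False)])
```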
    20. Focus groups (+) can explore specific issues in more depth, yielding rich feedback (+) possible to control participant composition to ensure representative (–) comparatively time-consuming (expensive) to organise and analyse (–) yield qualitative data only – small numbers mean numerical comparisons are unreliable
    21. Visitor surveys (+) possible to control participant composition to ensure representative (–) comparatively time-consuming (expensive) to organise and analyse (–) responses can be affected by various factors including the interviewer, weather on the day, day of the week, etc, reducing the validity of numerical comparisons between museums
    22. Web stats (+) Easy to gather data – can decide what to do with it later (+) Person-independent data generated – it is the interpretation, rather than the data themselves, which is subjective. This means others can review the same data and verify or amend initial conclusions reached
    23. Web stats (–) Different systems generate different data for the same web activity – for example the number of unique visits measured via Google Analytics is generally lower than that derived via server log files (–) Metrics are complicated and require specialist knowledge to appreciate them fully
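As a rough illustration of why log files and Google Analytics disagree, here is a minimal Python sketch that counts "unique visitors" per day from a combined-format access log simply by counting distinct IP addresses. The log path is a placeholder; real log analysers are more sophisticated, and GA counts cookied browsers rather than IPs, so the two figures will rarely match.

```python
import re
from collections import defaultdict
from datetime import datetime

# Combined log format begins: 127.0.0.1 - - [10/Mar/2013:13:55:36 +0000] "GET /..."
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[(\d{2}/\w{3}/\d{4})')

visitors_per_day = defaultdict(set)

with open("access.log") as log:                      # path is a placeholder
    for line in log:
        match = LOG_LINE.match(line)
        if match:
            ip, day = match.groups()
            visitors_per_day[datetime.strptime(day, "%d/%b/%Y").date()].add(ip)

# One very crude "unique visitors" figure per day - useful for trends,
# not for comparing directly against analytics packages.
for day in sorted(visitors_per_day):
    print(day, len(visitors_per_day[day]))
```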
    24. Web stats (–) As the amount of off-website web activity increases (e.g. Web 2.0 style interactions) the validity of website stats decreases, especially for reporting purposes, but also for diagnostics (–) Agreeing a common format for presentation of data and analysis requires collaborative working to be meaningful
    25. Online surveys: SurveyMonkey – www.surveymonkey.com
    26. Web stats: Google Analytics
    27. Learn GA: short intro videos etc – https://www.google.com/analytics/iq.html
    28. The best way to learn GA is to use it: www.google.com/analytics/
    29. Web stats: focus on trends rather than absolute values
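A small Python sketch of what "focus on trends" can mean in practice: smoothing weekly visit counts (for example from a CSV report exported out of Google Analytics) so the direction of travel is visible rather than week-to-week noise. The file and column names are assumptions for illustration.

```python
import pandas as pd

# Weekly sessions exported from your analytics package (names are placeholders).
weekly = pd.read_csv("sessions_by_week.csv", parse_dates=["week_starting"],
                     index_col="week_starting")

# Absolute values jump around; a 4-week rolling mean shows the underlying trend.
weekly["trend"] = weekly["sessions"].rolling(window=4, min_periods=1).mean()

# Quarter-on-quarter change of the smoothed series is a more honest headline
# figure than comparing two arbitrary weeks.
quarterly = weekly["trend"].resample("Q").mean()
print(quarterly.pct_change().round(3))
```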
    30. The ‘long tail’. An example of a power law graph showing popularity ranking. To the right is the long tail; to the left are the few that dominate. Notice that the areas of both regions match. [Wikipedia: Long Tail]
    31. The ‘long tail’. The tail becomes bigger and longer in new markets (depicted in red). In other words, whereas traditional retailers have focused on the area to the left of the chart, online bookstores derive more sales from the area to the right. [Wikipedia: Long Tail]
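The same idea applied to web stats: rank your pages by views and compare how much traffic the handful of popular pages accounts for versus the long tail of rarely visited ones. A minimal sketch, assuming a hypothetical CSV of page views per URL.

```python
import pandas as pd

# One row per page with its total views (file and column names are placeholders).
pages = pd.read_csv("pageviews.csv")            # columns: url, views
pages = pages.sort_values("views", ascending=False).reset_index(drop=True)

total = pages["views"].sum()
head = pages.head(20)["views"].sum()            # the few pages that dominate
tail = total - head                             # everything else: the long tail

print(f"Top 20 pages: {head / total:.0%} of views")
print(f"Long tail ({len(pages) - 20} pages): {tail / total:.0%} of views")
```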
    32. SCA guidance: http://sca.jiscinvolve.org/wp/audience-publications/ – good overview, step-by-step approach. Culture24 Let’s Get Real: http://weareculture24.org.uk/projects/action-research/
    33. More information / advice / ideas. Happy to help – phone number on site: Martin Bazley, 0780 3580 737, www.martinbazley.com
    34. Extra slides not used in session. Some of these may be useful – please feel free to call for clarification or more info
    35. When to evaluate or test and why • Before funding approval – project planning • Post-funding – project development • Post-project – summative evaluation
    36. Testing is an iterative process. Testing isn’t something you do once. Make something => test it => refine it => test it again
    37. Before funding – project planning • *Evaluation of other websites – Who for? What for? How do they use it? etc – awareness raising: issues, opportunities – contributes to market research – possible elements, graphic feel etc (Research) • *Concept testing – check the idea makes sense with the audience – reshape the project based on user feedback (Focus group)
    38. Post-funding – project development • *Concept testing – refine project outcomes based on feedback from intended users (Focus group) • Refine website structure – does it work for users? (One-to-one tasks) • *Evaluate initial look and feel – graphics, navigation etc (Focus group)
    39. Card sorting – get various people to try out the website structure before you build it
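Card-sort results are often analysed by looking at how frequently participants place the same two cards in one group. A minimal Python sketch of that idea; the card names and groupings below are invented purely for illustration.

```python
from itertools import combinations
from collections import Counter

# Each participant's sort: a list of groups, each group a set of card labels.
# (Cards and groupings are made up - replace with your own sort results.)
sorts = [
    [{"Opening hours", "Find us"}, {"Collections", "Exhibitions", "Learning"}],
    [{"Opening hours", "Find us", "Contact"}, {"Collections", "Learning"}],
    [{"Opening hours", "Contact"}, {"Collections", "Exhibitions"}],
]

pair_counts = Counter()
for participant in sorts:
    for group in participant:
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1

# Pairs grouped together most often suggest pages that belong in one section.
for (a, b), count in pair_counts.most_common(5):
    print(f"{a} + {b}: grouped together by {count} of {len(sorts)} participants")
```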
    40. Post-funding – project development 2 • *Full evaluation of a draft working version – usability AND content: do activities work, how engaging is it, what else could be offered, etc. Observation of actual use of the website by intended users, using it for its intended purpose, in its intended context – workplace, classroom, library, home, etc
    41. Post-funding – project development 3 • Acceptance testing of ‘finished’ website – last-minute check, minor corrections only – often offered by web developers • Summative evaluation – report for funders, etc – learn lessons at project level for next time
    42. Website evaluation and testing. Need to think ahead a bit: – what are you trying to find out? – how do you intend to test it? – why? what will you do as a result? The Why? should drive this process
    43. Evaluating online learning resources in the classroom. Martin Bazley, Online experience consultant
    44. Key point: for a site designed for schools, the most effective user testing observations will be made in a real classroom situation
    45. National Archives Moving Here project. For teachers of 8–14 yr olds. History, Geography and Citizenship. Features: interactives, activity sheets, audio and video clips
    46. Moving Here Schools: for 8–14 yr olds studying History, Geography and Citizenship. Features: interactives, activity sheets, audio and video clips
    47. 1. Preliminary testing sessions – conventional user-testing with teachers (at TNA)
    48. 2. In-class testing – teachers used the Moving Here Schools site with pupils in their own classrooms. This meant sitting at the back of the classroom observing and taking notes…
    49. Evaluation: 2-phase approach
    50. Site ready in parts – but not too ready:
    51. The environment had a significant impact on how the site was used. The class dynamic within the different groups contributed to how much the students learned.
    52. The environment and social dynamics. The environment had a significant impact on how the site was used. The class dynamic within the different groups contributed to how much the students learned.
    53. In-class testing picked up elements not there in conventional user testing. Teachers in preliminary user testing did not spot some problems until actually in the classroom. For example…
    54. Interactive activities: looked big enough when viewed on a screen nearby…
    55. … but text/images too small for some children to see from the back of the class…
    56. …so interactives needed to be viewable full-screen
    57. Only spotted during in-class testing: …so interactives needed to be viewable full-screen
    58. Content: when students tried to read text out loud, teachers realised some text was too difficult or complex
    59. Activity sheets: some sheets did not have spaces for students to put their names – caused confusion when printing 30 at the same time…
    60. Manchester Art Gallery art interactive. For teachers of 8–11 yr olds, and for pupils. History, Art and Citizenship. Features: interactive with built-in video, quiz, etc, plus activity sheets and background info
    61.–64. Martin Bazley (image slides carrying only this credit)
    65.–66. This classroom user testing is all very well, but... How can you see everything in a class of 30 children – don’t you miss things? You see things in a classroom that don’t arise in one-to-one testing. They are the real issues.
    67.–68. This classroom user testing is all very well, but... Doesn’t using a specific class with particular needs skew the results? » For example, low ability, poor English, equipment not working, behaviour issues, etc – are results as reliable as those in a neutral environment? » ‘Neutral environment’? – no such thing: any test will be subjective, and in any case » testing is to make the website work well in the classroom – we need to see the effects of factors like those.
    69.–70. This classroom user testing is all very well, but... Can’t my web developer do the testing for us? » Best not to use an external developer to do user testing – conflict of interest » also likely to focus more on the technical aspects of the site than on its effect on the teacher and pupils » visit a classroom yourself but use an independent evaluator for key decision points
    71. This classroom user testing is all very well, but... I don’t have the time or budget to do this! » Need cost no more than conventional user testing: one person could attend a one-hour class session in a school, giving the teacher the same small token payment » This programme had evaluation built into the project: 6.7% of the total Schools site budget » Allow 5–10% of the total project budget for user testing => videos
    72. Video clips • Moving Here key ideas, not lesson plans etc: http://www.vimeo.com/18888798 • Lesson starter: http://www.vimeo.com/18892401 • Time saver: http://www.vimeo.com/18867252
    73. User test early. Testing one user early on in the project… …is better than testing 50 near the end
    74. Two usability testing techniques • “Get it” testing – do they understand the purpose, how it works, etc • Key task testing – ask the user to do something, watch how well they do • Ideally, do a bit of each, in that order
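A minimal sketch of how key task results are often written up: record, for each user and task, whether they succeeded and how long it took, then summarise completion rates. The tasks, participants and timings below are invented for illustration.

```python
from statistics import median

# (participant, task, completed?, seconds taken) - invented observations
results = [
    ("P1", "Find school opening times", True, 42),
    ("P2", "Find school opening times", True, 95),
    ("P3", "Find school opening times", False, 180),
    ("P1", "Download an activity sheet", True, 60),
    ("P2", "Download an activity sheet", False, 150),
    ("P3", "Download an activity sheet", False, 140),
]

for task in sorted({t for _, t, _, _ in results}):
    attempts = [r for r in results if r[1] == task]
    successes = [r for r in attempts if r[2]]
    rate = len(successes) / len(attempts)
    times = [r[3] for r in successes]
    print(f"{task}: {rate:.0%} completed, "
          f"median time {median(times) if times else 'n/a'}s")
```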
    75. User testing – who should do it? • The worst person to conduct (or interpret) user testing of your own site is… you! • Beware of hearing what you want to hear… • Useful to have an external viewpoint • First 5 mins in a genuine setting tells you 80% of what’s wrong with the site
    76.–84. (Repeat of slides 16–24: strengths and weaknesses of different data gathering techniques – user testing, online questionnaires, focus groups, visitor surveys, web stats.)
    85. More information / advice / ideas: Martin Bazley, 0780 3580 737, www.martinbazley.com
    86. SCA guidance: http://sca.jiscinvolve.org/wp/audience-publications/ – good overview, step-by-step approach. Culture24 Let’s Get Real: http://weareculture24.org.uk/projects/action-research/
    87. Crit room: ‘simulated user testing’
    88. Crit room protocol. Simulating user testing – usually one-to-one in a quiet room. No one (especially site stakeholders) other than the tester says anything for the first part of the session. In this simulation we will focus on: – look and feel of the site – usability – content
    89. More information / advice / ideas. Happy to help – phone number on site: Martin Bazley, 0780 3580 737, www.martinbazley.com