Analytic Design Group Design Research Qualifications



  1. Qualifications: User Testing & Design Research
     January 24, 2013
  2. Who we are
     Analytic Design Group Inc (ADGi) is a visionary user experience strategy and design firm that specializes in innovating in digital environments by leveraging in-depth primary research to expose unexamined assumptions. Our work withstands not only the complexity of multiple agendas and intricate implementation but also the scrutiny of the public.
     Founded in 2005 on the principle that evidence-based design will always be more powerful than design driven by best practices, we have grown from a single practitioner to a vibrant, collaborative team.
     Some of our clients include: Samsung, Sony, AT&T, Adobe, Nokia, LG, Motorola.
  3. Our Services
     Key service areas include: design research, user experience strategy development, interaction design, communication design, and usability testing. Lately some of our work has also included service design considerations. Our projects can include the full sweep of user experience services (i.e. user research through strategy and design) or just one element. Our aim is always to fit the work required to the need, and we'll work with you to ensure you are getting the best value from our efforts.
     This presentation focuses on our design research and user testing services.
     / Design Research / Usability Testing / Communication Design / Interaction Design / User Experience Strategy Development /
  4. Design Research
  5. Design Research
     We use a diverse set of design research methodologies:
     Surveys — we have used surveys to establish baseline data (largely attitudinal), help segment audiences, and, in some cases, help identify core issues that can be further explored by other research.
     Context-rich group interviews (like marketing focus groups, but much richer) — the focus groups we run typically draw out a great deal of contextual as well as attitudinal data. We usually ask participants to complete homework prior to the session (this aids in grounding the user and supports contextual data gathering) and include some form of participatory design exercise to allow participants to tap into their feelings and attitudes quickly.
  6. Design Research
     On-site observation (without interviews) — this is useful when we are looking for issues that are process related.
     Task analysis — this is usually an expert review followed by a walkthrough with participants to identify particular pain points with certain tasks. This often involves both offline and online elements.
     Expert review/heuristic analysis — this can be a quick and cost-effective means of identifying user experience and usability issues. We typically rank the severity of issues identified and can include an accessibility review in this process.
     Card sorting — we have done card sorting exercises in both one-on-one and group sessions. We've used both open and closed card sorts and typically use the findings to develop information architectures.
     Diary studies — these are useful when we are looking at processes that occur over a longer period of time, or at the impact of certain things over time.
  7. Design Research
     In-Situ & Ethnographic
     We have conducted numerous ethnographic or in-situ studies on a wide range of physical and digital products. These are typically very data rich and result in in-depth, tactical, near-term findings as well as robust, strategic, longer-term insights. Our clients report that, along with finding solutions to nagging problems, these studies help them focus their product management for a year or more.
     For example, last year ADGi conducted an ethnographic study for a mobile carrier on a device experiencing high returns. We were able to identify key usability issues and service design issues, and deliver insights about how their customers currently perceived these devices and were likely to for the foreseeable future.
  8. Design Research: Sample Report
  9. User Testing
  10. User Testing
      The range of user testing methods we use includes:
      Metrics-based usability studies — the usability studies we conduct are rich in quantitative (metrics) data as well as qualitative data. We typically collect task time, performance, SUS, satisfaction, and hedonic scores.
      Remote-moderated usability studies — through the use of tools such as WebEx (or other screen-sharing tools) we have successfully conducted remote moderated testing, collecting similar (or the same) metrics as we do for in-person tests. This is particularly useful when testing with participants who are geographically dispersed, or where the user's context heavily influences their interaction and on-site observation is not possible or feasible.
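The SUS scores mentioned above follow a standard published scoring procedure (Brooke's System Usability Scale): each of the ten 1-5 Likert items is normalized, and the sum is scaled by 2.5 to yield a 0-100 score. A minimal sketch in Python (illustrative only, not ADGi's tooling):

```python
def sus_score(responses):
    """Standard SUS scoring: ten 1-5 Likert responses -> a 0-100 score.

    Odd-numbered items are positively worded (contribute r - 1);
    even-numbered items are negatively worded (contribute 5 - r).
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# A strongly positive response pattern scores 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

A neutral response of 3 on every item yields the scale midpoint of 50.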
  11. User Testing
      'Listening-lab' style user testing — this is essentially user testing without a set task list. There is some hard data we draw out of these sessions, but the focus is mainly on qualitative data.
      Un-moderated usability testing — this is user testing where the user is in the lab, observed and recorded, but completing the tasks on their own.
      ADGi Field Test — this is a web-based tool we developed in house that automates a field test: participants are asked via email whether they wish to participate. If they indicate yes, they are sent a set of instructions or tasks to complete, along with an NDA reminder. After a set period of days, participants are sent a survey to fill out. From a test administration point of view, we can track all the participants and where they are in the study, get a graphical view of how they responded to each question, and download a CSV of the results for additional manipulation. We've used this tool to test devices and apps.
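The participant-tracking and CSV-export workflow described for the field test tool could be sketched roughly as below. This is an illustrative sketch only, not ADGi's actual implementation; the record fields (email, status, per-question answers) are assumptions.

```python
import csv
import io

# Hypothetical participant records: a study status plus survey answers.
participants = [
    {"email": "p1@example.com", "status": "survey_done", "q1": 4, "q2": 5},
    {"email": "p2@example.com", "status": "tasks_sent", "q1": "", "q2": ""},
]

def status_counts(rows):
    """Summarize where participants are in the study."""
    counts = {}
    for row in rows:
        counts[row["status"]] = counts.get(row["status"], 0) + 1
    return counts

def export_csv(rows):
    """Export all results as CSV text for additional manipulation."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["email", "status", "q1", "q2"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(status_counts(participants))  # {'survey_done': 1, 'tasks_sent': 1}
```

In a real tool the status summary would feed the administrator's dashboard view, and the CSV export would back the download link.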
  12. User Testing
      Navigation testing — this is another tool we developed in house to test navigation structures. Users are asked a series of questions about the categories and labels under which they would expect to find certain pieces of information. They are shown the tree structure for the site and can navigate through it to the spot where they would expect to find the content. This testing has been very effective for us in establishing how findable content on very large sites will be, and in determining the effectiveness of categorization and labeling schemes.
      Concept acceptance testing — this is useful for trying out a new concept, typically while comparing it to other, more familiar ones. We've used this on devices when a client wants to evaluate a new way of navigating or a different form factor.
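A navigation (tree) test of this kind is usually scored by comparing each participant's chosen path against the expected location of the content. As an illustrative sketch (the data shapes here are assumptions, not ADGi's tool):

```python
def direct_success_rate(results):
    """Fraction of participants whose chosen path through the tree
    exactly matched the expected location of the content."""
    hits = sum(1 for chosen, expected in results if chosen == expected)
    return hits / len(results)

# Each pair: (path the participant navigated, path where the content lives).
results = [
    (["Products", "Phones", "Specs"], ["Products", "Phones", "Specs"]),
    (["Support", "Manuals"], ["Products", "Phones", "Specs"]),
]
print(direct_success_rate(results))  # 0.5
```

Aggregating this rate per question shows which categories and labels are working and which parts of the tree send users down the wrong branch.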
  13. User Testing
      Competitive benchmark testing — this is useful when comparing a product (interface, device, site) against one or more others. We have used this to set benchmarks for future comparison as well as for one-off comparisons.
      Iterative testing — this is where we test one, or at most two, discrete elements with a very small set of users (2 or 3), make recommendations based on that testing, the development team makes those changes, and we test again until we no longer see the need for changes. We use this method primarily for games research looking at a particular interaction. While other clients have asked about this approach, after discussing it we have so far determined that its value does not warrant the effort and cost for the project at hand.
  14. User Testing
      Remote user testing — ability and experience in executing remote usability testing, inclusive of screen sharing and audio and video recording.
      We have experience conducting remote-moderated usability sessions as well as focus group sessions. We screen share and capture (audio and video record) the sessions. We have found that this type of research can be very cost effective, and it is especially useful when we are asking participants to log in to their own accounts, or when they are geographically dispersed. On occasion we've also found that, with the user located in their own environment, we are able to glean more contextual information than we typically can in the lab.
  15. User Testing: Sample Report
  16. User Testing: Sample Report
  17. Mobile Test Lab
  18. Mobile Test Lab
      Mobile test lab — we conduct a great deal of testing on mobile devices, and our lab setup is both flexible and powerful. Our testing equipment is deliberately flexible so that we can set up in a lab environment, a coffee shop, or a person's home or office. We have designed a very stable yet flexible camera mount that allows us to capture a variety of interactions. Assuming we can connect to stable WiFi, we can also live stream (to allow remote viewing) outside of a lab environment.
      For more information view this presentation: Mobile Usability: What's Your Strategy
  19. Karyn Zuidinga, Principal & Director of User