Preference and Desirability Testing: Measuring Emotional Response to Guide Design

(From UPA 2011-Atlanta) Usability practitioners have a variety of methods and techniques to inform interaction design and identify usability problems. However, these tools are not as effective at evaluating the visceral and emotional response generated by visual design and aesthetics. This presentation will discuss why studying visual design is important, review considerations for preference and desirability testing and present two alternative approaches to user studies of visual designs in the form of case studies.

Notes
  • Impacts a product's or application's perceived: utility, usability, credibility
  • If users have a positive impression of the design aesthetics, they are more likely to overlook or forgive poor usability or limited functionality. With a negative impression, users are more likely to find fault with an interaction, even if a product’s overall usability is good and the product offers real value.
  • High desirability feeds into the motivational factors that help trigger target behavior.
  • The simplicity of the question doesn’t work well with larger numbers of design options, especially if some are highly similar
  • People can have a difficult time articulating what it is about a design they like or dislike
  • The whys are important for stakeholder acceptance (branding guidelines). Business sponsors and stakeholders often want substantial customer feedback to assure them a given direction is correct.
  • Triading: a qualitative interview technique that reveals constructs and elicits attributes that are important to users, in their own vocabulary. The researcher asks the participant to identify how two of three examples are different from the third. In typical user research interviews, a researcher asks participants about their thoughts on a defined list of topics. The disadvantage of this approach is that the researcher may be inquiring about topics that are of little value or significance to the experience of the participants. Generally, participants will dutifully answer questions about any topics we ask them about, without thinking more broadly, going beyond the limits our questions impose, or interrupting us to tell us about dimensions that may be more relevant to them. Participants assume researchers are interested in studying the particular topics they've included in their interview scripts and don't raise other issues that might be more pertinent to their overall experience with a product or potential design. Triading is a method that allows a researcher to uncover dimensions of a design space that are pertinent to its target audience. In triading, researchers present three different concepts or ideas to participants and ask them to identify how two of them are different from the third. Participants describe, in their own terms, the dimensions or attributes that differentiate the concepts. Participants follow this process iteratively, identifying additional attributes they feel distinguish two of the concepts from the third, until they can't think of any other distinguishing factors. By repeating this process across multiple participants, researchers can see trends that define audience segments or personas. The benefit of this process is that it uncovers dimensions of a particular domain that are important to the target audience rather than the researcher or designer. In addition, the dimensions participants identify are generally emotional aspects that it is important for experience designers to consider. For example, participants may describe differences between groups as "warm" versus "cold" or "business-like" versus "fun." Designers can then use the most relevant or common dimensions as inspiration for further design and exploration.
  • Benefits – straightforward and easy to administer on a large scale. Negatives – if you want to do more than pick a clear winner, and instead understand the emotional connections/reactions to each design, this will not lend itself to that.
  • Obvious examples are consumer electronics or other retail products. Also appropriate for applications in healthcare, insurance, financial, travel, etc.
  • Sensors track participants' physiological responses to particular designs; changes in these measurements suggest a particular emotional response. Paired with attitudinal and self-reported survey measurements, they give a multifaceted view of emotional reactions to a design.
  • Respondents are asked: "To what extent do the feelings expressed by the characters correspond with your own feelings towards the stimulus?" Building on the responses of many people allows you to abstract valuable data pertaining to the emotional performance of your website, product, or service.
  • “luxurious, approachable, friendly, capable, multi-cultural/inclusive, established”
  • "My initial reaction to this web site is that it seems kind of plain. There is not much going on in the page, and the colors seem kind of drab." "This is a nice looking website. It is well designed, well laid out, and is appealing to look at. It makes me want to continue to navigate the site to learn more."
  • "Men don't really go with children… where there's a baby, there must be a mother." "My initial reaction to the website is that it seems very clean and modern. I like the layout; it looks like it's easy to find information."
  • "I felt love. I saw a mother holding a child… that's pretty touchy. The site looks good, and it makes the hospital trustworthy." "My initial reaction was that the hospital is represented by a caring, warm and friendly website."
  • As you're about to see, my story is just a LITTLE bit different than Mike's . . . I joined the legal business unit of TR in 2007, just as it was about to embark on a 3-year, $90 million journey to produce a next-generation subscription-based legal research tool. Please note the catchphrase in the center graphic – Legal Research Goes Human. This was at the core of what the executive leadership was trying to achieve. Which was no easy endeavor – first, because legal research is not a pleasurable activity under any circumstances, and second, because the legacy Westlaw product, which had long dominated the market, looked like this . . .
  • This is one tab out of more than 100 that the user could have access to. Some were worse than this. Most of the selling points of the new system were going to be feature-based – a completely new and proprietary search algorithm, robust filters, the ability to create and share folders, collaborative tools, etc. However, the top exec, to his great credit, said he wanted to create something users would love to use – he used to say "I want them to snuggle up in bed with it at night." And he committed to significant preference testing of designs toward dual purposes of equal significance: guiding the design itself, and having rock-solid justification for the senior executives for what we were doing. To Mike's earlier point, that meant quantitative data with lots of users.
  • Initial goals were to measure how strong the brand was and whether people cared if it was messed with. Critical – this affected our approach, as you'll see.
  • In May of 2009, in parallel with the feature build-outs . . .
  • This was done for security reasons (phones put away, constant monitoring). I told you our approach was different . . . Try to stay with me.
  • This was to establish that users liked and were loyal to the product, but hated the design.
  • Presented page designs with instructions to focus on a specific element in isolation – like use of a photo/image in the product header, or size of the search area – to see if there were any trends in those areas. Here's how they did it:
  • Look at all pages in full screen (randomized)
  • From those they reviewed, select a top choice.
  • Either change the one you selected, or move on to the next Design Element.
  • Looked at, and selected from, 32 screens
  • In the Design Gallery phase, users were asked to register their preferences for optimized page designs in their totality.
  • Instructions – you're going to see them again and give an overall rating on a 10-point scale
  • Top 5 choices are presented, with a way to rate key aspects to get granular on what they like
  • As well as their bottom 2 – what they don’t like
  • We had them do the same for Results Lists (smaller numbers of design elements and design gallery, but the same way to rate their top 5 and bottom 2)
  • And document display as well
  • Revisit the descriptors to establish that the perception of a new design for the product was positive, and they were done (MOST IN LESS THAN 1 HOUR). However, we did solicit some who could stay longer to participate in a brief interview, which was taped (extra $$ to those who did).
  • Here’s one example
  • Out of all this, we got 3 core buckets of emotional response information that the VP could report up: 1. Baseline for design refinement established based on these clear winners
  • 2. That we were getting the desired perceptual responses on the differences between the products
  • 3. Post-session discussions elicited the qualitative feedback we needed to provide that color that Mike talked about earlier. Please take particular note of the 4th one there – users did not want the brand messed with. We're going to come back to that later. OK, the design team takes the summer, refines, refines, refines, gets to a certain point, and the VP says . . .
  • Let’s do it again
  • Because of the refinements, we were able to piggyback some outstanding UI issues into this test. Security important now more than ever – this can't get out!!
  • Type formatting was the dynamic manipulation of case and statute documents on the screen, the topic of my submission to UPA 2010; happy to provide that info if you're interested.
  • ** DON'T VERBALIZE RESULTS ** (next slides)
  • Clear choices (in the 90% range) for each of our primary page types
  • Specific areas of concern about all three (significantly, T&C search). In the home stretch, 3 months away from launch, we're doing massive final validation testing for all of the core features I mentioned at the top, when I get called into my director's office. Everyone at the top was thrilled at what was coming, BUT – "we accept that we cannot mess with the legacy blue – however, we STRONGLY advocate that the branding of the product be aligned with the corporate template (orange, gray, white that you saw at the top), and if you resist you better have a damn good reason . . ." SO
  • Focus on those who decide to buy, or have influence on whether to buy
  • Results confirmed what we knew to be true from the outset
  • Achieved our primary goal
  • Measured, measured, measured (quant & qual) – 759 participants total, about 325k. Gut-level preferences, which are a by-product of emotional response. Guided the design via explicit trends to all stakeholders.
  • Evolution – changes are nuanced, but still critical
  • We sort of did in phase 1, but we didn’t ask “why” so much as just get quotes of enthusiasm for the fact that the product was being updated
  • As I was reviewing, I thought “why would anyone ever consider doing anything like this”, but if you do
  • In order of ascending practicality
  • A lot of that comes from the top, but we as UX pros can help make the case. Every project is different, and as you saw in my case, you can take a little bit from a lot of different methods and come up with something that works in your specific circumstances. Re-emphasize – this should be used to guide and inspire the evolution of a design, then confirm decisions (if you've done it right).
  • In order of ascending practicality

Transcript

  • 1. Preference and Desirability Testing: Measuring Emotional Response to Guide Design
    Michael Hawley
    Chief Design Officer, Mad*Pow
    @hawleymichael
    Paul Doncaster
    Senior User Experience Designer, Thomson Reuters
  • 2. Agenda
    Why we should care
    Why it's not always as simple as asking: "Which option do you prefer?"
    Methods to consider
    Case Study: Greenwich Hospital
    Case Study: WestlawNext
    Summary/Comparison
  • 3. Why we should care
  • 4. Impressions Count
  • 5. Visceral Emotions
  • 6.
    Fogg’s Behavior Model
    Core motivators include:
    • Pleasure/pain
    • 7. Hope/fear
    • 8. Acceptance/rejection
    http://www.behaviormodel.org/
  • 9.
    Positioning Desirability Studies
    http://www.xdstrategy.com/2008/10/28/desirability_studies/
  • 10. “Which do you prefer?”
  • 11. Quantity, granularity breed apathy
  • 12. Poor articulation
    “It reminds me of…”
    “It’s nice and clean.”
    “There’s just something about it . . .”
    “I ordinarily don’t like red, but for some reason it works here . . .”
    “It’s better than the other ones.”
  • 13. What Stakeholders Should Care About
    “We should go with design C over A and B, because I feel it evokes the right kind of emotional response in our audience that is closer to our most important brand attributes.”
  • 14. Methods to Consider
  • 15. Present three different concepts or ideas to participants, and ask them to identify how two of them are different from the third and why.
    Triading
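
    Tallying the elicited constructs across participants is what surfaces the trends. Below is a minimal Python sketch of that aggregation step, assuming each session's distinguishing dimensions have been transcribed as free-form strings; the session data and phrasing are hypothetical, not from the talk.

        from collections import Counter

        # Hypothetical triading output: for each participant, the dimensions
        # they used to separate two of the three concepts from the third,
        # captured in their own words
        sessions = [
            ["warm vs. cold", "business-like vs. fun"],
            ["warm vs. cold", "modern vs. dated"],
            ["business-like vs. fun", "warm vs. cold"],
        ]

        # Dimensions that recur across participants suggest attributes that
        # matter to the audience rather than to the researcher or designer
        dimension_counts = Counter(dim for session in sessions for dim in session)
        for dimension, count in dimension_counts.most_common():
            print(f"{dimension}: raised by {count} participant(s)")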
    Broad, experience-based questionnaires that also include questions relating to visual appeal and aesthetics
    • SUS (System Usability Scale)
    • 17. QUIS (Questionnaire for User Interface Satisfaction)
    • 18. WAMMI (Website Analysis and Measurement Inventory)
    Qualitative Questionnaires
  • 19. Show participants a user interface for a very brief moment, then take it away.
    Participants recall their first impression, then moderator probes for meaning.
    • Helpful for layout decisions, prominence of content, labels
    • 20. www.fivesecondtest.com
    Attention designers: You have 50 milliseconds to make a good first impression
    Quick Exposure Memory Tests
  • 21.
    • Electroencephalography (EEG): Brain activity
    • 22. Electromyography (EMG):
    Muscles and Excitement
    • Electrodermal Activity (EDA): Sweat, Excitement
    • 23. Blood Volume Pressure (BVP): Arousal
    • 24. Pupil Dilation: Arousal and Mental Workload
    • 25. Respiration:
    Negative Valence or Arousal
    Physiological and Neurological
  • 26.
    PrEmo Emotional Measurement
    Dr. Pieter Desmet, Technical University of Delft
    http://www.premo-online.com
  • 27.
    http://www.microsoft.com/usability/uepostings/desirabilitytoolkit.doc
    Product Reaction Cards
  • 28. Case Study: Greenwich Hospital
  • 29. Determine intended brand attributes (and their opposites)
    Product Reaction Cards: Before You Begin
    Leverage existing marketing/brand materials
    Alternatively, stakeholder brainstorm to identify key brand attributes/descriptors using full list of product reaction cards as a start
    Tip: “If the brand was a person, how would it speak to your customers?”
  • 30. Methodology
    Include 60/40 split of positive and negative words
    Target 60 words, optimized to test brand
    Simple question: “Which of the following words do you feel best describe the site/design/product (please select 5):”
    One comp per participant, or multiple comps per participant (no more than 3)
    Participants
    Qualitative: Paired with usability testing
    Quantitative: Target minimum of 30 per option if possible
    Product Reaction Cards: Conducting
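
    As a rough illustration of assembling that stimulus list, here is a Python sketch that samples the recommended 60/40 positive/negative mix; the word pools are small placeholders, not the full reaction-card deck, and the function name is invented for this example.

        import random

        # Placeholder pools standing in for the full product reaction card deck
        positive_pool = ["friendly", "clean", "trustworthy", "approachable",
                         "modern", "inviting", "professional", "calm"]
        negative_pool = ["dated", "drab", "busy", "confusing", "sterile"]

        def build_word_list(pos_pool, neg_pool, target=60, pos_share=0.6, seed=1):
            """Sample a card list approximating the 60/40 positive/negative split."""
            rng = random.Random(seed)
            n_pos = min(round(target * pos_share), len(pos_pool))
            n_neg = min(target - n_pos, len(neg_pool))
            return rng.sample(pos_pool, n_pos) + rng.sample(neg_pool, n_neg)

        print(build_word_list(positive_pool, negative_pool))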
  • 31.
    Process - Analyzing
    Calculate percentage of positive and negative attributes per design
    Visualize overall sentiment of feedback using “word clouds” (see wordle.net)
    68% Positive
    32% Negative
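
    The analysis itself reduces to counting card selections by valence. A minimal sketch, assuming each participant's five picks are stored as strings and a valence lookup exists for every card; the lookup and pick data here are invented for illustration.

        from collections import Counter

        # Hypothetical valence lookup for a handful of cards
        CARD_VALENCE = {"friendly": "positive", "clean": "positive",
                        "trustworthy": "positive", "approachable": "positive",
                        "dated": "negative", "drab": "negative", "busy": "negative"}

        def sentiment_split(selections):
            """selections: one list of 5 chosen cards per participant.
            Returns (% positive, % negative, word counts for a word cloud)."""
            counts = Counter(card for picks in selections for card in picks)
            total = sum(counts.values())
            positive = sum(n for card, n in counts.items()
                           if CARD_VALENCE.get(card) == "positive")
            return 100 * positive / total, 100 * (total - positive) / total, counts

        picks = [["friendly", "clean", "trustworthy", "drab", "approachable"],
                 ["clean", "dated", "friendly", "busy", "trustworthy"]]
        pos, neg, counts = sentiment_split(picks)
        print(f"{pos:.0f}% positive / {neg:.0f}% negative")
        # `counts` can be fed to wordle.net (or any word-cloud tool) to visualize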
  • 32.
    • Align the website with the character of the Hospital
    • 33. Update the site after nearly 10 years
    • 34. Counter impressions that Greenwich offers only maternity and elder care
    • 35. Communicate that they are long-standing members of the community
    Case Study: Greenwich Hospital Website Redesign
  • 36.
    Case Study: Greenwich Hospital Website Redesign
    • 3 visually designed comps
    • 37. 50 people reacted to each comp (quantitative) via survey
    • 38. Additional feedback obtained via participant interviews (qualitative)
    Survey Questions
    Hello, I am requesting feedback on a website I am working on. Your answers let me know if the site is conveying the right feel.
    1. What are your initial reactions to the web site?
    2. Which of the following words do you feel best describe the site (select 5):
  • 39.
    Three Different Visual Designs
  • 40.
    Results: Concept 1
    12% Negative
    88% Positive
  • 41.
    Results: Concept 2
    87% Positive
    13% Negative
  • 42.
    Results: Concept 3
    5% Negative
    95% Positive
  • 43.
    • Mix of qualitative and quantitative is key
    • 44. Qualitative helps provide color to the results
    • 45. Quantitative resonates with stakeholders and executives
    • 46. Position results as one form of input to decision-making process, not declaring a “winner”
    • 47. Simple, cost-efficient way to assess audience’s emotional response to a design
    Lessons Learned
  • 48. Case Study: WestlawNext
    UX Research Team:
    Paul Doncaster
    Drew Drentlaw
    Shannon O’Brien
    Bill Quie
    November Samnee
  • 49.
  • 50. Goals for Phase 1
    • Use large sample sizes to establish a design "baseline," from which to advance the design direction in subsequent iterations
    • 51. Isolate preference trends for specific page design aspects
    • 52. Determine tolerance for manipulation of the site "brand"
    • 53. Maintain tight security
  • 54. Sessions were held in 4 cities over 5 days
    Seattle
    Denver
    Memphis
    Minneapolis-St. Paul
    4 sessions were held per day, with a maximum of 25 participants per session
    1.5 hours allotted per study, most participants finished in less than 1 hour
    319 participants successfully completed their sessions
    Phase 1: Logistics & Execution
  • 55. Participants completed the study at individual workstations at their own pace
    All workstations included a 20” monitor, at 1024x768 resolution
    Phase 1: Logistics & Execution
    Memphis, TN, May 2009
  • 56. Brief review of Westlaw critical screens
    Positive/negative word selection to describe Westlaw
    Positive/negative product descriptors
    Each set of Element variations was viewed in full screen
    Participant selects “top choice” by dragging a thumbnail image to a drop area
    Homepage: Design Elements
  • 58.
  • 59. Homepage: Design Elements (1)
    All options viewed in full screen
    Participant selects “top choice” by dragging a thumbnail image to a drop area
  • 60.
  • 61.
  • 62. Visual Weight (6 options)
    Use of Imagery (8 options)
    Components (4 options)
    Search Area (4 options)
    Palette (10 options)
    Homepage: Design Elements
  • 63. 19 HP designs viewed in full screen (randomized)
    All 19 options are presented again; participant assigns a rating using a 10-point slider.
    Top 5 and Bottom 2 choices are positioned in order of rating values on one long, scrollable page. Next to each design displayed, the participant rates key aspects on a 5-point scale
    Homepage: Design Gallery
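
    A sketch of how the Top 5 / Bottom 2 could be derived from that slider data, assuming ratings are kept per design ID; the IDs echo the deck's naming but the scores and function name are made up.

        def gallery_rankings(ratings, top=5, bottom=2):
            """ratings: dict of design ID -> list of 10-point slider scores.
            Returns (top picks, bottom picks) ordered by mean rating."""
            means = {design: sum(s) / len(s) for design, s in ratings.items()}
            ordered = sorted(means, key=means.get, reverse=True)
            return ordered[:top], ordered[-bottom:]

        ratings = {"HP16": [9, 8, 10], "HP15": [8, 9, 7], "HP8": [7, 8, 8],
                   "HP5": [6, 7, 7], "HP10": [7, 6, 8], "HP13": [4, 5, 3],
                   "HP19": [3, 4, 2]}
        top5, bottom2 = gallery_rankings(ratings)
        print("Top 5:", top5)        # best-rated designs, highest mean first
        print("Bottom 2:", bottom2)  # the two lowest-rated designs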
  • 64.
  • 65.
  • 66.
  • 67.
  • 68.
  • 69. Repeat the process for Results List design:
    New Results List
    • Design Elements
    • 70. Column Collapsing (4 options)
    • 71. Column Separation (2 options)
    • 72. Theme/Color (8 options)
    • 73. Design Gallery
    • 74. 14 Results Lists designs (randomized)
    • 75. Key Aspects Rated
    • 76. Color scheme
    • 77. Global Header
    • 78. Summary and Excerpt (list contents)
    • 79. Filters design (left column)
    • 80. Overall look and feel
  • Repeat the process for Document Display design:
    New Document Display
    • Design Elements
    • 81. Tabs vs. Links (4 options)
    • 82. Background Separation (4 options)
    • 83. Margin Width (3 options)
    • 84. Font Size (12 options)
    • 85. Locate (2 options)
    • 86. Design Gallery
    • 87. 9 Document Display designs (randomized)
    • 88. Key Aspects Rated
    • 89. Color scheme
    • 90. Layout of content
    • 91. Text formatting
    • 92. Overall look and feel
  • “Based on the designs I’ve liked most today . . .”
    Positive/negative design descriptors
  • 93. Results were analyzed across 8 different sample filters
    The top picks were surprisingly consistent across all of the ‘Top 5’ lists analyzed
    High-level Results
  • 100. Top Homepage Designs by Job Title
    Job Title. Top 5 out of 19 possible.

    Rank  Overall (319)  Associate (189)  Librarian (37)  Partner (81)  Solo Practitioner (5)
    1     HP16           HP1              HP16            HP5           HP15
    2     HP8            HP15             HP15            HP6           HP16
    3     HP8            HP8              HP16            HP8           HP10
    4     HP15           HP1              HP5             HP7           HP5
    5     HP10           HP13             HP8             HP19          HP14
  • 101. Home Page (19)
    HP16 & HP15 designs consistently placed in the Top 5 across all filters
    Results List (14)
    RL4 consistently placed in the Top 3 across all sample filters, and was the #1 choice for 80% of all participants
    Document Display (9)
    DD3 placed in the Top 5 across all sample filters and was the #1 choice for 77% of all participants
    Phase 1: High-level Results
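
    One way those cross-filter numbers could be computed, assuming each participant record carries its filter attributes and a top pick; the field names and records below are hypothetical, not the study's actual data model.

        from collections import Counter, defaultdict

        # Hypothetical participant records
        records = [
            {"job_title": "Associate", "city": "Seattle", "top_pick": "RL4"},
            {"job_title": "Partner",   "city": "Denver",  "top_pick": "RL4"},
            {"job_title": "Librarian", "city": "Memphis", "top_pick": "RL2"},
            {"job_title": "Associate", "city": "Memphis", "top_pick": "RL4"},
        ]

        def first_choice_rate(records, design):
            """Percent of all participants whose top pick was `design`."""
            return 100 * sum(r["top_pick"] == design for r in records) / len(records)

        def top_pick_by_filter(records, attribute):
            """Most common top pick within each group of a sample filter."""
            groups = defaultdict(Counter)
            for r in records:
                groups[r[attribute]][r["top_pick"]] += 1
            return {group: c.most_common(1)[0][0] for group, c in groups.items()}

        print(f"RL4: #1 choice for {first_choice_rate(records, 'RL4'):.0f}% of participants")
        print(top_pick_by_filter(records, "job_title"))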
  • 102. Note: participants were asked to describe the current Westlaw before being shown the new designs.
    Phase 1: Word Selection Results
  • 103. 5 design themes were derived from post-session discussions
    • “New design(s) are better than current Westlaw”
    • 104. “Clean and Fresh”
    • 105. “Contrast is Important”
    • 106. “Prefer Westlaw Blue”
    • 107. “No Big Fonts Please”
    The study narrowed the list of potential designs, and we better understood which design elements Westlaw users liked and disliked.
    Phase 1: High-level Results
  • 108.
    Phase 2: September 2009
    Kansas City, MO, Sept 2009
  • 109. Goals
    • Refine preferences for selected design directions
    • 110. Understand users' personal reasons for liking their preferred choices
    • 111. Get closure on other design options for online and printed content
    • 112. Sustain tight security
    Tool
    • Same as in Round 1, with some minor revisions to accommodate specialized input
    Phase 2: September 2009
  • 113. Method
    View, Rate, and Pick Top Choice for
    Homepage (3 options)
    Result List (2 options)
    Document Display (2 options)
    “Why?”
    Simple preference selection for two unresolved UI design issues
    Citing References: Grid display or List display?
    Out of Plan Indication design (6 options)
    Type formatting preferences for 3 different content types
    Font Face
    Font Size
    Margin Width
    Phase 2: September 2009
  • 114. Logistics
    3 cities (Philadelphia, Kansas City, Los Angeles)
    1 Day
    226 participants
    Analysis
    Filters (8 categories) were used to score the designs for each visual preference
    Results
    Clear choices for top designs in all categories
    “Why” feedback shed new light on designs under consideration and helped focus “homestretch” design activities
    Phase 2: September 2009
  • 115. Home Page (3)
    HP3 ranked #1 in 94% of filter groups (54% of total participants)
    Results List (2)
    RL5 ranked #1 in 97% of filter groups (58% of total participants)
    Document Display (2)
    DD7 ranked #1 in 94% of filter groups (61% of total participants)
    Phase 2: High-level Results
  • 116. The main concerns regarding Homepage Design HP3
    Search Box
    Too small
    How do I do a Terms-and-Connectors search?
    Browse Section
    How do I specify multiple or specific search content?
    Poor organization
    Poor label
    Need access to “often-used” content
    Need better access to help
    Participant Comments: Homepage
  • 117. Goals
    Get feedback on branding options from decision makers and those who influence purchase of the product
    Get closure on final outstanding design issues
    Tool
    Same as in Rounds 1 & 2, with some minor revisions to accommodate specialized input
    Phase 3: December 2009
  • 118. Method
    Wordmark/Branding
    View wordmark color combinations and design elements against different backgrounds, pick top choice and provide comments
    Make a final “Top Choice” from all selections
    Simple preference selection for outstanding UI design issues
    Header Space: Tile or No Tile?
    Notes Design
    Location: Inline or Column?
    State: Open or Closed?
    Headnote Icon design (4 variations)
    Phase 3: December 2009
  • 119. What color combination do you prefer? Please rank the 4 combinations below according to your preferences. To rank, click and drag an item from the left to a box on the right.
    Your Most Liked
    1
    2
    3
    4
    Your Least Liked
  • 120. Logistics
    3 cities (Seattle, Denver, Boston)
    1 Day
    214 participants
    Analysis
    Simple preference, no advanced filters
    Results
    Decision-makers confirmed that critical brand elements should be retained
    Phase 3: December 2009
  • 121. Decision Makers’ Picks (1 of 2)
  • 122.
  • 123. Measuring
    Emotional Response
    to Guide Design
    Why it succeeded
    • Quantitative & qualitative data to identify preference trends
    • 124. “Slicing” across identifiable filters
    • 125. Emphasis on “gut-level” reactions
    • 126. Intolerance for manipulation of product brand
    • 127. Rapid turnaround of data to all stakeholders
    • 128. Executive
    • 129. Design
    • 130. Development
  • Document Display
    May 2009
    Sept 2009
    Feb 2010
  • 131. At what cost(s)?
    • We held off asking “why” until the second round
    • 132. If we had asked why in the first round, we might have
    • 133. avoided some of the internal design battles
    • 134. gotten more granular ammunition for communicating the design vision to stakeholders
    • 135. “Need for speed” attained at the cost of detailed analysis
    Retrospective
  • 136. Recommendations for anyone thinking of undertaking something like this
    • Procure a “Matt” to create and administer your tool
    • 137. Get a good technical vendor for on-site
    • 138. Report results in as close to real-time as possible on a wiki or other web-page
    Retrospective
  • 139. Summary/Comparison
  • 140. Both groups valued support in design decision making
    Align methodology with needs of the project
    Research-inspired, not research-decided
    Summary/Comparison
  • 141. Additional Reading and Tools
  • 142. Benedek, Joey and Trish Miner. “Measuring Desirability: New Methods for Evaluating Desirability in a Usability Lab Setting.” Proceedings of UPA 2002 Conference, Orlando, FL, July 8–12, 2002. http://www.microsoft.com/usability/uepostings/desirabilitytoolkit.doc
    Lindgaard, Gitte, Gary Fernandes, Cathy Dudek, and J. Brown. "Attention Web Designers: You Have 50 Milliseconds to Make a Good First Impression!" Behaviour and Information Technology, 2006. http://www.imagescape.com/library/whitepapers/first-impression.pdf
    Rohrer, Christian. “Desirability Studies: Measuring Aesthetic Response to Visual Designs.” xdStrategy.com, October 28, 2008. Retrieved February 10, 2010. http://www.xdstrategy.com/2008/10/28/desirability_studies
    Additional Reading
  • 143. User Focus. "Measuring satisfaction: Beyond the Usability Questionnaire." Retrieved February 10, 2010. http://www.userfocus.co.uk/articles/satisfaction.html
    UserEffect. "Guide to Low-Cost Usability Tools." Retrieved May 12, 2010.http://www.usereffect.com/topic/guide-to-low-cost-usability-tools
    Tullis, Thomas and Jacqueline Stetson. "A Comparison of Questionnaires for Assessing Website Usability." Usability Professionals' Association Conference, 2004. home.comcast.net/~tomtullis/publications/UPA2004TullisStetson.pdf
    Westerman, S. J., E. Sutherland, L. Robinson, H. Powell, and G. Tuck. "A Multi-method Approach to the Assessment of Web Page Designs." Proceedings of the 2nd International Conference on Affective Computing and Intelligent Interaction, 2007. http://portal.acm.org/citation.cfm?id=1422200
    Additional Reading
  • 144. Five Second Test http://fivesecondtest.com/
    Feedback Army http://www.feedbackarmy.com
    Wordle http://www.wordle.net
    PrEmo http://www.premo-online.com
    Additional Tools