When Too Many Is Just Enough:
Citizen Engagement and Federal Government Websites

Jeffrey Ryan Pass | IA Summit 2013




@jeffpass | #ias2013 | #UsabilityforGov
It starts with a guy introducing himself…

I am:
   Jeff Pass
   A first-time IA Summit speaker
   A “User Experience Consultant”
   Tenaciously fighting the good fight
   Here to talk about Citizen Engagement
    through large-scale online IA-focused
    usability studies
Next he talks about this guy…
                                  President Obama supports:
                                     Innovation
                                     Transparency
                                     Information
                                     Feedback
                                     We the People
                                     Portuguese Water Dogs

                                  For all of these reasons (but one)…


Obama from “Change” to “Engage”
He commissioned a strategy & signed a memo…

On May 23, 2012, the White House,
the Federal CIO, and the Federal CTO released:
 Presidential Memorandum:
  Building a 21st Century Digital
  Government
 Digital Government: Building a
  21st Century Platform to Better
  Serve the American People



These are better known as the Digital Government Strategy
The memo was a memo (and not very exciting)…

                The Presidential Memorandum:
                 Introduced the Strategy
                 Put departments/agencies on notice
                 Established a 12-month roadmap
The Strategy was something altogether different…

Introduced by Federal CIO
and CTO:
 “Federal Government must be
  able to deliver information and
  services to the American people
  anytime, anywhere and on any
  platform or device”
 Four over-arching principles
  (wait for it…)
 12-month agency milestones
  (keep waiting…)

[Pictured: Federal CIO Steven VanRoekel and Federal CTO Todd Park]
Four Over-Arching Principles*
       Principle                    Addressed Through
       1 – “Information-Centric”    Content Syndication
                                    Data via APIs
                                    Taxonomy/Metadata
       2 – “Shared Platform”        Agency Governance
                                    Agency Inventory
                                    Evaluate GSA vehicles
                                    Shared CMS / Open Source
       3 – “Customer-Centric”       Modern UX
                                    Site/content consolidation
                                    SEO
                                    Mobile
                                    Measure satisfaction
       4 – “Security and Privacy”   FISMA compliance
                                    Data security
                                    Personal Information Privacy

* The first and third are most relevant to today’s chat
12-Month Agency Milestones*

3 Months (8/2012)
 Identify 2 services for APIs
 Identify 2 services for Mobile
 Agency Governance
 Launch Digital Strategy Page

6 Months (11/2012)
 Device Inventory
 Customer Satisfaction Implemented

12 Months (5/2013)
 2 APIs Implemented
 2 mobile services Implemented
 New systems adhere
 Compliance verification with GSA standards
 Evaluate new GSA vehicles
* No real bearing on today’s chat but important to know
Digital Content of, for, and by Citizens…*

Information-Centric includes:
 Presenting content “in the way that is most useful for the
  consumer of that information”

Customer-Centric includes:
  Allowing “customers to shape, share and consume
   information, whenever and however they want it”
  “Using modern tools and technologies [to gather]
   customer feedback to make improvements”


* Not really, but bear with me
Sounds Like a Job For…*

Me! Us:
                       Information Architects
                       Content Strategists
                       Usability Specialists
                       Other User Experience Professionals




* At least in some significant part
So Much We Can Do…
Many aspects of IA, UX, and content strategy work can help
execute the Digital Government Strategy

Specifically, large-scale un-moderated usability tests
(focused on IA) can contribute to “citizen engagement”

So let’s rummage through the IA toolbox…
Rummaging Through the IA Toolbox…
We have the technology
(to perform large-scale usability testing and analysis)

“I have this ultimate set of tools… I can fix it!”

 Open Card Sorts (e.g. OptimalSort)
 Closed Card Sorts (e.g. WebSort)
 Reverse Card Sorts, a.k.a. Tree Sorts (e.g. Treejack)
 Un-Moderated Usability Tests (e.g. Usabilla)
 One-Click / First-Click Tests (e.g. ChalkMark)
 Immediate Feedback Tests (e.g. FiveSecondTest)
Still Rummaging…
We have other tools too
(to complement large-scale usability testing and analysis)
   Surveys (e.g. SurveyMonkey)
   Page-based Feedback Mechanisms (e.g. Voice of Consumer)
   Customer Satisfaction Tools (e.g. ForeSee)
   Click Analysis Tools (e.g. CrazyEgg)
   Heat Mapping Tools (e.g. ClickHeat)
   User Research Tools (e.g. Ethnio)
   Crowdsourcing Feedback Tools (e.g. UserVoice)
Case Study: Large-Scale Closed Card Sorts…

First the background:
 IA design of a public-facing website for a government
  healthcare agency
 Began with over 100 content collections
 Goal to end with no more than five domains under a
  single, unified IA and residing in a WCMS
 Iterative testing as well as multiple rounds of wireframe
  usability testing
Case Study: Large-Scale Closed Card Sorts…

Next, card sorting basics:
   Technique for organizing and validating IA
   Dates back more than 100 years
   Can be performed in person, remotely, or online
   Several types:
      Open Card Sorts
      Closed Card Sorts
      Reverse Card Sorts
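
Closed-sort analysis is mostly counting. As a minimal sketch (not from the
case study; the participants, cards, and categories below are invented), here
is how closed card sort results reduce to a per-card agreement score:

    from collections import Counter, defaultdict

    # Hypothetical results: one dict per participant, mapping card -> category.
    results = [
        {"Find a Doctor": "Care", "Pay a Bill": "Billing", "Refill Rx": "Pharmacy"},
        {"Find a Doctor": "Care", "Pay a Bill": "Billing", "Refill Rx": "Care"},
        {"Find a Doctor": "Care", "Pay a Bill": "Pharmacy", "Refill Rx": "Pharmacy"},
    ]

    # Tally where each card was placed across all participants.
    placements = defaultdict(Counter)
    for participant in results:
        for card, category in participant.items():
            placements[card][category] += 1

    # Report the winning category and how strongly participants agreed on it.
    for card, counts in placements.items():
        winner, votes = counts.most_common(1)[0]
        print(f"{card}: {winner} ({votes / sum(counts.values()):.0%} agreement)")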
Case Study: Large-Scale Closed Card Sorts…

An online (closed) card sort looks like this:

[Screenshot: a closed card sort in progress in an online sorting tool]
Case Study: Large-Scale Closed Card Sorts…

Now the conventional wisdom (card sorting in
general, but closed card sorting specifically):
 You need a minimum sample size for validity
 There is no value in samples larger than 25-35 participants
    Larger samples just create more analysis and reporting work


So, how many participants should you have? There are
many (well reasoned and documented) opinions…
Case Study: Large-Scale Closed Card Sorts…

Recommended participant counts (minimum - optimum):

 Freed (2012): 15 - 20
 Gaffney (2000): 4 - 6
 Nielsen (2004): 16
 Paul (2008): 6 - 12
 Robertson (2001): 4 - 8
 Spencer & Warfel (2004): 7 - 10
 Tullis & Wood (2004): 20 - 30
 Tullis & Wood (2005): 30 - 40
 Wood & Wood (2008): 25 - 30
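
Where do numbers like Tullis & Wood's come from? They subsampled: correlate
the card-pair similarity scores from a random subsample of participants
against those from the full sample, and note where the correlation levels
off. A rough sketch of that check (assuming sorts is a list of
per-participant card-to-group dicts where every sort covers every card, and
Python 3.10+ for statistics.correlation):

    import itertools
    import random
    from statistics import correlation  # Python 3.10+

    def similarity_vector(sorts, cards):
        """Fraction of participants who grouped each card pair together."""
        return [sum(s[a] == s[b] for s in sorts) / len(sorts)
                for a, b in itertools.combinations(cards, 2)]

    def plateau_curve(sorts, sizes, trials=20):
        cards = sorted(sorts[0])  # assumes every sort covers every card
        full = similarity_vector(sorts, cards)
        for n in sizes:
            rs = [correlation(similarity_vector(random.sample(sorts, n), cards), full)
                  for _ in range(trials)]
            print(f"n={n:4d}  mean r={sum(rs) / trials:.3f}")

    # e.g. plateau_curve(sorts, sizes=[5, 10, 20, 30, 50, 100])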
Case Study: Large-Scale Closed Card Sorts…

So, how many participants did the case study have?


Case Study: Large-Scale Closed Card Sorts…
Multiple closed card sorts with 1,000+ participants!
Case Study: Large-Scale Closed Card Sorts…

And how did we engage the participants? Directly.




            Social media was our recruiter
             A blog post was our screener
       OptimalSort and Treejack were our vehicles
Case Study: Large-Scale Closed Card Sorts…
And the result?
We gained valuable insight for IA improvements and confirmed
that the large-scale approach:

 Serves as a user outreach/feedback mechanism
 Allows for qualitative data collection alongside
  quantitative data (via free-text comment fields)
 Raises awareness of the contribution of usability studies to
  the presentation and use of online content
 Supports the Digital Government Strategy
 Really doesn’t result in unnecessary analysis and
  reporting, but…
Challenges and Lessons Learned…

To avoid being crushed by the weight of data,
analysis, and reporting you must:
 Have a clear, well-established methodology
 Have a clearly defined goal and scope
 Use an online card sorting tool that can handle
  large-scale participation
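
One way to keep that weight manageable in practice: screen the raw export
before analysis, so effort scales with signal rather than raw volume. A
hypothetical pre-processing pass (the file and column names are assumptions,
not any particular tool's export format) that drops incomplete or implausibly
fast responses:

    import pandas as pd

    df = pd.read_csv("cardsort_export.csv")  # one row per card placement

    # Keep only participants who placed every card.
    cards_placed = df.groupby("participant_id")["card"].nunique()
    keep = cards_placed[cards_placed == df["card"].nunique()].index

    # Drop "speeders" who finished implausibly fast (threshold is a judgment call).
    durations = df.groupby("participant_id")["duration_sec"].first()
    keep = keep.intersection(durations[durations >= 120].index)

    clean = df[df["participant_id"].isin(keep)]

    # Cards x categories placement matrix: the starting point for reporting.
    matrix = pd.crosstab(clean["card"], clean["category"])
    print(matrix)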
So what do you think?

Share your thoughts and experiences about large-scale
usability studies and direct user engagement

             I’m listening…
Thanks for your time and participation!

Jeffrey Ryan Pass
Lead User Experience Consultant
Aquilent (www.aquilent.com)

jeff.pass@aquilent.com
@jeffpass




Didn’t get enough (I honestly cannot imagine)? Then check out our case study
posters (with UserWorks colleague Weimin Hou) at #IAS2013!
Shameless Poster Plugs…
Sources:
Freed, E. (2012). How-To Guide for Intranet Card Sorting. The Social Intranet Blog (09/11/2012). Retrieved 03/12/2013 from
http://www.thoughtfarmer.com/blog/2012/09/11/intranet-card-sorting/.

Gaffney, G. (2000). What is Card Sorting? Information & Design, 2000. Retrieved 03/12/2013 from http://www.ida.liu.se/~TDDD26/material/CardSort.pdf.

Nielsen, J. (2004). Card Sorting: How Many Users to Test. Jakob Nielsen’s Alertbox: July 19, 2004. Retrieved 12/21/2012 from
http://www.nngroup.com/articles/card-sorting-how-many-users-to-test/.

OptimalWorkshop (2011). How Many Participants Do I Need for My Survey? (And How Many Should I Invite?). Optimal Workshop Support Knowledge Base
11/14/2011. Retrieved 03/12/2013 from
http://www.optimalworkshop.com/help/kb/remote-user-testing/how-many-participants-do-i-need-for-my-survey-and-how-many-should-i-invite.

Paul, C. L. (2008). A Modified Delphi Approach to a New Card Sorting Methodology. Journal of Usability Studies, Volume 4, Issue 1, November 2008. Retrieved
03/12/2013 from http://www.academia.edu/150978/A_Modified_Delphi_Approach_to_a_New_Card_Sorting_Methodology.

Robertson, J. (2001). Information Design Using Card Sorting. Step Two Designs, 02/19/2001. Retrieved 03/12/2013 from
http://www.steptwo.com.au/papers/cardsorting/index.html.

Sachs, J. (2002). Aristotle's Metaphysics. Green Lion Press, Santa Fe, NM.

Spencer, D., & Warfel, T. (2004). Card Sorting: A Definitive Guide. Boxes and Arrows 04/07/2004. Retrieved 03/12/2013 from
http://www.boxesandarrows.com/view/card_sorting_a_definitive_guide.

Tullis, T. S., & Wood, L. E. (2004). How Many Users Are Enough for a Card-Sorting Study? UPA 2004 Conference, Minneapolis, MN. Retrieved 12/21/2012 from
http://home.comcast.net/~tomtullis/publications/UPA2004CardSorting.pdf.

Tullis, T. S., & Wood, L. E. (2005). How Can You Do a Card-sorting Study with LOTS of Cards? UPA 2005 Conference, Montreal, Quebec, Canada. Retrieved 12/21/2012
from http://www.eastonmass.net/tullis/presentations/Tullis&Wood-CardSorting.pdf.

Wood, J. R., & Wood, L. E. (2008). Card Sorting: Current Practices and Beyond. Journal of Usability Studies, Volume 4, Issue 1, November 2008. Retrieved 03/12/2013
from http://www.upassoc.org/upa_publications/jus/2008november/wood3.html.

UserZoom (2011). Online Card Sorting: What, How & Why? UserZoom 01/20/2011. Retrieved 03/12/2013 from http://www.userzoom.com/online-card-sorting-what-how-why/.

Note: The Digital Government Strategy was announced on 05/23/2012 in the Presidential Memorandum: Building a 21st Century Digital Government
(http://www.whitehouse.gov/the-press-office/2012/05/23/presidential-memorandum-building-21st-century-digital-government) and detailed in the actual strategy
document Digital Government: Building a 21st Century Platform to Better Serve the American People
(http://www.whitehouse.gov/sites/default/files/omb/egov/digital-government/digital-government.html).
