Interface Prototyping Case Study - PaperDocument Transcript
Our main objective is to build a user interface prototype for a desktop/web client that would
be useful for managing a backup or hosting account. The main functionalities of the application
come from the general requirements of this kind of software: status reports, file management,
uploading and downloading files, and account and website administration. Other requirements,
such as user account administration, are also considered.
The user interface has to be flexible, so that other components can be added and removed easily,
because users have different options on their accounts. Some users might have more than one
website or more than one account, and the application has to allow working with multiple accounts
and multiple websites at the same time.
Building the UI follows several steps, which require a look into the methods and the
process of bringing the UI to the users and making it more accessible, useful, and
adaptable, so that we can modify some options to fit any future purpose.
In a first stage we will study a rough sketch of the UI to get a general idea of the needs
of the users. Then we will go further with some storyboards of how a user might interact with and
use the application. In a second stage we will analyze and select some of the functionalities of the
UI and also make some design decisions that will have an impact on the overall aspect.
Reaching our objectives will require that we follow all the methods and steps necessary to
make the user experience as fluid as possible, with no frustration on the user's side. These methods
and steps will consist of UI design patterns and of HCI-specific rules.
Applications, services, and systems need to respond to stimuli created by human beings.
Those responses need to be meaningful, clearly communicated, and, in many ways, provoke a
persuasive and semi-predictable response. In a few words they need to behave.
Proper interface design will provide a mix of well-designed input and output mechanisms
that satisfy the user’s needs, capabilities, and limitations in the most effective way possible. The
best interface is one that is not noticed, and one that permits the user to focus on the information
and task at hand instead of the mechanisms used to present the information and perform the task.
A well-designed interface and screen are terribly important to users. They are their window
to view the capabilities of the system, the bridge to the capabilities of the software. To many users it
is the system, because it is one of the few visible components of the product its developers create. It
is also the vehicle through which many critical tasks are presented. These tasks often have a direct
impact on an organization’s relations with its customers, and its profitability.
A screen’s layout and appearance and a system’s navigation affect a person in a variety of
ways. If they are confusing and inefficient, people will have greater difficulty doing their jobs and
will make more mistakes. Poor design may even chase some people away from a system
permanently. It can also lead to aggravation, frustration, and increased stress.
Prototyping the UI
Sketches and prototypes are both instantiations of the design concept. However they serve
different purposes, and therefore are concentrated at different stages of the design process. Sketches
dominate the early ideation stages, whereas prototypes are more concentrated at the later stages
where things are converging within the design funnel. Much of this has to do with the related
attributes of cost, timeliness, quantity, and availability. Essentially, the investment in a prototype is
larger than that in a sketch, hence there are fewer of them, they are less disposable, and they take
longer to build. At the front end of the funnel, when there are lots of different concepts to explore
and things are still quite uncertain, sketching dominates the process.
Figure 1. Sketch vs. Prototype
Phase 1: The Audit and The UI
The goal of the audit is to create a blueprint for the project, much like architectural drawings
are developed before constructing a building. The audit process begins by asking and answering a
number of questions and acknowledging ongoing change and an ever-increasing palette of products
and services. Questions are asked throughout the entire product life cycle, since the answers/design
solutions reflect the user/use environment and affect the ongoing usefulness and value of the
product. To create an eloquent design, continually ask and answer the following questions:
Audit Questions A
• Who are the product users?
• How will this product be used?
• When will this product be used?
• Why will this product be used?
• Where will this product be used?
• How will the process evolve to support this product as it evolves?
Audit Questions B
• What is the most efficient, effective way for a user to accomplish a set of tasks and move on to the
next set of tasks?
• How can the information required for product ease of use be presented most efficiently and
effectively?
• How can the design of this product be done to support ease of use and transition from task to task
as a seamless, transparent, and even pleasurable experience?
• What are the technical and organizational limits and constraints?
These two sets of questions give a starting point for sketching a model of the UI: what
elements it has to contain and what elements should be left out. We will focus mainly on the A set
of questions, because it establishes an informal connection to our users and helps us get a better
understanding of what results we need to focus on.
Audit Answers to Questions A
• Who are the product users?
We target users that are beginners in the field of managing websites and backups, but
some of the functionalities provided by the application will target more advanced users, giving them
an opportunity to take advantage of their skills. Because the UI will require mouse and keyboard
input, some users with disabilities will find it hard to use, although the application will allow
some modules that facilitate interaction for people with disabilities.
• How will this product be used?
In a desktop environment, regardless of the OS, but a web interface would also be helpful
and make the application more portable. An important factor in using the application is an Internet
connection, which is mandatory. The system requirements for the application have to be as minimal
as possible, so that any user can take advantage of the application. As keyboard and mouse input is
mandatory, the user will interact with the interface using these methods, so we will have to pay
attention to the rules of working with these input devices.
• When will this product be used?
Considering the purpose of the application, the time of usage depends on the user; regardless
of the moment when it is used, it has to be available and functional to the user.
• Where will this product be used?
Accessing the content from anywhere and at any time is very important, so allowing the
application to be platform and resource independent is a big issue. The interface might also be
required to be used in different locations and by users with different levels of experience.
• How will the process evolve to support this product as it evolves?
Adding new functionalities and options is crucial to the evolution of the resulting
application, so the user interface has to allow new modules and options to be easily added without
dramatically changing the overall look and feel.
Before answering the following set of questions we have to consider the above
answers and our objectives and establish a list of the functionalities and options we would have to
focus on. Focusing on some of the core issues of the user interface means that we have to
organize and sort those functionalities. A short answer to all the questions from set B is to
evaluate some ideas and sketch a mock-up of an actual user interface. The result of the
answers to the set B questions would be a sketch (Figure 2).
Figure 2. First user interface sketch
The user is the target of the information and the driver of the system. This fact is often lost
because interfaces are designed and developed by developers, not users. Developers have a different
view of the product, a different skill set than the users, and often enforce their own desires rather
than those of the end users. Only the users know what they need and what they want; and the only
way to find out what the users need and want is to ask the users.
Putting the user at the center of the design approach greatly improves the chances of creating
an intuitive, efficient, and effective interface.
Analyzing the first UI prototype
Pointing out some main functionalities of the application:
• Handling multiple accounts and websites.
• Upload and download files, organize them in folders.
• View account statistics.
• Administration options.
With these in mind we are able to present a first prototype of our application.
Figure 3. First user interface prototype
As we can notice, a lot of grouping takes place; in fact the user interface can be divided
into three major parts:
1. Navigation bar.
2. Content area.
3. Information bar.
Grouping elements plays an important role: it aids in establishing structure, meaningful
relationships, and meaningful form in the user interface. The study by Grose et al. (1998,
“Evaluating the layout of graphical user interface screens: validation of a numerical computerized
model”) found that providing groupings of screen elements containing meaningful group titles was
also related to shorter screen search times. In this study groupings also contributed to stronger
viewer preferences for a screen. The perceptual principles of proximity, closure, similarity, and
matching patterns also foster visual groupings.
Perceptual principles can be used to aid a person in comprehending groupings. Visual
organization can establish a relationship between related items or design elements.
The most common perceptual principle used to establish visual groupings is the proximity
principle. Elements positioned close together are perceived as a single group, and are interpreted as
more related than elements positioned farther apart. A lack of proximity creates the impression of
multiple groups and reinforces differences among elements. In the preceding example, the
incorporation of adequate spacing between groups of related elements enhances the “togetherness”
of each grouping. Space should always be considered a design component of a screen. The
objective should never be to get rid of it.
The similarity principle can be used to call attention to various groupings by displaying
related groupings in a different manner, such as intensity, font style, or color. Elements that are
similar in some manner are perceived to be more related than elements that are dissimilar.
Because people tend to perceive a set of individual elements as a single recognizable pattern
instead of a collection of multiple, individual elements, users will close gaps and fill in missing
information to derive a meaningful pattern whenever possible. Closure is strongest when elements
approximate simple and recognizable patterns. Closure, generally, will not occur if the effort
required to identify a form or pattern is greater than the effort required to perceive the elements
individually. In the preceding example, the perception of boxes is established through the use of
line borders.
Line borders, or rules, enhance the perception of grouped elements. Information displayed
with a border around it is easier to read, better in appearance, and preferable. In fact, the whole user
interface prototype so far is organized by grouping and arranging elements, which makes it easier
for the user to understand what the interface is about than when the elements are scattered on the
screen.
This kind of organization which groups and divides elements using lines and borders forms a
full-screen application — the idiom of multipaned windows. Multipaned windows consist of
independent views or panes that share a single window. Adjacent panes are separated by fixed or
movable dividers or splitters.
The advantage of multipaned windows is that independent but related information can be
easily displayed in a single, sovereign screen in a manner that reduces navigation and window
management excise to almost nil. For an application of any complexity, adjacent pane designs are
practically a requirement. Specifically, designs that provide navigation and/or building blocks in
one pane and allow viewing or construction of data in an adjacent pane seem to represent an
efficient pattern that bears repeating.
Keeping the usual pattern of other similar software, the menu is arrayed in a horizontal row
at the top of a window. A menu bar is the starting point for many dialogs. Consistency in menu bar
design and use will present to the user a stable, familiar, and comfortable starting point for all
interactions. Menu bars are most effectively used for presenting common, frequent, or critical
actions used on many windows in a variety of circumstances. Each menu bar item is the top level of
a hierarchical menu. It will have a drop down menu associated with it, detailing the specific actions
that may be performed.
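The hierarchy described above can be sketched as a simple data structure: each menu bar item is the root of a drop-down of specific actions. The menu names below are illustrative assumptions, not the final design.

```typescript
// Hypothetical sketch of a hierarchical menu bar: top-level items carry a
// drop-down of leaf actions. Labels here are examples only.
interface MenuItem {
  label: string;
  children?: MenuItem[]; // present on top-level (menu bar) items
}

const menuBar: MenuItem[] = [
  {
    label: "File",
    children: [{ label: "Upload" }, { label: "Download" }, { label: "Exit" }],
  },
  {
    label: "Account",
    children: [{ label: "Add account" }, { label: "Statistics" }],
  },
];

// Return the drop-down entries for a given top-level menu, or [] if none.
function dropDown(bar: MenuItem[], label: string): string[] {
  const item = bar.find((m) => m.label === label);
  return item?.children?.map((c) => c.label) ?? [];
}
```

Keeping the menu as data rather than hard-coded widgets also serves the flexibility goal stated earlier: new modules can contribute entries without changing the overall layout.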
Figure 4. Drop down menu
Solving the problem of multiple accounts was done with the help of a tabbed interface. This
type allows multiple “documents” in the same window, sparing the user the unwanted work of
opening and closing multiple windows to operate different accounts.
Tabs are one of those physical metaphors that have moved into the digital world.
Conceptually they function in the same way—we have a grouping of content that we stick into a
section that has a tab on it. The tab helps people find the content under that heading, and while
looking at the content under a tab, the tab can also function to reassure people of where they
are/what they’re looking at.
But digital tabs of course do diverge from their physical counterparts. They can support
multiple rows, usually no more than two, of hierarchy, and they don’t literally contain their content,
though it is best to visually show the content area that the tabs will affect—a simple bar across the
top below the tabs can be enough to indicate it affects the whole page.
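The tab behaviour needed for multiple accounts boils down to a small amount of state: a list of open tabs and an active index. The sketch below is a minimal assumption of how that state could be managed; the class and method names are not the application's real API.

```typescript
// Minimal tab-state sketch: open/close account tabs in one window and
// track which tab is active. Names are illustrative assumptions.
class TabStrip {
  private tabs: string[] = [];
  private active = -1;

  open(title: string): void {
    this.tabs.push(title);
    this.active = this.tabs.length - 1; // a newly opened tab becomes active
  }

  close(title: string): void {
    const i = this.tabs.indexOf(title);
    if (i === -1) return;
    this.tabs.splice(i, 1);
    // keep a valid active index after removal
    this.active = Math.min(this.active, this.tabs.length - 1);
  }

  activeTab(): string | null {
    return this.active >= 0 ? this.tabs[this.active] : null;
  }

  titles(): string[] {
    return [...this.tabs];
  }
}
```

For example, opening "Account A" and "Account B" and then closing "Account B" leaves "Account A" active, which matches the expectation that closing a tab never strands the user on an empty view while other tabs remain.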
Navigation tabs are a nice way to blend the navigational aspects of a menu with the
reassurance of a breadcrumb signpost, to help people know where they are. The latter is especially
helpful because people can easily be dumped into the middle of an application and need some way
to orient themselves as to where they are on the site and where they can go from there. While tab
usage in software has a long and illustrious history, their use for navigation is somewhat newer;
still, their familiarity is a boon and should not be discounted as a possible navigation device.
Grouping elements has an important role in setting apart some options in the navigation drop
down menu as we see in Figure 4. This separation has an important role in optimizing the
interaction of the user with the menu. Taking a step forward in improving the navigation menu we
can consider the paper “A Predictive Model of Menu Performance” (in Proceedings of the ACM
Conference on Human Factors in Computing Systems - ACM CHI '07, ACM Press, pages 627-636,
Apr 28-May 3). The menu model presented in the study is able to predict performance for many
different menu designs, including adaptive split menus, items with different frequencies and sizes,
and multi-level menus. The main idea reflects the fact that the menu is personalized to fit every user
by increasing text size and changing positioning of elements depending on how the user interacts
with the menu.
Looking at the current prototype (Figure 3), we realize that the current grouping is an ideal
grouping but not an optimal one, because some of the elements take a lot of space or there is some
unused space (“white space”). At this point it is important to decide which elements to keep and
which elements are missing. Metaphors and other graphical elements, such as text and the
positioning of elements, must be excluded, because these will be debated in the second phase.
Taking a look at the navigation bar, we notice the white space, which has the role of making
a distinction between the window controls and the menus. It could be used to add several other
elements, like a user login form, or to display the name of the current user. A more likely approach
is to leave it as white space, because otherwise the navigation bar would be cluttered with
information that we don't actually need.
Putting related buttons in a visual grouping uses the innate human mechanisms of
association by proximity.
Figure 5. Button groups
Ensuring they’re related, and possibly act on the same or similar objects, helps to reduce any
possible confusion. Because buttons tend to be strong visual elements, a group of them is stronger
and is likely to stand out more than individually placed buttons, which helps users find them. If
we have more than a few buttons, the value of the grouping is reduced, as the display will get
cluttered and it will be harder for users to figure out what they want to do.
As we see in Figure 5, our navigation bar has three sets of buttons. The first set contains the
main navigation buttons, back and forward, which are helpful for navigating through multiple
tabs. The second set contains the navigation menus, which are drop-down menus with multiple
functions. The last set refers to the window commands, which work better by being set in the same
group. Depending on the OS, the positioning of this last set might not be the same as shown.
Phase 2: User Interface Decisions
The second phase uses the audit report as a guideline. This is an ongoing, iterative process
with each iteration incorporating user test results to make the product appropriate to the particular
set of needs. In reality, the length of this process is often defined and limited by real-world
deadlines such as product release dates. This phase includes design and testing.
We have to create a number of solutions based on results and objectives determined by the
audit report as well as other project specifics. Initially, design ideas should be very broad,
incorporating many ideas and options no matter how unrealistic or unusual. As ideas are tested, user
feedback incorporated, and other parameters defined, solutions naturally become more defined.
Surviving design ideas are then based on solid information derived from user feedback, providing a
strong basis for final design decisions. In the beginning, the focus is on high-level concepts and
navigation. How will the product work? What will it feel like to use? As initial concepts are refined,
design details become more specific. When the conceptual model and organizational framework are
approved, the design of the look or product package begins.
Every element that forms the GUI has a number of properties, such as shape and color,
that work together to create meaning. There is rarely an inherent meaning to any one of these
properties. Rather, the differences and similarities in the way these properties are applied to each
element come together to allow users to make sense of an interface. When two objects share
properties, users will assume they are related or similar. When users perceive contrast in these
properties, they assume the items are not related, and the items with the greatest contrast tend to
demand our attention. Although we can't stress enough the importance of these properties (color,
text size, font, illustration, orientation, position, etc.), our main focus is organizing the GUI,
explaining the different decisions we have to make in order to get the most out of the UX, and
illustrating some patterns we are using to achieve our goal.
Figure 6. Pointing out some usability issues
As we marked in Figure 6, there are some elements that don't seem to add up to the overall
user interface. For example, the search bar has an unusual position, and the group it belongs to
is not from the same class of controls. The same observation goes for the IP element, which would
eventually be dropped because its functionality is no longer needed.
The information bar produces visual noise: the elements and icons inside it are scattered, and
fluency is missing. The link “More Info ..” is not appropriate to the overall interface, because
other similar elements take the form of buttons.
Therefore we had to get back to the drawing board and redo some elements and position
them correctly so that the overall look has a more precise meaning.
Figure 7. Second prototype
In a second prototype (Figure 7) we take a different approach and try to reorganize some of
the elements. Comparing this prototype with the previous one, we have added an additional
navigation bar and taken out some unnecessary elements.
The Navigation bar consists now of two elements: the menu bar and the quick navigation
bar. The menu bar consists of the same elements, but in the quick navigation we added the search
bar and back and forward commands. These two elements are the most frequently used, and the user
doesn't have to look for them.
Visual representations such as site maps, graphics, and icons are effective devices for
orienting users within a program. Creating effective graphics and icons requires that intent and
action are defined and designed. That is why some graphical elements also identify themselves as
metaphors.
Using familiar visual analogies helps users easily understand and organize new information.
Some are easily recognizable metaphors, like the plus sign, which means adding a new element, or
the back and forward signs, which identify themselves as back and forward. Some of them need a
reinforcement to sustain the meaning they suggest. As we see in Figure 7, the free space is also
displayed in numbers, because the metaphor might be confusing to some users, who otherwise
can't tell how much free space is left.
Often plain text is enough in lists, and sometimes illustrating an option is not possible or has
no effect. But for those times when we can, using illustrations can help to bring out the differences
between items in the list and thus help people more quickly identify the option that they want. In
fact, sometimes an illustration is far more appropriate than text: e.g., when the choice in question is
visual to begin with. Icons were developed for just this purpose.
Also, a lesser reason to illustrate is simply to break up the monotony of a list and add visual
interest. Although choosing the right illustrations for the metaphors can be time consuming and
is often done by designers, it is still important. For example, some icons in our prototype are easily
understood and discovered even without text.
For example, in Figure 7, in the navigation bar, the arrows that represent back and forward are
quite suggestive and don't need further explanation from our side, although their positioning should
be closer to the tabs, the proximity suggesting a relationship between those two groups of elements.
Although all the illustrations in this prototype are black and white, they're meaningful enough
to show their purpose. Because some of the icons and other illustrations are made from scratch, we
strongly considered supporting them with text. Research done by the Microsoft Office team found
that even for their broadly disseminated icons, users would still mistake them (such as the Reply
button); their conclusion was to add/keep text labels with the icons for the more important options.
Once an illustration becomes well known enough and/or user testing shows the icon is well
recognized, we can consider leaving off the accompanying text.
This is what we did in Figure 8 with the elements representing free space and bandwidth
usage. Alone, icons and other graphical elements can't provide an accurate representation of the
specified functions, but along with some text everything suddenly makes more sense.
Figure 8. Reinforced metaphors
In creating illustrations, it’s important to remember that we don’t always have to be picture-
perfect or reflect reality exactly, if we are modeling the illustration off of something real. In fact, it
can actually help to reduce the number of visual elements to the essentials and even exaggerate the
aspects that would aid recognition. That’s really the key—to create illustrations that will aid
recognition and disambiguation.
As we see in Figure 9, there is the “plus sign” near some text; as a metaphor it represents
adding new elements, which is why associating the text with it is important.
Although graphical elements and text are considered a usability combination, made so that
it can help the user have a better understanding of the overall UI, there are some cases where
these elements are brought together for design purposes, to keep the general aspect of the interface.
For example, on the right side of Figure 9 there are two buttons, “Statistics” and “More Info”. The
text clearly states their purpose, but elements like an icon representing a chart or a question
mark could be added to the corresponding text. As far as usability goes these changes have no
effect, but they help keep the fluency of the interface. One must also be careful not to overdo
these graphical elements, because then the UI will become cluttered.
Figure 9. Info bar elements
Regarding the working area, or as we called it, the content area, things are straightforward.
We divided it into two parts for convenience: a side bar that contains the folders, as suggested in
the prototype, and the other side, which contains the files and general info about them, with the
possibility to perform different tasks such as download, select, and delete.
The folders are structured as a tree because it is a natural way to explore hierarchical
information and is also a fairly established way to do this. Some email applications and RSS readers
use this approach, so users of those will readily be able to use this; however, due to its visual
complexity when the folders are too many, it could be problematic for inexperienced users.
As we see in About Face 3: The Essentials of Interaction Design, pages 247-248, the
author talks about avoiding hierarchies not because they are not useful or don't provide an
accurate representation of the data, but because it is a dated method and because large amounts
of data may confuse the user.
Our option was to separate the data as much as possible, as we have done with folders and files.
As we can easily observe, the number of files exceeds the number of folders, which is why the
folders are given much less space than the files. An important role in not overwhelming the UI in
this case is played by text size and graphical elements.
Managing a list of files or objects is a very common need. For each object in the list there
are several actions that can be applied to it, and this approach makes it possible to select any
object and act on it.
To differentiate table rows from each other, a different shade is used as the background color
for every second row. The difference between the two colors is kept to a minimum to preserve a
gentle feeling. The colors should be similar in value and low in saturation: one should be slightly
darker or lighter than the other. It is often the case that one of the two colors is the background
color of the page itself.
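The striping rule above can be sketched as a small color function: even rows keep the page background, odd rows get a shade only slightly darker, keeping the two colors close in value. The hex arithmetic is generic and not tied to any particular toolkit; the default delta is an assumption.

```typescript
// Sketch of subtle row striping: every second row is darkened by a small
// per-channel delta so the two backgrounds stay close in value.
function stripe(baseHex: string, row: number, delta = 8): string {
  if (row % 2 === 0) return baseHex; // even rows keep the page background
  const n = parseInt(baseHex.slice(1), 16);
  const channel = (shift: number) =>
    Math.max(0, ((n >> shift) & 0xff) - delta); // darken each channel slightly
  const toHex = (v: number) => v.toString(16).padStart(2, "0");
  return "#" + toHex(channel(16)) + toHex(channel(8)) + toHex(channel(0));
}
```

On a white page background this yields `#f7f7f7` for the alternate rows, a difference small enough to guide the eye without drawing attention to itself.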
Sorting by a selected column of data is a useful feature and will enhance the way in which
the users manage the content. For example, for single-column sorts the user should be able to click
the top of the column to sort by that column. One standard practice is to use arrows at the top of
each sortable column as an affordance that this is possible, using the direction of the arrow to
indicate the sort order (ascending versus descending).
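The click-to-sort behaviour can be sketched as two small functions: one that updates the sort state when a header is clicked (a repeated click reverses the order), and one that sorts the rows accordingly. The row shape here is a hypothetical example, not the actual file-list model.

```typescript
// Sketch of sortable columns: clicking a header sorts by that column,
// clicking it again toggles ascending/descending.
type Row = { name: string; size: number };
type SortState = { column: keyof Row; ascending: boolean };

function clickHeader(state: SortState | null, column: keyof Row): SortState {
  // same column toggles direction; a new column starts ascending
  return state && state.column === column
    ? { column, ascending: !state.ascending }
    : { column, ascending: true };
}

function compareValues(x: string | number, y: string | number): number {
  if (typeof x === "number" && typeof y === "number") return x - y;
  return String(x).localeCompare(String(y));
}

function sortRows(rows: Row[], state: SortState): Row[] {
  const dir = state.ascending ? 1 : -1;
  return [...rows].sort(
    (a, b) => compareValues(a[state.column], b[state.column]) * dir
  );
}
```

Returning a copy of the array keeps the underlying file list untouched, so re-sorting or clearing the sort never loses the original order.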
Pop-up menus are provided in order to bring more flexibility in using some of the more
popular features, like Delete, Copy, Properties, Download, and Upload new version. Integrating
some elements in the UI will sometimes confuse the user, and if those elements happen to apply
only to some part of the interface, then we can integrate them in a pop-up menu. In look, they
usually resemble pull-down menus, or drop-down menus as shown in Figure 4. The kinds of
choices displayed in pop-up menus are context sensitive, depending on where the pointer is
positioned when the request is made. They are most useful for presenting alternatives within the
context of the user’s immediate task. If positioned over text, for example, a pop-up might include
text-specific commands.
The advantages of pop-up menus are:
• They appear in the working area.
• They do not use window space when not displayed.
• Their vertical orientation is most efficient for scanning.
• Their vertical orientation is most efficient for grouping.
For experienced users, pop-up menus are an alternative way to retrieve frequently used
contextual choices found in pull-down menus. Choices should be limited in number and stable or
infrequently changing in content. They are also referred to as context menus or shortcut menus.
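The context sensitivity described above amounts to a mapping from what is under the pointer to a short, stable list of choices. The contexts and actions below are assumed examples drawn from this prototype (file rows, folders, empty working area), not a fixed API.

```typescript
// Sketch of context-sensitive pop-up choices: the menu shown depends on
// what the pointer is over. Contexts and actions are illustrative.
type Context = "file" | "folder" | "background";

function popupChoices(ctx: Context): string[] {
  switch (ctx) {
    case "file":
      return ["Download", "Copy", "Properties", "Delete"];
    case "folder":
      return ["Open", "Upload here", "Delete"];
    default:
      return ["New folder", "Upload"]; // empty working area
  }
}
```

Keeping each list short and infrequently changing, as recommended above, lets experienced users build muscle memory for the menu positions.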
Feedback and Error reporting
When users of an interactive product manipulate tools and data, it’s usually important to
clearly present the status and effect of these manipulations. This information must be easy to see
and understand without obscuring or interfering with a user’s actions.
There are several ways for an application to present information or feedback to users. The
most common is the dialog box, but this technique is modal: it puts the application into a special
state that must be dealt with before it can return to its normal state, and before the person can
continue with her task. A better way to inform users is with modeless feedback, as the author calls
it in About Face 3: The Essentials of Interaction Design:
“Feedback is modeless whenever information for users is built into the structures of the
interface and doesn’t stop the normal flow of activities and interaction. In Microsoft Word, we can
see what page we are on, what section we are in, how many pages are in the current document, and
what position the cursor is in, modelessly just by looking at the status bar at the bottom of the
screen— we don’t have to go out of our way to ask for that information.”
Placing error messages close to what caused the error, or close to the area that a user will
need to interact with to get rid of the error, makes errors easier to notice and deal with. This
becomes even more useful when multiple errors occur at once; in many cases users are told there
are multiple errors, but not what to do about each one, or whether they are related. Our goal
should be to help users get rid of any errors as easily as possible, not just announce to them that
errors exist.
In many cases the error message may contain information that will be helpful to the user
when trying to reenter data; therefore it can only help the user to be able to see the error message
while actually reentering the data. Keep in mind, though, that this helpful information
provided as part of the error message should perhaps have been present in the first place, for
example as a tool tip, a hint in the data field itself, or a small bit of text near the data field.
Providing this type of information up front can help prevent errors from occurring.
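One way to support per-field error placement is to have validation return messages keyed by the field that caused them, so each message can be rendered next to its field rather than in a single modal summary. The form fields and message texts below are illustrative assumptions.

```typescript
// Sketch of per-field error reporting: validation returns messages keyed
// by field name so each can be shown next to the field that caused it.
type FieldErrors = Record<string, string>;

function validateAccount(form: { email: string; domain: string }): FieldErrors {
  const errors: FieldErrors = {};
  if (!form.email.includes("@"))
    errors.email = "Enter a valid e-mail address, e.g. name@example.com";
  if (form.domain.trim() === "")
    errors.domain = "A domain name is required";
  return errors; // an empty object means the form is valid
}
```

Because every message names its field, multiple simultaneous errors can each be shown in place, and the message text itself can double as the up-front hint discussed above.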
These problems can be solved by providing the interface with in-line descriptive help/
error blocks. Some functionalities in the software require interaction from the users; these
functionalities can be easily explained in a help/error box just above or below the functionality that
requires the user’s interaction. As the help/error box itself is not part of the main functionality, it is
a good idea to give it a style that visually separates it from that functionality. An easy way to do
this is by applying another background and font color to the help box.
Additionally, to avoid annoying the user, a great feature of the in-line help/error box is a "hide this box" control. Positioning such a box just above the content area or the folder list in our interface does not affect any other elements.
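A minimal sketch of such a box as an HTML string; the class names and colors are assumptions, and the essential parts are the visually distinct styling and the "hide this box" control:

```javascript
// Render an in-line help/error box as an HTML string. The distinct
// background/font colors visually separate the box from the main
// functionality; the "hide this box" button lets users dismiss it.
// Class names and colors are illustrative choices, not a real spec.
function renderHelpBox(message, kind /* 'help' | 'error' */) {
  const style = kind === 'error'
    ? 'background:#fdecea;color:#611a15'   // error: reddish tones
    : 'background:#e8f4fd;color:#0b3d62';  // help: bluish tones
  return `<div class="inline-${kind}" style="${style}">` +
         `<span>${message}</span>` +
         `<button type="button" class="hide-box">hide this box</button>` +
         `</div>`;
}
```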
Although we strive to eliminate as many pop-up messages and unwanted error messages as we can, pop-up windows remain useful: they can display additional information when an abbreviated form of that information is the main presentation technique. Pop-up windows are also used to collect secondary information whenever the user's flow through an application should not be interrupted, and to provide context-sensitive help. The standard rule is to present a pop-up at the front of the screen so it will not be missed, especially if it is reused. The pop-up should be a quarter to a third of the window size: if it is too small it may not be seen; too large and it will cover too much of the screen.
Provide OK (or Save) and Cancel buttons to remind people how to dismiss the pop-up. For example, displaying properties and adding new email accounts, domains, and other options that require direct input from the user are done through a pop-up window.
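The quarter-to-third sizing rule can be expressed as a simple clamp; the function name and the idea of clamping a requested size are our own framing of the rule:

```javascript
// Keep a pop-up between a quarter and a third of the parent window,
// so it is neither easily overlooked nor covering too much of the screen.
function clampPopupSize(requestedW, requestedH, windowW, windowH) {
  const clamp = (v, lo, hi) => Math.min(Math.max(v, lo), hi);
  return {
    width:  clamp(requestedW, windowW / 4, windowW / 3),
    height: clamp(requestedH, windowH / 4, windowH / 3),
  };
}
```

For a 1200x900 window, any requested size is forced into the 300-400 pixel range horizontally and the 225-300 pixel range vertically.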
As we did with the content area, we will give the pop-up windows a tabbed layout. A tab-based approach leverages the understanding many users have of real-world tabbed folders, usually used to group related documents. By separating each chunk of related content onto individual tabs, users can quickly and easily get to each group of content. Rather than having to scroll (assuming you don't put too much on each tab), users can glance at the content of a single tab and quickly make sense of the information.
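A minimal model of this tab behavior, with illustrative tab labels, might look like the following sketch:

```javascript
// One tab per group of related content; only the active tab's content
// is shown. Tab labels and content here are illustrative.
class TabSet {
  constructor(tabs) {                      // tabs: { label: content, ... }
    this.tabs = tabs;
    this.active = Object.keys(tabs)[0];    // first tab selected by default
  }
  select(label) {
    if (label in this.tabs) this.active = label;  // ignore unknown labels
    return this.active;
  }
  activeContent() {
    return this.tabs[this.active];
  }
}
```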
Usability Testing in the Prototype Phase
Figure 11. Usability task scenario
Task scenarios are representations of actual work that the participants would conceivably
perform using the product. Task scenarios are expanded versions of the original task list (previously
developed as part of the test plan), adding context and the participant’s rationale and motivation to
perform those original tasks. In many cases, one task scenario will comprise several tasks from the
task list grouped together because that is the way that people perform their work on the job.
Why should we test in this phase? The main reason for usability testing at this stage is so that the developers can work on bugs both in the interface and in performing tasks. Failing to complete a task because the UI is not easily accessible means that some users will fail to understand, and therefore to use, the interface. At this point an issue like that means the developers need to reorganize some of the elements so that users can find their way through the application. Making another prototype is only necessary if several tasks are not completed.
Phase 3: Implementation and Monitoring Issues
The implementation phase focuses on delivering what has been defined, designed, and
documented in the preceding phases. It is the final part of a holistic process that defines everything
necessary to make a product succeed on an ongoing basis. This includes not only the
implementation of the design within the technology, but also any additional support such as the
creation of training materials and other reinforcements that enhance use and productivity.
Figure 12. An iterative process
Continuous monitoring is key to sustained success, because a successful product responds to
evolving technology and user needs. This last phase is mostly consultative and ongoing throughout
the product life cycle in order to ensure that changes such as new technology and product
developments are reflected in the product itself. These may in fact trigger another
audit/design/testing cycle, although usually less extensive than the initial process. Though the
implementation phase is called “the last phase,” it reveals the evolutionary process of design and
development. The goal of ongoing monitoring of solutions is to be aware of changes in user needs,
technology, and competition that impact user acceptance and satisfaction. Changes here often result
in the need to reevaluate and redesign to incorporate this new knowledge gained.
Choosing the tools to implement and test the interface will determine the boundaries of what we can and cannot achieve with it. For example, using Mozilla's XUL technology along with user interface libraries such as YUI (the Yahoo User Interface library) or MochaUI will ease our work with both implementation and testing. As testing software we can use Selenium, which provides both automated and manual testing.
Globalization and Localization
Successful computer-based products and services developed for users in different countries
and among different cultures (even within one country) consist of partially universal, general
solutions and partially unique, local solutions to the design of UIs. Global enterprises seek to mass-
distribute products and services with minimal changes to achieve cost-efficient production,
maintenance, distribution, and user support. Nevertheless, it is becoming increasingly important,
technically viable, and economically necessary to produce localized versions for certain markets.
UIs must be designed for specific user groups, not merely translated and given a superficial “local”
appearance for quick export to different markets.
It is important to bear in mind that localization concerns go well beyond only language
translation. They may affect each component of a UI: from choices of metaphorical references,
hierarchies in the mental model, and navigation complexity, to choices of input techniques,
graphics, colors, and so on.
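The string-translation part of localization (only one facet of it, as noted above) can be sketched as a lookup with a fallback locale; the locale codes and strings here are illustrative:

```javascript
// Locale-aware message lookup with fallback. A locale may be only
// partially translated; missing keys fall back to the default locale,
// and finally to the key itself so the UI never shows a blank string.
const messages = {
  en: { upload: 'Upload file', quotaFull: 'Your storage quota is full' },
  ro: { upload: 'Încarcă fișier' },        // partially translated locale
};

function translate(locale, key, fallback = 'en') {
  const table = messages[locale] || {};
  return table[key] ?? messages[fallback][key] ?? key;
}
```

Keeping strings in such tables (rather than hard-coded in the UI) is what makes producing a localized version economically feasible.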
In an ideal world, usability tests would be carried out frequently from an early stage of the project. As we mentioned, usability testing in the prototype phase plays an important role and decides whether the developers should take the user interface into the implementation phase. From the implementation phase the developers can also go back to the prototype phase, but only if the UI fails to provide the users with all the qualities of a usable interface.
In large part, what makes something usable is the absence of frustration in using it. As we lay out the process and method for conducting usability testing, we will rely on this definition of "usability": when a product or service is truly usable, the user can do what he or she wants to do the way he or she expects to be able to do it, without hindrance, hesitation, or questions.
Usefulness concerns the degree to which a product enables a user to achieve his or her
goals, and is an assessment of the user’s willingness to use the product at all. Without that
motivation, other measures make no sense, because the product will just sit on the shelf. If a system
is easy to use, easy to learn, and even satisfying to use, but does not achieve the specific goals of a
specific user, it will not be used even if it is given away for free. Interestingly enough, usefulness is
probably the element that is most often overlooked during experiments and studies in the lab.
Effectiveness refers to the extent to which the product behaves in the way that users expect
it to and the ease with which users can use it to do what they intend. This is usually measured
quantitatively with error rate. The usability testing measure for effectiveness, like that for efficiency,
should be tied to some percentage of total users. Extending the example from efficiency, the
benchmark might be expressed as "95 percent of all users will be able to load the software correctly on the first attempt."
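The benchmark check itself is a simple comparison; this sketch (the function name is our own) uses integer arithmetic to avoid floating-point edge cases exactly at the boundary:

```javascript
// Does the observed success rate meet the effectiveness benchmark?
// E.g. "95 percent of all users load the software on the first attempt"
// with 19 of 20 users succeeding. Cross-multiplying keeps the comparison
// in exact integers instead of comparing rounded percentages.
function meetsBenchmark(successes, totalUsers, targetPct) {
  return successes * 100 >= targetPct * totalUsers;
}
```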
Learnability is a part of effectiveness and has to do with the user’s ability to operate the
system to some defined level of competence after some predetermined amount and period of
training (which may be no time at all). It can also refer to the ability of infrequent users to relearn
the system after periods of inactivity.
Satisfaction refers to the user’s perceptions, feelings, and opinions of the product, usually
captured through both written and oral questioning. Users are more likely to perform well on a
product that meets their needs.
Accessibility and usability are siblings. In the broadest sense, accessibility is about having access to the products needed to accomplish a goal. But when we talk about accessibility here, we are looking at what makes products usable by people who have disabilities.
Making a product usable for people with disabilities— or who are in special contexts, or both—
almost always benefits people who do not have disabilities. Considering accessibility for people
with disabilities can clarify and simplify design for people who face temporary limitations (for
example, injury) or situational ones (such as divided attention or bad environmental conditions,
such as bright light or not enough light).
Although usability testing is time consuming and is often performed several times while building a product, tools such as Selenium and Silverback facilitate it. There is no exact rule specifying which set of questions should be put to the users testing the application, or which scenarios to evaluate, but there is an ideal number of users on which to test the application. In "A mathematical model of the finding of usability problems," Proceedings of ACM INTERCHI '93 Conference (Amsterdam, The Netherlands, 24-29 April 1993), pp. 206-213, the authors evaluate, from a mathematical point of view, the ideal number of users on which the application should be tested, and the conclusion they reach is a maximum of five users.
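The cited model predicts the share of usability problems found by n test users as 1 - (1 - L)^n, where L is the probability that a single user exposes a given problem (the paper reports L of roughly 0.31 on average). A quick sketch:

```javascript
// Nielsen & Landauer's model: proportion of usability problems found
// by n test users, where L is the chance a single user finds a given
// problem (about 0.31 on average, per the cited paper).
function problemsFound(n, L = 0.31) {
  return 1 - Math.pow(1 - L, n);
}
```

With the average L, five users already uncover roughly 84-85 percent of the problems, which is why further users yield rapidly diminishing returns.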
An interesting experiment worth mentioning is illustrated in Extremely Rapid Usability Testing (Research Report 2008-918-31, Department of Computer Science, University of Calgary, Calgary, Alberta, Canada, October). The authors conducted a series of extremely rapid usability tests as an evaluation procedure for products presented at a conference. Although the test results are "illustrative rather than definitive," the paper is a useful example of the principles of usability testing.
Don't Make Me Think: A Common Sense Approach to Web Usability, New Riders
Designing for Interaction: Creating Smart Applications and Clever Devices, Peachpit Press
About Face 3: The Essentials of Interaction Design, Wiley
The Essential Guide to User Interface Design, Wiley
Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests, Wiley
Handbook of Human-Computer Interaction, Human Factors and Ergonomics
Related articles (Romanian):