NPS | WEBSITE: HOMEPAGE | USABILITY REPORT
Homepage
• Interest is concentrated in the top 30% of the page.
• Interest drops off after the second row of cards, and after the third card in the responsive version.
• The “Find a Park” banner is the most-clicked element on the entire homepage.
• The cards section receives minimal interaction: the drop in interest is visible as a blue/cold zone in the heatmaps.
• “Work for us” and “National Parks News” are the most-clicked cards on the homepage.
• The “Plan your visit” button receives a considerable number of clicks, considering its location.
Recommendations
• Migrate the interactive map to the homepage.
• On the homepage: build a dedicated component for “National Parks News” (image 1) with its own differentiated section, and design a single banner for “Work for us” (image 2).
• Rethink the “Plan your visit” button and merge it with the “Find a park” banner to create a “booking” experience, where users can filter parks by duration, type of user, dates, activities, etc.
Image 1
Image 2
Recommendations
• Merge the “Plan your visit” button and the “Find a park” banner to create a “booking” experience, where users can filter parks by:
o Duration
o Type of user
o Dates
o Activities
o Location
• Add rotating top lists of parks to the homepage:
o Top 5 best hiking trails
o Top 5 best stargazing spots
o 5 parks in Native American territory
Key questions we aimed to answer with usability testing
In the last meeting, the UX team presented a script for this usability test; since then, we recruited the users and ran the sessions.
As a continuing roadmap based on these findings, we developed a process with the following outline:
1. Overview
2. Usability Test Purpose
3. Task Analysis Findings
4. Next User Testing
Rather than extend the introduction, let’s start with the overview of the project.
Each user was measured on the success or failure of their ability to complete the task correctly. To prioritize recommendations, a problem-severity classification was applied to the data collected during the evaluation activities.
Error Classification
High Impact – Unresolved tasks or errors that produce an incorrect outcome (critical errors).
Low Impact – Partial success or minor problems that do not significantly affect task completion (non-critical errors).
Difficulty Classification
High Difficulty – Requires complex design and development rework, such as entirely new templates, site-wide architectural changes, or backend changes. 15+ hours.
Low Difficulty – Requires very little design or development change, such as color changes, superficial interaction tweaks, or typography changes. Under 5 hours.
The average success rates place this feature in the high-impact sector, which reflects the importance of the tool and the fact that it is the first thing users search for when they arrive at the homepage.
The study shows that a large percentage of users start with the same question: “Where can I find a park?”
To learn how users answer this question, we created the following tasks.