Introduce myself. Here's my Twitter handle, the conference hashtag, and the track hashtag. Tweet, and I'll respond later. I've spent 14 years at the University of Minnesota, a very large Research 1 university, working in usability and user experience. Essentially, I help the people who are designing, developing, and selecting software, websites, and applications to include their users in the process and make their products more intuitive.
I’m here today to talk about how including your users in your software selection process can help you to make more informed decisions and be better prepared for the challenges that can come with an enterprise-wide software rollout.
Before I start running with this, I need to ask a question: who knows what I'm talking about when I refer to an RFP? RFP stands for "Request for Proposals." Essentially, you have a business problem, and you're sending out a request to vendors so they can propose a solution. An example might look like this: it's an RFP for a sandwich. It's a little silly, but it's an easy way to get the idea across. There are requirements, answers, and each answer gets a score. So if this were the whole RFP, Vendor 1 would "win" with 10 points, Vendor 2 has 9 points, and Vendor 3 has 6.
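To make the scoring mechanics concrete, here's a minimal sketch in Python. The requirement names, weights, and answers below are invented for the sandwich example, not taken from any real RFP:

```python
# Toy model of weighted RFP scoring: each requirement carries a weight,
# and a vendor earns the full weight for a "Yes" answer.
# All requirements, weights, and answers here are illustrative.

requirements = {
    "Has ham": 3,
    "Offers condiments": 2,
    "Made quickly": 2,
}

answers = {
    "Vendor 1": {"Has ham": True,  "Offers condiments": True,  "Made quickly": True},
    "Vendor 2": {"Has ham": True,  "Offers condiments": True,  "Made quickly": False},
    "Vendor 3": {"Has ham": False, "Offers condiments": True,  "Made quickly": True},
}

def score(vendor_answers, requirements):
    """Sum the weights of every requirement the vendor answered 'Yes' to."""
    return sum(weight
               for req, weight in requirements.items()
               if vendor_answers.get(req))

for vendor in answers:
    print(vendor, score(answers[vendor], requirements))
```

The vendor with the highest weighted sum "wins" — which is exactly why the weighting choices we'll talk about next matter so much.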
So I'll do a little check on where everyone stands: ** How many of you have worked, or are currently working, on an RFP for software at your institution? [Ask someone] How is it going? Do you feel prepared? How was it? Were you satisfied with the outcome? *** Now I'm going to ask you to close your eyes for a moment and think about a piece of software that you use at work, one that drives you completely crazy. Raise your hand when you have one in mind. Okay, you can open your eyes. Now, keep your hand up if you had the opportunity to try that software before it was purchased. Okay, this little exercise is over; you can relax.
Here are a few of the ways that we end up with the software that we do
On the second point: one of the things that I've heard sway a decision is "their sales rep wore a suit; the other rep didn't." I call the third point "death by a thousand cuts." If you have 400 requirements on your RFP, even if you weight 10 of them at 5 and all the rest at 1, the weight of the minor requirements will crush the major ones. The fourth point is a really difficult one to get past: if a key stakeholder is biased going in, it's going to be hard to change their mind. In the worst case, they may tailor the requirements so that their favorite will win.
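The "death by a thousand cuts" arithmetic is easy to sketch. The split below (10 major requirements weighted 5, 390 minor ones weighted 1) is the hypothetical from above:

```python
# Hypothetical 400-requirement RFP: 10 "major" requirements weighted 5,
# the remaining 390 "minor" ones weighted 1.
major_max = 10 * 5     # only 50 points available from the requirements you care about
minor_max = 390 * 1    # 390 points available from everything else

# A vendor that nails every major requirement but few minors...
vendor_a = 10 * 5 + 100 * 1    # 150 points
# ...loses to one that misses every major but checks half the minor boxes.
vendor_b = 0 * 5 + 200 * 1     # 200 points
```

Unless the major requirements are treated as pass/fail knockouts rather than just heavier weights, the sheer count of minor items decides the outcome.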
You may notice "it was the easiest for our users to understand" isn't anywhere on the list. Neither is "we tried all of them and this one made the most sense." That isn't a malicious exclusion; it's just that, many times, the people in charge of selecting software aren't the people who will be using it. Because of this, teams can get so focused on the functionality and possibilities that they neglect what it will be like to actually use the tool. They're also committing a lot of their time to the process, and the prospect of adding more time isn't always welcome.
There are plenty of quotes of warning that apply to the challenges of RFPs. There's the one on this picture.
There's the unattributed quote, "The bitterness of poor quality remains long after the sweetness of low price is forgotten" (some say it's Ben Franklin's). And there's Henry Ford's quote: "If I had asked people what they wanted, they would have said 'faster horses.'"
Can you imagine it? “We’re here to kickoff the project to find faster horses! *Twirls moustache*. What should some of our requirements be?”
“Well, they should be purebred horses!” Nods and murmurs of agreement “We only want brown horses. Black horses don’t match our riders’ uniforms” “Remember, we *are* on a budget!” *groans from the crowd* And so on and so forth
Do you think Henry Ford would even respond to this RFP?
Another challenge that we encounter in software selection is something that we have to do, which is implement "vanilla": essentially, implementing the solutions we purchase out of the box with no customization, because customization is expensive and difficult to maintain in the face of system upgrades. Any customizations are seen as "nice to have."
So not this. No delicious berry customizations to improve the vanilla. We don’t have the time/money for those. But this
Now, "vanilla" still connotes something tasty. And, fittingly, you can still get something good out of a "vanilla" implementation, but you have to do the upfront work in the selection process. Otherwise, you'll end up with a "non-fat, unflavored ice milk" implementation. Not so tasty.
Initially, usability testing on purchased enterprise solutions was seen as unnecessary, since there would be no changes to the product. The flaw in this way of thinking is that the product needs to work *well* for users out of the box, so making sure the selected finalist does so is still critical. It can also help you prepare your help desk and training teams. Nonetheless, we had our work cut out for us to get included in this process.
Bringing me to the case study I’ll discuss today:
After an RFP in the mid-2000s, the U of M had implemented an open source tool called LimeSurvey that was rebranded as UMSurvey. It came from a process that largely asked those who would be using it what they wanted feature-wise, rather than focusing on what they did job-wise. Of course, when you ask people if they want a feature, they'll easily be able to find *some* time when they could imagine using it. And end users generally aren't thinking about the increased complexity that comes with increased functionality.
This is what it becomes: they wanted it because they wanted it. There's no thought of where you're going to put the pony (the garage?) or how you'll feed it. There has been an awful lot of talk of horses so far, huh?
So, UMSurvey was extremely difficult to use. It had a lot of functionality, but little restraint in how that functionality was applied or presented. There are three levels of menu open here that all look pretty much the same. The tool was brought in for usability testing, and it was one of the biggest failures I've ever seen. It just wasn't intuitive at all. But this was our supported survey tool for the entire University, and it was basically only useful to survey experts who had time to train extensively in its use. Beyond the usability testing, I was invited a couple of years later to watch an end user try to create a simple survey in her office. What we learned: creating a survey might be possible, but creating a *good* survey was another story entirely.
Fortunately, a member of our purchasing team was in the office when that informal usability test took place. And when it came time to procure a new survey tool, that experience led her to suggest that the business team talk to us in Usability Services. Their goal was "to procure a survey tool for beginning and expert users, a student rating of teaching tool, and a voting/balloting tool"... in the same RFP (we learned from that). We made ourselves available for all meetings that discussed scoring, vendor demo planning, and vendor demonstrations, as well as for our usability study. We wanted to understand the process and this particular project as fully as possible.
A notable addition that we made came during preparation for the vendor demonstrations. What had typically been a showcase of features, we turned into an opportunity to see vendors demonstrate actual use cases that the team developed. We helped the team develop situations and provide data that the vendors would use to build their sandboxes. For example: the survey tool vendors were sent a basic survey to create (centered around a business process at the University) and instructed not to create it prior to the demonstration. They were asked to walk us through the creation of that survey live, on-site. This was repeated with a more complex survey. Instead of being shown a happy-path view of the tool, we hoped to see more lifelike usage. We could gain some understanding of the workflows (or lack thereof) in the tools, and the team would see the vendor working through situations the team was familiar with or that were a fit with our business.
Running these demonstrations required keeping a close eye on time, and team members had to keep track of what was or wasn't shown. We were also better able to notice when vendors were trying to stretch their tools to do things they weren't built to do, or when they were skipping around between features in ways that you couldn't expect an infrequent user to understand. (Explain having six tabs open to the same software.)
Ultimately, we needed to have our own users get their hands on the tools. This required us to prepare our vendors for a request that they were not used to receiving: we added a requirement to the RFP that, if a vendor is selected as a finalist, they must provide a sandbox for users to evaluate. We experienced some pushback to this idea. [Refer to quote.] In response, each vendor was informed that none of the users of any of the finalists would be trained. There would be no advantage or disadvantage.
We also excluded the vendor finalists from the testing process, and we were firm on this
We did so to avoid a continued sales pitch, and so the team could discuss what they were seeing without defensiveness
A drawback was lacking the technical knowledge of where the issues lay within each system and how hard they would be to fix; we knew only what we'd learned from the demonstrations.
But between that knowledge and the business knowledge of team members, we were quite adept at understanding what was going on in each tool
We also needed to decide how to test the finalist tools, and in what order. I know this slide can seem overwhelming. 44 sessions?! Ultimately, your team will need to decide how much time it can dedicate to testing and what that process will look like; this graphic is just an example of the variety of session types. We chose to have similar users participate consecutively. Those are the letters: A is picnic planners, B is survey takers, and C is longitudinal researchers. The number designates the vendor software. We also chose to have consecutive sessions of the same vendor tool, to avoid confusion among committee members when gathering issues.
For the Survey Tool RFP we had 24 sessions over 5 days. We’ve had other projects take 5 ½ days. Some only take 2 days. It all depends on the scope. Not every project is the same.
It can be difficult to watch You might find yourself saying things like “They made this look so *easy*” or “Can we even pick any of these?”
Remember: usability testing won't make any of the tools look good. The process is meant to find the pain points. We're pulling back the curtain on the vendor demos to see what is really there. The vendor's job is to make the tool look good in their demo; our job is to understand what implementation will really look like in the hands of our users. You're learning what you need to be prepared for.
During each session, your team members will be taking notes, and afterward you'll gather the issues that everyone saw so that you can discuss next steps. It will be tempting to look at the number of issues gathered for each tool as a kind of score.
With our survey tool finalists: One had 36 documented issues and the other had nearly 150. We purchased the tool that had almost 150.
The reason for a low number of issues might be that those issues were showstoppers that kept users from completing tasks. The farther someone can get in a process, the more things they'll uncover; the issues might be there, but they aren't keeping people from their tasks. Also, the farther users can get, the more the selection team knows what they're getting into implementation-wise.
After analysis, we had a sheet that looked like this. But the key thing to note is the top few rows.
Furthermore, we want to focus on the columns at the far right; I'll highlight them.
This area was one of the differences between how we handled a typical internal usability project and how we handled projects dealing with vendor software. For each issue, we didn't just come up with a recommendation for next steps; we also checked boxes to determine who would be involved.
I’ll explain in a little more detail what we were looking for
We then sent the issues that were marked as "response required" to the vendor in a separate spreadsheet that had space for answers. We asked them to indicate whether each thing we identified and asked for was: something they can demonstrate to us right now; already done in the system; on their development roadmap; or something they can't or won't fix.
One of the vendors wrote us an email with the following concerning their experience with our testing process
Needless to say, not all vendors were quite so eager [give example]
You can really get a sense of the relationship you’re going to have with each company based on their response to the issues identified.
It’s not just a preview of the tool, but a preview of their attitude toward their users and their product
By finding out through testing how intuitive each tool was, the team was able to build a business case for a more expensive tool that better met their users' needs.
The money spent would be saved elsewhere: less repeat training, less usage of other survey tools, fewer unexpected customizations, and not having to start over in two to three years because the tool didn't solve the problem.
I have been involved in 21 RFP projects since mid-2012. Considering that most teams are only involved in this process once every 5-7 years, I've gained a level of familiarity with RFPs and vendor selection that most of the people I'm working with won't have. From that, I've learned a few things that have changed the process along the way.
The first part of this is one of the main reasons that I'm here. Asking to test their products with end users before buying should not be a surprise to vendors; usability testing shouldn't be unfamiliar to them. If we all start asking, and pushing back on the things that don't work, maybe we'll have better tools to work with. The "difficult" aspect may not just be usability, but I think it's seen as part of being more "demanding" than other customers.
We had a requirement in the RFP that finalists would provide us with a demonstration sandbox version of their software to use in usability testing
Initially, that was enough; no one really balked at providing the sandbox. Then we had a couple of vendors who refused and would not allow us access until we had selected them and paid for the tool. We had no recourse, because the requirement wasn't one that could knock out a vendor. Needless to say, providing a sandbox as a finalist is no longer optional: if you don't comply, you're out. We do this on everything now; it's too important. You wouldn't buy a car without taking it for a test drive, so why would we buy expensive software that many people will use without getting to try it first? You will be making it better for everyone in higher ed.
We were fortunate to have a very strong team lead for the project in my case study. She kept the vendors on task during a long and detailed vendor demonstration script, while making sure to address questions from the audience. Staying on time was critical. In subsequent projects, when no one was willing to take the lead, vendors would freely stray from the script and away from showing us the processes we needed to see. Questions from the audience would get out of hand, and we would end up not having enough time to complete sections of the demonstration. If you want to get the most out of a demo script you've created, you need to be familiar with it and not be afraid to hold the vendors accountable when they haven't shown you something you asked to see.
For the most part, we have held our testing after the vendor demonstrations were complete. In a couple of cases, however, the timeline didn't allow for it, and the testing wasn't as effective as it could have been. Without the knowledge of how the tool is supposed to work that we gain from the vendor demonstration, it is very difficult to get a sense of whether the tool is working properly or not. It also prevents us from troubleshooting issues to keep the session moving when a participant gets stuck. And it opens the door for the vendor to explain away any issues that we encountered, even if they really are issues. We'll call this "vendsplaining."
I’ve made it a hashtag for you.
If it's a choice between testing before the vendor demonstrations and not testing at all, absolutely, you should test. But it's best after you've seen the tools.
Surprises: talk about the cost of customizing, and the cost of working outside the system that you vetted for security. When you get audited, and the data that was supposed to be secure isn't.
You don’t need a lab or an expert to do this process. You need users, you need your scenarios or use cases (use the ones from the demo), and you need a way to take notes. It can be really informal. It is really important. And here are a couple books to pick up to get you started.
Pick up these two books by Steve Krug. He's a huge proponent of do-it-yourself usability, and these books can really help you get the ball rolling.
Try Before You Buy: User Experience Testing in Your RFP Process Can Save You Time and Money
David Rosen, User Experience Analyst
WHO IS THIS GUY?
User Experience Analyst
University of Minnesota
Track hashtag: #MPD10
WHAT’S AN RFP?
Requirement                                       Weight   Vendor 1   Vendor 2   Vendor 3
Does the sandwich have ham on it?                 3        Yes (3)    Yes (3)    No (0)
What condiments are available for the sandwich?
Can the sandwich be …?                            2        Yes (2)    No (0)     Yes (2)
How quickly will the sandwich be made?
HOW DID WE GET THIS STUFF?
Some common reasons software vendors are selected:
• They were the least expensive
• They gave the most polished demonstration of their product
• They checked the most boxes on the RFP
• They were one of the committee members' favorites before the process began
• We have already been using their product
"It's a very sobering feeling to be up in space and realize that one's safety factor was determined by the lowest bidder on a government contract." – Alan Shepard
NO MORE SMOKE AND MIRRORS
Because we were dictating the terms of the demonstration, we could see when the vendors were trying to skip steps, or avoid less pleasant interactions.
PREPARATION FOR USABILITY TESTING
"We're reluctant to let you try this without at least a …"
WHAT WE GAVE TO THE VENDOR
– A description of what we wanted to either understand better or
– Training and Help Desk teams will need to be aware of this issue
Prepare Vendor Request
– A request for a change in order for the solution to fit the need
– We are expecting an explanation of this issue
Submit OIT Request
– This may require configuration on our part
Hold for Further Review
WHAT WE GAVE TO THE VENDOR
We understand that some of these tasks may or may not be standard, but they were included as a means to evaluate the usability and intuitiveness of the system. In this way, we are able to gain some insight into configuration setting issues, areas that may require extra training, and/or items that need to be addressed by you, the vendor. Attached is a workbook outlining the issues that were experienced and that we would like you to respond to. Please review and enter your response in the space provided. In the columns to the right, you may indicate if any of these options are applicable.
WHAT WE HEARD FROM ONE OF THE VENDORS
"A response was required from us on each item, and since we were motivated to win the contract we made commitments to address the vast majority of issues. Now that all items targeted for the short term have been addressed, we know our product has become easier to use."
WHAT WE HEARD FROM SOME OTHER VENDORS
Vendors don't understand what we are doing by …
They also think that we're "difficult"
Access to the sandbox cannot be optional
Demonstrations need a strong moderator from the selection team
Testing is best after the vendor demonstrations
IF YOU REMEMBER NOTHING ELSE
1. WE. SAVE. MONEY.
– We've had a preview of implementation
– Our training and help desk teams are prepared ahead of time for known issues
2. There are fewer surprises in a situation where surprises are expensive
3. This makes vendors better vendors and …
Go do it!
1. Use the stuff you'd ask the vendors to demo
2. You can do this on the cheap with limited resources