Diane Kim (x.ai) spoke about "Designing Intelligent Agents and a new class of Perceived Errors": new research in UI, and how NLP and AI in general are dramatically changing the way we interact with technology. The full abstract appears below.
By Diane Kim (AI Interaction Designer, x.ai)
@_DianeKim
part of NYAI #19: AI & UI on Tues, 27 Feb 2018 at Capital One Labs
nyai.co
6. Perceived Error: Not what I requested
WHEN
Thursday, February 03, 2018 4:00pm – 4:30pm EST
WHERE
200 Broadway, New York, NY | Irving Farms
WHO
Diane Kim, Host
Tanya Rose, Guest
Amy Ingram, Assistant to Diane
Hi Tanya,
Looking forward to getting together like we
talked about! Wednesday is my preferred
day, Amy will help us find a time.
Dennis
Diane: Let’s get coffee Wednesday!
Tanya, Diane | Coffee
7. “But I said to schedule on Wednesday!”
Except Tanya was out of town until Thursday.
8. Perceived Error: Outside my preferences
WHEN
Friday, March 16, 2018 9:00am – 9:30am EST
WHERE
Maryam to call Diane at 719-284-5634, PIN 36812
WHO
Diane Kim, Host
Maryam Farooq, Guest
Amy Ingram, Assistant to Diane
Hi Maryam,
Looking forward to speaking with you! Amy
can help us find time for our prep call.
Diane
Diane: Connecting before NYAI
Maryam, Diane | Call
Abstract: As we move to conversational UIs and take advantage of NLP and AI in general, we dramatically change the way we interact with technology. The standard GUI is often eliminated entirely, leading to novel challenges in UX. With invisible or seamless software, tasks are removed from the user’s oversight, and the output is not always as expected. But sometimes that output is correct within the parameters given and is simply perceived as an error.
Diane will talk through where x.ai has encountered error perception issues as we seek to develop frictionless software, how we thought about the problem, and the communication strategies we’re exploring to resolve it.
Traditional UX
“I click crop in Photoshop and the image is cropped to the specs I asked for”
Options are black and white.
User is fully in control of every outcome (but receives no decision assists from the software).
Entire knowledge base built over the last 20 years of interacting with software through GUI
Examples of traditional errors: can’t find the function (“where is the crop button”), or the software doesn’t have the feature/functionality (“there is no crop button”)
Concept definition: perceived error
If the given output does not match my expected result, I might initially feel like this is a bug - something's not right.
“I click crop in Photoshop and it crops to the dimensions it thinks are best - but I might perceive that decision as ‘wrong’”
The software is working to make the best choice for the user, but the user doesn’t always understand why, and the choice may even run counter to what they intended. In reality, the software is already a few steps ahead of the original request, exceeding the user’s expectations for what it is capable of doing. That is the aha!/wow moment.
Upon explanation people understand why a decision was made that way and actually LIKE the thought process
This is NOT the same as an error in input (e.g., “12:30am” vs. “tomorrow” misinterpretations)
For example, in the future, I ask a restaurant-booking bot to book me a table for Tuesday dinner at "Taco King". Instead, it books me Tuesday dinner at "Taqueria Diana". A perceived error initially, but the bot then explains that Taco King is fully booked all week, so it did the next best thing: it booked a table at the closest restaurant with availability and the most similar menu/cuisine.
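The "next best thing" logic in the booking example could be sketched roughly as below. Everything here (the `Restaurant` shape, `pick_fallback`, `explain`) is a hypothetical illustration, not a real booking-bot API.

```python
# Hypothetical sketch of the fallback-booking logic described above.
from dataclasses import dataclass
from typing import List, Optional


@dataclass(frozen=True)
class Restaurant:
    name: str
    cuisine: str
    distance_km: float       # distance from the requested restaurant
    has_availability: bool


def pick_fallback(requested: Restaurant,
                  candidates: List[Restaurant]) -> Optional[Restaurant]:
    """Return the closest available restaurant with the most similar cuisine."""
    available = [r for r in candidates if r.has_availability]
    if not available:
        return None
    # Sort key: same cuisine first (False < True), then shortest distance.
    return min(available,
               key=lambda r: (r.cuisine != requested.cuisine, r.distance_km))


def explain(requested: Restaurant, booked: Restaurant) -> str:
    """The explanation that turns a perceived error into an 'aha' moment."""
    return (f"{requested.name} is fully booked, so I reserved "
            f"{booked.name} instead: it serves similar "
            f"{booked.cuisine} food and is {booked.distance_km} km away.")
```

The point of `explain` is the talk's thesis: the decision is only perceived as an error until the reasoning behind it is communicated.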
Example #1
Request: schedule me for Wednesday
Response: Thursday invite
Guest was out of town
The software actually made the CORRECT decision - except it was not what you originally asked for.
Example #2
Request: schedule me with so-and-so
Response: great, it’s at 9am next week, even though you asked me not to schedule anything before 9:30
Flex scheduling hours
Sometimes a meeting is important enough to warrant changes to your regular schedule, so we spent a lot of time and energy training Amy + Andrew to understand flexibility in time preferences.
Got many perceived errors in return! Users were concerned that a meeting was being held at a time that was outside their explicit preferences. BUT if Amy + Andrew stuck to the preferences, no meeting would have been scheduled.
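The flex-hours behavior described above can be pictured as a two-tier search: honor stated preferences when possible, and fall back to a flex window only when nothing else overlaps, flagging the result so the agent can ask or explain first. This is a minimal sketch under assumed names (`PREFERRED_START`, `FLEX_START`, `choose_slot`), not x.ai's implementation.

```python
# Illustrative sketch of flex scheduling hours (not x.ai's actual code).
from datetime import time

PREFERRED_START = time(9, 30)  # user's stated earliest meeting time
FLEX_START = time(9, 0)        # how far the agent may flex, with permission


def choose_slot(my_open_slots, guest_open_slots):
    """Return (slot, needs_permission), or (None, False) if nothing overlaps.

    Slots are datetime.time start times. A slot inside the stated
    preferences wins outright; a slot in the flex window is used only as
    a fallback and is flagged so the agent asks before booking it.
    """
    overlap = sorted(set(my_open_slots) & set(guest_open_slots))
    preferred = [s for s in overlap if s >= PREFERRED_START]
    if preferred:
        return preferred[0], False
    flexed = [s for s in overlap if s >= FLEX_START]
    if flexed:
        return flexed[0], True   # bookable, but explain/ask first
    return None, False
```

The `needs_permission` flag is the key design choice: it is what separates "silently violate preferences" (a perceived error) from "bend preferences with an explanation."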
When building a conversational or "invisible" interface, you lose many of the touch points that give the user visibility, confirmation, and comfort in knowing what steps and actions have been taken. The main advantage of this type of interface is that you simply fire off a request --> you get your result, as opposed to clicking "OK" on 5 pop-ups, checking input boxes, and selecting + dragging at every micro-step of the task.
The ideal happy medium removes the burden of completing all of those micro-steps while still communicating enough relevant information about how the agent went from request --> result.
Simple concept (harder to execute) - communicate more!
[first image] Add explanation to decision
Amy explains why as she goes (“You might have something on the calendar that’s affecting this”) and asks permission before going outside scheduling hours.
[second image] Readbacks
“Here is what I understood you asked me to do”
Positive feedback, no numbers yet on how they’re helping.
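One way to picture the readback idea: deterministically restate the parsed request before acting, so the user can catch a misunderstanding early. The parsed-request fields below (`guest`, `day`, `kind`, `constraint`) are hypothetical, not x.ai's actual schema.

```python
# Hypothetical sketch of a readback: "here is what I understood you asked
# me to do," rendered from whatever the NLP layer parsed out of the request.
def readback(parsed: dict) -> str:
    parts = []
    if "guest" in parsed:
        parts.append(f"schedule a {parsed.get('kind', 'meeting')} "
                     f"with {parsed['guest']}")
    if "day" in parsed:
        parts.append(f"on {parsed['day']}")
    if "constraint" in parsed:
        parts.append(f"({parsed['constraint']})")
    return "Here is what I understood: " + " ".join(parts) + "."
```

For instance, the Wednesday coffee request from Example #1 might read back as "Here is what I understood: schedule a coffee with Tanya on Wednesday." Sent before any invite goes out, a readback restores one of the confirmation touch points that the invisible interface removed.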
Improved onboarding:
More calls, more explanation.
We don’t have all the answers yet - still working through these issues.
Potential solutions / issues we’ll see?