
NewMR and Wizu Survey Chatbot Experiment


Earlier in 2019, #NewMR and Wizu partnered to conduct an experiment with a chatbot: a chat-based survey looking at the Threats and Opportunities facing market research and insights.
We collected 336 responses from people involved in market research and insights, along with quite a lot of material from online discussions about the project.

This webinar is one of two in the 'NewMR Chatbot Experiment – Threats & Opportunities Facing Market Research' series that provides feedback on the experiment.
The two webinars are:

- Sue York from NewMR and Martin Powton from Wizu look at the feedback about the chatbot experience (these slides)
- Ray Poynter from NewMR reports back on what the research said about the Threats and Opportunities facing market research.

The recordings from both webinars are available via the NewMR website's Play Again page.

The NewMR written report will also be shared shortly.

Published in: Education


  1. Feedback on the NewMR Chatbot Experiment. Webinar, 4 April 2019, 11 am UK time. Sue York (NewMR) and Martin Powton (Wizu)
  2. Outline
     •  Introduction to the project
     •  Look and Feel
     •  How Intelligent
     •  When to Use Chatbots
     •  Conditional Responses
     •  Open-ended Challenges
     •  Other Benchmarks
     •  Summary and Recommendations
  3. Project Outline
     Background: online discussions about chatbots, plus a chatbot survey using the Wizu platform.
     •  336 interviews (after cleaning)
     •  January 2019, global, convenience sample of people connected to #NewMR
     Study flow:
     •  Introduction
     •  Country
     •  Type of organization (e.g. Buyer, Supplier, etc.)
     •  Level of optimism (Optimistic, Neutral, Pessimistic)
     •  Probe: “Why do you feel that way?”
     •  Threats facing market research and insights, with probing
     •  Main opportunity, with probing
     •  Comments on key pre-programmed topics not mentioned
     •  Comments on this topic and/or survey
  4. Chatbot Look and Feel – 1 (typical)
  5. Chatbot Look and Feel – 2 (more interactive)
  6. Chatbot Look and Feel – 3 (flawed)
     The exchange:
     •  How optimistic? – Optimistic
     •  Why feel that? – I love what I do!
     •  Main threat? – DIY researchers with no training
     •  Other threats? – IT folks doing research
     •  Main Opportunity? – Don’t know
     •  Other Opportunities? – See prior answer
     •  Any comment on AI, neuroscience or Predictive Analytics? – No
     •  Other comments on this topic or survey? – You asked me a question that was not necessary
     A fail, but not an epic fail; this happens in conventional surveys too.
  7. Two Questions for Chatbots in MR
     1.  How intelligent?
     2.  For what purpose?
  8. How Intelligent?
     A spectrum runs from “a set of questions in a more chatty style” to “indistinguishable from a human”.
     Potential benefits: engaging, survey fatigue reduction, quality checking, depth at scale.
     Two participant reactions:
     •  “I am disappointed that this is a fake chatbot, a la XXXXX. This is just putting another skin on a standard survey. Disappointing.”
     •  “This survey was a lot more interesting chatbot-based. I would recommend it”
     Question: If we could create chatbots that were indistinguishable from humans, would it be OK not to tell people they were talking to a machine?
  9. For What Purpose?
     •  Chatbots should mostly be used for qualitative research
     •  Chatbots can be used to make existing surveys more palatable for participants – even if they do not produce better data
     •  Chatbots should only be used if they produce better data
     •  Chatbots should be used in an MVP / Agile way to develop better platforms and approaches
     •  Chatbots should only be used when they are substantially more developed
  10. Use Cases
     •  Strongest
        •  Where the conversation is predictable, e.g. customer satisfaction, employee experience, retrospective experience surveys, UX testing
        •  Cases where survey fatigue is high, or where the risk of alienating/boring the participant is higher
        •  Participants who reject conventional surveys – e.g. some younger people preferring a chat on their mobile to a survey on their computer
     •  Most Challenging
        •  Exploring situations
        •  Diverse domains
        •  For example, now that we have done this MR study we could program a better project
  11. The Value and Danger of Conditional Chatting
     •  Some participants received a predictable set of questions, because of the pattern of their answers and the conservative script adopted for this experiment
     •  Some participants received some conditional questions
        •  This improved the instrument for some, but increased the risk of a flawed interaction
     •  For the near future there is a trade-off: the more conditional (or intelligent) the scripting, the greater the risk that the interaction will be flawed for some people
     •  Like all survey designs, this trade-off has to be factored into your design, and the results of the trade-off monitored in real time
  12. The Balance of Open and Closed Questions
     •  If the survey is a structured qualitative interview, the questions will be mostly open-ended
     •  If a quantitative survey is very short, the balance is less of an issue
        •  Customer satisfaction example: overall satisfaction (NPS), why, suggestions, and 2 or 3 demographics
     •  Other quantitative studies are going to feel more natural with more open-ends than in a traditional format
     •  Three key implications of open-ends in quantitative surveys:
        •  Will participants be willing and able to enter their responses?
        •  How will you turn the text into numbers?
        •  Will the numbers you create from the text be comparable with other numbers from traditional surveys?
  13. Wizu Comparison Study
     •  118% more words than traditional survey forms
     •  91% more actionable insight
     •  57% rated the conversational experience as 10/10 – only 11% gave the top score for traditional form-based surveys
     •  All age groups and genders expressed a clear preference for conversational surveys over form-based surveys
     •  Did not bias core metrics
  14. Learnings From the Project
     •  Do more qualitative research in advance
     •  Adopt a more iterative approach
     •  Technology was mentioned quite often, so the probes could have been tweaked to ask people to specify which technologies
     •  The icon was considered insufficiently business-like by some; perhaps use the NewMR logo or the researcher’s image?
     •  This was positioned as an experiment, which probably influenced the reaction to it
     •  The open-ends from most people were very rich, even though we don’t have a control cell to compare with
     •  How familiar are your participants with interacting with existing conversational UIs?
  15. Example Improvements
     •  Ray bot – more personal
     •  More conversational
     •  Mix qual and quant
     •  More in-depth probing
  16. Chatbot Summary and Recommendations
     •  Experiment now – these are coming
     •  How important is comparability with legacy data to you?
     •  How important is the participant experience?
     •  How do you balance the improvements from conditional probing against the risk of a flawed interaction?
     •  Think about how you are going to process the data
     •  Remember that in 12 months the picture will have changed: platforms will be better, analysis will be better, and the novelty value will be diminishing
  17. Key Point
     Chatbots are coming, whether you like it or not. They are coming in the ‘real world’, not just in the MR world.
  18. Thank You
  19. Q & A – Sue York (NewMR) and Martin Powton (Wizu). Want to know more about Wizu? Want to know more about NewMR?
  20. NewMR Sponsors (Communication, Gold, and Silver tiers)
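
The conditional-probe trade-off on slide 11 (and the flawed interaction on slide 6, where "Other Opportunities?" followed "Don't know") can be sketched as a simple guard on the follow-up question. This is an illustrative sketch only, not the Wizu platform's actual logic; the function names and the `NON_ANSWERS` list are assumptions made for the example.

```python
# Illustrative sketch of conditional probing in a chatbot survey script.
# NON_ANSWERS and all names here are assumptions, not the actual Wizu logic.

NON_ANSWERS = {"don't know", "no", "none", "nothing", "see prior answer"}

def should_probe(answer: str) -> bool:
    """Only probe when the participant gave a substantive answer."""
    return answer.strip().lower() not in NON_ANSWERS

def ask_with_probe(question, probe, ask):
    """Ask `question`; follow up with `probe` only when worth probing.

    `ask` is any callable that poses a question and returns the reply
    (a live chat session, or a stub during testing).
    """
    answer = ask(question)
    follow_up = ask(probe) if should_probe(answer) else None
    return answer, follow_up
```

A script that always probes reproduces the slide-6 failure; a conditional script avoids it, but any gap in the non-answer list still means a flawed interaction for someone, which is exactly the trade-off slide 11 describes.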
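Slide 12 asks how to "turn the text into numbers". One minimal approach is keyword-based coding of open-ends against a codeframe, producing counts that can sit alongside closed-question results. The themes and keywords below are invented for illustration; a real project would use a properly developed codeframe or text-analytics tooling.

```python
# Minimal sketch of coding open-ended responses into theme counts.
# The codeframe is hypothetical, for illustration only.
import re
from collections import Counter

CODEFRAME = {
    "automation": {"automation", "ai", "chatbots"},
    "data_quality": {"quality", "sample", "sampling"},
    "diy": {"diy"},
}

def code_response(text: str) -> set:
    """Return the set of themes whose keywords appear in the response."""
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    return {theme for theme, kws in CODEFRAME.items() if tokens & kws}

def tabulate(responses) -> Counter:
    """Count how many responses mention each theme."""
    counts = Counter()
    for response in responses:
        counts.update(code_response(response))
    return counts
```

Counts produced this way can be compared across waves or with closed questions, which is the comparability issue the third implication on slide 12 raises.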