QA Fest 2018. Oleksandr Khotemskyi. Using voice assistants for development and testing
Voice assistants can be an interesting and fun alternative to regular chatbots. They can be made to carry out almost any of your commands, and they recognize voices quite well. This leads to some interesting usage patterns and opens up additional channels of information. Let's see why we would need this at all; and if it turns out we don't, at least we will know that.
1.
Voice assistants
How might they help us with testing and development?
2.
Oleksandr
Khotemskyi
Independent Contractor
Software Developer Engineer in Test
xotabu4.github.io
3.
What are voice assistants?
And how do they work?
4.
Voice assistants in a nutshell
• An always-listening microphone connected to a voice recognition service
• The service understands different languages and context
• Lots of ready-to-use reactions to user input, which can be extended with your own custom reactions
• Made possible by the great progress in speech recognition algorithms
5.
TL;DR: just another interface for communicating with a computer
7.
Why only now?
• The first prototypes appeared as early as the late sixties
• Unrealistic user expectations led to disappointment and the AI winter of the 70s-80s
• The huge performance of modern computers makes it possible to apply machine learning algorithms to natural language processing
• The biggest companies in the world started investing and prepared consumer-ready solutions
10.
Extending with your own responses
How developers can extend Google Assistant
11.
Components
• Google Assistant - a virtual assistant powered by artificial intelligence that can engage in two-way conversations
• Actions on Google - a platform for extending Google Assistant with your own reactions (a minimal fulfillment sketch follows this list)
• DialogFlow - an end-to-end development suite for conversational interfaces (e.g., chatbots, voice-powered apps and devices)
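To make the three components concrete, here is a minimal, hedged sketch of a custom reaction built with the Node.js actions-on-google client library (v2) and deployed as a Cloud Function for Firebase; the "team status" intent name is a hypothetical example, not something from the talk.

```typescript
// A minimal sketch of a custom reaction, assuming the Node.js
// 'actions-on-google' client library (v2) deployed as a Cloud Function for
// Firebase. The 'team status' intent name is a hypothetical example that
// would be configured in the DialogFlow console.
import { dialogflow } from 'actions-on-google';
import * as functions from 'firebase-functions';

const app = dialogflow();

// Handlers are matched by the intent name defined in DialogFlow
app.intent('team status', (conv) => {
  // conv.ask keeps the two-way conversation open; conv.close would end it
  conv.ask('Everything is green today. Anything else?');
});

// Expose the app as the webhook that DialogFlow calls for fulfillment
export const assistantFulfillment = functions.https.onRequest(app);
```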
12.
DialogFlow
• Works as middleware between user intents and your responses to them
• Parses inputs, extracts parameters, and prepares entities with the needed data (see the parameter sketch after this list)
• Integrates with different services; has an API and SDKs
• Has a training mechanism
• Supports contexts, fallbacks, and events
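A small follow-up sketch, reusing the assumed actions-on-google setup above, of how DialogFlow hands already-extracted parameters to your code; the "deployment status" intent and its "environment" parameter are hypothetical and would be declared in the DialogFlow console together with training phrases and entities.

```typescript
// A sketch of parameter extraction, continuing the actions-on-google setup
// above. The 'deployment status' intent and its 'environment' parameter are
// hypothetical examples.
app.intent('deployment status', (conv, params) => {
  // DialogFlow extracts the entity value from the spoken phrase for you
  const environment = String(params.environment || 'staging');
  conv.ask(`Checking the ${environment} environment. Anything else?`);
});
```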
14.
Limitations
• Communication goes in one direction at a time, a kind of "push-to-talk"; this is a limitation of the interface
• "Push notifications aren't currently supported on voice-activated speakers" (September 2018). The google-home-notifier package can be used instead, but it requires a listening server on the local network
• It is hard to pass complex parameters; it works best with single words and numbers
• For the best results, a visual representation is needed (smartphone, TV, smart display)
15.
Usage examples in development
How this can be useful in everyday work
16.
Launching CI/CD jobs/builds
• Voice commands can trigger builds via the HTTP API (tested with Jenkins/Drone CI); see the sketch after this list
• Client libraries usually exist for the most popular CI/CD systems
• You will need an API key; make sure you store and use it securely
• You can pass job parameters extracted by DialogFlow
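As a rough illustration of the first bullet, a hedged sketch of kicking off a parameterized Jenkins job over its HTTP API from TypeScript; the Jenkins URL, job name, credentials and the BRANCH parameter are placeholders, not values from the talk.

```typescript
// A hedged sketch of triggering a parameterized Jenkins job over its HTTP API
// (POST .../buildWithParameters). URL, job name, credentials and the BRANCH
// parameter are placeholders.
import fetch from 'node-fetch';

const JENKINS_URL = 'https://jenkins.example.com';                 // placeholder
const AUTH = Buffer.from('ci-bot:API_TOKEN').toString('base64');   // user:apiToken

async function startBuild(job: string, branch: string): Promise<void> {
  const url =
    `${JENKINS_URL}/job/${encodeURIComponent(job)}` +
    `/buildWithParameters?BRANCH=${encodeURIComponent(branch)}`;
  const response = await fetch(url, {
    method: 'POST',
    headers: { Authorization: `Basic ${AUTH}` },
  });
  if (!response.ok) {
    throw new Error(`Jenkins responded with ${response.status}`);
  }
}

// Called from an intent handler like the ones sketched earlier:
//   app.intent('start build', async (conv, params) => {
//     await startBuild(String(params.job), String(params.branch));
//     conv.close(`Build for ${params.job} has been queued`);
//   });
```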
17.
Working with JIRA tickets
• Reading the status of JIRA tickets (see the sketch after this list)
• Updating tickets
• Leaving comments
• Getting the status of sprints and boards
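A hedged sketch of the first item, reading a ticket status through the JIRA REST API; the base URL and credentials are placeholders, and the resulting sentence could be spoken back by the assistant as-is.

```typescript
// A hedged sketch of reading a ticket status through the JIRA REST API
// (GET /rest/api/2/issue/<key>); base URL and credentials are placeholders.
import fetch from 'node-fetch';

const JIRA_URL = 'https://jira.example.com';                        // placeholder
const AUTH = Buffer.from('bot-user:API_TOKEN').toString('base64');

async function getTicketStatus(key: string): Promise<string> {
  const response = await fetch(
    `${JIRA_URL}/rest/api/2/issue/${key}?fields=status,summary`,
    { headers: { Authorization: `Basic ${AUTH}` } },
  );
  if (!response.ok) {
    throw new Error(`JIRA responded with ${response.status}`);
  }
  const issue = await response.json();
  // The status name lives under fields.status.name in the issue payload
  return `Ticket ${key}, ${issue.fields.summary}, is ${issue.fields.status.name}`;
}
```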
18.
Health/status checks
• Notifications when a service goes down
• Notifications about resource overconsumption
• Notifications about changes in custom metrics
• Asking the assistant to check server status
• Should work nicely with Google Cloud Platform + Google Cloud Functions (see the sketch after this list)
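As a sketch of the last bullet, a simple status check written as a Google Cloud Function (Firebase flavour); the checked URL is a placeholder, and wiring it into the assistant (calling it from fulfillment, or making it the fulfillment itself) is left out.

```typescript
// A hedged sketch of a simple status check deployed as a Google Cloud
// Function (Firebase flavour). The checked URL is a placeholder.
import * as functions from 'firebase-functions';
import fetch from 'node-fetch';

const SERVICE_URL = 'https://api.example.com/health';              // placeholder

export const healthCheck = functions.https.onRequest(async (req, res) => {
  try {
    const ping = await fetch(SERVICE_URL);
    // Report whether the service answered and with which HTTP status
    res.json({ alive: ping.ok, status: ping.status });
  } catch (err) {
    res.json({ alive: false, error: String(err) });
  }
});
```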
19.
Notifying
• Uses push notifications, but requires a server on the local network (see the sketch after this list)
• A fresh build was released
• A branch was merged
• Tests failed
• Code freeze started
• A meeting is in 5 minutes!
• Whatever notification you want
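A hedged sketch of the local-network notification path using the google-home-notifier package mentioned on the Limitations slide; the device name and the message text are examples, and the process must run on a machine in the same network as the speaker.

```typescript
// A hedged sketch of a spoken notification sent to a speaker on the local
// network with the 'google-home-notifier' npm package (it ships no type
// definitions, hence require). Device name and message are examples.
// eslint-disable-next-line @typescript-eslint/no-var-requires
const googlehome = require('google-home-notifier');

googlehome.device('Google Home', 'en');   // discover the speaker by its name
googlehome.notify('Tests failed on the master branch', (response: string) => {
  console.log('Notifier response:', response);
});
```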
20.
Are there any advantages?
Why might this be useful?
21.
Improving chatbots
• A voice assistant can greatly improve your chatbots
• It can replace or cooperate with a chatbot
• The response to a voice request can be delivered as a chatbot response
22.
New communication channel
• Voice communication might be easier in specific cases
• No context switching (no need to switch windows or type commands by hand)
• Faster - no need to open a laptop, log in …
• My mom liked it
23.
Doubts
• Might be time-wasting
• Additional support for code that is not directly used in the project
• It might be hard to provide a wide range of useful commands
24.
Materials
• Human-Computer interactions:
https://en.wikipedia.org/wiki/Human%E2%80%93computer_interaction
• What is Google Actions:
https://developers.google.com/actions/extending-the-assistant
• Starting with Google Actions:
https://codelabs.developers.google.com/codelabs/actions-1/#0
• Using in office:
https://www.zdnet.com/article/five-ways-voice-assistants-are-going-to-change-the-o
• Intonations in voice: https://developers.google.com/actions/reference/ssml