Digital Transformation – ChatBots Era
Zagrebačka banka d.d.
Krunoslav Ris, MA in CS & IT, Head of EMA Development
Osijek, October 2019
AGENDA
1 Digital Transformation – The Era of Bots
2 Why It Is Necessary in the Banking Business
3 What the Goal Is
4 Where the Improvements Are
5 How It Is Measured
6 How It Is Made
Digital transformation – Era of Bots
Gartner estimates that 85% of banks and businesses will perform customer engagement with the help of AI chatbots by 2020.
Banks use them to generate revenue and help customers save money by offering smart advice and directing them to better products, driving retention and winning new business.
WHY IT IS NECESSARY
[Diagram: Banking Customer Engagement – generating revenue through AI ChatBots, Social Networks, and other channels]
WHAT IS THE GOAL
• Progress of Personal Banking
• Powerful Automated Customer Service
• Better Customer Feedback
• Personalized Marketing for Specific Customers
• Increasing Productivity of Internal Employees
• Increasing Revenue
Where Are the Improvements?
1. Working 0–24 – bots don't need rest
2. Working on weekends – bots don't need weekends
3. Working on holidays – bots don't celebrate Christmas, Hanukkah, Easter, or New Year
4. Multitasking – bots can serve more than one customer at the same time
HOW IT IS MADE
Training a Goal-Oriented Chatbot for Streamlining Processes and Customer Profiling with Deep Reinforcement Learning
Short Technical Explanation
Thanks for Your Attention
Mentors: Prof. Dr. Željko Stanković
Prof. Dr. Zoran Avramović
Prof. Dr. Branko Latinović
University: Pan-European University Apeiron
Krunoslav Ris
Head of EMA Development
krunoslav.ris@unicreditgroup.zaba.hr
A goal-oriented (GO) chatbot is created to solve a specific problem for a user.
These chatbots can help people book a flight, find a reservation, check an account balance, transfer money, etc. Basically, you can train a chatbot to do any manual task that requires human interaction in the virtual world.
There are two main ways to train a goal-oriented chatbot:
• Supervised learning, with an encoder-decoder that directly maps user dialogue to responses
• Reinforcement learning, which trains a chatbot through trial-and-error conversations with either real users or a rule-based user simulator
Dialogue System – the dialogue system for a goal-oriented chatbot using reinforcement learning is split into three main parts:
• The Dialogue Manager (DM)
• The Natural Language Understanding (NLU) unit
• The Natural Language Generator (NLG) unit
Furthermore, the Dialogue Manager is divided into the Dialogue State Tracker (or just State Tracker) and the policy for the agent itself, which in many cases is represented by a neural network.
In addition, the system loop contains a user with a user goal.
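A minimal sketch of how those parts can fit together in one episode loop. All class names, actions, and the reward scheme here are illustrative assumptions, not the system described in the slides; the policy is random where a real agent would use a neural network:

```python
import random

class StateTracker:
    """Dialogue State Tracker: accumulates the conversation history."""
    def __init__(self):
        self.history = []

    def update(self, turn):
        self.history.append(turn)

    def state(self):
        # A real tracker would encode slots and intents; here we just
        # expose the turn count as a stand-in for the dialogue state.
        return len(self.history)

class Agent:
    """The policy; in practice a neural network, here a random choice."""
    ACTIONS = ["greet", "request_info", "offer_product", "close"]

    def act(self, state):
        return random.choice(self.ACTIONS)

class UserSimulator:
    """Rule-based user with a fixed goal (e.g. 'check balance')."""
    def __init__(self, goal):
        self.goal = goal

    def respond(self, action):
        # Hypothetical reward shaping: small per-turn penalty, and the
        # episode ends when the agent closes the dialogue.
        done = (action == "close")
        reward = 1.0 if done else -0.1
        return reward, done

def run_episode(max_turns=10):
    tracker, agent = StateTracker(), Agent()
    user = UserSimulator(goal="check balance")
    total = 0.0
    for _ in range(max_turns):
        action = agent.act(tracker.state())
        reward, done = user.respond(action)
        tracker.update(action)
        total += reward
        if done:
            break
    return total, len(tracker.history)

random.seed(0)
ret, turns = run_episode()
print(turns, ret)
```

In a full trainer, the return from many such episodes would drive updates to the policy network; this loop only shows how tracker, policy, and user simulator interact.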
Generative and selective models
General conversation models can be broadly divided into two major types: generative and selective (or ranking) models. Hybrid models are also possible. What they have in common is that such models consume several sentences of dialogue context and predict the answer for that context. The picture illustrates such systems.
Before going deeper, we should discuss what dialogue datasets look like.
All models described below are trained on pairs (context, reply).
Context is several sentences (maybe one) which preceded the reply.
The sentence is just a sequence of tokens from its vocabulary.
For better understanding, look at the table. It shows a batch of three samples extracted from a raw dialogue between two persons.
Note the “<eos>” (end-of-sequence) token at the end of each sentence in the batch. This special token helps the neural network understand sentence bounds and update its internal state accordingly.
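A short sketch of how such (context, reply) pairs with “<eos>” markers could be built from a raw two-person dialogue; the function name and the context window of two sentences are illustrative choices, not part of the original pipeline:

```python
def make_pairs(dialogue, context_size=2):
    """Turn a list of utterances into (context, reply) training samples.

    Each sentence is terminated with the special <eos> token so the
    model can learn where sentences end.
    """
    tagged = [u + " <eos>" for u in dialogue]
    samples = []
    for i in range(1, len(tagged)):
        # Context = up to `context_size` preceding sentences, joined.
        context = " ".join(tagged[max(0, i - context_size):i])
        reply = tagged[i]
        samples.append((context, reply))
    return samples

dialogue = [
    "Hi , how can I help you ?",
    "I want to check my account balance .",
    "Sure , please enter your PIN .",
]
pairs = make_pairs(dialogue)
for ctx, rep in pairs:
    print(ctx, "->", rep)
```

Each utterance after the first becomes a reply, with the preceding turns as its context, mirroring the batch shown in the table.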
For modeling dialogue, this paper deployed a sequence-to-sequence (seq2seq) framework which emerged in the neural machine translation field and was successfully adapted to dialogue problems.
The architecture consists of two RNNs with different sets of parameters.
The left one (corresponding to A-B-C tokens) is called the encoder, while the right one (corresponding to <eos>-W-X-Y-Z tokens) is called the decoder.
Each hidden state influences the next hidden state and the final hidden state can be seen as the summary of the sequence.
This state is called the context or thought vector, as it represents the intention of the sequence. From the context, the decoder generates another sequence, one symbol (word) at a time. At each time step, the decoder is influenced by the context and the previously generated symbols.
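The thought-vector idea can be sketched in a few lines of numpy. This is an untrained toy with randomly initialized weights, useful only to show the data flow: the encoder folds the input tokens into one context vector, and the decoder is driven at each step by that context and the previously generated symbol. All sizes and weight names are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 10, 8          # toy vocabulary size and hidden size

# Randomly initialized parameters (untrained, for shape intuition only).
E   = rng.normal(size=(V, H))        # token embeddings
W_e = rng.normal(size=(H, H)) * 0.1  # encoder recurrence
W_d = rng.normal(size=(H, H)) * 0.1  # decoder recurrence
W_o = rng.normal(size=(H, V)) * 0.1  # output projection

def encode(tokens):
    """Fold the input sequence into a single 'thought' vector."""
    h = np.zeros(H)
    for t in tokens:
        h = np.tanh(E[t] + W_e @ h)
    return h  # final hidden state = context / thought vector

def decode(context, eos=0, max_len=5):
    """Generate one symbol at a time, conditioned on the context
    and the previously generated symbol."""
    h, prev, out = context, eos, []
    for _ in range(max_len):
        h = np.tanh(E[prev] + W_d @ h)
        logits = h @ W_o
        prev = int(np.argmax(logits))  # greedy pick of the next word
        out.append(prev)
        if prev == eos:                # stop when <eos> is emitted
            break
    return out

context = encode([3, 1, 4])   # "A B C" as token ids
reply = decode(context)
print(reply)
```

A trained model would use learned weights (and typically GRU or LSTM cells rather than this plain tanh recurrence), but the encoder-to-context-to-decoder flow is the same.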
This framework has limitations. The most troubling is that the basic model cannot handle variable-length sequences, which matters because almost all sequence-to-sequence applications involve variable-length sequences. The next is the vocabulary size: the decoder has to run a softmax over a large vocabulary of, say, 20,000 words for each word in the output.
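The softmax cost can be made concrete: for every output position, the decoder normalizes a score for each word in the full vocabulary, so a 20,000-word vocabulary means 20,000 exponentiations per generated word. A small numpy illustration, with random scores standing in for real decoder outputs:

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(42)
vocab_size = 20_000
scores = rng.normal(size=vocab_size)   # decoder scores for ONE output step

probs = softmax(scores)                # 20,000 exponentials, every word
print(probs.sum(), int(probs.argmax()))
```

This per-step cost is why techniques such as sampled softmax or hierarchical softmax are often used to speed up training with large vocabularies.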