NLP.pptx
1. Machine Translation
How does Google Translate
work?
The History and Progress of
Machine Translation
Major machine translation services use
artificial neural networks (ANNs).
• Neural networks are a series of
algorithms, loosely modeled on the
animal brain, that recognize
relationships in vast amounts of data.
• Their structure resembles the
connections of neurons and synapses
found in the brain.
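
A minimal sketch of the artificial "neuron" behind this analogy (Python/NumPy, toy numbers of my choosing, not from the slides): inputs are weighted like synapses, summed, and passed through a nonlinearity.

import numpy as np

x = np.array([0.5, -1.0, 2.0])   # inputs arriving at the neuron
w = np.array([0.8, 0.2, -0.5])   # synapse-like weights (illustrative values)
b = 0.1                          # bias term
activation = np.tanh(w @ x + b)  # the neuron "fires" through a nonlinearity
print(activation)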
A Brief History of Machine Translation
The new field of "Machine Translation" appeared in the early 1950s.
• Until the end of the 1970s, Rule-Based Machine Translation (RBMT)
was the dominant approach. This word-based approach relies on
countless built-in linguistic rules and millions of bilingual
dictionaries, one set for each language pair.
• The next approach after RBMT was Statistical Machine
Translation (SMT). This approach learns from existing translations
produced by humans. Modern SMT is built on phrase-based systems.
• Neural Machine Translation (NMT) was introduced in the last
decade. NMT learns from each translation task and improves with
each subsequent translation.
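
As a hedged illustration of NMT in practice (assuming the Hugging Face transformers library and the public Helsinki-NLP/opus-mt-en-de model; neither is named in the slides), a pretrained neural model can translate in a few lines:

from transformers import pipeline

# Load a pretrained English-to-German NMT model (assumed available).
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
result = translator("Machine translation has come a long way.")
print(result)  # a list like [{'translation_text': ...}]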
2. LSTM
Hook
Do you know of a memory that can
remember over the long term?
Key details
Long short-term memory (LSTM) is an artificial neural
network used in the fields of artificial intelligence and deep
learning.
A common LSTM unit is composed of four parts:
1. cell 2. input gate 3. output gate 4. forget gate
LSTM networks are well suited to three tasks on time-series
data (see the sketch after this list):
1. classifying 2. processing 3. making predictions
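
A minimal sketch of the first task, classifying time-series data, assuming PyTorch; the class name and sizes are hypothetical:

import torch
import torch.nn as nn

class SeqClassifier(nn.Module):
    def __init__(self, n_features, n_hidden, n_classes):
        super().__init__()
        self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_classes)

    def forward(self, x):              # x: (batch, time, features)
        _, (h_n, _) = self.lstm(x)     # h_n: final hidden state per layer
        return self.head(h_n[-1])      # classify from the last hidden state

# Hypothetical usage: 8 sequences, 20 time steps, 3 features, 2 classes.
model = SeqClassifier(n_features=3, n_hidden=16, n_classes=2)
logits = model(torch.randn(8, 20, 3))  # shape: (8, 2)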
LSTMs were developed to deal with the vanishing gradient
problem that can be encountered when training traditional
RNNs.
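
A toy numeric illustration of that problem (Python, deliberately simplified, not from the slides): the gradient through T time steps contains a product of T per-step factors, and when each factor is below 1 the product decays toward zero, so early inputs stop influencing learning.

w = 0.9                       # assumed per-step gradient factor, < 1
for T in (1, 10, 50, 100):
    print(f"after {T:3d} steps: gradient factor ~ {w ** T:.2e}")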
More details
LSTM has feedback connections. Such a recurrent neural network (RNN) can
process not only single data points (such as images), but also entire sequences
of data (such as speech or video).
(RNN is a class of artificial neural networks where connections between nodes
can create a cycle, allowing output from some nodes to affect subsequent input
to the same nodes. )
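
A minimal sketch of that cycle (Python/NumPy, toy sizes of my choosing): the hidden state computed at one time step feeds back into the same nodes at the next.

import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))          # input-to-hidden weights
U = rng.standard_normal((3, 3))          # hidden-to-hidden (feedback) weights
h = np.zeros(3)                          # hidden state carried across steps
for x_t in rng.standard_normal((5, 4)):  # a sequence of 5 input vectors
    h = np.tanh(W @ x_t + U @ h)         # output feeds back as input
print(h)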
For example, LSTM is applicable to tasks such as unsegmented,
connected handwriting recognition, speech recognition, machine translation,
robot control, video games, and healthcare.
The general architecture consists of a cell (the storage part of the LSTM unit)
and three "regulators" of the information flow inside the unit, usually
called gates: an input gate, an output gate, and a forget gate.
The "cells" are needed to keep track of dependencies between elements in the
input array.
The "input gates" control the degree to which new values flow into the cell.
The "forget gates" control the degree to which values remain in the cell.
The "output gates" control the degree to which values in the cell are used to
compute the output activation of the LSTM unit.
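
Putting the pieces together, a from-scratch sketch of one LSTM step (Python/NumPy; the sizes and random weights are illustrative assumptions, and real parameters would be learned by backpropagation through time):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    i = sigmoid(W["i"] @ x + U["i"] @ h_prev + b["i"])        # input gate
    f = sigmoid(W["f"] @ x + U["f"] @ h_prev + b["f"])        # forget gate
    o = sigmoid(W["o"] @ x + U["o"] @ h_prev + b["o"])        # output gate
    c_tilde = np.tanh(W["c"] @ x + U["c"] @ h_prev + b["c"])  # candidate cell
    c = f * c_prev + i * c_tilde   # forget old memory, admit new values
    h = o * np.tanh(c)             # output gate scales the cell's activation
    return h, c

# Hypothetical sizes: 4 input features, 3 hidden units.
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((3, 4)) for k in "ifoc"}
U = {k: rng.standard_normal((3, 3)) for k in "ifoc"}
b = {k: np.zeros(3) for k in "ifoc"}
h, c = lstm_step(rng.standard_normal(4), np.zeros(3), np.zeros(3), W, U, b)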