1. Feature Extraction and Analysis of Natural Language Processing for Deep Learning English Language
2. Abstract
NLP (Natural Language Processing) is a technology that enables
computers to understand human languages. Deep grammatical and
semantic analysis usually takes the word as its basic unit, so word
segmentation is usually the first task in an NLP pipeline. To address the
practical problem that data modalities in a multi-modal environment
differ greatly in structure, so that traditional machine learning methods
cannot be applied directly, this paper introduces deep-learning feature
extraction and applies the ideas of deep learning to multi-modal feature
extraction. The paper proposes a multi-modal neural network: each
modality has a corresponding multilayer sub-neural network with an
independent structure.
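As an illustrative sketch of the per-modality sub-network idea (not the paper's actual code), the toy NumPy example below gives two modalities of very different raw dimensionality their own small networks, each projecting into the same shared low-dimensional space before fusion; all layer sizes and weights here are hypothetical:

```python
import numpy as np

def make_subnet(in_dim, hidden_dim, shared_dim, rng):
    """Random weights for a two-layer sub-network (illustrative only)."""
    return {
        "W1": rng.standard_normal((in_dim, hidden_dim)) * 0.1,
        "W2": rng.standard_normal((hidden_dim, shared_dim)) * 0.1,
    }

def subnet_forward(x, params):
    """Project one modality's features into the shared space (tanh MLP)."""
    h = np.tanh(x @ params["W1"])
    return np.tanh(h @ params["W2"])

rng = np.random.default_rng(0)
SHARED_DIM = 8  # assumed shared-feature dimension

# Two modalities with different raw dimensionalities,
# e.g. text (300-d) and image (2048-d) features for the same samples.
text_net = make_subnet(300, 64, SHARED_DIM, rng)
image_net = make_subnet(2048, 64, SHARED_DIM, rng)

text_feats = rng.standard_normal((5, 300))    # 5 samples, text modality
image_feats = rng.standard_normal((5, 2048))  # same 5 samples, image modality

z_text = subnet_forward(text_feats, text_net)
z_image = subnet_forward(image_feats, image_net)
fused = np.concatenate([z_text, z_image], axis=1)  # shared fusion features

print(z_text.shape, z_image.shape, fused.shape)  # (5, 8) (5, 8) (5, 16)
```

The point of the independent sub-networks is that each modality can have its own depth and width while still landing in a common space where fusion is meaningful.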
3. INTRODUCTION
With the rapid development of Internet information technology and
the continuous advancement of science, a large amount of data of
various types and structures has accumulated in everyday life and in
scientific research. In the real world, the same semantic concept can
often be observed in multiple ways, yielding data from several different
observation channels that all describe the same concept. Each such
kind of information can be called a different modality, or a different
observation perspective.
4. LITERATURE SURVEY
Deep Learning Based Single Image Super-resolution: A Survey
Authors: Viet Khanh Ha, Jin-Chang Ren, Xin-Ying Xu
Abstract: Image super-resolution is the process of obtaining one or
more high-resolution images from single or multiple samples of low-
resolution images. Due to its wide applications, a number of different
techniques have been developed, including interpolation-based,
reconstruction-based, and learning-based methods. Learning-based
methods have recently attracted increasing attention due to their
capability to predict the high-frequency details lost in low-resolution
images.
5. LITERATURE SURVEY
Automatic Modulation Classification: A Deep Learning Enabled
Approach
Authors: Fan Meng, Peng Chen, Lenan Wu.
Abstract: Automatic modulation classification (AMC), which plays
critical roles in both civilian and military applications, is investigated
in this paper through a deep learning approach. Conventional AMCs
can be categorized into maximum-likelihood-based (ML-AMC)
and feature-based AMC. However, the practical deployment of ML-
AMCs is difficult due to their high computational complexity, and
manually extracted features require expert knowledge.
6. LITERATURE SURVEY
Deep Learning for Secure Mobile Edge Computing in Cyber-
Physical Transportation Systems
Authors: Yuanfang Chen, Yan Zhang, Sabita Maharjan
Abstract: Mobile edge computing (MEC) can execute compute-
intensive applications directly at the edge of transportation networks.
As a result, communications traffic increases substantially among the
connected edge devices, and communications security is emerging as a
serious problem. As an important research issue within communications
security, this article studies active feature learning for actively
detecting unknown attacks.
7. LITERATURE SURVEY
Deep Learning Convolutional Neural Networks for the Automatic
Quantification of Muscle Fat Infiltration Following Whiplash
Injury
Authors: Weber KA, Smith AC, Wasielewski M.
Abstract: Muscle fat infiltration (MFI) of the deep cervical spine
extensors has been observed in cervical spine conditions using time-
consuming and rater-dependent manual techniques. Deep learning
convolutional neural network (CNN) models have demonstrated state-
of-the-art performance in segmentation tasks.
8. EXISTING SYSTEM
In the early days of NLP research, the main focus was on the analysis of
language structure, technology-driven machine translation, and language
recognition [6-9]. The current focus is on how NLP can be used in the real
world; the corresponding research areas include dialogue systems and social
media data. However, training deep architectures is a difficult task, and
traditional shallow methods that have proven effective cannot simply be
transplanted into deep learning with their effectiveness guaranteed [10-13].
DISADVANTAGES OF EXISTING SYSTEM:
1. The existing methods are inefficient.
9. PROPOSED SYSTEM
This paper proposes a multi-modal neural network. For each modality, there is
a multilayer sub-neural network with an independent structure, which converts
the features of that modality into features in a shared space. For word
segmentation, in view of the problems that existing methods can hardly
guarantee the long-distance dependency of text semantics and require long
training and prediction times, a hybrid-network English word segmentation
method is proposed. This method applies a BI-GRU (Bidirectional Gated
Recurrent Unit) network to English word segmentation and uses a CRF
(Conditional Random Field) model to annotate sentences as sequences,
effectively handling the long-distance dependency of text semantics while
shortening network training and prediction time.
Advantages of proposed system:
1. Effectively improves the efficiency of word segmentation processing.
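The BI-GRU tagging stage can be sketched as follows. This is a minimal, untrained NumPy illustration of a bidirectional GRU scoring each character with BMES segmentation tags; the real system would train these weights and add the CRF layer on top to enforce valid tag transitions, and all dimensions here are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, p):
    """One GRU cell step: update gate z, reset gate r, candidate state."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h)
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h)
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h))
    return (1 - z) * h + z * h_tilde

def init_gru(in_dim, hid, rng):
    g = lambda *s: rng.standard_normal(s) * 0.1
    return {"Wz": g(hid, in_dim), "Uz": g(hid, hid),
            "Wr": g(hid, in_dim), "Ur": g(hid, hid),
            "Wh": g(hid, in_dim), "Uh": g(hid, hid)}

def bigru_tag_scores(xs, fwd, bwd, W_out):
    """Run a GRU over the sequence in both directions, then score tags."""
    T, hid = len(xs), fwd["Uz"].shape[0]
    hf, hb = np.zeros(hid), np.zeros(hid)
    f_states, b_states = [], [None] * T
    for t in range(T):                      # forward pass
        hf = gru_step(xs[t], hf, fwd)
        f_states.append(hf)
    for t in reversed(range(T)):            # backward pass
        hb = gru_step(xs[t], hb, bwd)
        b_states[t] = hb
    # Concatenate both directions, project to 4 BMES tag scores per step.
    return np.stack([W_out @ np.concatenate([f, b])
                     for f, b in zip(f_states, b_states)])

rng = np.random.default_rng(1)
IN_DIM, HID, TAGS = 16, 8, 4  # BMES: Begin, Middle, End, Single
fwd, bwd = init_gru(IN_DIM, HID, rng), init_gru(IN_DIM, HID, rng)
W_out = rng.standard_normal((TAGS, 2 * HID)) * 0.1

sentence = [rng.standard_normal(IN_DIM) for _ in range(6)]  # 6 char embeddings
scores = bigru_tag_scores(sentence, fwd, bwd, W_out)
print(scores.shape)  # (6, 4): one BMES score vector per character
```

Because each character's score depends on both the forward and backward hidden states, the tagger can use context from the whole sentence, which is what lets it capture long-distance dependencies that window-based segmenters miss.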
10. REQUIREMENTS
SOFTWARE REQUIREMENTS
The functional requirements or the overall description documents
include the product perspective and features, operating system and
operating environment, graphics requirements, design constraints and
user documentation.
The appropriation of requirements and implementation constraints
gives the general overview of the project in regards to what the areas of
strength and deficit are and how to tackle them.
Python IDLE 3.7 (or)
Anaconda 3.7 (or)
Jupyter (or)
Google Colab
11. REQUIREMENTS
HARDWARE REQUIREMENTS
Minimum hardware requirements are very dependent on the particular
software being developed by a given Enthought Python / Canopy / VS
Code user. Applications that need to store large arrays/objects in
memory will require more RAM, whereas applications that need to
perform numerous calculations or tasks more quickly will require a
faster processor.
Operating system : Windows, Linux
Processor : minimum Intel i3
RAM : minimum 4 GB
Hard disk : minimum 250 GB
13. IMPLEMENTATION
In this paper, the author uses Natural Language Processing with deep
learning to perform English word segmentation, and then evaluates the
performance of two deep neural networks: BI-GRU (Bidirectional
Gated Recurrent Unit) and BI-LSTM (Bidirectional Long Short-Term
Memory). Of the two, BI-GRU takes less execution time and produces
lower loss than BI-LSTM. The neural network model that produces the
lower loss can be considered the better model.
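One reason BI-GRU trains faster than BI-LSTM is architectural: a GRU cell has three gate/candidate weight blocks where an LSTM has four. A back-of-the-envelope parameter count (layer sizes here are hypothetical, not the paper's) illustrates the difference:

```python
def gru_params(in_dim, hid):
    # 3 blocks (update gate, reset gate, candidate), each with
    # input weights + recurrent weights + bias.
    return 3 * (hid * in_dim + hid * hid + hid)

def lstm_params(in_dim, hid):
    # 4 blocks (input, forget, output gates + cell candidate).
    return 4 * (hid * in_dim + hid * hid + hid)

IN_DIM, HID = 128, 256  # assumed embedding and hidden sizes
g, l = gru_params(IN_DIM, HID), lstm_params(IN_DIM, HID)
print(g, l, g / l)  # GRU uses exactly 3/4 of the LSTM's gate parameters
```

Fewer recurrent parameters means fewer multiplications per time step in both directions of the bidirectional network, which is consistent with the shorter training and prediction times reported for BI-GRU.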
16. USECASE DIAGRAM
Upload WIKI sentence dataset
Preprocess dataset
Generate word segmentation
BI-LSTM model
Generate word segmentation
BI-GRU model
Loss comparison graph
User
Word segmentation prediction
19. SEQUENCE DIAGRAM
User System
Upload WIKI sentence dataset
Dataset loaded
Preprocess dataset
Features extracted from the dataset
Generate word segmentation BI-LSTM model
LSTM loss reduced to 1.10%
Generate word segmentation BI-GRU model
GRU loss reduced to 1.00%
Loss comparison graph
Graph displayed
Word segmentation prediction
Segmented words returned
20. COLLABORATION DIAGRAM
User
System
1: Upload WIKI sentence dataset
2: Dataset loaded
3: Preprocess dataset
4: Features extracted from the dataset
5: Generate word segmentation BI-LSTM model
6: LSTM loss reduced to 1.10%
7: Generate word segmentation BI-GRU model
8: GRU loss reduced to 1.00%
9: Loss comparison graph
10: Graph displayed
11: Word segmentation prediction
12: Segmented words returned
33. CONCLUSION
This paper proposes a multimodal shared-feature extraction algorithm
based on deep neural networks, presents the complete model structure
of the algorithm, and details the design of the model structure and the
model training method. To verify the effectiveness of the proposed
model, a series of comparative experiments were carried out. The
experimental results show that the proposed multimodal fusion feature
extraction model can effectively extract low-dimensional fusion
features from the original high-dimensional multi-modal data.
34. REFERENCES
[1] Viet Khanh Ha, Jin-Chang Ren, Xin-Ying Xu. “Deep Learning Based Single
Image Super-resolution: A Survey", International Journal of Automation and
Computing, vol.16, no.4, pp.413-426, 2019.
[2] Fan Meng, Peng Chen, Lenan Wu. “Automatic Modulation Classification: A
Deep Learning Enabled Approach", IEEE Transactions on Vehicular
Technology, vol.67, no.11, pp.10760-10772, 2018.
[3] Qing Xia, Shuai Li, Aimin Hao. “Deep Learning for Digital Geometry
Processing and Analysis: A Review", Journal of Computer Research and
Development, vol.56, no.1, pp.155-182, 2019.