Existing mobile networking systems lack the intelligence, scalability, and autonomous adaptability required to optimally enable next-generation networks such as 5G and beyond, which are expected to be Self-Organizing Networks (SONs). It is anticipated that machine learning (ML) will be instrumental in designing future “x”G SON networks with their demanding Quality of Experience (QoE) requirements. This paper evaluates a methodology that uses supervised machine learning to predict the QoE level that the end user experiences and uses this information to detect anomalous behavior of dysfunctional network nodes (eNodeBs/base stations) in self-organizing mobile networks. An end-to-end network scenario is created using the network simulator ns-3, in which end users interact with a remote host accessed over the Internet to run the most commonly used applications, such as file downloads and uploads; the resulting output is used as a dataset to implement ML algorithms for QoE prediction and eNodeB (eNB) anomaly detection. Three ML algorithms were implemented and compared to study their effectiveness and the scalability of the methodology. In the test network, an accuracy score greater than 99% is achieved using the ML algorithms. As the ns-3 simulation suggests, using ML for QoE prediction will help network operators understand end-user needs and identify network elements that are failing and need attention and recovery.
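The QoE-prediction-plus-anomaly-detection pipeline described above can be sketched in miniature. In this hedged illustration, a simple threshold classifier stands in for the trained supervised model, and the KPI names, thresholds, and eNB identifiers are invented for the example rather than taken from the paper.

```python
# Hypothetical sketch: classify per-user QoE from simulated KPIs and flag
# anomalous eNBs. Thresholds and field names are illustrative assumptions.

def qoe_class(throughput_mbps, delay_ms):
    """Map raw KPIs to a coarse QoE label (stand-in for a trained classifier)."""
    if throughput_mbps >= 5.0 and delay_ms <= 50:
        return "good"
    if throughput_mbps >= 1.0 and delay_ms <= 150:
        return "fair"
    return "poor"

def anomalous_enbs(samples, poor_ratio_threshold=0.5):
    """Flag eNBs whose share of poor-QoE users exceeds the threshold."""
    counts = {}
    for enb_id, thr, delay in samples:
        total, poor = counts.get(enb_id, (0, 0))
        counts[enb_id] = (total + 1, poor + (qoe_class(thr, delay) == "poor"))
    return sorted(e for e, (total, poor) in counts.items()
                  if poor / total > poor_ratio_threshold)

samples = [
    ("eNB-1", 8.0, 30), ("eNB-1", 6.5, 40),    # healthy cell
    ("eNB-2", 0.3, 400), ("eNB-2", 0.5, 350),  # dysfunctional cell
]
print(anomalous_enbs(samples))  # -> ['eNB-2']
```

In a real deployment the `qoe_class` rule would be replaced by the trained supervised model, with the simulated KPI traces from ns-3 as its training set.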
A COOPERATIVE LOCALIZATION METHOD BASED ON V2I COMMUNICATION AND DISTANCE INF...IJCNCJournal
Relative positioning is a recent approach to overcoming the limited accuracy of GPS in urban environments. Vehicle positions obtained using V2I communication are more accurate because the known roadside unit (RSU) locations help predict measurement errors over time. The accuracy of vehicle positions depends largely on the number of RSUs; however, the high installation cost limits the use of this approach. It also depends on the nonlinear nature of the localization problem, which several earlier studies neglected; in those studies, accumulated errors grew with time because the localization problem was treated as linear. In the present study, a cooperative localization method based on V2I communication and distance information in vehicular networks is proposed for improving the estimates of vehicles’ initial positions. This method assumes that virtual RSUs based on mobility measurements help reduce installation costs and facilitate handling fault environments. The extended Kalman filter is a well-known estimator for nonlinear problems, but it requires a good initial vehicle position vector and adaptive measurement noise. Using the proposed method, vehicles’ initial positions can be estimated accurately. The experimental results confirm that the proposed method achieves better accuracy than existing methods, giving a root mean square error of approximately 1 m. In addition, it is shown that virtual RSUs can assist in estimating initial positions in fault environments.
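The extended Kalman filter update that such methods build on can be sketched for a single range measurement to one (possibly virtual) RSU at a known location. This is a generic EKF measurement update, not the paper's algorithm; the coordinates, covariance, and noise values are illustrative assumptions.

```python
import math

# Illustrative single EKF measurement update for a 2-D vehicle position using
# one range measurement to an RSU at a known location.

def ekf_range_update(x, P, rsu, z, R):
    """x: [px, py] estimate, P: 2x2 covariance, rsu: (rx, ry),
    z: measured range, R: measurement noise variance."""
    dx, dy = x[0] - rsu[0], x[1] - rsu[1]
    d = math.hypot(dx, dy)                  # predicted range h(x)
    H = [dx / d, dy / d]                    # Jacobian of h at the estimate
    PHt = [P[0][0] * H[0] + P[0][1] * H[1],
           P[1][0] * H[0] + P[1][1] * H[1]]
    S = H[0] * PHt[0] + H[1] * PHt[1] + R   # innovation covariance
    K = [PHt[0] / S, PHt[1] / S]            # Kalman gain
    y = z - d                               # innovation
    x_new = [x[0] + K[0] * y, x[1] + K[1] * y]
    P_new = [[P[0][0] - K[0] * PHt[0], P[0][1] - K[0] * PHt[1]],
             [P[1][0] - K[1] * PHt[0], P[1][1] - K[1] * PHt[1]]]
    return x_new, P_new

# A rough initial estimate is refined toward the true position (3, 4),
# given a 5 m range measurement to an RSU at the origin.
x, P = [2.0, 3.0], [[4.0, 0.0], [0.0, 4.0]]
x, P = ekf_range_update(x, P, rsu=(0.0, 0.0), z=5.0, R=0.1)
```

Each such update shrinks both the position error and the covariance; the paper's contribution concerns supplying the good initial position vector this filter needs.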
A Network and Position Proposal Scheme using a Link-16 based C3I SystemUniversity of Piraeus
The smart use of high-end military technological solutions in daily activities makes people’s lives better. This paper describes a network and position proposal scheme with respect to technical networking and positioning information. A Link-16 based Command, Control, Communication and Intelligence (C3I) system is established among the mobile devices. Each device knows its geographical position using its GPS. A network, along with a possibly good position for the user’s service, is proposed, fulfilling his or her requirements for comfortable work.
NEW TECHNOLOGY FOR MACHINE TO MACHINE COMMUNICATION IN SOFTNET TOWARDS 5Gijwmn
Machine to Machine communication, or M2M, refers to a model of communication in which devices communicate directly with each other using the available wired or wireless channels. M2M is a new concept proposed under 3GPP (3rd Generation Partnership Project), and several research groups are working on providing solutions for M2M communication in 5G networks. Challenges associated with M2M communication include the lack of standards, security, poor infrastructure, interoperability, and diverse architectures. In this paper, we propose a new mechanism called TM2M5G (The Machine to Machine for 5G) based on the SOFTNET platform, which supports 5G heterogeneous networks. We propose an architecture for M2M communication based on SOFTNET and provide new feature support, such as security algorithms for data transmission among devices and a scheduling algorithm for seamless transmission of data packets over the network. Finally, simulation results for this algorithm based on a system-level simulator, considering two different approaches for analyzing parameters such as delay, throughput, and bandwidth, are presented.
Design and development of handover simulator model in 5G cellular network IJECEIAES
In the modern era of technology, high-speed Internet is one of the most important parts of human life. The currently available network is reckoned to be slow and inadequate for data transmission in business applications. The objective of the handover mechanism is to reassign the current session handled by an Internet device. The world needs a next-generation Internet model based on high mobility and throughput performance. This research paper explains the proposed method of design and development for a handover-based 5G cellular network. In contrast to the traditional method, we propose to control the handovers between base stations using a concentric method. The channel simulator is applied over the range of frequencies from 500 MHz to 150 GHz and a radio frequency of 700 MHz bandwidth. The performance of the simulation system is calculated on the basis of handover preparation and completion time with respect to the base station as well as the number of users. From this experiment we achieve a handover preparation time of 7.08 ms and a handover completion time of 9.98 ms. The authors recommend minimizing the handover completion time to achieve high speed in 5G cellular networks.
MAR SECURITY: IMPROVED SECURITY MECHANISM FOR EMERGENCY MESSAGES OF VANET USI...IJCNCJournal
Vehicular Ad-hoc Networks (VANETs) are one of the emerging technologies presenting the research community with various challenges in constructing a secured framework for autonomous vehicular communication. The prime concern of this technology is to provide efficient data communication among registered vehicle nodes. Several research ideas have been implemented in practice to improve overall communication in VANETs by considering security and privacy as major aspects. Several mechanisms have been implemented using cryptography algorithms and methodologies. However, these mechanisms provide a solution only for some restricted environments and against limited security threats. Hence, a novel mechanism has been introduced, implemented, and tested using a key management technique. It provides a secured network environment for a VANET and its components. This mechanism then secures the data packets of emergency messages using a cryptography mechanism. Hence, the proposed novel mechanism is named Group Key Management & Cryptography Schemes (GKMC). The experimental analysis shows significant improvements in network performance in providing security and privacy for emergency messages. The GKMC mechanism will help VANET users perform secured emergency message communication in the network environment.
ON THE PERFORMANCE OF INTRUSION DETECTION SYSTEMS WITH HIDDEN MULTILAYER NEUR...IJCNCJournal
Deep learning applications, especially multilayer neural network models, achieve network intrusion detection with high accuracy. This study proposes a model that combines a multilayer neural network with Dense-Sparse-Dense (DSD) multi-stage training to simultaneously improve the criteria related to the performance of intrusion detection systems on the comprehensive UNSW-NB15 dataset. We conduct experiments on many neural network models, such as the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU), to evaluate the combined efficiency with each model through criteria such as accuracy, detection rate, false alarm rate, precision, and F1-score.
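The Dense-Sparse-Dense training schedule mentioned above can be illustrated on a bare weight vector: train densely, prune the smallest-magnitude weights and retrain under the sparsity mask, then release the mask and fine-tune densely. The "training" below is a dummy gradient step on made-up numbers, only to show the mask mechanics; real DSD applies this to a neural network's layers.

```python
# Toy illustration of the Dense-Sparse-Dense (DSD) idea; values are invented.

def prune_mask(weights, sparsity):
    """Keep only the largest-magnitude (1 - sparsity) fraction of weights."""
    k = int(len(weights) * sparsity)                 # number of weights to zero
    cutoff = sorted(abs(w) for w in weights)[k - 1] if k else -1.0
    return [abs(w) > cutoff for w in weights]

def sgd_step(weights, grads, lr=0.1, mask=None):
    """One gradient step; a mask (sparse phase) freezes pruned weights at zero."""
    out = [w - lr * g for w, g in zip(weights, grads)]
    if mask is not None:
        out = [w if m else 0.0 for w, m in zip(out, mask)]
    return out

w = [0.9, -0.05, 0.4, 0.02]            # result of the first dense phase (given)
mask = prune_mask(w, sparsity=0.5)     # sparse phase: drop the 2 smallest
w = sgd_step(w, grads=[0.1, 0.1, -0.1, 0.1], mask=mask)
assert w[1] == 0.0 and w[3] == 0.0     # pruned weights are held at zero
w = sgd_step(w, grads=[0.0, -0.2, 0.0, -0.2])   # final dense phase: weights revive
```

The point of the final dense phase is that the previously pruned weights re-enter training from zero, which is where the reported accuracy gains of DSD come from.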
Performance analysis of massive multiple input multiple output for high speed...IJECEIAES
This paper analytically reviews the performance of massive multiple input multiple output (MIMO) systems for communication in high-mobility scenarios such as high-speed railways. As the popularity of high-speed trains increases day by day, high-data-rate wireless communication systems for high-speed trains are urgently required. 5G wireless communication systems must be designed to meet the requirement of high-speed broadband services at speeds of around 500 km/h, the expected speed achievable by HSR systems, at a data rate of 180 Mbps or higher. Significant challenges of high-mobility communications are fast time-varying fading, channel estimation errors, Doppler diversity, carrier frequency offset, inter-carrier interference, high penetration loss, and fast and frequent handovers. Therefore, there is a crucial requirement to design high-mobility communication channel models and systems. Recently, massive MIMO techniques have been proposed to significantly improve the performance of wireless networks for upcoming 5G technology. Massive MIMO provides high throughput and high energy efficiency in the wireless communication channel. In this paper, key findings, challenges, and requirements for providing high-speed wireless communication on board a high-speed train are pointed out after a thorough literature review. Finally, future research scope for bridging the research gap by designing an efficient channel model using massive MIMO and other optimization methods is mentioned.
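The Doppler challenge cited above can be quantified with the standard maximum-Doppler-shift formula f_d = v·f_c/c. The 2.6 GHz carrier below is an illustrative value, not one taken from the paper.

```python
# Worked example: maximum Doppler shift for a train at 500 km/h.
C = 299_792_458.0        # speed of light, m/s

def max_doppler_hz(speed_kmh, carrier_hz):
    """f_d = v * f_c / c, with speed converted from km/h to m/s."""
    return speed_kmh / 3.6 * carrier_hz / C

# At an assumed 2.6 GHz carrier, a 500 km/h train sees ~1.2 kHz of Doppler.
print(round(max_doppler_hz(500, 2.6e9)))  # -> 1205
```

A shift of this size is comparable to LTE/5G subcarrier spacings, which is why fast time-varying fading and inter-carrier interference dominate the HSR channel-modeling problem.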
A COMBINATION OF THE INTRUSION DETECTION SYSTEM AND THE OPEN-SOURCE FIREWALL ...IJCNCJournal
There are many security models for computer networks using a combination of an Intrusion Detection System and a firewall that have been proposed and deployed in practice. In this paper, we propose and implement a new model of association between Intrusion Detection System and firewall operations, which allows the Intrusion Detection System to automatically update the firewall filtering rule table whenever it detects an anomalous intrusion. This helps protect the network from attacks from the Internet.
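The IDS-to-firewall coupling described here reduces to a small event handler: when the detector flags a source address, a drop rule is appended to the firewall's filter table. The detector interface and rule format below are placeholders, not the paper's actual components.

```python
# Minimal sketch of automatic firewall rule insertion on an IDS alert.
filter_table = []            # stand-in for the firewall's filtering rule table

def on_intrusion_detected(src_ip):
    """Called by the IDS when traffic from src_ip is classified as an intrusion."""
    rule = ("DROP", src_ip)
    if rule not in filter_table:        # avoid duplicate rules for repeat alerts
        filter_table.append(rule)
        # Against a real open-source firewall this step would invoke its CLI,
        # e.g. `iptables -A INPUT -s <src_ip> -j DROP` on an iptables host.

on_intrusion_detected("203.0.113.7")
on_intrusion_detected("203.0.113.7")    # duplicate alert: table unchanged
print(filter_table)  # -> [('DROP', '203.0.113.7')]
```

Deduplicating rules matters in practice because an IDS typically raises many alerts per attacking host, and firewall rule tables are scanned linearly.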
There are a number of cluster-based routing algorithms in mobile ad hoc networks. Since ad hoc networks are not accompanied by fixed access points, efficient routing is a must for such networks. The clustering approach is applied in mobile ad hoc networks because clusters are more easily manageable and more viable. It consists of segregating the given network into several reasonable clusters using a clustering algorithm. By performing clustering we elect a worthy node from each cluster as the cluster head, in such a way that we strive to reduce the management overhead and thus increase the efficiency of routing. Given that nodes in a mobile ad hoc network undergo frequent host changes and frequent topology changes, routing plays an important role in maintenance and backup mechanisms that stabilize network performance. This paper aims to review previous research papers and provide a survey of the various cluster-based routing protocols in mobile ad hoc networks, presenting an analytical study of cluster-based routing algorithms from the literature. Index Terms— Ad-hoc networks, Cluster head, Clustering, Protocol, Route selection.
Performance evaluation of interference aware topology power and flow control ...IJECEIAES
The Multi-Radio Multi-Channel Wireless Mesh Network (MRMC-WMN) has been considered one of the key technologies for enhancing network performance. It is used in a number of real-time applications such as disaster management, transportation, and health care systems. An MRMC-WMN is a multi-hop network and allows simultaneous data transfer by using multiple radio interfaces. The radio interfaces are typically assigned different channels to reduce the effect of co-channel interference. In an MRMC-WMN, when two nodes within range of each other transmit on the same channel, co-channel interference is generated and network throughput degrades. Co-channel interference badly affects the capacity of each link, which reduces overall network performance. Thus, the important task of a channel assignment algorithm is to reduce co-channel interference and enhance network performance. In this paper, the channel assignment problem is addressed for MRMC-WMNs. We have proposed an Interference Aware, Topology, Power and Flow Control (ITPFC) channel assignment algorithm for MRMC-WMNs. This algorithm assigns suitable channels to nodes, which provides better link capacity and reduces co-channel interference. In previous work, the performance of the proposed algorithm was evaluated for a network of 30 nodes. The aim of this paper is to further evaluate the performance of the proposed channel assignment algorithm for 40- and 50-node networks. The results obtained from these networks show consistent performance in terms of throughput, delay, packet loss, and number of channels used per node, as compared to the LACA, FCPRA, and IATC channel assignment algorithms.
AN EFFICIENT INTRUSION DETECTION SYSTEM WITH CUSTOM FEATURES USING FPA-GRADIE...IJCNCJournal
An efficient Intrusion Detection System has to be given high priority when connecting systems to a network, to protect the system before an attack happens. It is a big challenge for the network security community to protect systems from a variety of new attacks as technology grows in parallel. In this paper, an efficient model to detect intrusions is proposed that predicts attacks with high accuracy and a low false-negative rate by deriving custom features, UNSW-CF, from the benchmark intrusion dataset UNSW-NB15. To reduce the learning complexity, custom features are derived and significant features are then constructed by applying the meta-heuristic FPA (Flower Pollination Algorithm) and MRMR (Minimal Redundancy Maximal Relevance), which reduces learning time and also increases prediction accuracy. ENC (ElasticNet Classifier), KRRC (Kernel Ridge Regression Classifier), and IGBC (Improved Gradient Boosting Classifier) are employed to classify the attacks in the UNSW-CF and UNSW datasets; it is recorded that UNSW-CF with derived custom features, using IGBC integrated with FPA, provided a high accuracy of 97.38% and a low error rate of 2.16%. Also, the sensitivity and specificity rates for IGB reach high rates of 97.32% and 97.50%, respectively.
Reduction of Topology Control Using Cooperative Communications in ManetsIJMER
The International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
ANALYSIS AND STUDY OF MATHEMATICAL MODELS FOR RWA PROBLEM IN OPTICAL NETWORKSIJEEE
Blocking probability has been one of the key performance indexes for solving the Routing and Wavelength Assignment (RWA) problem in the design of wavelength-routed all-optical WDM networks. To evaluate blocking probability, different analytical models are introduced.
In this paper, we examine a WiMAX-based network and evaluate its performance in terms of quality of service (QoS) using IEEE 802.16 technology. In our models, the study used a multiprocessor architecture organized by the interconnection network. OPNET Modeler is used to simulate the architecture and to calculate the performance criteria (i.e., throughput, delay, and dropped data) that are of concern in network estimation. It is concluded that our models considerably shorten the time for obtaining performance measures of end-to-end delay as well as throughput, and can be used as an effective tool for this purpose.
Novel Position Estimation using Differential Timing Information for Asynchron...IJCNCJournal
Positioning techniques have been a common objective since the early development of wireless networks. However, current positioning methods in cellular networks, for instance, are still primarily focused on the use of the Global Navigation Satellite System (GNSS), which has several limitations, such as high power drain and failure in indoor scenarios. This study introduces a novel approach employing standard LTE signaling to provide high-accuracy position estimation. The proposed technique is designed in analogy to the human sound localization system, eliminating the need for information from three spatially diverse Base Stations (BSs); it is inspired by accurate human 3D sound localization with only two ears. A field study was carried out in a dense urban city to verify the accuracy of the proposed technique, with more than 20 thousand measurement samples collected. The achieved positioning accuracy meets the latest Federal Communications Commission (FCC) requirements in the planar dimension.
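The differential-timing idea underlying such two-station approaches can be illustrated with the basic geometry: the arrival-time difference of one signal at two base stations fixes the range difference, which constrains the user to a hyperbola. The sketch below reduces this to the one-dimensional case (user on the baseline between the two BSs) for clarity; it is a textbook TDOA calculation, not the paper's estimator, and the distances are invented.

```python
# Illustrative differential-timing (TDOA) calculation in 1-D.
C = 299_792_458.0  # speed of light, m/s

def position_on_baseline(bs1_x, bs2_x, dt):
    """User on the line between two BSs at bs1_x < bs2_x.
    dt = t1 - t2 (arrival at BS1 minus BS2); solves d1 - d2 = c*dt
    together with d1 + d2 = bs2_x - bs1_x on the baseline."""
    delta = C * dt                  # range difference d1 - d2
    L = bs2_x - bs1_x
    return bs1_x + (L + delta) / 2.0

# User at x = 400 m between BSs at 0 and 1000 m: d1 - d2 = 400 - 600 = -200 m
dt = -200.0 / C
print(position_on_baseline(0.0, 1000.0, dt))  # ≈ 400.0, the true position
```

In 2-D, each BS pair yields one hyperbola rather than a point, which is why conventional TDOA needs three stations; the paper's contribution is to resolve the ambiguity with only two.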
Next-generation wireless networks comprise mobile users moving between heterogeneous networks, using terminals with multiple access interfaces and services. The most important issue in such an environment is ABC (Always Best Connected), i.e., allowing the best connectivity to applications anywhere at any time. To satisfy the always-best-connected requirement, various vertical handover strategies for decision making have been proposed. This paper provides an overview of the most interesting and recent strategies.
TEST-COST-SENSITIVE CONVOLUTIONAL NEURAL NETWORKS WITH EXPERT BRANCHESsipij
It has been proven that deeper convolutional neural networks (CNNs) can achieve better accuracy in many problems, but this accuracy comes at a high computational cost. Moreover, input instances do not all have the same difficulty. As a solution to the accuracy vs. computational cost dilemma, we introduce a new test-cost-sensitive method for convolutional neural networks. This method trains a CNN with a set of auxiliary outputs and expert branches in some middle layers of the network. The expert branches decide whether to use a shallower part of the network or to go deeper to the end, based on the difficulty of the input instance. The expert branches learn to determine whether the current network prediction is wrong and whether passing the given instance to deeper layers of the network would generate the right output; if not, the expert branches stop the computation process. The experimental results on the standard CIFAR-10 dataset show that the proposed method can train models with a lower test cost and competitive accuracy in comparison with the basic models.
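The early-exit control flow behind such expert branches can be sketched with two stand-in classifiers: a cheap shallow stage produces a prediction with a confidence score, and the expert rule sends only low-confidence (hard) instances to the expensive deep stage. The classifiers, threshold, and inputs are invented for the example; in the paper the decision is itself learned, not a fixed threshold.

```python
# Hypothetical early-exit sketch of the expert-branch idea.

def shallow_stage(x):
    """Cheap classifier stand-in: returns (prediction, confidence in [0, 1])."""
    return ("cat" if x < 0.5 else "dog", abs(x - 0.5) * 2)

def deep_stage(x):
    """Expensive classifier stand-in, assumed more accurate on hard inputs."""
    return "cat" if x < 0.45 else "dog"

def predict(x, confidence_threshold=0.6):
    pred, conf = shallow_stage(x)
    if conf >= confidence_threshold:    # expert branch: stop computation early
        return pred, "shallow"
    return deep_stage(x), "deep"        # hard instance: continue to the end

print(predict(0.05))  # easy instance -> ('cat', 'shallow')
print(predict(0.48))  # hard instance -> ('dog', 'deep')
```

The test-time saving comes from the fraction of instances that exit at the shallow stage; accuracy is preserved because only the confidently handled instances skip the deep layers.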
PERFORMANCE EVALUATION OF VERTICAL HARD HANDOVERS IN CELLULAR MOBILE SYSTEMSijngnjournal
With the rapid increase of new and diverse cellular mobile services, the overlapping of cells has become typical in the majority of the coverage area of the network. Vertical handovers occur between two layers of cells when a user is switched from one layer to the other. In this paper we investigate the influence of network parameters on vertical hard handover performance in a cell environment. The work considers two layers of cells: a layer of macrocells and a layer of microcells. Handover requests enter the macrocell from neighboring macrocells and from microcells that belong to a different layer. Using Markov chain analysis and simulation, we calculate network performance parameters such as mean queue delay, handover dropping probability, and channel utilization. We also compare the handover performance for the macrocell and microcell traffic separately. Our results show the influence of the total number of channels, maximum queue size, and handover request arrival rate on handover performance. They also show that when the traffic from each layer is treated with equal priority in the system, the performance of each layer is comparable.
Channel encoding system for transmitting image over wireless network IJECEIAES
Various encoding schemes have been introduced to date focusing on effective image transmission in the presence of error-prone artifacts in the wireless communication channel. A review of existing channel encoding systems shows that they are mostly inclined toward compression and pay less attention to the problem of superior signal retention, as they lack an essential consideration of network states. Therefore, this manuscript introduces a cost-effective lossless encoding scheme that ensures resilient transmission of different forms of images. Adopting an analytical research methodology, modeling has been carried out to ensure that a novel series of encoding operations is performed on an image, followed by an effective indexing mechanism. The study outcome confirms that the proposed system outshines existing encoding schemes in every respect.
A CELLULAR BONDING AND ADAPTIVE LOAD BALANCING BASED MULTI-SIM GATEWAY FOR MO...pijans
As is well known, the QoS (quality of service) provided by mobile Internet access point devices is far from the QoS level offered by the common ADSL modem-router, for several reasons: mobile Internet access networks are not designed to support real-time data traffic because of several drawbacks of the wireless medium, such as resource sharing, traffic congestion, and radio link coverage, which directly impact parameters such as delay, jitter, and packet loss rate that are strictly connected to the quality of user experience. The main scope of the present paper is to introduce a dual-USIM HSPA gateway for ad hoc and sensor networks, thanks to which it will be possible to guarantee a QoS suitable for a series of network-centric applications such as real-time communications and monitoring, video surveillance, real-time sensor networks, telemedicine, and vehicular and mobile sensor networks. The main idea is to exploit multiple radio access networks in order to enhance the available end-to-end bandwidth and the perceived quality of experience. This goal has been achieved by combining multiple radio access with dynamic load balancing and the VPN (virtual private network) bonding technique.
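The load-balancing half of the idea can be sketched as a weighted packet scheduler: outgoing packets are split across the two cellular uplinks in proportion to their measured bandwidths. The credit-based scheduler and the bandwidth figures below are illustrative assumptions, not the gateway's actual algorithm.

```python
# Simplified adaptive load-balancing sketch across two cellular uplinks.

def schedule(packets, bw_a, bw_b):
    """Credit-based weighted scheduler: each packet goes to the link with the
    larger accumulated credit; credits refill from the measured bandwidths."""
    credit_a = credit_b = 0.0
    out = []
    for p in packets:
        credit_a += bw_a
        credit_b += bw_b
        if credit_a >= credit_b:
            out.append(("link_a", p)); credit_a -= bw_a + bw_b
        else:
            out.append(("link_b", p)); credit_b -= bw_a + bw_b
    return out

# With a 6 Mbps and a 3 Mbps uplink, the faster link carries ~2/3 of packets.
assignment = schedule(range(10), bw_a=6.0, bw_b=3.0)
share_a = sum(1 for link, _ in assignment if link == "link_a") / len(assignment)
print(share_a)  # -> 0.7 here; tends to bw_a/(bw_a+bw_b) as the run lengthens
```

In the bonded-VPN setting, a scheduler like this runs inside the tunnel so that the remote endpoint can reassemble one ordered stream from both links.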
Traffic Congestion Prediction using Deep Reinforcement Learning in Vehicular ...IJCNCJournal
In recent years, a new kind of wireless network called the vehicular ad-hoc network (VANET) has become a popular research topic. A VANET allows communication among vehicles and with roadside units by providing information such as vehicle velocity, location, and direction. In general, when many vehicles are likely to use a common route to proceed to the same destination, this can lead to a congested route that should be avoided. It would be better if vehicles were able to accurately predict traffic congestion and then avoid it. Therefore, in this work, deep reinforcement learning in VANETs is proposed to enhance the ability to predict traffic congestion on the roads. Furthermore, different types of neural networks, namely the Convolutional Neural Network (CNN), Multilayer Perceptron (MLP), and Long Short-Term Memory (LSTM), are investigated and compared in this deep reinforcement learning model to discover the most effective one. Our proposed method is tested by simulation. The traffic scenarios are created using the traffic simulator Simulation of Urban Mobility (SUMO) before being integrated with the deep reinforcement learning model. The simulation procedures, as well as the programming used, are described in detail. The performance of our proposed method is evaluated using two metrics: the average travelling time delay and the average waiting time delay of vehicles. According to the simulation results, the average travelling time delay and average waiting time delay gradually improve over multiple runs, since our proposed method receives feedback from the environment. In addition, the results without and with three different deep learning algorithms, i.e., CNN, MLP, and LSTM, are compared. It is evident that the deep reinforcement learning model works effectively when traffic density is neither too high nor too low. It can also be concluded that the effective algorithms for traffic congestion prediction models, in descending order, are MLP, CNN, and LSTM.
QoS controlled capacity offload optimization in heterogeneous networksjournalBEEI
An efficient resource allocation mechanism in the physical layer of wireless networks ensures that resources such as bandwidth and power are used with high efficiency while maintaining low delay and a high edge-user data rate. Microcells in the network are typically configured with bias settings that artificially increase the Signal-to-Interference-plus-Noise Ratio, thus encouraging users to offload to the microcell. However, these artificial bias settings are tedious to tune and often suboptimal. This work presents a low-complexity algorithm for maximizing network capacity with load balancing in a heterogeneous network without the need for bias settings. The small cells were deployed in a grid topology at a selected distance from the macrocell to enhance network capacity through coverage overlap. User association and minimum user throughput were incorporated as constraints to bring the simulation closer to real-world Quality of Service requirements. The results showed that the proposed algorithm was able to maintain a user drop rate below 10%. The proposed algorithm can increase user confidence, maintain load balancing and scalability, and reduce the power consumption of the wireless network.
Approximation of regression-based fault minimization for network trafficTELKOMNIKA JOURNAL
This research compares three distinct approaches to computer network traffic prediction: traditional stochastic gradient descent (SGD), which uses a few random samples instead of the complete dataset in each iteration; the gradient descent algorithm (GDA), a well-known optimization approach in deep learning; and the proposed method. The network traffic is computed from the traffic load (data and multimedia) of the computer network nodes via the Internet. SGD is simple per iteration but can settle on suboptimal solutions. GDA is more complex and can be more accurate than SGD, but its parameters, such as the learning rate, the dataset granularity, and the loss function, are difficult to tune. Network traffic estimation helps improve performance and lower costs for various applications, such as adaptive rate control, load balancing, quality of service (QoS), fair bandwidth allocation, and anomaly detection. The proposed method confirms optimal parameter values by simulation, computing the minimum of the specified loss function in each iteration.
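As a minimal sketch of the trade-off this abstract describes, the following compares full-batch gradient descent with mini-batch SGD on a synthetic least-squares fit; the data, learning rates, and iteration counts are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "traffic load" regression data: y = 2x + 1 + noise
X = rng.uniform(0, 10, size=(200, 1))
y = 2.0 * X[:, 0] + 1.0 + rng.normal(0, 0.5, size=200)
Xb = np.hstack([X, np.ones((200, 1))])  # add a bias column

def full_batch_gd(Xb, y, lr=0.01, epochs=2000):
    """Exact MSE gradient over the whole dataset each step (GDA-style)."""
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        grad = 2 * Xb.T @ (Xb @ w - y) / len(y)
        w -= lr * grad
    return w

def sgd(Xb, y, lr=0.005, epochs=2000, batch=8):
    """Noisy gradient estimated from a small random mini-batch each step."""
    w = np.zeros(Xb.shape[1])
    step_rng = np.random.default_rng(1)
    for _ in range(epochs):
        idx = step_rng.choice(len(y), size=batch, replace=False)
        grad = 2 * Xb[idx].T @ (Xb[idx] @ w - y[idx]) / batch
        w -= lr * grad
    return w

w_gd = full_batch_gd(Xb, y)
w_sgd = sgd(Xb, y)
print(w_gd, w_sgd)  # both should approach the true parameters [2, 1]
```

Full-batch descent converges smoothly; the mini-batch variant is cheaper per step but ends with residual noise around the optimum, which is the suboptimality the abstract alludes to.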
Mobile Agents based Energy Efficient Routing for Wireless Sensor NetworksEswar Publications
Energy efficiency and prolonged network lifetime are among the major areas of concern. The energy consumption rate of sensor nodes can be reduced in various ways. Data aggregation, result sharing, and filtration of aggregated data among sensor nodes deployed in unattended regions have been some of the most researched areas in the field of wireless sensor networks. While data aggregation is concerned with minimizing the information transfer from source to sink to reduce network traffic and relieve congestion, result sharing focuses on sharing information among agents relevant to the tasks at hand, and filtration removes redundant information from the aggregated data. Various algorithms exist for data aggregation and filtration using different mobile agents. In the proposed work, the same mobile agent performs both tasks, data aggregation and data filtration. This approach advocates sharing resources and reducing the energy consumption of sensor nodes.
Optical networking is an emerging technology for data communication worldwide. Information is transmitted from source to destination through optical fibers. An all-optical network (AON) provides good transmission transparency, good expandability, large bandwidth, a low bit error rate (BER), and high processing speed. Link failures and node failures occur consistently under traditional methods. To overcome these issues, this paper proposes a robust software-defined switching-enabled fault localization framework (SDSFLF) to monitor node and link failures in an AON. In this work, a novel faulty node localization (FNL) algorithm is used to locate faulty nodes. Then, the software-defined faulty link detection (SDFLD) algorithm addresses the problem of link failure. The failures are localized in multi-traffic-stream (MTS) and multi-agent-system (MAS) settings. Throughput is thus improved in SDSFLF compared with other existing methods such as traditional routing and wavelength assignment (RWA), the simulated annealing (SA) algorithm, attack-aware RWA (A-RWA) convex, longest path first (LPF) ordering, and biggest source-destination node degree (BND) ordering. The performance of the proposed algorithm is evaluated in terms of network load, wavelength utilization, packet loss rate, and burst loss rate. The proposed SDSFLF thereby achieves higher performance than other traditional techniques.
Cross Layering using Reinforcement Learning in Cognitive Radio-based Industri...IJCNCJournal
The coupling of multiple protocol layers in a Cognitive Radio-based Industrial Internet of Ad-hoc Sensor Network enables better interaction, coordination, and joint optimization of different protocols, achieving remarkable performance improvements. In this paper, network and medium access control (MAC) layer functionalities are cross-layered by developing a joint strategy of routing, effective spectrum sensing, and Dynamic Channel Selection (DCS) using a Reinforcement Learning (RL) algorithm. In an industrial ad-hoc scenario, the network layer uses the spectrum sensed and channel selected by the MAC layer for next-hop routing. The MAC layer uses the lowest known single-hop transmission delay of a channel, as computed by the network layer, which improves MAC channel selection. The applied RL-based technique (Q-learning) enables the CR Secondary Users (SUs) to sense, learn, and make optimal decisions about their operating environment. The proposed RLCLD schemes improve SU network performance by up to 30% compared to conventional methods.
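As an illustrative sketch only (the idle probabilities, reward, and single-state bandit form are invented here, not the paper's RLCLD scheme), Q-learning for channel selection by a secondary user reduces to an epsilon-greedy value update:

```python
import random

random.seed(0)

N_CHANNELS = 4
# Illustrative per-channel probability that the primary user is idle
IDLE_PROB = [0.2, 0.5, 0.9, 0.4]

Q = [0.0] * N_CHANNELS   # Q-value per channel (single-state bandit form)
alpha, eps = 0.05, 0.1   # learning rate and exploration rate

for step in range(5000):
    # epsilon-greedy channel selection
    if random.random() < eps:
        a = random.randrange(N_CHANNELS)
    else:
        a = max(range(N_CHANNELS), key=lambda c: Q[c])
    # reward 1 if the sensed channel is idle (successful transmission)
    r = 1.0 if random.random() < IDLE_PROB[a] else 0.0
    Q[a] += alpha * (r - Q[a])  # incremental value update

best = max(range(N_CHANNELS), key=lambda c: Q[c])
print(best, [round(q, 2) for q in Q])
```

After enough interactions, the learned values track each channel's idle probability, so the SU converges on the channel the primary user occupies least.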
A COMBINATION OF THE INTRUSION DETECTION SYSTEM AND THE OPEN-SOURCE FIREWALL ...IJCNCJournal
Many security models for computer networks combining an Intrusion Detection System and a Firewall have been proposed and deployed in practice. In this paper, we propose and implement a new model of the association between Intrusion Detection System and Firewall operations, which allows the Intrusion Detection System to automatically update the firewall's filtering rule table whenever it detects a new intrusion. This helps protect the network from attacks originating from the Internet.
There are a number of cluster-based routing algorithms for mobile ad hoc networks. Since ad hoc networks have no fixed access points, efficient routing is essential for such networks. The clustering approach is applied in mobile ad hoc networks because clusters are more easily managed and more viable. It consists of segregating the given network into several reasonable clusters using a clustering algorithm. By performing clustering, we elect a worthy node from each cluster as the cluster head, striving to reduce management overheads and thus increase routing efficiency. Because nodes in a mobile ad hoc network experience frequent host and topology changes, routing plays an important role in maintenance and backup mechanisms that stabilize network performance. This paper reviews previous research and provides a survey of the various cluster-based routing protocols in mobile ad hoc networks, presenting an analytical study of cluster-based routing algorithms from the literature. Index Terms: ad hoc networks, cluster head, clustering, protocol, route selection.
Performance evaluation of interference aware topology power and flow control ...IJECEIAES
The Multi-Radio Multi-Channel Wireless Mesh Network (MRMC-WMN) has been considered one of the key technologies for enhancing network performance. It is used in a number of real-time applications such as disaster management, transportation, and health care systems. An MRMC-WMN is a multi-hop network and allows simultaneous data transfer using multiple radio interfaces. The radio interfaces are typically assigned different channels to reduce the effect of co-channel interference. In an MRMC-WMN, when two nodes within range of each other transmit on the same channel, co-channel interference arises and degrades network throughput. Co-channel interference severely affects the capacity of each link, reducing overall network performance. Thus, the important task of a channel assignment algorithm is to reduce co-channel interference and enhance network performance. In this paper, the channel assignment problem is addressed for the MRMC-WMN. We have proposed an Interference Aware, Topology, Power and Flow Control (ITPFC) channel assignment algorithm for the MRMC-WMN. This algorithm assigns suitable channels to nodes, providing better link capacity and reducing co-channel interference. In previous work, the performance of the proposed algorithm was evaluated for a network of 30 nodes. The aim of this paper is to further evaluate its performance for networks of 40 and 50 nodes. The results from these networks show consistent performance in terms of throughput, delay, packet loss, and number of channels used per node compared to the LACA, FCPRA, and IATC channel assignment algorithms.
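A hedged sketch of the general idea behind interference-aware channel assignment (not the ITPFC algorithm itself): greedily color an illustrative interference graph so that neighboring nodes avoid sharing a channel.

```python
# Interference graph: nodes are mesh routers, edges connect nodes that are
# within interference range of each other (illustrative topology).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n_nodes, n_channels = 4, 3

neighbors = {i: set() for i in range(n_nodes)}
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)

assignment = {}
for node in range(n_nodes):
    # Greedy: pick the lowest channel not used by any already-assigned neighbor;
    # if none is free, pick the channel with the fewest neighbor conflicts.
    used = {assignment[m] for m in neighbors[node] if m in assignment}
    free = [c for c in range(n_channels) if c not in used]
    if free:
        assignment[node] = free[0]
    else:
        assignment[node] = min(
            range(n_channels),
            key=lambda c: sum(assignment.get(m) == c for m in neighbors[node]))

# Count co-channel conflicts: edges whose endpoints share a channel
conflicts = sum(assignment[u] == assignment[v] for u, v in edges)
print(assignment, conflicts)
```

On this small topology the greedy pass finds a conflict-free assignment; real channel assignment additionally weighs link capacity, power, and flow, which this sketch omits.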
AN EFFICIENT INTRUSION DETECTION SYSTEM WITH CUSTOM FEATURES USING FPA-GRADIE...IJCNCJournal
An efficient Intrusion Detection System must be given high priority when connecting systems to a network, to protect them before an attack happens. It is a major challenge for network security teams to protect systems from the many new types of attacks that emerge as technology grows. In this paper, an efficient intrusion detection model is proposed to predict attacks with high accuracy and a low false-negative rate by deriving the custom feature set UNSW-CF from the benchmark intrusion dataset UNSW-NB15. To reduce learning complexity, custom features are derived and significant features are then constructed by applying the meta-heuristic Flower Pollination Algorithm (FPA) and Minimum Redundancy Maximum Relevance (MRMR), which reduces learning time and also increases prediction accuracy. An ElasticNet Classifier (ENC), a Kernel Ridge Regression Classifier (KRRC), and an Improved Gradient Boosting Classifier (IGBC) are employed to classify the attacks in the UNSW-CF and UNSW datasets. UNSW-CF with derived custom features, using IGBC integrated with FPA, provided a high accuracy of 97.38% and a low error rate of 2.16%. Also, the sensitivity and specificity rates for IGB reached 97.32% and 97.50%, respectively.
Reduction of Topology Control Using Cooperative Communications in ManetsIJMER
The International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum for scholarly research related to engineering and science education.
ANALYSIS AND STUDY OF MATHEMATICAL MODELS FOR RWA PROBLEM IN OPTICAL NETWORKSIJEEE
Blocking probability has been one of the key performance indexes in solving the Routing and Wavelength Assignment (RWA) problem in the design of wavelength-routed all-optical WDM networks. Different analytical models have been introduced to evaluate blocking probability.
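One classical analytical model for blocking probability is the Erlang-B formula; a small sketch using its numerically stable recursion follows (the traffic and channel figures are illustrative):

```python
def erlang_b(traffic_erlangs: float, channels: int) -> float:
    """Blocking probability via the numerically stable Erlang-B recursion:
    B(A, m) = A*B(A, m-1) / (m + A*B(A, m-1)), with B(A, 0) = 1."""
    b = 1.0
    for m in range(1, channels + 1):
        b = (traffic_erlangs * b) / (m + traffic_erlangs * b)
    return b

# e.g. 10 Erlangs of offered traffic on a link with 12 wavelengths
print(round(erlang_b(10.0, 12), 4))  # ~ 0.12
```

The recursion avoids the factorials of the closed form, so it stays stable even for hundreds of channels; adding wavelengths monotonically lowers the blocking probability.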
In this paper, we examine a WiMAX-based network and evaluate its quality of service (QoS) performance using IEEE 802.16 technology. In our models, the study used a multiprocessor architecture organized as an interconnection network. OPNET Modeler is used to simulate the architecture and to calculate the performance criteria (throughput, delay, and dropped data) of particular concern in network estimation. It is concluded that our models considerably shorten the time needed to obtain performance measures of end-to-end delay and throughput, and can be used as an effective tool for this purpose.
Novel Position Estimation using Differential Timing Information for Asynchron...IJCNCJournal
Positioning techniques have been a common objective since the early development of wireless networks. However, current positioning methods in cellular networks, for instance, still rely primarily on the Global Navigation Satellite System (GNSS), which has several limitations, such as high power drain and failure in indoor scenarios. This study introduces a novel approach employing standard LTE signaling to provide highly accurate position estimation. The proposed technique is designed in analogy to the human sound localization system, eliminating the need for information from three spatially diverse Base Stations (BSs); it is inspired by the accurate human 3D sound localization achieved with only two ears. A field study was carried out in a dense urban city to verify the accuracy of the proposed technique, with more than 20 thousand measurement samples collected. The achieved positioning accuracy meets the latest Federal Communications Commission (FCC) requirements in the planar dimension.
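A hedged sketch of the binaural analogy (far-field geometry and a made-up baseline distance; not the paper's actual algorithm): a time difference of arrival between two receivers maps to a bearing.

```python
import math

C = 3e8           # propagation speed (m/s)
BASELINE = 500.0  # distance between the two base stations (m), illustrative

def bearing_from_tdoa(dt_seconds: float) -> float:
    """Far-field bearing (degrees from broadside) from the time difference
    of arrival between two receivers, in analogy to binaural hearing:
    dt = BASELINE * sin(theta) / C, inverted for theta."""
    s = (C * dt_seconds) / BASELINE   # sin(theta)
    s = max(-1.0, min(1.0, s))        # clamp against measurement noise
    return math.degrees(math.asin(s))

# A transmitter at 30 degrees from broadside produces this ideal delay difference:
theta_true = math.radians(30.0)
dt = BASELINE * math.sin(theta_true) / C
print(round(bearing_from_tdoa(dt), 1))  # recovers 30.0
```

With two receivers the bearing alone is recovered, mirroring how two ears resolve direction; a full 2D fix still needs an additional constraint such as timing advance or a second baseline.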
Next-generation wireless networks comprise mobile users moving between heterogeneous networks, using terminals with multiple access interfaces and services. The most important issue in such an environment is ABC (Always Best Connected), i.e., allowing the best connectivity for applications anywhere at any time. To meet the always-best-connected requirement, various vertical handover decision strategies have been proposed. This paper provides an overview of the most interesting recent strategies.
TEST-COST-SENSITIVE CONVOLUTIONAL NEURAL NETWORKS WITH EXPERT BRANCHESsipij
It has been proven that deeper convolutional neural networks (CNNs) can achieve better accuracy on many problems, but this accuracy comes at a high computational cost. Moreover, input instances do not all have the same difficulty. As a solution to the accuracy-versus-computational-cost dilemma, we introduce a new test-cost-sensitive method for convolutional neural networks. This method trains a CNN with a set of auxiliary outputs and expert branches in some middle layers of the network. The expert branches decide, based on the difficulty of the input instance, whether to use a shallower part of the network or to continue deeper to the end. The expert branches learn to determine whether the current network prediction is wrong and whether passing the given instance to deeper layers would produce the right output; if not, the expert branches stop the computation process. Experimental results on the standard CIFAR-10 dataset show that the proposed method can train models with lower test cost and competitive accuracy compared with baseline models.
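A minimal sketch of the early-exit idea (the confidence-threshold gate and the toy sub-networks below are assumptions for illustration; the paper's expert branches are learned, not thresholded):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_with_early_exit(x, shallow, deep, threshold=0.9):
    """Return (prediction, exited_early): use the shallow branch's output
    when its confidence clears the threshold, otherwise continue deeper."""
    p_shallow = softmax(shallow(x))
    if p_shallow.max() >= threshold:              # easy instance: stop early
        return int(p_shallow.argmax()), True
    return int(softmax(deep(x)).argmax()), False  # hard instance: go deeper

# Toy stand-ins for the shallow and deep sub-networks (illustrative only):
shallow = lambda x: np.array([4.0, 0.0]) if x > 0 else np.array([0.3, 0.2])
deep = lambda x: np.array([0.0, 3.0])

print(predict_with_early_exit(1.0, shallow, deep))   # confident shallow exit
print(predict_with_early_exit(-1.0, shallow, deep))  # deferred to deep part
```

Easy inputs pay only the shallow branch's cost while hard inputs still traverse the full network, which is the test-cost saving the abstract describes.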
PERFORMANCE EVALUATION OF VERTICAL HARD HANDOVERS IN CELLULAR MOBILE SYSTEMSijngnjournal
With the rapid increase of new and diverse cellular mobile services, the overlapping of cells has become typical across the majority of the network's coverage area. Vertical handovers occur between two layers of cells when a user is switched from one layer to the other. In this paper, we investigate the influence of network parameters on vertical hard handover performance in a cellular environment. The work considers two layers of cells: a layer of macrocells and a layer of microcells. Handover requests enter the macrocell from neighboring macrocells and from microcells that belong to a different layer. Using Markov chain analysis and simulation, we calculate network performance parameters such as mean queue delay, handover dropping probability, and channel utilization. We also compare the handover performance for the macrocell and microcell traffic separately. Our results show the influence of the total number of channels, the maximum queue size, and the handover request arrival rate on handover performance. They also show that when the traffic from each layer is treated with equal priority in the system, the performance of each layer is comparable.
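The Markov chain analysis mentioned can be sketched as a birth-death model of channels plus a handover queue; the parameters below (4 channels, 3 waiting slots, 3 Erlangs of offered load) are illustrative, not the paper's settings.

```python
def mmck_stationary(lam, mu, c, K):
    """Stationary distribution of an M/M/c queue with K waiting slots
    (a birth-death chain), as used to size handover request queues."""
    weights = [1.0]
    for n in range(1, c + K + 1):
        rate = min(n, c) * mu            # service rate with n requests present
        weights.append(weights[-1] * lam / rate)
    total = sum(weights)
    return [w / total for w in weights]

# e.g. 4 channels, a queue of 3 handover requests, 3 Erlangs of load
pi = mmck_stationary(lam=3.0, mu=1.0, c=4, K=3)
blocking = pi[-1]                        # dropped when all slots are full
utilization = sum(min(n, 4) * p for n, p in enumerate(pi)) / 4
print(round(blocking, 4), round(utilization, 4))
```

The dropping probability is the stationary mass in the full state, and channel utilization is the mean number of busy channels over the channel count; both fall as channels or queue slots are added, matching the qualitative trends the abstract reports.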
Channel encoding system for transmitting image over wireless network IJECEIAES
Various encoding schemes have been introduced to date for effective image transmission in the presence of error-prone artifacts in wireless communication channels. A review of existing channel encoding schemes shows that they are mostly focused on compression and pay less attention to superior signal retention, as they lack an essential consideration of network state. Therefore, this manuscript introduces a cost-effective lossless encoding scheme that ensures resilient transmission of different forms of images. Adopting an analytical research methodology, the modeling carries out a novel series of encoding operations on an image, followed by an effective indexing mechanism. The study outcome confirms that the proposed system outperforms existing encoding schemes in every respect.
SECURING BGP BY HANDLING DYNAMIC NETWORK BEHAVIOR AND UNBALANCED DATASETSIJCNCJournal
The Border Gateway Protocol (BGP) provides crucial routing information for the Internet infrastructure. Abnormal routing behavior affects the stability and connectivity of the global Internet. The biggest hurdles in detecting BGP attacks are the extremely unbalanced class distribution of the datasets and the dynamic nature of the network, both of which degrade classifier performance. In this paper we propose an efficient approach to managing these problems: the proposed approach tackles unbalanced dataset classification by turning the binary classification problem into a multiclass classification problem. This is achieved by splitting the majority-class samples evenly into multiple segments using Affinity Propagation, where the number of segments is chosen so that the number of samples in any segment closely matches the number of minority-class samples. These segments of the dataset, together with the minority class, are then viewed as distinct classes and used to train an Extreme Learning Machine (ELM). The RIPE and BCNET datasets are used to evaluate the performance of the proposed technique. When no feature selection is used, the proposed technique improves the F1 score by 1.9% compared to state-of-the-art techniques. With the Fisher feature selection algorithm, the proposed algorithm achieved the highest F1 score of 76.3%, a 1.7% improvement over the compared techniques. Additionally, the MIQ feature selection technique improves accuracy by 3.5%. For the BCNET dataset, the proposed technique improves the F1 score by 1.8% with the Fisher feature selection technique. The experimental findings support the substantial performance improvement of the new technique over previous approaches.
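A hedged sketch of the majority-class splitting step: plain even random partitioning stands in here for Affinity Propagation clustering, and the data is synthetic.

```python
import numpy as np

def rebalance_to_multiclass(X, y, minority_label=1):
    """Split the majority class into segments of roughly minority-class size
    and relabel each segment as its own class (binary becomes multiclass).
    Even random partitioning stands in for Affinity Propagation here."""
    rng = np.random.default_rng(0)
    maj = np.flatnonzero(y != minority_label)
    mino = np.flatnonzero(y == minority_label)
    n_segments = max(1, round(len(maj) / len(mino)))  # match minority size
    maj = rng.permutation(maj)
    new_y = y.copy()
    for seg, chunk in enumerate(np.array_split(maj, n_segments)):
        new_y[chunk] = 2 + seg   # labels 2..n_segments+1 for majority segments
    return X, new_y, n_segments

X = np.zeros((100, 3))
y = np.array([1] * 10 + [0] * 90)      # a 9:1 class imbalance
_, y2, k = rebalance_to_multiclass(X, y)
print(k, np.unique(y2, return_counts=True))
```

A 9:1 binary problem becomes a balanced 10-class problem of 10 samples per class; at prediction time every segment label would be mapped back to "majority," which is the reformulation the abstract describes.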
NEW TECHNOLOGY FOR MACHINE TO MACHINE COMMUNICATION IN SOFTNET TOWARDS 5Gijwmn
Machine to Machine communication or M2M, refers to a model of communication where devices
communicate directly with each other using the available wired or wireless channels. M2M is a new
concept proposed under 3GPP(3rd Generation Partnership Project); several research are working on
providing solutions for M2M communication for the 5G networks. Challenges associated with M2M
communication are the lack of standards, security, poor infrastructure, interoperability and diverse
architecture. In this paper, we propose a new mechanism called TM2M5G (The Machine to Machine for
5G) based on SOFTNET platform which results in support of 5G heterogeneous network. In this paper, we
propose the architecture for M2M communication based on SOFTNET and provide new features support
like security algorithms for data transmission among devices and scheduling algorithm for seamless
transmission of data packets over the network. Finallysimulation results ofthis algorithm based on a system
level simulator, considering two different approaches for analyzing the parameters such as delay,
throughput and bandwidth are presented.
The full form of EEE is electrical and electronics engineering. Within this day, most of the students showed plenty of interest to hitch a ride on the EEE branch to end their project in III years and thus the IV year. Many of the students plan to do innovative projects which are helpful in real-time.
https://takeoffprojects.com/eee-projects
We are providing you with some of the greatest ideas for building Final Year projects with proper guidance and assistance.
Our journal focused on Engineering, Management, Science and Mathematics, we broadly cover research work. our journal publishes Paper Publications, Scopus indexed journals, WoS indexed journals, research paper publications, academic journals, journal publications.
Minimum Process Coordinated Checkpointing Scheme For Ad Hoc Networks pijans
The wireless mobile ad hoc network (MANET) architecture is one consisting of a set of mobile hosts
capable of communicating with each other without the assistance of base stations. This has made possible
creating a mobile distributed computing environment and has also brought several new challenges in
distributed protocol design. In this paper, we study a very fundamental problem, the fault tolerance
problem, in a MANET environment and propose a minimum process coordinated checkpointing scheme.
Since potential problems of this new environment are insufficient power and limited storage capacity, the
proposed scheme tries to reduce the amount of information saved for recovery. The MANET structure used
in our algorithm is hierarchical based. The scheme is based for Cluster Based Routing Protocol (CBRP)
which belongs to a class of Hierarchical Reactive routing protocols. The protocol proposed by us is nonblocking coordinated checkpointing algorithm suitable for ad hoc environments. It produces a consistent
set of checkpoints; the algorithm makes sure that only minimum number of nodes in the cluster are
required to take checkpoints; it uses very few control messages. Performance analysis shows that our
algorithm outperforms the existing related works and is a novel idea in the field. Firstly, we describe an
organization of the cluster. Then we propose a minimum process coordinated checkpointing scheme for
cluster based ad hoc routing protocols.
USER CENTRIC NETWORK SELECTION IN WIRELESS HETNETSijwmn
The future generation wireless networks are expected to beheterogeneous networksconsisting of UMTS,
WLAN, WiMAX, LTE etc. A heterogeneous network provides users with different data rate and Quality of
Service (QoS). Users of future mobile networks will be able to choose from different radio access
technologies. These networks vary widely in service capabilities such as coverage area, bandwidth and
error characteristics. Network selection is a challenging task in heterogeneous networks and will
influence the performance metrics of importance for both service provider and subscriber.This paper
analysesuser centric network selection based on QoE (Quality of Experience) which include both
technical and economical aspects of the user. WLAN-WiMAX-UMTS networks are integrated and the
network selection for the integrated network is performed using game theory based network selection
algorithm.
Optical Networks Automation Overview: A SurveySergio Cruzes
The increasing demand for data has driven the advancement of optical networks from traditional architectures to
more flexible, dynamic and efficient solutions. This includes technologies like flexgrid reconfigurable optical add-drop
multiplexers (ROADMs), variable bandwidth transponders (VBTs) providing different modulation, coding schemes
and baud rates. These advancements have brought about new challenges that concerns to the routing and spectrum
allocation (RSA), fragmented spectrum, need for rapid and efficient channel restoration, and operation and maintenance
management of optical networks. To address these challenges, a dynamic and flexible network requires a highly advanced
network operational system (OS) capable of efficiently managing and allocating network resources. It relies on network
abstraction, sensors, actuators, and software-defined networking (SDN) to enable algorithms, management, control, and
decision-making. Improving the sensing capabilities of the network is crucial. Modern hardware and sensor technology
can help forecast fiber breaks, equipment failures, and other potential issues in advance, allowing for proactive actions
to be taken. Machine learning (ML) methods have been proposed in the literature to enhance the accuracy of quality
of transmission (QoT) estimation, mitigate nonlinearities and provide decisions. This reduces the need for conservative
design margins, maximizes the capacity of optical network systems and reduces the investment in infrastructure. Failure
management is a critical aspect of optical networks. Providing early-warning and proactive protection is essential. This
includes detecting failures, localizing them, identifying the root causes, and estimating their magnitude. Quick response
to failures is vital to maintaining network reliability.
Implementing Machine Learning Algorithms for Predictive Network Maintenance i...ijwmn
With the evolution of fifth generation (5G) network technologies, network maintenance strategies have become increasingly complex, necessitating the use of predictive analysis enabled by Machine Learning (ML) algorithms. This paper emphasizes exploring how ML algorithms can further enhance predictive maintenance in 5G and future networks. It reviews the current literature on this interdisciplinary topic, identifying key ML models such as Decision Trees, Neural Networks, and Support Vector Machines, and discussing their benefits and limitations. Special attention is given to the methodologies in applying these models, handling of data stages, and the training process. Major challenges in implementing ML in the context of network maintenance, such as data privacy, data gathering, model training, and generalizability, are discussed. Furthermore, the research aims to go beyond predicting maintenance needs to introduce a proactive approach in improving overall network performance and pre-empting potential issues based on ML predictions. The paper also discusses possible future trends including advancements in ML algorithms, Automated Machine Learning (AutoML), Explainable AI, and others. The objective is to provide a comprehensive understanding of the current ML-based predictive maintenance field and outline possibilities for future research. The study finds that the application of ML algorithms continues to show promise in transforming the landscape of network management by improving predictive maintenance and proactive performance enhancement strategies. It remains a challenging yet important area in the context of 5G networks.
Implementing Machine Learning Algorithms for Predictive Network Maintenance i...ijwmn
With the evolution of fifth generation (5G) network technologies, network maintenance strategies have become increasingly complex, necessitating the use of predictive analysis enabled by Machine Learning (ML) algorithms. This paper emphasizes exploring how ML algorithms can further enhance predictive maintenance in 5G and future networks. It reviews the current literature on this interdisciplinary topic, identifying key ML models such as Decision Trees, Neural Networks, and Support Vector Machines, and discussing their benefits and limitations. Special attention is given to the methodologies in applying these models, handling of data stages, and the training process. Major challenges in implementing ML in the context of network maintenance, such as data privacy, data gathering, model training, and generalizability, are discussed. Furthermore, the research aims to go beyond predicting maintenance needs to introduce a proactive approach in improving overall network performance and pre-empting potential issues based on ML predictions. The paper also discusses possible future trends including advancements in ML algorithms, Automated Machine Learning (AutoML), Explainable AI, and others. The objective is to provide a comprehensive understanding of the current ML-based predictive maintenance field and outline possibilities for future research. The study finds that the application of ML algorithms continues to show promise in transforming the landscape of network management by improving predictive maintenance and proactive performance enhancement strategies. It remains a challenging yet important area in the context of 5G networks.
Similar to MACHINE LEARNING FOR QOE PREDICTION AND ANOMALY DETECTION IN SELF-ORGANIZING MOBILE NETWORKING SYSTEMS (20)
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams from the hydrologist’s survey of the valley before construction, all aspects and involved disciplines, fluid dynamics, structural engineering, generation and mains frequency regulation to the very transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
Student information management system project report ii.pdfKamal Acharya
Our project explains about the student management. This project mainly explains the various actions related to student details. This project shows some ease in adding, editing and deleting the student details. It also provides a less time consuming process for viewing, adding, editing and deleting the marks of the students.
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two type of water scarcity. One is physical. The other is economic water scarcity.
MACHINE LEARNING FOR QOE PREDICTION AND ANOMALY DETECTION IN SELF-ORGANIZING MOBILE NETWORKING SYSTEMS
International Journal of Wireless & Mobile Networks (IJWMN) Vol. 11, No. 2, April 2019
DOI: 10.5121/ijwmn.2019.11201
MACHINE LEARNING FOR QOE PREDICTION AND
ANOMALY DETECTION IN SELF-ORGANIZING
MOBILE NETWORKING SYSTEMS
Chetana V. Murudkar* and Richard D. Gitlin
Innovation in Wireless Information Networking Lab (iWINLAB)
Department of Electrical Engineering, University of South Florida,
Tampa, Florida 33620, USA.
*Sprint Corp., USA.
ABSTRACT
Existing mobile networking systems lack the level of intelligence, scalability, and autonomous adaptability
required to optimally enable next-generation networks like 5G and beyond, which are expected to be Self-Organizing Networks (SONs). It is anticipated that machine learning (ML) will be instrumental in designing
future “x”G SON networks with their demanding Quality of Experience (QoE) requirements. This paper
evaluates a methodology that uses supervised machine learning to predict the QoE level that end users experience and uses this information to detect anomalous behavior of dysfunctional network nodes
(eNodeBs/base stations) in self-organizing mobile networks. An end-to-end network scenario is created using
the network simulator ns-3, where end users interact with a remote host that is accessed over the Internet to
run the most commonly used applications, like file downloads and uploads, and the resulting output is used as
a dataset to implement ML algorithms for QoE prediction and eNodeB (eNB) anomaly detection. Three ML
algorithms were implemented and compared to study their effectiveness and the scalability of the
methodology. In the test network, an accuracy score greater than 99% is achieved using the ML algorithms.
As suggested by the ns-3 simulation, the use of ML for QoE prediction will help network operators understand
end-user needs and identify network elements that are failing and need attention and recovery.
KEYWORDS
Machine learning, ns-3, QoE, SON
1. INTRODUCTION
There is little doubt that machine learning (ML) will be a foundation technology for next-generation
wireless networks, such as 5G and beyond. Future wireless networks will be highly integrative and
will create a paradigm shift that includes very high carrier frequencies with massive bandwidths,
extremely high base station and device densities, dynamic “cell-less” networks, and unprecedented
numbers of antennas (Massive MIMO), and these networks will have to possess ground-breaking
levels of flexibility and intelligence [1]. This paper is directed towards demonstrating that the level
of intelligence, complexity and autonomous adaptability required to build such networks can be
achieved by implementing machine learning in combination with self-organizing networks (SON).
It is well-known that user experience is one of the most vital aspects of any industry or business
domain. The occurrence of failures in a network element, such as a base station, may cause
deterioration of this network element’s functions and/or service quality and will, in severe cases,
lead to the complete unavailability of the respective network element [2]. Consequently, anomaly
detection is crucial to minimize the effects of such failures on QoE of the network users. Another
important aspect to consider in the dawn of 5G is energy efficiency or green communications.
Energy-efficient network planning strategies include networks designed to meet peak-hour traffic
such that energy can be saved by partially switching off base stations when they have no active
users or simply very low traffic [1]. This makes anomaly detection even more critical and machine
learning can play an indispensable role in achieving high accuracy in detecting and validating
dysfunctional network elements and distinguishing from an energy saving mode.
The authors in [3] use neighbor cell measurements and handover statistics to detect anomalies and outages, based on the number of incoming handovers (inHO) measured on a per-cell basis by neighboring cells. This approach monitors situations where the number of inHO becomes zero as a
potential symptom of cell outage. The authors in [4] use a statistical-based anomaly detection
scheme in 3G networks to find deviations between the collected traffic data and measured
distribution. In [5] the k-nearest neighbor algorithm is used to detect and locate cell outages with
key performance information that uses RSRP (reference signal received power) and SINR (signal
to interference plus noise ratio) measurements collected during normal operations and radio link
failures.
In [6], the authors use a hidden Markov model (HMM) to determine if a base station is healthy,
degraded, crippled or catatonic. The measurements used are serving cell’s reference signal received
power (RSRP), reference signal received quality (RSRQ), and best neighbor cell’s RSRP and
RSRQ. In [7] minimization of drive tests (MDT) reports are used to gather data from a fault-free
operating scenario to profile the behavior of the network. This approach exploits multidimensional
scaling (MDS) techniques to reduce the complexity of data processing while retaining pertinent
information to develop training models to reliably apply anomaly detection techniques. The
performance of k-nearest neighbor and local-outlier-factor based anomaly detection algorithms
was compared, and it was found that a global anomaly detection model using k-nearest neighbor performed better than the local-outlier-factor based anomaly detector, which adopts a local approach to classifying abnormal measurements.
The authors in [8] study the degradation produced by cell outages in the neighboring cells and
propose three methods. One of the methods analyzes the degradation produced by the cell outage
in the neighboring cells based on KPI correlation using historical records for cell outages. The other
two methods proposed are online methods where the first method is a correlation-based approach.
This method calculates the correlation between the observed signal and a reference signal. The
other method used is delta detection where a threshold is determined as a function of the Key
Performance Indicators (KPIs) under normal circumstances. A sample measured under the cell
outage is compared to this threshold in order to determine if a KPI degradation occurred.
While all of the above research approaches for anomaly detection have used different KPI’s and
measurements such as handover statistics, RSRP, RSRQ, number of connection drops, and number
of connection failures, they lack the knowledge of the quality of experience observed by end-users.
Quality of Experience is of crucial importance to end-consumers, network operators and any
stakeholders involved in the service provisioning chain and is a dominant metric to be considered
as the wireless communications networks shift from conventional network-centric paradigms to
more user-centric approaches [9].
The recently introduced methodology, QoE-driven anomaly detection in self-organizing mobile
networks using machine learning [10], implemented machine learning to learn and predict the
quality of end-user experience that is further used for anomaly detection in self-organizing mobile
networks. The metric used to determine the quality of the end-user is Quality of Experience (QoE),
which is the overall acceptability of an application or service as perceived by the end-user [11].
Unlike QoS, QoE incorporates user-centric network decision mechanisms and processes such that
it takes into account not just the technical aspects regarding a service but also incorporates any kind
of human-related quality-affecting factors reflecting the impact that the technical factors have on
the user’s quality perception [12]. The proposed system model [10] used a network simulator, a
parametric QoE model and an optimized version of decision tree machine learning algorithm to
demonstrate and evaluate the approach. This paper is an extension of that work where two other
machine learning algorithms, support vector machine (SVM) and k-nearest neighbors (k-NN), are
implemented for QoE prediction to study the effectiveness and scalability of the proposed system
model. This study evaluates and compares the performance of all three ML algorithms and analyzes
their impact on the system model. The output of the machine learning model is further used for
detecting dysfunctional network nodes (eNBs).
The structure of this paper is as follows: Section 2 briefly describes the system model [10]. In
Section 3, SVM and k-NN algorithms implemented to train the machine learning model for QoE
prediction are explained. Section 4 presents the results and observations obtained by studying the
impact of both of these ML algorithms on the system model and also compares the performance of
SVM, k-NN and decision tree for the dataset generated using the network simulator ns-3 [13]. The
paper ends with the concluding remarks in Section 5.
2. SYSTEM MODEL
A machine learning algorithm is an algorithm that is able to learn from data and make predictions
of new data instances [14]. The system model described in Figure 1 [10] uses the LTE-EPC
Network Simulator model of ns-3 [13] to create a network scenario in order to generate
representative data.¹
The simulation represents an end-to-end network communication where users
run File Transfer Protocol (FTP) applications by interacting with a remote host accessible over the
internet. The data obtained from the simulation serves as the input dataset for the machine learning
model where a parametric QoE model and ML algorithms are implemented to predict QoE scores
of end users that are further used to identify dysfunctional eNodeBs. The parametric QoE model
for FTP services, which generates QoE scores ranging from 0 to 5 for the training set of the machine learning model, is given by the mean opinion score (1):
$$
\mathrm{MOS}_{\mathrm{FTP}} =
\begin{cases}
1, & u < u^{-} \\
b_1 \log_{10}(b_2 u), & u^{-} \le u < u^{+} \\
5, & u^{+} \le u
\end{cases}
\qquad (1)
$$
where u represents the data rate of the correctly received data, and the values of the b₁ and b₂ coefficients are obtained from the upper rate (u⁺) and lower rate (u⁻) expectations for the service
[10], [12], [15], [16]. The model is trained using a machine learning algorithm and QoE scores for
all the users are predicted. The eNodeBs (eNBs) connected to all the users with poor QoE scores
are identified and the mode is determined for each of these eNodeBs to find the QoE score that
occurs most often. If the mode of the QoE scores that are computed using (1) of all the users
connected to an eNB is less than or equal to one, then the eNB is declared dysfunctional, i.e., if
most of the users connected to an eNB have poor QoE scores, then such an eNB is declared to be
dysfunctional.
¹ The LTE-EPC simulation model of the ns-3 simulator provides the interconnection of multiple UEs to the internet,
via a radio access network of multiple eNodeBs connected to a single serving gateway-packet data network gateway
node [13].
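To make the system model concrete, the parametric MOS model in (1) and the mode-based dysfunctional-eNB rule can be sketched in Python. This is a minimal sketch, not the paper's implementation: the rate expectations u⁻ and u⁺ (and hence b₁, b₂) are assumed illustrative values, and `mos_ftp` and `dysfunctional_enbs` are hypothetical helper names.

```python
import math
from statistics import mode

# Assumed lower/upper data-rate expectations (Mbps); illustrative only.
U_MINUS, U_PLUS = 0.5, 20.0

def _fit_coefficients(u_minus, u_plus):
    # b1, b2 are fixed so the curve passes through (u_minus, 1) and
    # (u_plus, 5): b1*log10(b2*u_minus) = 1 and b1*log10(b2*u_plus) = 5.
    b1 = 4.0 / math.log10(u_plus / u_minus)
    b2 = (10.0 ** (1.0 / b1)) / u_minus
    return b1, b2

def mos_ftp(u, u_minus=U_MINUS, u_plus=U_PLUS):
    """Mean opinion score for FTP as a function of received data rate u, per (1)."""
    b1, b2 = _fit_coefficients(u_minus, u_plus)
    if u < u_minus:
        return 1.0
    if u >= u_plus:
        return 5.0
    return b1 * math.log10(b2 * u)

def dysfunctional_enbs(user_scores):
    """user_scores maps eNB id -> list of QoE scores of the users it serves.
    An eNB is flagged when the mode of its users' scores is <= 1."""
    return [enb for enb, scores in user_scores.items()
            if mode(round(s) for s in scores) <= 1]
```

For example, an eNB whose users mostly score near 1 is flagged, while an eNB whose users mostly score near 4 is not.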
Figure 1. Flowchart describing the system model.
3. MACHINE LEARNING ALGORITHMS
A machine learning algorithm learns from experience, E, with respect to some tasks, T, and
performance measure, P, to determine if the performance at tasks T, as measured by P, improves
with experience E [14]. The type of task used that is applicable to the system model described in
Figure 1 is regression. In a regression task, the ML algorithm is asked to output a function f: ℝⁿ → ℝ [14]. A performance measure is a quantitative measure used to evaluate the abilities of an ML
algorithm. The performance measure used here is the accuracy of the model in producing the correct
output. The types of machine learning algorithms implemented in this research are supervised
machine learning algorithms. These types of algorithms utilize a dataset containing features, where
each example or data point is associated with a target [14]. In our recent work [10], the machine
learning algorithm implemented was an optimized version of decision tree. This paper analyzes the
performance of two other machine learning algorithms, SVM and k-NN, to study their impact on
the system model.
3.1. Support Vector Machine Learning Algorithm
The first algorithm implemented to train the machine learning model is a Support Vector Machine
algorithm. A support vector machine (SVM) constructs a hyperplane or set of hyperplanes in a high
or infinite dimensional space, which can be used for classification, regression or other tasks [17].
If sufficient separation is achieved by the hyperplane with the largest distance to the nearest training
samples of any class, the algorithm will generally be effective. The training samples that are the
closest to the decision surface are called support vectors. The SVM algorithm finds the largest
margin (i.e., “distance”) between the support vectors to obtain optimal decision regions. The type
of SVM algorithm used in the proposed method is SVM regression, which can be explained as follows [17], [18]: In SVM regression, the input vector 𝒙 is first mapped² onto an m-dimensional feature space using some fixed (nonlinear) mapping, i.e., by using kernel functions, and then a linear model is constructed in this feature space to separate the training data points. The linear model in the feature space f(𝒙, ω) is given by
$$
f(\boldsymbol{x}, \omega) = \sum_{j=1}^{m} \omega_j \, g_j(\boldsymbol{x}) + b \qquad (2)
$$
where g_j(𝒙), j = 1, …, m, denotes a set of nonlinear transformations and b is a bias term. A loss function [19] often used by an SVM to measure the quality of estimation is called the ε-insensitive loss function and is given below.
$$
\mathcal{L}_{\varepsilon}\big(y, f(\boldsymbol{x}, \omega)\big) =
\begin{cases}
0, & \text{if } |y - f(\boldsymbol{x}, \omega)| \le \varepsilon \\
|y - f(\boldsymbol{x}, \omega)| - \varepsilon, & \text{otherwise}
\end{cases}
\qquad (3)
$$
The SVM performs linear regression in the high-dimension feature space using the ε-insensitive loss and, at the same time, tries to reduce model complexity by minimizing ||ω||². This can be described by introducing (non-negative) slack variables ξᵢ, ξᵢ*, where i = 1, …, n, to measure the deviation of training samples outside the ε-insensitive zone. Thus, SVM regression is formulated as the minimization of the following function:
$$
\min \; \frac{1}{2}\|\omega\|^2 + C \sum_{i=1}^{n} \big(\xi_i + \xi_i^{*}\big) \qquad (4)
$$

$$
\text{subject to }
\begin{cases}
y_i - f(\boldsymbol{x}_i, \omega) \le \varepsilon + \xi_i^{*} \\
f(\boldsymbol{x}_i, \omega) - y_i \le \varepsilon + \xi_i \\
\xi_i, \xi_i^{*} \ge 0, \quad i = 1, \ldots, n
\end{cases}
$$
where C is a regularization parameter that determines the tradeoff between the model complexity
and the degree to which deviations larger than 𝜀 are tolerated in optimization formulation, 𝒙𝒊
represents the input values, 𝜔 represents the weights, and 𝒚𝒊 represents the target values. This
optimization problem can be transformed into the dual problem and its solution is given by
$$
f(x) = \sum_{i=1}^{n} \big(\alpha_i - \alpha_i^{*}\big)\, K(\boldsymbol{x}_i, \boldsymbol{x}) \qquad (5)
$$
subject to 0 ≤ αᵢ* ≤ C and 0 ≤ αᵢ ≤ C, where n is the number of support vectors, αᵢ is the dual variable, and the kernel function is given by
$$
K(\boldsymbol{x}, \boldsymbol{x}_i) = \sum_{j=1}^{m} g_j(\boldsymbol{x})\, g_j(\boldsymbol{x}_i) \qquad (6)
$$
SVM performance (estimation accuracy) depends on the optimized setting of meta-parameters C, 𝜀
and the kernel parameters.
² In SVM, the input space is transformed into a new feature space using kernel functions, where it becomes easier to
process the data such that it is linearly separable. Hard margin SVM works when data is completely linearly separable.
But when we have errors (noise/outliers), we use soft margin SVM which uses slack variables (ξ).
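A minimal sketch of ε-insensitive SVM regression as formulated in (4)-(6) is shown below using scikit-learn's SVR with an RBF kernel. The features and targets are a synthetic stand-in for the ns-3 dataset (a single data-rate feature mapped to a MOS-like value), and the parameter settings are illustrative, not the paper's tuned values.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in data: per-user data rate -> MOS-like target in [1, 5].
rng = np.random.default_rng(0)
X = rng.uniform(0.1, 25.0, size=(200, 1))
y = np.clip(2.0 * np.log10(4.0 * X[:, 0]) + 1, 1, 5)

# C trades model complexity against tolerated deviations beyond epsilon;
# gamma controls the radius of influence of the RBF kernel.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma="scale")
model.fit(X, y)

pred = model.predict([[10.0]])  # predicted MOS for a user at 10 Mbps
```

The fitted model's support vectors are exactly the training samples that fall on or outside the ε-tube, mirroring the slack-variable formulation in (4).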
3.2. k-Nearest Neighbor Machine Learning Algorithm
The second algorithm implemented to train the machine learning model is the k-nearest neighbor
(k-NN) algorithm. The algorithm is explained as follows [17], [20]: The basic idea behind this
algorithm is to base the estimation on a fixed number of observations k which are closest to the
desired data point. A commonly used distance metric is the Euclidean distance. Given X ∈ ℝ^q and a set of samples {X₁, …, Xₙ}, for any fixed point x ∈ ℝ^q, it can be calculated how close each observation Xᵢ is to x using the Euclidean norm ||x|| = (x′x)^{1/2}, where ′ denotes the vector transpose. This distance is given as

$$
D_i = \|x - X_i\| = \big((x - X_i)'(x - X_i)\big)^{1/2} \qquad (7)
$$
The order statistics for the distances Dᵢ are 0 ≤ D₍₁₎ ≤ D₍₂₎ ≤ ⋯ ≤ D₍ₙ₎. The observations corresponding to these order statistics are the “nearest neighbors” of x. The observations ranked by the distances, or “nearest neighbors”, are {X₍₁₎, X₍₂₎, X₍₃₎, …, X₍ₙ₎}. The k-th nearest neighbor of x is X₍ₖ₎. For a given k, let

$$
R_x = \|X_{(k)} - x\| = D_{(k)} \qquad (8)
$$
denote the Euclidean distance between x and X₍ₖ₎. R_x is just the k-th order statistic on the distances Dᵢ. In k-NN regression, the label³ assigned to a query point is computed based on the mean of the
labels of its nearest neighbors. The weights used in the basic type of k-NN regression are uniform
where each point in the local neighborhood contributes to the classification of a query point. In
some cases, it can be beneficial to weigh points such that nearby points contribute more to the
regression than points that are far away. The classic k-NN estimate is given as
$$
\tilde{g}(x) = \frac{1}{k} \sum_{i=1}^{n} \mathbf{1}\big(\|x - X_i\| \le R_x\big)\, y_i \qquad (9)
$$
This is the average value of 𝑦𝑖 among the observations that are the k nearest neighbors of 𝑥. A
smooth k-NN estimator is a weighted average of the k nearest neighbors and is given as
$$
\tilde{g}(x) = \frac{\displaystyle\sum_{i=1}^{n} \omega\!\left(\frac{\|x - X_i\|}{R_x}\right) y_i}{\displaystyle\sum_{i=1}^{n} \omega\!\left(\frac{\|x - X_i\|}{R_x}\right)} \qquad (10)
$$
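The classic estimate (9) and the smooth weighted estimate (10) can be sketched directly with NumPy. This is an illustrative sketch: inverse-distance weighting is only one common choice for the weighting function ω, and the data below are synthetic.

```python
import numpy as np

def knn_regress(X_train, y_train, x_query, k=3, weighted=False):
    # Euclidean distances D_i = ||x - X_i|| as in (7)
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]            # indices of the k nearest neighbors
    if not weighted:
        return y_train[idx].mean()     # classic uniform-weight estimate (9)
    # Smooth estimate (10) with inverse-distance weights (one choice of w);
    # the small constant guards against division by zero at d = 0.
    w = 1.0 / (d[idx] + 1e-12)
    return np.sum(w * y_train[idx]) / np.sum(w)

X_train = np.array([[1.0], [2.0], [3.0], [10.0]])
y_train = np.array([1.0, 2.0, 3.0, 5.0])
est = knn_regress(X_train, y_train, np.array([2.1]), k=3)
```

With distance weighting enabled, the nearest sample dominates the estimate, which is the behavior described above where nearby points contribute more to the regression.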
4. SIMULATION RESULTS AND OBSERVATIONS
The values of the primary parameters used to configure the network scenario created in the ns-3
simulation are given in Table 1 [10].
³ In supervised machine learning, the task of the ML model is to predict target values from labelled data. The input is
referred to by terms such as independent variables or features. The output is referred to by terms such as dependent
variables or target labels or target values.
Table 1. Network simulation configuration parameters
Parameter                             Value
-----------------------------------   ------
Number of users                       50
Number of eNodeBs                     5
eNodeB bandwidth                      20 MHz
Transmit power of functional eNB      46 dBm
Transmit power of dysfunctional eNB   30 dBm
Application type                      FTP
The output obtained from the ns-3 simulation run is used as the input dataset for the machine
learning model, and the target values for its training set are calculated using the parametric QoE
model defined in (1). The SVM regression and k-NN regression algorithms are implemented using
this dataset, and the performance of SVM, k-NN, and decision tree regression [10] is evaluated to
study their effectiveness and the scalability of the system model.
As previously mentioned in Section 3, SVM performance generally depends on the setting of the
meta-parameters C and 𝜀 and on the kernel parameters. Two kernel functions, linear and radial
basis function (rbf), were used to test the performance. The training and testing accuracy for each
of these kernel functions is given in Figure 2, which shows that the rbf kernel gives better accuracy
for the dataset generated by the ns-3 simulation.
Figure 2. Accuracy of the training and testing sets for SVM regression using linear and rbf kernel functions
Three additional computational parameters that affect the performance of SVM are C, epsilon, and
gamma. C is a regularization parameter that determines the tradeoff between model complexity
and the degree to which deviations larger than epsilon are tolerated. Epsilon specifies the epsilon-
tube within which no penalty is assigned in the training loss function to points predicted within a
distance epsilon of the actual value. Gamma specifies how far the influence of a single training
example reaches and is the inverse of the radius of influence of the samples selected by the model
as support vectors [17], [18]. A comparison across these parameters to find the optimal value of
each for the dataset obtained in this work is illustrated in Figure 3. It is observed that the optimal
values for the dataset obtained from the ns-3 simulation are C = 5, gamma = 0.001, and
epsilon = 0.01.
Figure 3. Accuracy scores for varying values of C, gamma and epsilon in SVM regression
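Using scikit-learn [17], the tuned configuration reported above (rbf kernel, C = 5, gamma = 0.001, epsilon = 0.01) can be sketched as follows. The synthetic feature matrix and target below are stand-ins for the ns-3 dataset, which is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

# Stand-in features and a QoE-like target (the actual ns-3 dataset is not public).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(200, 3))
y = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(0.0, 0.1, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# rbf kernel with the values the paper found optimal for its own dataset.
model = SVR(kernel="rbf", C=5, gamma=0.001, epsilon=0.01).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)   # R^2 on the held-out set
pred = model.predict(X_te)
```

In practice these values would be re-tuned (e.g., with a grid search) for any new dataset rather than reused as-is.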
The training and testing accuracies for k-NN regression for varying values of k are shown in Figure
4. It is observed that the optimal value of k for the given dataset is 4.
Figure 4. Accuracy of the training and testing sets for k-NN regression for varying values of k
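A sweep over k like the one behind Figure 4 can be sketched with scikit-learn's KNeighborsRegressor; the data below is again synthetic and the candidate range of k is an assumption.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 10.0, size=(200, 3))          # stand-in features
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + rng.normal(0.0, 0.05, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

scores = {}
for k in range(1, 11):                             # candidate neighborhood sizes
    m = KNeighborsRegressor(n_neighbors=k).fit(X_tr, y_tr)
    scores[k] = (m.score(X_tr, y_tr), m.score(X_te, y_te))

best_k = max(scores, key=lambda k: scores[k][1])   # k with the best test R^2
```

As expected, k = 1 interpolates the training set (training R² of 1.0) while the test score peaks at some larger k, which is exactly the train/test gap Figure 4 illustrates.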
The training and testing accuracies obtained [10] for decision tree regression with the MSE and
MAE criteria at varying values of maximum allowable depth are shown in Figure 5. It is observed
that the MSE criterion with a maximum depth of 3 gives the best performance.
Figure 5. Accuracy of the training and testing sets for decision tree regression using MSE and MAE across
varying values of maximum allowable depths
It is observed that for the dataset used in this work, accuracy of up to 99.5% is achieved using SVM
regression, 99.4% is achieved using k-NN regression, and 100% is achieved using decision tree
regression as shown in Figure 6.
Figure 6. ML algorithm performance comparison
It is observed that while the decision tree and k-NN models are easy to understand and implement,
the complexity of SVM is higher. A limitation of k-NN is that it is sensitive to localized data, where
localized anomalies can significantly affect outcomes. The decision tree has a high probability of
overfitting and needs pruning for larger datasets. Subsequent to QoE prediction, all users with poor
QoE are identified and the set of eNBs that served these users is isolated. If the majority of users
served by a particular eNB have a QoE score less than or equal to one, that eNB is declared
dysfunctional; if the majority of its users have a QoE score above one, the eNB is declared
functional.
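Interpreting the rule above as a majority vote over each eNB's served users, a minimal sketch is the following; the data structures and function name are assumptions for illustration, not the paper's implementation.

```python
from collections import defaultdict

def find_dysfunctional_enbs(serving_enb, qoe, threshold=1.0):
    """Flag an eNB as dysfunctional when most of its users have predicted QoE <= threshold.

    serving_enb[i] -- identifier of the eNB serving user i
    qoe[i]         -- predicted QoE (MOS) score of user i
    """
    poor = defaultdict(int)
    total = defaultdict(int)
    for enb, q in zip(serving_enb, qoe):
        total[enb] += 1
        if q <= threshold:
            poor[enb] += 1
    # Majority rule: strictly more than half of the eNB's users report poor QoE.
    return {enb for enb in total if poor[enb] > total[enb] / 2}
```

For example, an eNB serving three users with predicted QoE scores 0.5, 1.0, and 3.0 would be flagged, since two of its three users fall at or below the threshold.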
5. CONCLUSIONS
This paper evaluates the performance of three ML algorithms used in a system model that learns
and predicts the end-user experience and detects dysfunctional eNodeBs in the network. The three
algorithms were implemented and compared to study their effectiveness and the scalability of the
system model. SVM regression, k-NN regression, and
decision tree regression were implemented to train a machine learning model used for QoE
prediction that is further used for anomaly detection in SON networks. It was observed that high
accuracy (≥ 99%) can be achieved for QoE prediction and anomaly detection using all three ML
algorithms for the dataset obtained from the ns-3 simulation. Decision tree regression performed
slightly better than SVM and k-NN regression, since its training and testing accuracy exceeded that
of the other two algorithms. However, the decision tree has a high probability of overfitting and
needs pruning for larger datasets. Hence, when overfitting occurs, SVM regression and k-NN
regression can serve as good alternatives to decision tree regression for QoE-driven anomaly
detection in SON networks. This paper
demonstrates the potential for incorporating machine learning in next-generation networks for
anomaly detection and suggests that the observed effectiveness and scalability of the proposed
system model should be evaluated in actual networks with physically built hardware and actual
users in the real-world environment.
REFERENCES
[1] Jeffrey G. Andrews, Stefano Buzzi, Wan Choi, Stephen V. Hanly, Angel Lozano, Anthony C. K.
Soong, and Jianzhong Charlie Zhang, “What will 5G be?” IEEE Journal on Selected Areas in
Communications, volume: 32, no. 6, June 2014.
[2] 3GPP TS 32.111-1, “Fault Management; Part 1: 3G fault management requirements,” v14.0.0,
March 2017.
[3] I. de-la-Bandera, R. Barco, P. Muñoz, and I. Serrano, “Cell Outage Detection Based on Handover
Statistics,” IEEE Communications Letters, Volume: 19, Issue: 7, Pages: 1189 – 1192, 2015.
[4] A. D’Alconzo, A. Coluccia, F. Ricciato, and P. Romirer-Maierhofer, “A distribution-based approach
to anomaly detection and application to 3G mobile traffic,” GLOBECOM 2009 - 2009 IEEE Global
Telecommunications Conference, Pages: 1 – 8, 2009.
[5] Wenqian Xue, Mugen Peng, Yu Ma, and Hengzhi Zhang, “Classification-based Approach for Cell
Outage Detection in Self-healing Heterogeneous Networks,” IEEE Wireless Communications and
Networking Conference (WCNC), Pages: 2822 – 2826, 2014.
[6] Multazamah Alias, Navrati Saxena, and Abhishek Roy, “Efficient Cell Outage Detection in 5G
HetNets Using Hidden Markov Model,” IEEE Communications Letters, Volume: 20, Issue: 3,
Pages: 562 – 565, 2016.
[7] Oluwakayode Onireti, Ahmed Zoha, Jessica Moysen, Ali Imran, Lorenza Giupponi, Muhammad
Ali Imran, Adnan Abu-Dayya, “A Cell Outage Management Framework for Dense Heterogeneous
Networks,” IEEE Transactions on Vehicular Technology, Volume: 65, Issue: 4, Pages: 2097 – 2113,
2016.
[8] Isabel de la Bandera, Pablo Muñoz, Inmaculada Serrano, and Raquel Barco, “Improving Cell Outage
Management Through Data Analysis,” IEEE Wireless Communications, Volume: 24, Issue: 4,
Pages: 113 – 119, 2017.
[9] Eirini Liotou, Dimitris Tsolkas, Nikos Passas, and Lazaros Merakos, “Quality of Experience
Management in Mobile Cellular Networks: Key Issues and Design Challenges,” IEEE
Communications Magazine, volume: 53, issue: 7, 2015.
[10] Chetana V. Murudkar, Richard D. Gitlin, “QoE-driven Anomaly Detection in Self-Organizing
Mobile Networks using Machine Learning,” IEEE Wireless Telecommunications Symposium
(WTS), April 2019, accepted - to be published.
[11] ITU-T Recommendation P.10/G.100 Amendment 2, “Vocabulary for performance and quality of
service,” July 2008.
[12] Eirini Liotou, Dimitris Tsolkas, Nikos Passas, Lazaros Merakos, “A Roadmap on QoE Metrics and
Models,” 23rd International Conference on Telecommunications (ICT), 2016.
[13] ns-3 [online]. Available: https://www.nsnam.org/
[14] Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, MIT Press, 2016.
[15] Dimitris Tsolkas, Eirini Liotou, Nikos Passas, and Lazaros Merakos, “A Survey on Parametric QoE
Estimation for Popular Services,” Journal of network and computer applications, volume: 77, pages:
1-17, January 2017.
[16] Srisakul Thakolsri, Shoaib Khan, Eckehard Steinbach, Wolfgang Kellerer, “QoE-Driven Cross-
Layer Optimization for High Speed Downlink Packet Access,” Journal of Communications, volume:
4, no. 9, pp. 669–680, Oct. 2009.
[17] Sci-kit learn [online]. Available: http://scikit-learn.org/stable/#
[18] Support Vector Machine Regression. [Online]. Available: http://kernelsvm.tripod.com/
[19] Sergios Theodoridis, Machine Learning, Academic Press, 2015.
[20] Bruce E. Hansen, Nearest Neighbor Methods. [Online]. Available:
https://www.ssc.wisc.edu/~bhansen/718/NonParametrics10.pdf
[21] Paulo Valente Klaine, Muhammad Ali Imran, Oluwakayode Onireti, Richard Demo Souza, “A
Survey of Machine Learning Techniques Applied to Self-Organizing Cellular Networks,” IEEE
Communications Surveys & Tutorials, volume: 19, issue: 4, 2017.
Authors
Chetana V. Murudkar is pursuing a Ph.D. in Electrical Engineering at University of
South Florida under the supervision of Dr. Richard D. Gitlin. She is an RF Engineer at
Sprint Corporation and her responsibilities involve design, deployment, performance
monitoring, and optimization of Sprint’s multi-technology, multi-band, and multi-vendor
wireless communications mobile network. Her past work experience includes working
with AT&T Labs and Ericsson. She has received an MS degree in Telecommunications
Engineering from Southern Methodist University and a bachelor’s degree in Electronics
and Telecommunications Engineering from University of Mumbai.
Richard D. Gitlin is a State of Florida 21st Century World Class Scholar, Distinguished University
Professor, and the Agere Systems Chaired Distinguished Professor of Electrical
Engineering at the University of South Florida. He has 50 years of leadership in the
communications industry and in academia and he has a record of significant research
contributions that have been sustained and prolific over several decades. Dr. Gitlin is an
elected member of the National Academy of Engineering (NAE), a Fellow of the IEEE, a
Bell Laboratories Fellow, a Charter Fellow of the National Academy of Inventors (NAI),
and a member of the Florida Inventors Hall of Fame (2017). He is also a co-recipient of
the 2005 Thomas Alva Edison Patent Award and the IEEE S.O. Rice prize (1995), co-
authored a communications text, published more than 170 papers, including 3 prize-winning papers, and
holds 65 patents. After receiving his doctorate at Columbia University in 1969, he joined Bell Laboratories,
where he worked for 32 years performing and leading pioneering research and development in digital
communications, broadband networking, and wireless systems, including: co-invention of DSL (Digital
Subscriber Line), multicode CDMA (3/4G wireless), and pioneering the use of smart antennas (“MIMO”)
for wireless systems. At his retirement, Dr. Gitlin was Senior VP for Communications and Networking
Research at Bell Labs, a multi-national research organization with over 500 professionals. After retiring from
Research at Bell Labs, a multi-national research organization with over 500 professionals. After retiring from
Lucent, he was visiting professor of Electrical Engineering at Columbia University, and later he was Chief
Technology Officer of Hammerhead Systems, a venture funded networking company in Silicon Valley. He
joined USF in 2008 where his research is on wireless cyberphysical systems that advance minimally invasive
surgery and cardiology and on addressing fundamental technical challenges in 5G/6G wireless systems.