This paper addresses web object size, one of the important performance measures, which affects service time in a multiple-access environment. Since packets arrive according to a Poisson process and web service time has an arbitrary distribution, the M/G/1 model can describe the behavior of the web server system. Under time division multiplexing (TDM), the M/D/1 with vacations model applies, because the service time is constant and the server may take a vacation. We derive the mean web object size satisfying the constraint that the mean waiting time under round-robin scheduling in a multiple-access environment equals the mean queueing delay of the M/D/1 with vacations model in TDM and of the M/H2/1 model, respectively. Performance evaluation shows that the mean web object size increases as the link utilization increases at a given maximum segment size (MSS), but converges to a lower bound when the number of embedded objects in a web page exceeds a threshold. Our results can be applied to the economic design and maintenance of web services.
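As a concrete illustration of the queueing formulas this comparison rests on, the sketch below computes the mean waiting time of an M/D/1 queue with server vacations (the Pollaczek–Khinchine formula for deterministic service plus the standard vacation term). The numeric parameters are illustrative assumptions, not values from the paper.

```python
def md1_vacation_wait(lam, d, ev, ev2):
    """Mean waiting time in an M/D/1 queue with server vacations.

    lam      : Poisson arrival rate
    d        : deterministic service time
    ev, ev2  : first and second moments of the vacation length
    W = rho*d / (2*(1-rho)) + E[V^2] / (2*E[V])
    """
    rho = lam * d
    assert rho < 1, "queue must be stable"
    return rho * d / (2 * (1 - rho)) + ev2 / (2 * ev)

# Illustrative numbers (assumed, not from the paper): a TDM slot of 1 ms
# and deterministic vacations of the same length (E[V]=1e-3, E[V^2]=1e-6).
w = md1_vacation_wait(lam=500.0, d=1e-3, ev=1e-3, ev2=1e-6)
```

Equating such a delay expression with the round-robin waiting time is what pins down the mean object size in the paper's derivation.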
Mean Object Size Considering Average Waiting Latency in M/BP/1 System (IJCNCJournal)
This paper deals with the web object size, which affects the service time in multiple-access environments. The M/BP/1 model applies because packet arrivals are Poisson and the web service time follows a Bounded Pareto (BP) distribution. We find the mean object size at which the average waiting latency of a deterministic model equals the mean queueing delay of the M/BP/1 model. Performance evaluation shows that the mean web object size is affected by the file-size bounds and the shape parameter of the BP distribution, whereas the impact of link capacity is not significant. When the system load is low, the web object size converges to half the maximum segment size (MSS). Our results can be applied to find the mean web object size in economic web service design.
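The mean of a Bounded Pareto file-size distribution, on which this comparison hinges, has a closed form; a minimal sketch is below. The parameter values are illustrative assumptions, not the paper's.

```python
def bounded_pareto_mean(alpha, low, high):
    """Mean of a Bounded Pareto(alpha, L, H) file-size distribution.

    pdf: f(x) = alpha * L^alpha * x^(-alpha-1) / (1 - (L/H)^alpha),  L <= x <= H.
    Closed form valid for alpha != 1.
    """
    assert alpha != 1 and 0 < low < high
    c = 1.0 - (low / high) ** alpha
    return (alpha * low ** alpha
            * (high ** (1 - alpha) - low ** (1 - alpha))
            / ((1 - alpha) * c))

# Illustrative heavy-tailed web-file parameters (assumed): 1 kB to 10 MB.
m = bounded_pareto_mean(alpha=1.5, low=1e3, high=1e7)
```

This mean (and the corresponding second moment) is what gets plugged into the M/G/1-type delay formula for the M/BP/1 system.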
An approach to DSR routing QoS by fuzzy genetic algorithms (ijwmn)
Although prior works have improved routing in MANETs, there has been no strong advance in QoS. One of the newest approaches to improving routing quality in MANETs is to combine genetic and fuzzy algorithms within routing protocols, which is the approach taken in this project. Because it stores route information during route discovery, the DSR routing protocol was chosen. First, the proposed protocol adds the current time to the DSR header, so that the next intermediate node can derive the cost of the previous link from this attachment and append that link cost to the route discovery packet. When a route discovery packet reaches the destination node, the node waits for further packets until the packet TTL expires. The destination node then feeds the collected packets into a genetic algorithm to find the two optimal routes and sends them back to the source node. A further improvement uses fuzzy triangular numbers to adapt route updates: the proposed protocol uses the count of route error packets together with triangular numbers to adjust the route update period.
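A minimal sketch of the destination-side step, under assumed node names and link costs (the paper does not give its exact genetic operators): candidate routes collected during discovery are evolved by splicing pairs at shared intermediate nodes, and the two cheapest distinct routes are returned.

```python
import random

def route_cost(route, link_cost):
    """Sum of per-link costs accumulated in the discovery packets."""
    return sum(link_cost.get((a, b), float('inf'))
               for a, b in zip(route, route[1:]))

def crossover(r1, r2):
    """Splice two routes at a shared intermediate node, then remove loops."""
    common = set(r1[1:-1]) & set(r2[1:-1])
    if not common:
        return list(r1)
    n = random.choice(sorted(common))
    child = r1[:r1.index(n)] + r2[r2.index(n):]
    out = []
    for v in child:           # cut back to the first occurrence on a repeat
        if v in out:
            out = out[:out.index(v)]
        out.append(v)
    return out

def ga_two_best(routes, link_cost, generations=30, seed=0):
    """Evolve the collected routes; return the two cheapest distinct ones."""
    random.seed(seed)
    pop = [list(r) for r in routes]
    seen = [list(r) for r in routes]
    for _ in range(generations):
        kids = [crossover(*random.sample(pop, 2)) for _ in range(len(pop))]
        for k in kids:
            if k not in seen:
                seen.append(k)
        pop = sorted(pop + kids,
                     key=lambda r: route_cost(r, link_cost))[:len(routes)]
    seen.sort(key=lambda r: route_cost(r, link_cost))
    return seen[:2]

# Hypothetical topology: two discovered routes from S to D.
link_cost = {('S', 'A'): 1.0, ('A', 'B'): 1.0, ('B', 'D'): 1.0,
             ('S', 'C'): 5.0, ('C', 'B'): 1.0}
routes = [['S', 'A', 'B', 'D'], ['S', 'C', 'B', 'D']]
best_two = ga_two_best(routes, link_cost)
```

The returned pair corresponds to the primary and backup routes the destination sends back to the source.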
Fuzzy Optimized Metric for Adaptive Network Routing (CSCJournals)
Network routing algorithms used today calculate least-cost (shortest) paths between nodes, where the cost of a path is the sum of the costs of all links on that path. A single metric is insufficient for adaptive routing because it cannot reflect the actual state of a link, and there is in general a limit on the accuracy of the link-state information obtained by the routing protocol. It therefore becomes useful to combine two or more metrics into a single metric that describes the link state more accurately. In this paper, a fuzzy inference rule base is implemented to generate the fuzzy cost of each candidate path for routing incoming calls. This fuzzy cost is based on the crisp values of the different metrics, for each of which a fuzzy membership function is defined. The parameters of these membership functions dynamically reflect the requirements of the incoming traffic as well as the current state of the links in the path. The paper investigates how three metrics (mean link bandwidth, queue utilization, and mean link delay) can be related by a simple fuzzy logic algorithm to produce an optimized link cost over a given interval that is more precise than any single metric, thereby addressing the routing problem.
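The flavor of such a rule base can be sketched as follows. The membership shapes, rule set, and consequent values here are assumptions for illustration (the paper does not publish its exact parameters): two rules ("link is good" via fuzzy AND, "link is bad" via fuzzy OR) are combined by a weighted-average defuzzification.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_link_cost(bw, queue_util, delay):
    """Combine three normalized metrics (each in 0..1) into one fuzzy cost.

    'good' fires when bandwidth is high AND utilization/delay are low (min);
    'bad' fires when any metric is poor (max). Consequents are singletons
    at cost 0.1 (good) and 0.9 (bad); all shapes/values are assumed.
    """
    good = min(tri(bw, 0.3, 1.0, 1.7),
               tri(1 - queue_util, 0.3, 1.0, 1.7),
               tri(1 - delay, 0.3, 1.0, 1.7))
    bad = max(tri(1 - bw, 0.3, 1.0, 1.7),
              tri(queue_util, 0.3, 1.0, 1.7),
              tri(delay, 0.3, 1.0, 1.7))
    if good + bad == 0:
        return 0.5
    return (0.1 * good + 0.9 * bad) / (good + bad)

good_cost = fuzzy_link_cost(1.0, 0.0, 0.0)   # ideal link
bad_cost = fuzzy_link_cost(0.0, 1.0, 1.0)    # congested, slow link
```

A routing algorithm would then pick the candidate path minimizing the sum of these fuzzy link costs.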
Ad hoc networks are mobile wireless networks in which each node acts as a router. Existing routing protocols such as Destination-Sequenced Distance Vector (DSDV), Optimized Link State Routing (OLSR), Ad hoc On-demand Distance Vector (AODV), and Dynamic Source Routing (DSR) are optimized versions of distance-vector or link-state routing protocols. Reinforcement learning is a recently developed method of learning from interaction with an environment. Q-learning, a form of reinforcement learning that learns from delayed reinforcement, has become popular in networking. Applied to routing, Q-learning replaces the routing tables of distance-vector algorithms with estimation tables of Q-values, which are based on link delay. In this paper, various optimization techniques over Q-routing are described in detail together with their algorithms.
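The classic Q-routing update rule (Boyan and Littman) that these optimization techniques build on can be sketched as follows; the table layout and the numeric example are illustrative assumptions.

```python
def q_routing_update(Q, x, y, d, q_time, s_time, eta=0.5):
    """One Q-routing update at node x after sending a packet for d via neighbour y.

    Q[x][d][y] estimates the total delivery time from x to destination d
    when forwarding through neighbour y.
    q_time : queueing time the packet spent at x
    s_time : transmission time over the link x -> y
    t      : y's own best estimate, min over y's neighbours z of Q[y][d][z]
    Update: Q <- Q + eta * (q_time + s_time + t - Q)
    """
    t = min(Q[y][d].values()) if Q[y][d] else 0.0
    old = Q[x][d][y]
    Q[x][d][y] = old + eta * (q_time + s_time + t - old)
    return Q[x][d][y]

# Hypothetical two-hop example: x thinks delivery via y costs 10 time units,
# while y's best estimate onward is 4.
Q = {'x': {'d': {'y': 10.0}}, 'y': {'d': {'z': 4.0}}}
new_q = q_routing_update(Q, 'x', 'y', 'd', q_time=1.0, s_time=1.0)
```

The surveyed variants (dual reinforcement, confidence-based, prioritized sweeping) all modify how and when this basic update is applied.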
EEDTCA: Energy Efficient, Reduced Delay and Minimum Distributed Topology Cont... (Editor IJCATR)
In processing packets across a mobile ad hoc network, topology control minimizes interference among nodes, increases network capacity, and extends network lifetime. Emerging research on mobile ad hoc networks (MANETs) points to a growing requirement for quality of service (QoS) in terms of delay. To address the delay problem, it is essential to consider topology control in a delay-constrained environment while remaining energy efficient. In this paper, we discuss a reduced-delay, minimum distributed topology control algorithm for mobile ad hoc networks. In the proposed system, we study the delay-constrained topology control problem, taking both delay and energy efficiency into account. Simulation results demonstrate the effectiveness of this new technique compared to other approaches to topology control.
A SURVEY TO REAL-TIME MESSAGE-ROUTING NETWORK SYSTEM WITH KLA MODELLING (ijseajournal)
ABSTRACT
Message routing over a network is one of the most fundamental concepts in communication, requiring simultaneous transmission of messages from sources to destinations. Real-time routing adds a timing constraint: messages should be received within a specified delay. This study involves scheduling, algorithm design, and graph theory, all essential parts of the computer science (CS) discipline. Our goal is to investigate an innovative and efficient way to present these concepts in the context of CS education. In this paper, we explore the fundamental modelling of routing real-time messages on networks and study whether an optimal on-line algorithm is possible for the arbitrary directed graph network topology. In addition, we examine the algorithmic complexity of message routing by breaking complex mathematical proofs down into concrete, visual examples. Next, we explore the unidirectional ring topology to find the transmission's "makespan". Lastly, we propose teaching the same network modelling through the technique of Kinesthetic Learning Activity (KLA). We analyse the collected data and present the results in a case study evaluating the effectiveness of the KLA approach compared to the traditional teaching method.
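The unidirectional-ring makespan question above can be made concrete with a small slot-by-slot simulation. The contention model (one message per link per slot, FIFO queues at nodes) is an assumed simplification for illustration, not the paper's exact model.

```python
from collections import deque

def ring_makespan(n, messages):
    """Slot-by-slot simulation of routing on a unidirectional n-node ring.

    messages : list of (source, destination) pairs.
    Every link forwards at most one message per time slot, so contending
    messages queue at nodes. Returns the makespan: the slot at which the
    last message is delivered.
    """
    queues = [deque() for _ in range(n)]
    for src, dst in messages:
        queues[src].append(dst)
    slot = 0
    while any(queues):
        slot += 1
        arrivals = [[] for _ in range(n)]
        for i in range(n):
            if queues[i]:
                dst = queues[i].popleft()      # one message per link per slot
                nxt = (i + 1) % n
                if nxt != dst:                 # not yet delivered: keep going
                    arrivals[nxt].append(dst)
        for i in range(n):                     # apply moves after the slot
            queues[i].extend(arrivals[i])
    return slot

m1 = ring_makespan(4, [(0, 2)])   # single message, two hops
```

With no contention the makespan is just the ring distance; contention at a shared source or link adds queueing slots, which is exactly what such a simulation makes visible to students.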
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDMOD... (ijgca)
The ever-increasing prominence of the cloud computing paradigm and the emerging concept of federated cloud computing have stimulated research efforts toward intelligent cloud service selection, aimed at developing techniques that let cloud users gain maximum benefit by selecting services that provide optimal performance at the lowest possible cost. Cloud computing is a novel paradigm for provisioning computing infrastructure that shifts the location of that infrastructure into the network in order to reduce the maintenance costs of hardware and software resources. Cloud computing systems essentially provide access to large pools of resources, and the resources they provide hide a great deal of the underlying services from the user through virtualization. In this paper, the cloud data center is modelled as a queueing system with single task arrivals and a task request buffer of infinite capacity.
In this paper, adaptive network routing based on prioritized sweeping, confidence-based dual reinforcement learning is investigated. Shortest-path routing is not always suitable for a wireless mobile network: under high traffic it still selects the path with the fewest hops between source and destination, thereby generating more congestion. In the prioritized sweeping reinforcement learning method, optimization is carried out over confidence-based dual reinforcement routing on a mobile ad hoc network, and paths are selected according to the actual traffic present on the network in real time, thus guaranteeing the least time for packets to reach the destination. The analysis is performed on a 50-node mobile ad hoc network with random mobility, using parameters such as interval and number of nodes to judge the network. Packet delivery ratio, dropping ratio, and delay show optimal results using the prioritized sweeping reinforcement learning method.
This work constructs the membership functions of the system characteristics of a batch-arrival queueing system with multiple servers, in which the batch-arrival rate and the customer service rate are fuzzy numbers. The α-cut approach is used to transform the fuzzy queue into a family of conventional crisp queues. By means of the membership functions of the system characteristics, a set of parametric nonlinear programs is developed to describe the family of crisp batch-arrival queues with multiple servers. A numerical example is solved successfully to illustrate the validity of the proposed approach. Because the system characteristics are expressed and governed by membership functions, fuzzy batch-arrival queues with multiple servers are represented more accurately, and the analytic results are more useful for system designers and practitioners.
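The α-cut idea can be sketched with a deliberately simplified crisp queue: below, a plain M/M/c queue (Erlang C) stands in for the paper's batch-arrival system, and the fuzzy arrival rate is a triangular fuzzy number. At each α level, the fuzzy rate becomes an interval, and evaluating the crisp queue at the interval's endpoints bounds the mean wait. All numbers are illustrative assumptions.

```python
from math import factorial

def erlang_c_wait(lam, mu, c):
    """Mean waiting time in a crisp M/M/c queue (Erlang C formula)."""
    a = lam / mu
    rho = a / c
    assert rho < 1, "queue must be stable"
    s = sum(a ** k / factorial(k) for k in range(c))
    tail = a ** c / (factorial(c) * (1 - rho))
    p_wait = tail / (s + tail)
    return p_wait / (c * mu - lam)

def alpha_cut(tri_num, alpha):
    """alpha-cut [left, right] of a triangular fuzzy number (a, b, c)."""
    a, b, c = tri_num
    return (a + (b - a) * alpha, c - (c - b) * alpha)

def fuzzy_wait_interval(lam_tri, mu, c, alpha):
    """Interval of mean waits when the arrival rate is fuzzy-triangular."""
    lo, hi = alpha_cut(lam_tri, alpha)
    return (erlang_c_wait(lo, mu, c), erlang_c_wait(hi, mu, c))

# Arrival rate "about 3" as the triangular number (2, 3, 4); 2 servers.
w_lo, w_hi = fuzzy_wait_interval((2.0, 3.0, 4.0), mu=3.0, c=2, alpha=0.5)
```

Sweeping α from 0 to 1 traces out the membership function of the mean wait, which is exactly what the paper's parametric nonlinear programs compute for the richer batch-arrival model.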
Enforcing end-to-end proportional fairness with bounded buffer overflow proba... (ijwmn)
In this paper, we present a distributed flow-based access scheme for slotted-time protocols that provides proportional fairness in ad-hoc wireless networks under constraints on the buffer overflow probabilities at each node. The proposed scheme requires local information exchange at the link layer and end-to-end information exchange at the transport layer, and is cast as a nonlinear program. A medium access control protocol is said to be proportionally fair with respect to individual end-to-end flows in a network if the product of the end-to-end flow rates is maximized. A key contribution of this work lies in the construction of a distributed dual approach that comes with low computational overhead. We discuss the convergence properties of the proposed scheme and present simulation results to support our conclusions.
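The distributed dual idea can be sketched in a few lines. Maximizing the product of flow rates is equivalent to maximizing the sum of their logarithms; in the dual, each link maintains a price adjusted by its excess demand, and each flow sets its rate from the total price along its path. The step size, iteration count, and topology below are illustrative assumptions (and the buffer-overflow constraints of the paper are omitted).

```python
def proportional_fair_rates(paths, capacity, iters=2000, step=0.01):
    """Price-based dual algorithm for proportional fairness.

    paths[f]    : set of links used by flow f
    capacity[l] : capacity of link l
    Each flow sets x_f = 1 / (sum of prices on its path); each link nudges
    its price by its excess demand. At convergence, sum_f log x_f is maximal
    subject to the capacity constraints.
    """
    price = {l: 1.0 for l in capacity}
    for _ in range(iters):
        x = {f: 1.0 / sum(price[l] for l in links)
             for f, links in paths.items()}
        for l in capacity:
            load = sum(x[f] for f, links in paths.items() if l in links)
            # subgradient step with a small floor to avoid division by zero
            price[l] = max(1e-9, price[l] + step * (load - capacity[l]))
    return x

# Hypothetical network: two flows sharing one unit-capacity link.
rates = proportional_fair_rates({'f1': {'l'}, 'f2': {'l'}},
                                {'l': 1.0})
```

For two identical flows on one link, the proportionally fair allocation splits the capacity evenly, which the iteration recovers.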
Transmission Time and Throughput analysis of EEE LEACH, LEACH and Direct Tran... (acijjournal)
This paper gives a brief description of the EEE LEACH, LEACH, and Direct Transmission (DTx) routing protocols in wireless sensor networks (WSNs), together with a comparative study based on several performance metrics. In addition, their transmission time and throughput are calculated in a MATLAB environment. Finally, on the basis of the simulation results, the three protocols are compared. The comparison shows that the EEE LEACH routing protocol has a longer transmission time than the LEACH and DTx protocols, with smaller throughput.
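The two quantities being compared reduce to simple expressions. The sketch below states them in Python (rather than the paper's MATLAB); the packet size, data rate, and hop counts are illustrative assumptions. The multi-hop cluster chain of EEE LEACH shows up directly as a larger hop multiplier.

```python
def transmission_time(packet_bits, rate_bps, hops=1):
    """Per-packet transmission time over a given number of hops
    (propagation and processing delays neglected)."""
    return hops * packet_bits / rate_bps

def throughput(packets, packet_bits, total_time):
    """Delivered bits per second over an observation window."""
    return packets * packet_bits / total_time

# Assumed values: 4000-bit packets at 250 kbps; EEE LEACH path of 3 hops
# (node -> cluster head -> master cluster head -> sink) vs. 1-hop DTx.
t_eee = transmission_time(4000, 250e3, hops=3)
t_dtx = transmission_time(4000, 250e3, hops=1)
th = throughput(100, 4000, 2.0)
```

With the same packet size and rate, more relay hops directly lengthen transmission time, matching the qualitative conclusion above.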
MODELLING TRAFFIC IN IMS NETWORK NODES (ijdpsjournal)
IMS is well integrated with existing voice and data networks while adopting many of their key characteristics. The Call Session Control Function (CSCF) servers are the key part of the IMS structure; they are the main components responsible for processing and routing signalling messages. When the CSCF servers (P-CSCF, I-CSCF, S-CSCF) run on the same host, SIP messages can be passed internally between the SIP servers using a single operating-system mechanism such as a queue, which increases the reliability of the network [5], [6]. In previous work [23] we proposed, for each type of service between the I-CSCF and S-CSCF (call, data, multimedia), to use fewer than two well-dimensioned servers running on the same operating system. Instead of dimensioning servers, in order to increase performance we model the traffic at the IMS nodes, particularly at the entry nodes; this provides results on the separation of incoming flows and thus a more satisfactory service.
Supporting efficient and scalable multicasting (ingenioustech)
DETERMINING THE NETWORK THROUGHPUT AND FLOW RATE USING GSR AND AAL2R (ijujournal)
In multi-radio wireless mesh networks, a node can transmit packets over multiple channels to different destination nodes simultaneously. This feature yields high network throughput and increases the opportunity for multipath routing, because the availability of multiple channels for transmission reduces the probability of the classic interference problem, whether inter-flow or intra-flow. To avoid interference and maintain or improve network performance, a WMN needs to consider packet aggregation and packet forwarding. Packet aggregation collects several packets ready for transmission and sends them to the intended recipient through the channel, while packet forwarding performs hop-by-hop routing. In both cases, choosing the correct path among the available multiple paths is the most important task of a routing algorithm, so the most challenging factor is to determine a forwarding strategy that provides a transmission schedule for each node within the channel. In this work we implement two forwarding strategies for multipath, multi-radio WMNs as approximate solutions to this problem: Global State Routing (GSR), which considers packet forwarding, and Aggregation Aware Layer 2 Routing (AAL2R), which considers both packet forwarding and packet aggregation. After successful implementation, the network performance is measured by means of a simulation study.
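The aggregation step that AAL2R adds can be sketched as a greedy bin-packer: queued packets are packed into aggregate frames no larger than the MTU, each frame paying one header. The MTU and header sizes are illustrative assumptions, not values from the paper.

```python
def aggregate(packets, mtu, header=14):
    """Greedy packet aggregation.

    packets : iterable of payload sizes in bytes, in queue order
    mtu     : maximum aggregate frame size in bytes
    header  : per-frame header overhead in bytes (assumed value)
    Returns a list of frames, each a list of the payload sizes it carries.
    A payload larger than mtu - header still gets a frame of its own.
    """
    frames, frame, current = [], [], header
    for size in packets:
        if current + size > mtu and frame:
            frames.append(frame)          # flush the full frame
            frame, current = [], header
        frame.append(size)
        current += size
    if frame:
        frames.append(frame)
    return frames

# Three 500-byte packets into 1200-byte frames: two fit, the third spills.
frames = aggregate([500, 500, 500], mtu=1200)
```

Fewer, larger frames mean fewer channel acquisitions per delivered byte, which is the throughput benefit the comparison in the paper measures.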
ESTIMATION OF MEDIUM ACCESS CONTROL LAYER PACKET DELAY DISTRIBUTION FOR IEEE ... (ijwmn)
The most important standard in wireless local area networks is IEEE 802.11, which is why much of the research on enhancing wireless networks is based on the behaviour of the IEEE 802.11 protocol. However, some aspects of the IEEE 802.11 medium access control layer remain too unreliable to guarantee quality of service; for instance, MAC-layer packet delay, jitter, and packet loss rate are still a challenge. The main objective of this research is to propose an accurate estimation of the medium access control layer packet delay distribution for IEEE 802.11. The estimation distinguishes between the busy probability and the collision probability, and these differences are exploited to achieve greater accuracy. Finally, the proposed model and simulation are implemented and validated, using MATLAB for the simulation and Maple for the calculation of the equations.
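To see how the busy and collision probabilities enter separately, consider this rough Bianchi-style sketch of the mean MAC access delay (not the paper's model, and a mean rather than a distribution): the busy probability stretches each backoff slot, while the collision probability weights the retry stages with their doubled contention windows. All timing constants are assumed, illustrative values.

```python
def mac_delay_mean(p_busy, p_coll, cw_min=16, m=6, retries=7,
                   slot=9e-6, t_busy=1.2e-4, t_tx=1.1e-4):
    """Approximate mean 802.11 MAC access delay (rough sketch).

    A backoff slot lasts `slot` when idle (prob 1 - p_busy) and `t_busy`
    otherwise. Attempt i happens with probability p_coll**i; its mean
    backoff is (CW_i - 1)/2 slots, with CW doubling up to stage m.
    """
    e_slot = (1 - p_busy) * slot + p_busy * t_busy
    total = 0.0
    for i in range(retries + 1):
        cw = min(cw_min * 2 ** i, cw_min * 2 ** m)
        total += p_coll ** i * ((cw - 1) / 2.0 * e_slot + t_tx)
    return total

d0 = mac_delay_mean(p_busy=0.0, p_coll=0.0)   # uncontended baseline
```

Treating the two probabilities separately, as the paper does, matters because a busy medium lengthens slots without doubling the window, whereas a collision does both.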
Design and implementation of low latency weighted round robin (ll wrr) schedu... (ijwmn)
Today’s wireless broadband networks are required to provide QoS guarantees as well as fairness to different kinds of traffic. Recent wireless standards (such as LTE and WiMAX) have special provisions at the MAC layer for differentiating and scheduling data traffic to achieve QoS. The main focus of this paper is high-speed packet queueing and scheduling at a central node, such as a base station (BS) or router, to handle network traffic. This paper proposes a novel packet queueing scheme termed Low Latency Weighted Round Robin (LL-WRR), a simple and effective amendment to weighted round robin (WRR) that achieves low latency and improved fairness. The proposed LL-WRR scheduling scheme is implemented in NS-2 on an IEEE 802.16 network [1] with real-time video and constant bit rate (CBR) audio traffic connections. Simulation results show the improvement in latency and fairness obtained with LL-WRR. The proposed scheme introduces the extra complexity of computing a coefficient, but its overall impact is very small.
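The baseline that LL-WRR amends is plain weighted round robin, which can be sketched as follows. The abstract does not give the LL-WRR coefficient, so this shows vanilla WRR only; queue contents and weights are illustrative.

```python
from collections import deque

def wrr_schedule(queues, weights, rounds):
    """Plain weighted round robin.

    queues  : list of per-class packet lists (served FIFO)
    weights : queue i may send up to weights[i] packets per round
    Returns the service order as a list of queue indices.
    (LL-WRR modifies the per-round quanta via a computed coefficient to
    cut latency for real-time classes; that coefficient isn't given here.)
    """
    qs = [deque(q) for q in queues]
    order = []
    for _ in range(rounds):
        for i, w in enumerate(weights):
            for _ in range(w):
                if qs[i]:
                    qs[i].popleft()
                    order.append(i)
    return order

# Two classes: video (weight 2) and CBR audio (weight 1), 4 packets each.
order = wrr_schedule([['v'] * 4, ['a'] * 4], weights=[2, 1], rounds=3)
```

In plain WRR, a low-weight queue can wait a whole round between opportunities; shrinking that gap for latency-sensitive classes is what the LL amendment targets.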
Routing in « Delay Tolerant Networks » (DTN) Improved Routing With Prophet an... (CSCJournals)
In this paper, we address the problem of routing in delay tolerant networks (DTNs). In such networks there is no guarantee of finding a complete communication path connecting the source and the destination at any given time, especially when the destination is not in the same region as the source, which makes traditional routing protocols inefficient for transmitting messages between nodes. We propose to combine the PRoPHET routing protocol with the custody transfer model ("transfer by delegation") to improve routing in DTNs and to exploit nodes as common carriers of messages between network partitions. To implement this approach and assess those improvements, we developed a DTN simulator; simulation examples are illustrated in the article.
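PRoPHET's core is a per-pair "delivery predictability" maintained by three published update rules, sketched below with the default constants suggested in the PRoPHET specification (P_init = 0.75, β = 0.25, γ = 0.98); the dictionary layout is an implementation choice of this sketch.

```python
P_INIT, BETA, GAMMA = 0.75, 0.25, 0.98   # defaults from the PRoPHET spec

def on_encounter(P, a, b):
    """Direct update when nodes a and b meet:
    P(a,b) <- P_old + (1 - P_old) * P_init."""
    old = P.get((a, b), 0.0)
    P[(a, b)] = old + (1 - old) * P_INIT

def age(P, pair, k):
    """Aging after k time units without contact: P <- P_old * gamma^k."""
    P[pair] = P.get(pair, 0.0) * GAMMA ** k

def transitive(P, a, b, c):
    """Transitivity: a can reach c through b:
    P(a,c) <- max(P_old, P_old + (1 - P_old) * P(a,b) * P(b,c) * beta)."""
    old = P.get((a, c), 0.0)
    P[(a, c)] = max(old, old + (1 - old)
                    * P.get((a, b), 0.0) * P.get((b, c), 0.0) * BETA)

P = {}
on_encounter(P, 'a', 'b')   # first meeting sets P(a,b) to 0.75
```

A message is forwarded (or, with custody transfer, delegated) to an encountered node only if that node's predictability for the destination exceeds the carrier's own.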
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
Towards Seamless TCP Congestion Avoidance in Multiprotocol Environments (IDES Editor)
In this paper we explore the area of congestion avoidance in computer networks. We provide a brief overview of the current state of the art in congestion avoidance and describe our extension to the TCP congestion avoidance mechanism. This extension was previously published at an international forum; here we describe an improved version that adds multiprotocol support, along with preliminary results obtained in a simulation environment. The newly introduced approach, called the Advanced Congestion Notification System (ACNS), allows TCP flows to be prioritized based on flow age and on the priority carried in the header of the network-layer protocol. The aim of this approach is to provide more bandwidth to young and highly prioritized TCP flows by penalizing old, greedy flows with a low priority. Using ACNS, a substantial increase in network performance can be achieved.
Fuzzy Controller Based Stable Routes with Lifetime Prediction in MANETs (CSCJournals)
In ad hoc networks, nodes are located dynamically and arbitrarily, so the interconnections between nodes change frequently, and designing an effective routing protocol is a critical issue. In this paper, we propose a fuzzy based routing method that selects the most stable route (FSRS), considering the number of intermediate nodes, packet queue occupancy, and inter-node distances. The cost produced for the selected route is then fed to another fuzzy controller that predicts its lifetime (FRLP). The proposed method is evaluated with the OMNeT++ 4.0 simulator in terms of packet delivery ratio, average end-to-end delay, and normalized routing load.
Congestion Control through Load Balancing Technique for Mobile Networks: A Cl... (IDES Editor)
An Optimal Routing Path (ORP) scheme for mobile cellular networks is proposed in this paper, based on a cluster-based approach. An improved dynamic selection procedure is used to elect the cluster head, which alone is responsible for computing the least congested path. The delay is thereby reduced, with a significant reduction in the number of backtrackings.
Dynamic bandwidth allocation scheme in LR PON with performance modelling and ... (IJCNCJournal)
We consider models of telecommunication systems that incorporate probability, dense real time, and data, and we present a new formal abstraction method for computing minimum and maximum reachability probabilities for such models. Our approach uses strictly local formal abstraction steps to reduce both the size of the generated abstract specifications and the complexity of the operations needed, in comparison to previous approaches of this kind. The techniques are implemented and evaluated on a selection of large case studies, including some infinite-state probabilistic real-time models, demonstrating improvements over existing tools in several cases. Long-Reach Passive Optical Networks (LR-PONs) extend the reach and split ratio of conventional PONs, increasing the capacity of metro and access networks; efficient LR-PON solutions appear at feeder distances around 100 km and high split ratios of up to 1000-way. Among existing approaches, one of the most effective options for improving network performance in LR-PONs is the multi-thread dynamic bandwidth allocation (DBA) scheme, in which several bandwidth allocation processes run in parallel. Without proper intercommunication between the overlapped threads, however, multi-thread DBA may lose efficiency and even perform worse than a conventional single-thread algorithm. Real-time probabilistic systems are used to evaluate the performance of a typical PON system; this approach is more convenient, flexible, and lower cost than the earlier simulation method, since it does not require the development of special hardware and software tools. Moreover, by supplying ranges for parameter values, one can easily analyse how performance depends on changes in particular modes. The proposed algorithm is compared with traditional DBA and shows an advantage in average packet delay. The key parameters of the algorithm, such as the initiation and tuning of multiple threads, inter-thread scheduling, and fairness among users, are analysed and optimized. The numerical results show that the algorithm decreases the average packet delay and improves network throughput under varying offered loads.
A JOINT TIMING OFFSET AND CHANNEL ESTIMATION USING FRACTIONAL FOURIER TRANSFO...IJCNCJournal
This paper deals with symbol timing offset and channel estimation in OFDM (orthogonal frequency division multiplexing) systems over fast-varying channels. Symbol timing offset (STO) estimation is a major task in OFDM, and most existing methods estimate the STO using a cyclic prefix or training sequences. In this paper, we consider a new system for STO estimation that uses constant amplitude zero autocorrelation (CAZAC) sequences as pilot sequences in conjunction with the fractional Fourier transform (FRFT). After STO estimation, timing compensation is applied, and the channel is then estimated to recover the original transmitted signal. The method gives good results in terms of MSE compared with other known techniques; it estimates the channel well, which is important for fast-varying channels. MATLAB Monte-Carlo simulations are used to evaluate the performance of the proposed estimator.
Visualize network anomaly detection by using k means clustering algorithmIJCNCJournal
With the ever-increasing number of new attacks in today’s world, the amount of data will keep growing, and because of the base-rate fallacy the number of false alarms will also increase. Another problem is that attacks are usually not detected until after they have taken place, which makes defending against them hard and can easily lead to the disclosure of sensitive information. In this paper we apply the K-means algorithm to the KDD Cup 1999 network data set to evaluate the performance of an unsupervised learning method for anomaly detection. The evaluation showed that a high detection rate can be achieved while maintaining a low false alarm rate. This paper presents the results of k-means clustering obtained with the Cluster 3.0 tool and visualizes them using the TreeView visualization tool.
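The clustering-based detection described above can be sketched in a few lines. The snippet below is a generic illustration, not the paper's Cluster 3.0/TreeView pipeline: it uses synthetic 2-D points rather than KDD Cup 1999 features, a naive first-k centroid initialization, and a made-up rule that flags a point as anomalous when it lies much farther from its centroid than its cluster's average member.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=25):
    """Plain k-means: returns centroids and a cluster label per point."""
    centroids = list(points[:k])  # naive init; k-means++ would be better
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: dist(p, centroids[c]))
        # update step: mean of each cluster's members
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return centroids, labels

def anomalies(points, centroids, labels, factor=3.0):
    """Flag indices whose distance to their centroid exceeds
    factor times the mean distance within their cluster."""
    d = [dist(p, centroids[l]) for p, l in zip(points, labels)]
    flagged = []
    for c in set(labels):
        ds = [d[i] for i, l in enumerate(labels) if l == c]
        mean = sum(ds) / len(ds)
        flagged += [i for i, l in enumerate(labels) if l == c and d[i] > factor * mean]
    return flagged

# Two tight synthetic clusters plus one far-away outlier.
normal = [(x * 0.5, y * 0.5) for x in range(4) for y in range(4)] \
       + [(10 + x * 0.5, 10 + y * 0.5) for x in range(4) for y in range(4)]
points = normal + [(50.0, 50.0)]
centroids, labels = kmeans(points, k=2)
print(anomalies(points, centroids, labels))  # -> [32] (the outlier's index)
```

The `factor` threshold is the illustrative knob here: in a real deployment it trades detection rate against the false-alarm rate the abstract discusses.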
Clustering problems are considered among the prominent challenges in statistics and computational science. Clustering of nodes in wireless sensor networks, used to prolong network lifetime, is one of the difficult instances of the clustering procedure. To cluster the nodes, a number of nodes are designated as cluster heads and the others join one of these heads based on criteria such as Euclidean distance. Different approaches have been proposed for this process, with swarm and evolutionary algorithms contributing in this regard. In this study, a novel algorithm based on the Artificial Fish Swarm Algorithm (AFSA) is proposed for the clustering procedure. In the proposed method, the performance of standard AFSA is improved by better balancing local and global searches. Furthermore, a new mechanism is added to the base algorithm to improve convergence speed on clustering problems. The performance of the proposed technique is compared with a number of state-of-the-art techniques in this field, and the outcomes indicate its superiority.
COMPARATIVE ANALYSIS OF DIFFERENT ENCRYPTION TECHNIQUES IN MOBILE AD HOC NETW...IJCNCJournal
In this paper, a detailed analysis of the Data Encryption Standard (DES), Triple DES (3DES), and Advanced Encryption Standard (AES) symmetric encryption algorithms in MANETs was carried out using Network Simulator 2 (NS-2), in terms of energy consumption, data transfer time, end-to-end delay, and throughput with varying data sizes. Two simulation models were adopted: the first simulates network performance assuming the availability of a common key, and the second includes the use of the Diffie-Hellman Key Exchange (DHKE) protocol in the key management phase. The simulation results showed the superiority of AES over DES by 65%, 70%, and 83% in terms of energy consumption, data transfer time, and network throughput, respectively. The results also showed that AES is better than 3DES by approximately 90% across all performance metrics. Based on these results, AES is the recommended encryption scheme.
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDMOD...ijgca
The ever-increasing prominence of the cloud computing paradigm and the budding concept of federated cloud computing have stimulated research efforts towards intelligent cloud service selection, aimed at developing techniques that enable cloud users to gain maximum benefit from cloud computing by selecting services that provide optimal performance at the lowest possible cost. Cloud computing is a novel paradigm for the provision of computing infrastructure, which aims to shift the location of the computing infrastructure to the network in order to reduce the maintenance costs of hardware and software resources. Cloud computing systems provide access to large pools of resources and hide a great deal of the underlying services from the user through virtualization. In this paper, the cloud data center is modelled as a queueing system with single task arrivals and a task request buffer of infinite capacity.
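For a data center modelled as a queue with Poisson single-task arrivals, an infinite buffer, and generally distributed service times, the mean-value measures follow from the Pollaczek-Khinchine formula. The sketch below is a generic M/G/1 calculator, not the paper's [(M/G/1):(∞/GD)] model itself, and the parameter values in the example are purely illustrative.

```python
def mg1_metrics(lam, mean_s, scv):
    """Pollaczek-Khinchine mean values for an M/G/1 queue.

    lam    -- Poisson arrival rate (tasks per second)
    mean_s -- mean service time E[S] (seconds)
    scv    -- squared coefficient of variation Var[S] / E[S]^2
              (0 gives M/D/1, 1 gives M/M/1)
    """
    rho = lam * mean_s                      # server utilization
    assert rho < 1, "queue is unstable"
    es2 = (scv + 1.0) * mean_s ** 2         # E[S^2]
    wq = lam * es2 / (2.0 * (1.0 - rho))    # mean waiting time in queue
    return {"rho": rho,
            "Wq": wq,                        # mean wait before service
            "W": wq + mean_s,                # mean response time
            "Lq": lam * wq,                  # mean queue length (Little's law)
            "L": lam * (wq + mean_s)}        # mean tasks in system

# Example: tasks arrive at 40/s, mean service 20 ms, exponential service.
m = mg1_metrics(lam=40.0, mean_s=0.02, scv=1.0)
print(round(m["rho"], 2), round(m["W"], 3))  # -> 0.8 0.1
```

For exponential service the result matches the familiar M/M/1 response time 1/(μ − λ) = 1/(50 − 40) = 0.1 s, which is a quick sanity check on the formula.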
In this paper, prioritized-sweeping, confidence-based dual reinforcement learning for adaptive network routing is investigated. Shortest-path routing is not always suitable for a wireless mobile network: under high traffic it always selects the path with the fewest hops between source and destination, thus generating more congestion. In the prioritized sweeping reinforcement learning method, optimization is carried out over confidence-based dual reinforcement routing on a mobile ad hoc network, and the path is selected based on the actual traffic present on the network in real time, thus guaranteeing the least delivery time for packets to reach the destination. The analysis is done on a 50-node mobile ad hoc network with random mobility. Performance parameters such as interval and number of nodes are used for judging the network. Packet delivery ratio, dropping ratio, and delay show optimal results using the prioritized sweeping reinforcement learning method.
This work constructs the membership functions of the system characteristics of a batch-arrival queueing system with multiple servers, in which the batch-arrival rate and customer service rate are fuzzy numbers. The α-cut approach is used to transform a fuzzy queue into a family of conventional crisp queues in this context. By means of the membership functions of the system characteristics, a set of parametric nonlinear programs is developed to describe the family of crisp batch-arrival queues with multiple servers. A numerical example is solved successfully to illustrate the validity of the proposed approach. Because the system characteristics are expressed and governed by membership functions, fuzzy batch-arrival queues with multiple servers are represented more accurately and the analytic results are more useful for system designers and practitioners.
Enforcing end to-end proportional fairness with bounded buffer overflow proba...ijwmn
In this paper, we present a distributed flow-based access scheme for slotted-time protocols that provides proportional fairness in ad-hoc wireless networks under constraints on the buffer overflow probabilities at each node. The proposed scheme requires local information exchange at the link layer and end-to-end information exchange at the transport layer, and is cast as a nonlinear program. A medium access control protocol is said to be proportionally fair with respect to individual end-to-end flows in a network if the product of the end-to-end flow rates is maximized. A key contribution of this work lies in the construction of a distributed dual approach that comes with low computational overhead. We discuss the convergence properties of the proposed scheme and present simulation results to support our conclusions.
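Since maximizing the product of end-to-end rates is equivalent to maximizing the sum of their logarithms, a proportionally fair allocation can be computed with a link-price dual iteration in the spirit of the distributed dual approach the abstract mentions. The sketch below is a generic Kelly-style illustration under assumed routes, capacities, and step size; it does not include the paper's buffer-overflow constraints or its slotted-time MAC details.

```python
def proportional_fair_rates(routes, capacity, iters=20000, step=0.005):
    """Kelly-style dual algorithm: links keep prices, flows react.

    routes   -- per-flow set of link ids the flow traverses
    capacity -- dict mapping link id -> capacity
    Each flow picks x_i = 1 / (sum of prices on its route), the maximizer
    of log(x_i) - x_i * route_price; each link nudges its price up when
    overloaded and down when underused.
    """
    prices = {l: 1.0 for l in capacity}
    x = []
    for _ in range(iters):
        x = [1.0 / sum(prices[l] for l in r) for r in routes]
        for l in capacity:
            load = sum(x[i] for i, r in enumerate(routes) if l in r)
            prices[l] = max(1e-6, prices[l] + step * (load - capacity[l]))
    return x

# Flow 0 crosses links A and B; flow 1 uses only the shared link B.
rates = proportional_fair_rates([{"A", "B"}, {"B"}], {"A": 1.0, "B": 1.0})
print([round(r, 2) for r in rates])  # -> [0.5, 0.5]
```

The equal split on the shared link is exactly the allocation that maximizes the rate product here: any other feasible split x1 + x2 = 1 has x1·x2 < 0.25.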
Transmission Time and Throughput analysis of EEE LEACH, LEACH and Direct Tran...acijjournal
This paper gives a brief description of routing protocols such as EEE LEACH, LEACH, and the Direct Transmission protocol (DTx) in Wireless Sensor Networks (WSNs), together with a comparative study of these protocols based on several performance metrics. In addition, their transmission time and throughput are calculated in the MATLAB environment. Finally, on the basis of the simulation results, the three protocols are compared. The comparison shows that the EEE LEACH routing protocol has a greater transmission time than the LEACH and DTx protocols, with a smaller throughput.
MODELLING TRAFFIC IN IMS NETWORK NODESijdpsjournal
IMS is well integrated with existing voice and data networks, while adopting many of their key characteristics. The Call Session Control Function (CSCF) servers are the key part of the IMS structure; they are the main components responsible for processing and routing signalling messages. When the CSCF servers (P-CSCF, I-CSCF, S-CSCF) run on the same host, SIP messages can be passed internally between SIP servers using a single operating-system mechanism such as a queue, which increases the reliability of the network [5], [6]. In previous work [23], we proposed, for each type of service between I-CSCF and S-CSCF (call, data, multimedia), using fewer than two well-dimensioned servers running on the same operating system. Instead of dimensioning servers, in order to increase performance we try to model the traffic on IMS nodes, particularly on entry nodes; this will provide results on the separation of incoming flows and thus offer a more satisfactory service.
Supporting efficient and scalable multicastingingenioustech
DETERMINING THE NETWORK THROUGHPUT AND FLOW RATE USING GSR AND AAL2Rijujournal
In multi-radio wireless mesh networks, a node can transmit packets over multiple channels to different destination nodes simultaneously. This feature gives the network high throughput and increases the opportunities for multi-path routing, because the availability of multiple channels for transmission decreases the probability of interference, whether inter-flow or intra-flow. To avoid interference and to maintain or increase network performance, a WMN needs to consider packet aggregation and packet forwarding. Packet aggregation is the process of collecting several packets ready for transmission and sending them to the intended recipient through the channel, while packet forwarding handles hop-by-hop routing. In both cases, choosing the correct path among the available multiple paths is the most important factor for a routing algorithm; the challenge is to determine a forwarding strategy that provides a transmission schedule for each node within the channel. In this work we implement two forwarding strategies for multi-path, multi-radio WMNs as approximate solutions to this problem: Global State Routing (GSR), which considers the packet forwarding concept, and Aggregation Aware Layer 2 Routing (AAL2R), which considers both packet forwarding and packet aggregation. After implementation, the network performance is measured by means of a simulation study.
ESTIMATION OF MEDIUM ACCESS CONTROL LAYER PACKET DELAY DISTRIBUTION FOR IEEE ...ijwmn
The most important standard in wireless local area networks is IEEE 802.11, which is why much of the research on enhancing wireless networks is based on the behaviour of the IEEE 802.11 protocol. However, some aspects of the IEEE 802.11 medium access control layer are still too unreliable to guarantee quality of service; for instance, medium access control layer packet delay, jitter, and packet loss rate remain a challenge. The main objective of this research is to propose an accurate estimation of the medium access control layer packet delay distribution for IEEE 802.11. The estimation considers the differences between the busy probability and the collision probability, and these differences are exploited to achieve a more accurate estimation. Finally, the proposed model is implemented and validated using MATLAB for simulation and Maple for the calculation of the equations.
Design and implementation of low latency weighted round robin (ll wrr) schedu...ijwmn
Today’s wireless broadband networks are required to provide QoS guarantees as well as fairness to different kinds of traffic. Recent wireless standards (such as LTE and WiMAX) have special provisions at the MAC layer for differentiating and scheduling data traffic to achieve QoS. The main focus of this paper is high-speed packet queueing and scheduling at a central node, such as a base station (BS) or router, to handle network traffic. This paper proposes a novel packet queueing scheme termed Low Latency Weighted Round Robin (LL-WRR), a simple and effective amendment to weighted round robin (WRR) for achieving low latency and improved fairness. The proposed LL-WRR queue scheduling scheme is implemented in NS-2 for an IEEE 802.16 network [1] with real-time video and constant bit rate (CBR) audio traffic connections. Simulation results show the improvement in latency and fairness obtained with LL-WRR. The proposed scheme introduces the extra complexity of computing a coefficient, but its overall impact is very small.
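Plain WRR, the baseline the scheme above amends, can be sketched in a few lines: each backlogged queue may send up to its weight of packets per round. This generic illustration (made-up queue contents and weights) does not reproduce the paper's LL-WRR coefficient or its NS-2 setup.

```python
from collections import deque

def weighted_round_robin(queues, weights, budget):
    """Serve packets from several queues; queue i may send up to
    weights[i] packets per round. Returns the transmission order."""
    order = []
    while any(queues) and len(order) < budget:
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q and len(order) < budget:
                    order.append(q.popleft())
    return order

# Video queue gets weight 3, audio queue weight 1.
video = deque(f"v{i}" for i in range(6))
audio = deque(f"a{i}" for i in range(6))
print(weighted_round_robin([video, audio], [3, 1], budget=8))
# -> ['v0', 'v1', 'v2', 'a0', 'v3', 'v4', 'v5', 'a1']
```

The latency problem LL-WRR targets is visible even here: the weight-1 audio queue waits a full round between its packets, so low-weight traffic accumulates delay as weights or packet sizes grow.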
Routing in « Delay Tolerant Networks » (DTN) Improved Routing With Prophet an...CSCJournals
In this paper, we address the problem of routing in delay-tolerant networks (DTNs). In such networks there is no guarantee of finding a complete communication path connecting the source and the destination at any time, especially when the destination is not in the same region as the source, which makes traditional routing protocols inefficient for transmitting messages between nodes. We propose to combine the Prophet routing protocol with the custody-transfer model ("transfer by delegation") to improve routing in DTNs and to exploit nodes as common carriers of messages between network partitions. To implement this approach and assess the improvements, we developed a DTN simulator. Simulation examples are illustrated in the article.
Towards Seamless TCP Congestion Avoidance in Multiprotocol EnvironmentsIDES Editor
In this paper we explore the area of congestion avoidance in computer networks. We provide a brief overview of the current state of the art in congestion avoidance and describe our extension to the TCP congestion avoidance mechanism. This extension was previously published at an international forum, and in this paper we describe an improved version that adds multiprotocol support. We present preliminary results obtained in a simulation environment. The newly introduced approach, called the Advanced Notification Congestion System (ACNS), allows TCP flow prioritization based on TCP flow age and the priority carried in the header of the network-layer protocol. The aim of this approach is to provide more bandwidth to young and highly prioritized TCP flows by penalizing old, greedy flows with low priority. Using ACNS, a substantial increase in network performance can be achieved.
Fuzzy Controller Based Stable Routes with Lifetime Prediction in MANETsCSCJournals
In ad hoc networks, nodes are located dynamically and arbitrarily, so the interconnections between nodes change frequently, making the design of an effective routing protocol a critical issue. In this paper, we propose a fuzzy-based routing method that selects the most stable route (FSRS) considering the number of intermediate nodes, packet queue occupancy, and inter-node distances. The cost of the selected route is also fed into another fuzzy controller that predicts its lifetime (FRLP). The evaluation of the proposed method is performed using the OMNeT++ 4.0 simulator in terms of packet delivery ratio, average end-to-end delay, and normalized routing load.
Congestion Control through Load Balancing Technique for Mobile Networks: A Cl...IDES Editor
PERFORMANCE OF TCP CONGESTION CONTROL IN UAV NETWORKS OF VARIOUS RADIO PROPAG...IJCNCJournal
Unmanned aerial vehicles (UAVs) have recently become popular for both recreational and commercial use, and UAV networks have thus started to attract the attention of researchers in the area of computer communication and networking. One important topic in UAV networks is congestion control, because congestion causes packet losses and delays that waste all types of network resources, such as bandwidth and power. Although there are studies on the performance of TCP congestion control in wireless networks, they generally focus on two-dimensional terrestrial networks. In this paper we study the performance of TCP congestion control in three-dimensional UAV networks. In particular, we investigate how TCP congestion control performs in such networks under various radio propagation models. Our data on average flow throughput, packet delay, and packet loss rate in UAV networks show that TCP congestion control improves network performance in general, but faces challenges when link losses become severe. Our study thus shows that investigation of new congestion control schemes is still needed for the emerging UAV networks.
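The congestion-control behaviour under study is, at its core, the classic additive-increase/multiplicative-decrease (AIMD) rule: the congestion window grows by one MSS per loss-free round and is halved on a loss event. The minimal round-level sketch below uses illustrative parameters and is not the paper's simulation setup; it shows why severe link losses hurt, since each loss halves the window regardless of whether congestion caused it.

```python
def aimd_trace(rounds, loss_rounds, mss=1.0, start=16.0):
    """Round-level AIMD congestion-window evolution: grow by one MSS
    per loss-free round, halve (never below one MSS) on a loss."""
    cwnd, trace = start, []
    for r in range(rounds):
        cwnd = max(mss, cwnd / 2) if r in loss_rounds else cwnd + mss
        trace.append(cwnd)
    return trace

# Window climbs, halves at the loss in round 2, then climbs again.
print(aimd_trace(5, loss_rounds={2}))  # -> [17.0, 18.0, 9.0, 10.0, 11.0]
```

With frequent non-congestion losses, as on a lossy 3-D air-to-air link, the halvings dominate the additive growth and throughput collapses, which is the failure mode the abstract points at.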
Clock synchronization estimation of non deterministic delays in wireless mess...IJCNCJournal
Clock synchronization is essential because WSN nodes require a common time base for time measurement, event ordering, coordinated actions, and power management. This paper gives insight into solving the problem of the non-deterministic delays that exist in wireless message delivery. Sensor nodes consisting of Arduino Mega boards and 2.4 GHz nRF24L01+ radio modules are used, and a clock synchronization protocol for WSNs based on the estimation of non-deterministic delays is proposed. The results obtained are quite promising compared with existing synchronization protocols for WSNs.
Hex-Cell is an interconnection network with attractive features, such as the capability of embedding topological structures including bus, ring, tree, and mesh topologies. In this paper, we present two algorithms for embedding bus and ring topologies onto the Hex-Cell interconnection network. We use three metrics to evaluate the proposed algorithms: dilation, congestion, and expansion. Our evaluation results show that the congestion of both proposed algorithms is equal to one, and the dilation is equal to 2d-1 for the first algorithm and 1 for the second.
A novel signature based traffic classification engine to reduce false alarms ...IJCNCJournal
Pattern matching plays a significant role in ascertaining network attacks, and the foremost prerequisite for a trusted intrusion detection system (IDS) is accurate pattern matching. During the pattern matching process, packets are scanned against pre-defined rule sets and then marked as alerts or benign by the detection system. Sometimes the detection system generates false alarms, i.e., good traffic is identified as bad traffic. The rate of false positives varies with the performance of the detection engines used to scan incoming packets. Intrusion detection systems deploy algorithmic procedures to reduce false positives, yet still produce a good number of false alarms. We have therefore been working on optimizing these algorithms and procedures so that false positives can be reduced to a great extent. As part of this effort, we propose a signature-based traffic classification technique that categorizes incoming packets based on traffic characteristics and behaviour, which eventually reduces the rate of false alarms.
A downlink scheduler supporting real time services in LTE cellular networksUniversity of Piraeus
The wide spread of real-time services in wireless networks demands scheduling mechanisms supporting strict Quality of Service (QoS) requirements. Nevertheless, the specification of the LTE standard for mobile connectivity defined by the 3rd Generation Partnership Project (3GPP) does not impose any specific scheduler for the allocation of resources to services. Therefore, several LTE schedulers meeting the QoS requirements of modern services have been proposed in the literature. In this paper a QoS-aware scheduler for the LTE downlink, namely FLS-Advanced (FLSA), is proposed, aiming at prioritizing real-time traffic. The proposed scheduler is built on three distinct levels, assigning the available radio resources to services according to their requirements. Based on simulation results, FLSA outperforms existing schedulers, including PF, MLWDF, EXP/PF, FLS, EXP RULE, and LOG RULE, in terms of packet loss ratio, attainable throughput, and fairness.
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDM O...ijgca
The ever-increasing status of the cloud computing hypothesis and the budding concept of federated cloud computing have enthused research efforts towards intellectual cloud service selection, aimed at developing techniques for enabling cloud users to gain maximum benefit from cloud computing by selecting services which provide optimal performance at the lowest possible cost. Cloud computing is a novel paradigm for the provision of computing infrastructure, which aims to shift the location of the computing infrastructure to the network in order to reduce the maintenance costs of hardware and software resources. Cloud computing systems vitally provide access to large pools of resources. Resources provided by cloud computing systems hide a great deal of services from the user through virtualization. In this paper, the cloud data center is modelled as a queueing system with single task arrivals and a task request buffer of infinite capacity.
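The data-center model above is an M/G/1-type queue with an infinite buffer. The standard closed-form result for such a queue is the Pollaczek-Khinchine mean waiting time; a minimal sketch follows, where the arrival rate and service-time moments are illustrative values, not figures from the paper:

```python
# Mean waiting time in an M/G/1 queue via the Pollaczek-Khinchine formula:
#   W_q = lambda * E[S^2] / (2 * (1 - rho)),   rho = lambda * E[S] < 1
def mg1_mean_wait(lam, es, es2):
    """lam: Poisson arrival rate; es: E[S]; es2: E[S^2] of service time."""
    rho = lam * es
    if rho >= 1:
        raise ValueError("queue is unstable (rho >= 1)")
    return lam * es2 / (2.0 * (1.0 - rho))

# Example: exponential service with mean 0.5, so E[S^2] = 2 * 0.5**2
wq = mg1_mean_wait(lam=1.0, es=0.5, es2=2 * 0.5**2)
```

For exponential service this reduces to the M/M/1 result, which is a quick sanity check on the formula.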
Performance measures for internet server by using M/M/m queueing model - eSAT Journals
Abstract: This paper deals with the performance measurement of a single-queue, multiple-server model, which gives performance measures for an internet server under highly dynamic traffic conditions. Our previous work addressed performance measurement of a single-queue, single-server model. The analysis compares the performance measures and capacity planning of an internet server using different queueing models, with parameters such as queue length, response time, and waiting time for different links. Keywords: internet server, waiting time, response time, queue length, queueing models
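The single-queue, multiple-server model this abstract refers to is the M/M/m queue, whose standard performance measures follow from the Erlang-C formula. A minimal sketch (arrival and service rates below are illustrative, not taken from the paper):

```python
from math import factorial

def mmm_metrics(lam, mu, m):
    """Mean queue length, waiting time and response time for an M/M/m queue.
    lam: arrival rate, mu: per-server service rate, m: number of servers."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / m                       # per-server utilisation, must be < 1
    if rho >= 1:
        raise ValueError("queue is unstable (rho >= 1)")
    # Erlang-C: probability that an arriving request must wait
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(m))
                + a**m / (factorial(m) * (1 - rho)))
    pw = a**m / (factorial(m) * (1 - rho)) * p0
    lq = pw * rho / (1 - rho)         # mean number waiting in queue
    wq = lq / lam                     # mean waiting time (Little's law)
    return lq, wq, wq + 1.0 / mu     # queue length, waiting time, response time
```

With m = 1 this collapses to the familiar M/M/1 formulas, which makes the sketch easy to validate.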
Manets: Increasing N-Messages Delivery Probability Using Two-Hop Relay with E... - ijceronline
International Journal of Computational Engineering Research(IJCER) is an intentional online Journal in English monthly publishing journal. This Journal publish original research work that contributes significantly to further the scientific knowledge in engineering and Technology.
Packet Loss Rate Differentiation in slotted Optical Packet Switching OCDM/WDM - TELKOMNIKA JOURNAL
We propose a multi-class mechanism for an Optical Code Division Multiplexing (OCDM) / Wavelength Division Multiplexing (WDM) Optical Packet Switch (OPS) architecture capable of supporting Quality of Service (QoS) transmission. OCDM/WDM has been proposed as a competitive hybrid switching technology to support the next-generation optical Internet. This paper addresses performance issues in slotted OPS networks and proposes four differentiation schemes to support QoS. In addition, we present a comparison between the proposed schemes as well as a simulation scheduler design suitable for the core switch node in OPS networks. Using software simulations, the performance of our algorithm is evaluated in terms of loss probability, packet delay, and scalability.
AN OPEN JACKSON NETWORK MODEL FOR HETEROGENEOUS INFRASTRUCTURE AS A SERVICE O... - IJCNCJournal
Cloud computing is an environment which provides services on user demand, such as software, platform, and infrastructure. Applications deployed on cloud computing have become more varied and complex to adapt to growing end-user populations and fluctuating workloads. One popular characteristic of cloud computing is the heterogeneity of networks, hosts, and virtual machines (VMs). There have been many studies on cloud computing modeling based on queueing theory, but most have focused on the homogeneous case. In this study, we propose a cloud computing model based on an open Jackson network for multi-tier application systems deployed on heterogeneous VMs of IaaS cloud computing. The important metrics analyzed in our experiments are the mean waiting time, mean request quantity, and throughput of the system. Besides that, the model's metrics are used to adjust the number of VMs allocated to applications. The experimental results show that the open Jackson network model provides high efficiency.
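By Jackson's theorem, an open network of this kind decomposes: solving the traffic equations gives each node's effective arrival rate, after which every single-server node behaves like an independent M/M/1 queue. A minimal sketch, where the two-tier routing matrix and rates are hypothetical stand-ins for the paper's multi-tier system:

```python
import numpy as np

def jackson_metrics(gamma, R, mu):
    """Open Jackson network with single-server nodes.
    gamma[i]: external arrival rate into node i
    R[i][j]:  probability a job leaving node i goes to node j
    mu[i]:    service rate of node i
    Solves the traffic equations lambda = gamma + R^T lambda,
    then treats each node as an independent M/M/1 queue."""
    gamma, R, mu = map(np.asarray, (gamma, R, mu))
    lam = np.linalg.solve(np.eye(len(gamma)) - R.T, gamma)  # traffic equations
    rho = lam / mu
    assert (rho < 1).all(), "some node is unstable"
    L = rho / (1 - rho)               # mean number of requests at each node
    W = L / lam                       # mean sojourn time per node (Little's law)
    return lam, L, W

# Hypothetical two-tier example: half the jobs leaving tier 0 continue to tier 1.
lam, L, W = jackson_metrics([1.0, 0.0], [[0.0, 0.5], [0.0, 0.0]], [2.0, 2.0])
```

The per-node waiting times and queue lengths obtained this way are exactly the kind of metric the abstract uses to decide how many VMs each tier needs.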
RESPONSE SURFACE METHODOLOGY FOR PERFORMANCE ANALYSIS AND MODELING OF MANET R... - IJCNCJournal
Numerous studies have analyzed the performance of routing protocols in mobile ad-hoc networks (MANETs); most of these studies vary at most one or two parameters per experiment and do not study the interactions among these parameters. Furthermore, efficient mathematical modeling of the performance measures has not been investigated; such models can be useful for performance analysis, optimization, and prediction. This study aims to show the effectiveness of the response surface methodology (RSM) on the performance analysis of routing protocols in MANETs and to establish a relationship between the influential parameters and these performance measures through mathematical modeling. Given that routing performance usually does not follow a linear pattern in the parameters, mathematical models of factorial designs are not suitable for establishing a valid and reliable relationship between performance and parameters. Therefore, a Box-Behnken design, an RSM technique that provides quadratic mathematical models, is used in this study to establish the relationship. The obtained models are statistically analyzed and show that the studied performance measures accurately follow a quadratic evolution. These models provide invaluable information and can be useful in analyzing, optimizing, and predicting performance for mobile ad-hoc routing protocols.
Mobile elements scheduling for periodic sensor applications - ijwmn
In this paper, we investigate the problem of designing mobile element tours such that the length of each tour is below a pre-determined bound and the depth of the multi-hop routing trees is bounded by k. The path of the mobile element is designed to visit a subset of the nodes (cache points), which store other nodes' data. To address this problem, we propose two heuristic-based solutions that take the distribution of the nodes into consideration during the establishment of the tour. The results of our experiments indicate that our schemes significantly outperform the best comparable scheme in the literature.
MECC scheduling algorithm in vehicular environment for uplink transmission in... - IJECE IAES
Single Carrier Frequency Division Multiple Access (SC-FDMA) is chosen for uplink transmission because of its lower peak-to-average power ratio (PAPR). However, the contiguity constraint is one of the major constraints present in uplink packet scheduling: all RBs allocated to a single UE must be contiguous in the frequency domain within each time slot to maintain the single-carrier property. This paper proposes an uplink scheduling algorithm, namely the Maximum Expansion with Contiguity Constraints (MECC) algorithm, which supports both RT and NRT services. The MECC algorithm is deployed in two stages. In the first stage, the RBs are allocated fairly among the UEs. The second stage allocates the RBs with the highest metric value and expands the allocation on both sides of the matrix M with respect to the contiguity constraint. The performance of the MECC algorithm was observed in terms of throughput, fairness, delay, and Packet Loss Ratio (PLR) for VoIP, video, and best-effort flows. The MECC scheduling algorithm is compared to the Round Robin (RR), Channel-Dependent First Maximum Expansion (CD-FME), and Proportional Fairness First Maximum Expansion (PF-FME) algorithms. The MECC algorithm shows the best results, delivering throughput up to 81.29% and 90.04% higher than the CD-FME and RR schedulers for RT and NRT traffic respectively, and reducing PLR and delay by up to 93.92% and 56.22% relative to CD-FME for RT traffic flows. The MECC also maintains a satisfactory level of fairness for cell-edge users in a vehicular LTE environment.
Review and comparison of tasks scheduling in cloud computing - ijfcstjournal
Recently, there has been a dramatic increase in the popularity of cloud computing systems that rent computing resources on demand, bill on a pay-as-you-go basis, and multiplex many users on the same physical infrastructure. Cloud computing is a virtual pool of resources provided to users via the Internet; it gives users virtually unlimited pay-per-use computing resources without the burden of managing the underlying infrastructure. One of the goals is to use the resources efficiently and gain maximum profit. Scheduling is a critical problem in cloud computing, because a cloud provider has to serve many users, making scheduling a major issue in establishing cloud computing systems. Scheduling algorithms should order the jobs so as to balance improvements in performance and quality of service against efficiency and fairness among the jobs. This paper introduces and explores some of the scheduling methods proposed for cloud computing. Finally, the waiting time and execution time of some of the proposed algorithms are evaluated.
Application of selective algorithm for effective resource provisioning in clo... - ijccsa
The continued demand for resource-hungry services and applications in the modern IT sector has led to the development of cloud computing. A cloud computing environment involves high-cost infrastructure on one hand and large-scale computational resources on the other. These resources need to be provisioned (allocated and scheduled) to end users in the most efficient manner so that the tremendous capabilities of the cloud are utilized effectively and efficiently. In this paper we discuss a selective algorithm for on-demand allocation of cloud resources to end users. This algorithm is based on min-min and max-min, two conventional task scheduling algorithms. The selective algorithm uses certain heuristics to choose between the two so that the overall makespan of tasks on the machines is minimized. The tasks are scheduled on machines in either a space-shared or time-shared manner. We evaluate our provisioning heuristics using a cloud simulator called CloudSim, and compare our approach to the statistics obtained when resources are provisioned in a First-Come-First-Serve (FCFS) manner. The experimental results show that the overall makespan of tasks on a given set of VMs decreases significantly in different scenarios.
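Min-min and max-min are simple greedy heuristics over task completion times. The following sketch shows both on identical machines; note that the paper's selective algorithm chooses between them via its own heuristics before scheduling, whereas this illustration simply takes whichever yields the smaller makespan:

```python
def min_min(tasks, n_machines):
    """Min-min: repeatedly assign the task with the smallest earliest
    completion time to its best machine; returns the makespan.
    tasks: list of execution times on identical machines."""
    ready = [0.0] * n_machines        # time each machine becomes free
    remaining = list(tasks)
    while remaining:
        # smallest completion time over all (task, machine) pairs
        ct, t, m = min((ready[m] + t, t, m)
                       for t in remaining for m in range(n_machines))
        ready[m] = ct
        remaining.remove(t)
    return max(ready)

def max_min(tasks, n_machines):
    """Max-min: schedule the largest task first, each on the machine
    that finishes it earliest; returns the makespan."""
    ready = [0.0] * n_machines
    for t in sorted(tasks, reverse=True):
        m = min(range(n_machines), key=lambda i: ready[i] + t)
        ready[m] += t
    return max(ready)

def selective(tasks, n_machines):
    """Illustrative stand-in: keep whichever heuristic gives the
    smaller makespan for this task set."""
    return min(min_min(tasks, n_machines), max_min(tasks, n_machines))
```

On the small example in the test, min-min leaves one machine overloaded while max-min balances the load, which is exactly the situation a selective policy exploits.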
Data collection scheme for wireless sensor network with mobile collector - ijwmn
In this paper, we investigate the problem of designing the minimum number of mobile element tours such that each sensor node is either on a tour or one hop away from one, and the length of each tour is bounded by a pre-determined value L. To address this problem, we propose a heuristic-based solution that works by directing the mobile element tour towards the most densely populated areas of the network. The experimental results show that our scheme outperforms the benchmark scheme by 10% in most scenarios.
Load Balancing Algorithm to Improve Response Time on Cloud Computing - neirew J
Load balancing techniques in cloud computing can be applied at two main levels: load balancing on physical servers and load balancing on virtual servers. Load balancing on a physical server is the policy of allocating physical servers to virtual machines, while load balancing on virtual machines is the policy of allocating resources from physical servers to the virtual machines for the tasks or applications running on them. Each class of user request on cloud computing, whether SaaS (Software as a Service), PaaS (Platform as a Service), or IaaS (Infrastructure as a Service), requires an appropriate load balancing policy. When receiving tasks, the cloud data center has to allocate them efficiently so that response time is minimized and congestion is avoided. Load balancing should also be performed between different data centers in the cloud to ensure minimum transfer time. In this paper, we propose a virtual machine-level load balancing algorithm that aims to improve the average response time and average processing time of the system in the cloud environment. The proposed algorithm is compared to the Avoid Deadlocks [5], Maxmin [6], and Throttled [8] algorithms, and the results show that our algorithm achieves better response times.
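To make the response-time objective concrete, here is a minimal dispatch sketch in the spirit of such VM-level balancers (this is a generic earliest-free-VM policy, not the paper's algorithm; the task list and VM count are illustrative):

```python
import heapq

def mean_response_time(tasks, n_vms):
    """Dispatch each task to the VM that becomes free earliest and
    return the mean response time (finish time minus arrival time).
    tasks: list of (arrival_time, service_time), sorted by arrival."""
    heap = [(0.0, i) for i in range(n_vms)]   # (time VM is free, VM id)
    total = 0.0
    for arrival, service in tasks:
        free_at, vm = heapq.heappop(heap)
        start = max(free_at, arrival)         # task may have to wait
        finish = start + service
        total += finish - arrival
        heapq.heappush(heap, (finish, vm))
    return total / len(tasks)

# Three simultaneous unit tasks on two VMs: the third task queues.
avg = mean_response_time([(0.0, 1.0), (0.0, 1.0), (0.0, 1.0)], 2)
```

Comparing such a policy's mean response time against round-robin or random dispatch on the same trace is the basic experiment behind results like those cited above.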
Similar to Web object size satisfying mean waiting (20)
Vehicular Ad Hoc Networks (VANETs) have become a viable technology to improve traffic flow and safety on the roads. Due to its effectiveness and scalability, the Wingsuit Search-based Optimised Link State Routing Protocol (WS-OLSR) is frequently used for data distribution in VANETs. However, the selection of MultiPoint Relays (MPRs) plays a pivotal role in WS-OLSR's performance. The analysis found that the current OLSR protocol suffers from redundancy of HELLO and TC message packets and from failure to update routing information in time, so this paper presents an improved MPR selection algorithm tailored to WS-OLSR, designed to enhance overall routing efficiency and reduce overhead. Firstly, factors such as node mobility and link changes are considered comprehensively to reflect network topology changes, and the broadcast cycle of node HELLO messages is controlled according to those changes. Secondly, a new MPR selection algorithm is proposed, taking link stability and node properties into account. Finally, its effectiveness is evaluated in terms of packet delivery ratio, end-to-end delay, and control message overhead. Simulation results demonstrate the superior performance of our improved MPR selection algorithm compared to traditional approaches.
A Novel Medium Access Control Strategy for Heterogeneous Traffic in Wireless ... - IJCNCJournal
So far, Wireless Body Area Networks (WBANs) have played a pivotal role in driving the development of intelligent healthcare systems with broad applicability across various domains. Each WBAN consists of one or more types of sensors that can be embedded in clothing, attached directly to the body, or even implanted beneath an individual's skin. These sensors typically serve a single application. However, the traffic generated by each sensor may have distinct requirements. This diversity necessitates a dual approach: tailored treatment based on the specific needs of each traffic type and the fulfillment of application requirements, such as reliability and timeliness. Nevertheless, the presence of energy constraints and the unreliable nature of wireless communications make QoS provisioning under such networks a non-trivial task. In this context, the current paper introduces a novel Medium Access Control (MAC) strategy for the regular traffic applications of WBANs, designed to significantly enhance efficiency when compared to the established MAC protocols IEEE 802.15.4 and IEEE 802.15.6, with a particular focus on improving reliability, timeliness, and energy efficiency.
May_2024 Top 10 Read Articles in Computer Networks & Communications.pdf - IJCNCJournal
The International Journal of Computer Networks & Communications (IJCNC) is a bi monthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of Computer Networks & Communications. The journal focuses on all technical and practical aspects of Computer Networks & data Communications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establishing new collaborations in these areas.
A Topology Control Algorithm Taking into Account Energy and Quality of Transm... - IJCNCJournal
The efficient use of energy in wireless sensor networks is critical for extending node lifetime. The network topology is one of the factors with a significant impact on the energy usage at the nodes and the quality of transmission (QoT) in the network. We propose a topology control algorithm for software-defined wireless sensor networks (SDWSNs) in this paper. Our method formulates topology control as a nonlinear programming (NP) problem with the objective of optimizing two metrics: maximum communication range and desired degree. This NP problem is solved at the SDWSN controller by employing a genetic algorithm (GA) to determine the best topology. The simulation results show that the proposed algorithm outperforms the MaxPower algorithm in terms of average node degree and energy expansion ratio.
Multi-Server user Authentication Scheme for Privacy Preservation with Fuzzy C... - IJCNCJournal
The integration of artificial intelligence technology with a scalable Internet of Things (IoT) platform facilitates diverse smart communication services, allowing remote users to access services from anywhere at any time. The multi-server environment within IoT introduces a flexible security service model, enabling users to interact with any server through a single registration. To ensure secure and privacy preservation services for resources, an authentication scheme is essential. Zhao et al. recently introduced a user authentication scheme for the multi-server environment, utilizing passwords and smart cards, claiming resilience against well-known attacks. This paper conducts cryptanalysis on Zhao et al.'s scheme, focusing on denial of service and privacy attacks, revealing a lack of user-friendliness. Subsequently, we propose a new multi-server user authentication scheme for privacy preservation with fuzzy commitment over the IoT environment, addressing the shortcomings of Zhao et al.'s scheme. Formal security verification of the proposed scheme is conducted using the ProVerif simulation tool. Through both formal and informal security analyses, we demonstrate that the proposed scheme is resilient against various known attacks and those identified in Zhao et al.'s scheme.
Advanced Privacy Scheme to Improve Road Safety in Smart Transportation Systems - IJCNCJournal
In a Vehicular Ad-Hoc Network (VANET), vehicles continuously transmit and receive spatiotemporal data with neighboring vehicles, thereby establishing a comprehensive 360-degree traffic awareness system. Vehicular network safety applications facilitate the transmission of messages between nearby vehicles at regular intervals, enhancing drivers' contextual understanding of the driving environment and significantly improving traffic safety. Privacy schemes in VANETs are vital to safeguard vehicles' identities and their associated owners or drivers. Privacy schemes prevent unauthorized parties from linking a vehicle's communications to a specific real-world identity by employing techniques such as pseudonyms, randomization, or cryptographic protocols. Nevertheless, these communications frequently contain important vehicle information that malevolent parties could use to monitor the vehicle over a long period. The acquisition of this shared data can facilitate the reconstruction of vehicle trajectories, posing a risk to the privacy of the driver. Addressing the critical challenge of developing effective and scalable privacy-preserving protocols for communication in vehicle networks is of the highest priority. These protocols aim to reduce the transmission of confidential data while ensuring the required level of communication. This paper proposes an Advanced Privacy Vehicle Scheme (APV) that periodically changes pseudonyms to protect vehicle identities and improve privacy. The APV scheme utilizes a concept called the silent period, which involves changing the pseudonym of a vehicle periodically based on the tracking of neighboring vehicles. The pseudonym is a temporary identifier that vehicles use to communicate with each other in a VANET. By changing the pseudonym regularly, the APV scheme makes it difficult for unauthorized entities to link a vehicle's communications to its real-world identity.
The proposed APV is compared to the SLOW, RSP, CAPS, and CPN techniques. The data indicate that APV achieves a clear improvement in privacy metrics, and it is evident that APV offers enhanced safety for vehicles during transportation in the smart city.
April 2024 - Top 10 Read Articles in Computer Networks & Communications - IJCNCJournal
DEF: Deep Ensemble Neural Network Classifier for Android Malware Detection - IJCNCJournal
Malware is one of the main threats to the security of computer networks and information systems. Since malware instances are sufficiently available, there is increased interest among researchers in the use of Artificial Intelligence (AI). Of late, AI-enabled methods such as machine learning (ML) and deep learning have paved the way for solving many real-world problems. As these are learning-based approaches, accumulated training samples help in improving the quality of training and thus malware detection accuracy. Existing deep learning methods focus on learning-based malware detection systems; however, there is a need to improve the state of the art through an ensemble approach. Towards this end, in this paper we propose a framework known as the Deep Ensemble Framework (DEF) for automatic malware detection. The framework obtains features from training samples: from a given malware instance a grayscale image is generated, and a separate process extracts the opcode sequences. Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) techniques are used to process the grayscale image and opcode sequence respectively. Afterwards, a stacking ensemble is employed in order to achieve efficient malware detection and classification. Malware samples collected from Internet sources and Microsoft are used for the empirical study. An algorithm known as Ensemble Learning for Automatic Malware Detection (EL-AML) is proposed to realize our framework, along with another algorithm named Pre-Process that assists EL-AML by obtaining the intermediate features required by CNN and LSTM. The empirical study reveals that our framework outperforms many existing methods in terms of speed-up and accuracy.
High Performance NMF Based Intrusion Detection System for Big Data IoT Traffic - IJCNCJournal
With the emergence of smart devices and the Internet of Things (IoT), millions of users connected to the network produce massive network traffic datasets. These vast datasets of network traffic (Big Data) are challenging to store, handle, and analyse on a single computer. In this paper we develop a parallel implementation, on a High Performance Computing (HPC) cluster, of the Non-Negative Matrix Factorization technique as an engine for an Intrusion Detection System (HPC-NMF-IDS). The large IoT traffic datasets, on the order of millions of samples, are distributed evenly across all the computing cores for both storage and speed-up purposes. The distribution of the computing tasks involved in the matrix factorization takes into account the reduction of the communication cost between the computing cores. The experiments we conducted on the proposed HPC-NMF-IDS give better results than traditional ML-based intrusion detection systems: we could train the model on a dataset of one million samples in only 31 seconds instead of 40 minutes on one processor, a speed-up of 87 times, and obtained an excellent detection accuracy of 98% on the KDD dataset.
IoT Guardian: A Novel Feature Discovery and Cooperative Game Theory Empowered...IJCNCJournal
Cyber intrusion attacks increasingly target the Internet of Things (IoT) ecosystem, exploiting vulnerable devices and networks. Malicious activities must be identified early to minimize damage and mitigate threats. Using actual benign and attack traffic from the CICIoT2023 dataset, this WORK aims to evaluate and benchmark machine-learning techniques for IoT intrusion detection. There are four main phases to the system. First, the CICIoT2023 dataset is refined to remove irrelevant features and clean up missing and duplicate data. The second phase employs statistical models and artificial intelligence to discover novel features. The most significant features are then selected in the third phase based on cooperative game theory. Using the original CICIoT2023 dataset and a dataset containing only novel features, we train and evaluate a variety of machine learning classifiers. On the original dataset, Random Forest achieved the highest accuracy of 99%. Still, with novel features, Random Forest's performance dropped only slightly (96%) while other models achieved significantly lower accuracy. As a whole, the work contributes substantial contributions to tailored feature engineering, feature selection, and rigorous benchmarking of IoT intrusion detection techniques. IoT networks and devices face continuously evolving threats, making it necessary to develop robust intrusion detection systems.
Enhancing Traffic Routing Inside a Network through IoT Technology & Network C...IJCNCJournal
IoT networking uses real items as stationary or mobile nodes. Mobile nodes complicate networking. Internet of Things (IoT) networks have a lot of control overhead messages because devices are mobile. These signals are generated by the constant flow of control data as such device identity, geographical positioning, node mobility, device configuration, and others. Network clustering is a popular overhead communication management method. Many cluster-based routing methods have been developed to address system restrictions. Node clustering based on the Internet of Things (IoT) protocol, may be used to cluster all network nodes according to predefined criteria. Each cluster will have a Smart Designated Node. SDN cluster management is efficient. Many intelligent nodes remain in the network. The network design spreads these signals. This paper presents an intelligent and responsive routing approach for clustered nodes in IoT networks. An existing method builds a new sub-area clustered topology. The Nodes Clustering Based on the Internet of Things (NCIoT) method improves message transmission between any two nodes. This will facilitate the secure and reliable interchange of healthcare data between professionals and patients. NCIoT is a system that organizes nodes in the Internet of Things (IoT) by grouping them together based on their proximity. It also picks SDN routes for these nodes. This approach involves selecting one option from a range of choices and preparing for likely outcomes problem addressing limitations on activities is a primary focus during the review process. Predictive inquiry employs the process of analyzing data to forecast and anticipate future events. This document provides an explanation of compact units. The Predictive Inquiry Small Packets (PISP) improved its backup system and partnered with SDN to establish a routing information table for each intelligent node, resulting in higher routing performance. 
Both principal and secondary roads are available for use. The simulation findings indicate that NCIoT algorithms outperform CBR protocols. Enhancements lead to a substantial 78% boost in network performance. In addition, the end-to-end latency dropped by 12.5%. The PISP methodology produces 5.9% more inquiry packets compared to alternative approaches. The algorithms are constructed and evaluated against academic ones.
IoT Guardian: A Novel Feature Discovery and Cooperative Game Theory Empowered...IJCNCJournal
Cyber intrusion attacks increasingly target the Internet of Things (IoT) ecosystem, exploiting vulnerable devices and networks. Malicious activities must be identified early to minimize damage and mitigate threats. Using actual benign and attack traffic from the CICIoT2023 dataset, this WORK aims to evaluate and benchmark machine-learning techniques for IoT intrusion detection. There are four main phases to the system. First, the CICIoT2023 dataset is refined to remove irrelevant features and clean up missing and duplicate data. The second phase employs statistical models and artificial intelligence to discover novel features. The most significant features are then selected in the third phase based on cooperative game theory. Using the original CICIoT2023 dataset and a dataset containing only novel features, we train and evaluate a variety of machine learning classifiers. On the original dataset, Random Forest achieved the highest accuracy of 99%. Still, with novel features, Random Forest's performance dropped only slightly (96%) while other models achieved significantly lower accuracy. As a whole, the work contributes substantial contributions to tailored feature engineering, feature selection, and rigorous benchmarking of IoT intrusion detection techniques. IoT networks and devices face continuously evolving threats, making it necessary to develop robust intrusion detection systems.
** Connect, Collaborate, And Innovate: IJCNC - Where Networking Futures Take ...IJCNCJournal
The International Journal of Computer Networks & Communications (IJCNC) is a bi monthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of Computer Networks & Communications. The journal focuses on all technical and practical aspects of Computer Networks & data Communications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establishing new collaborations in these areas.
Enhancing Traffic Routing Inside a Network through IoT Technology & Network C...IJCNCJournal
IoT networking uses real items as stationary or mobile nodes. Mobile nodes complicate networking. Internet of Things (IoT) networks have a lot of control overhead messages because devices are mobile. These signals are generated by the constant flow of control data as such device identity, geographical positioning, node mobility, device configuration, and others. Network clustering is a popular overhead communication management method. Many cluster-based routing methods have been developed to address system restrictions. Node clustering based on the Internet of Things (IoT) protocol, may be used to cluster all network nodes according to predefined criteria. Each cluster will have a Smart Designated Node. SDN cluster management is efficient. Many intelligent nodes remain in the network. The network design spreads these signals. This paper presents an intelligent and responsive routing approach for clustered nodes in IoT networks. An existing method builds a new sub-area clustered topology. The Nodes Clustering Based on the Internet of Things (NCIoT) method improves message transmission between any two nodes. This will facilitate the secure and reliable interchange of healthcare data between professionals and patients. NCIoT is a system that organizes nodes in the Internet of Things (IoT) by grouping them together based on their proximity. It also picks SDN routes for these nodes. This approach involves selecting one option from a range of choices and preparing for likely outcomes problem addressing limitations on activities is a primary focus during the review process. Predictive inquiry employs the process of analyzing data to forecast and anticipate future events. This document provides an explanation of compact units. The Predictive Inquiry Small Packets (PISP) improved its backup system and partnered with SDN to establish a routing information table for each intelligent node, resulting in higher routing performance. 
Both principal and secondary roads are available for use. The simulation findings indicate that NCIoT algorithms outperform CBR protocols. Enhancements lead to a substantial 78% boost in network performance. In addition, the end-to-end latency dropped by 12.5%. The PISP methodology produces 5.9% more inquiry packets compared to alternative approaches. The algorithms are constructed and evaluated against academic ones.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
Web object size satisfying mean waiting
International Journal of Computer Networks & Communications (IJCNC) Vol.6, No.4, July 2014
DOI: 10.5121/ijcnc.2014.6401
WEB OBJECT SIZE SATISFYING MEAN WAITING
TIME IN MULTIPLE ACCESS ENVIRONMENT
Y.-J. Lee
Department of Technology Education, Korea National University of Education,
Cheongju, South Korea
ABSTRACT
This paper addresses the web object size, which is one of the important performance measures and affects the service time in a multiple access environment. Since packets arrive according to a Poisson process and the web service time has an arbitrary distribution, the M/G/1 model can be used to describe the behavior of the web server system. Under time division multiplexing (TDM), the M/D/1 with vacations model applies, because the service time is constant and the server may take a vacation. We derive the mean web object size satisfying the constraint that the mean waiting time under round-robin scheduling in a multiple access environment is equal to the mean queueing delay of the M/D/1 with vacations model in TDM and of the M/H2/1 model, respectively. Performance evaluation shows that the mean web object size increases as the link utilization increases for a given maximum segment size (MSS), but converges to a lower bound when the number of embedded objects in a web page exceeds a threshold. Our results can be applied to the economic design and maintenance of web services.
KEYWORDS
M/D/1 with vacations, M/H2/1, mean waiting time, multiple web access
1. INTRODUCTION
Simultaneous access of multiple users to a server in the web environment increases the mean waiting delay of an end-user, giving rise to a quality of service (QoS) degradation problem. To develop technology that solves this problem, we first need to determine the mean waiting delay of the end-user accurately.
Generally, user requests to a web server per unit time follow the Poisson distribution, and the web service time follows a general distribution rather than the Exponential distribution. The M/G/1 model [1] is known to be suitable for describing a web service. In particular, because web services are influenced by the size of web objects, Shi et al. [2] showed that the Weibull and Exponential distributions are suitable statistical descriptions of web service. Meanwhile, Khayari et al. [3] and Riska et al. [4] presented algorithms to fit an empirical distribution to the Hyper-exponential distribution. For the case where the web service time follows the Hyper-exponential distribution in the steady state, the number of concurrent users satisfying a given average queueing delay was obtained in [5]. However, more empirical research on web service distributions on the Internet is still needed. Additionally, the M/G/1 with vacations model [6, 7, 8, 9], a modification of the M/G/1 model, has been proposed.
In time division multiplexing (TDM), the time quantum assigned to each user is slotted, so that data transmission takes place only at the starting point of a slot. Therefore, if the system is empty at the beginning of a slot, the server goes on vacation for that time slot. Applying the TDM scheme, the queueing system can be described as an M/D/1 with vacations model in which the service time is constant.
On the other hand, when several users simultaneously request a web object from the web server and round-robin scheduling is used for the web service, we can determine the mean waiting time. When the system is in the steady state, we can infer that this mean waiting time is approximately equal to the mean queueing delay of the M/D/1 with vacations model in TDM.
The objective of this study is to find the web object size satisfying the condition that the mean waiting time in the multiple web access environment is equal to the mean queueing delay of the M/D/1 with vacations model in TDM and of the M/H2/1 model, respectively. We first find the number of simultaneous users for which the M/H2/1 queueing delay equals the queueing delay in TDM, and then obtain the web object size. We seek the web object size satisfying the end-user delay constraint because controlling it is the most economical lever in the design and operation of a web service.
The rest of this paper is structured as follows. In the next section, we discuss the M/D/1 with vacations model in TDM, based on Modiano [8] and Bose [6], and the M/H2/1 queueing model. In section 3, we first describe the model used to find the mean waiting time under round-robin scheduling in the multiple web access environment. We then determine the web object size satisfying the constraint that the mean waiting time is equal to the mean queueing delay of the M/D/1 with vacations model in TDM and of the M/H2/1 model, respectively. In section 4, we present and analyze the performance evaluation results. Finally, in section 5, we discuss conclusions and future research.
2. QUEUEING DELAY FOR M/D/1 WITH VACATIONS MODEL AND M/H2/1 MODEL
2.1. Queueing Delay for M/D/1 with Vacations Model
We consider a single-server queueing system where object requests arrive according to a Poisson process with rate λ, but service times have a general distribution (M/G/1). By the Pollaczek-Khinchin formula, the expected mean queueing delay is given by (1) [1, 10].

W = λE(S²) / (2(1−ρ))   (1)
where ρ = λ/µ = λE(S), and S is the random variable representing the service time; service times are identically distributed, mutually independent, and independent of the inter-arrival times.
If the service times are identical for all requests (M/D/1), that is, E(S²) = 1/µ², equation (1) becomes (2).

W = ρ / (2µ(1−ρ))   (2)
Now, we consider the M/G/1 queueing model with vacations [9]. In this model, when the queue is empty, the server takes a vacation whose duration is an IID random variable, independent of the service times and arrival times. If the system is still empty after a vacation, the server takes another vacation. In this model, a Poisson arrival at any time t sees the time-averaged state of the queue [7], and the mean residual time R is given by (3).
R = λE(S²)/2 + (1−ρ)E(V²) / (2E(V))   (3)

where E(V) and E(V²) are the first and second moments of the vacation time, respectively.
From (1), the mean queueing delay of the M/G/1 with vacations model is derived as (4).

W = R/(1−ρ) = λE(S²) / (2(1−ρ)) + E(V²) / (2E(V))   (4)
Now, consider the time division multiplexing (TDM) system shown in Figure 1.
Figure 1. TDM system
Figure 1 shows m fixed-length packet streams, each with arrival rate λ/m, multiplexed and arriving at the system according to the Poisson distribution. The total traffic is λ, the service rate µ is 1/m, and the load on the entire system is ρ = λ. Thus, equation (2) with µ = 1/m and ρ = λ gives the mean queueing delay per packet as (5). This delay corresponds to frequency division multiplexing (FDM) [10]. The TDM delay (6) below can also be obtained by setting E(S) = 1/µ = m, E(S²) = m², E(V) = m, and E(V²) = m² in (4), with the per-stream arrival rate λ/m.

WFDM = mρ / (2(1−ρ))   (5)
In TDM, the m traffic streams are time division multiplexed in a scheme whereby the time axis is divided into m-slot frames with one slot dedicated to each traffic stream (Figure 1). Thus, the mean queueing delay in TDM is given by (6) [10].

WTDM = WFDM + m/2 = m / (2(1−λ))   (6)
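As a minimal numeric sketch of equations (2), (5) and (6), the snippet below checks that (5) agrees with (2) at µ = 1/m, ρ = λ, and that (6) collapses to m/(2(1−λ)); the values of m and λ are illustrative choices, not taken from the paper.

```python
# Sketch of equations (2), (5) and (6): per-packet queueing delays in
# FDM and TDM, measured in time slots.  m and lam are illustrative.

def w_md1(rho, mu):
    """Equation (2): M/D/1 mean queueing delay."""
    return rho / (2 * mu * (1 - rho))

def w_fdm(lam, m):
    """Equation (5): equation (2) evaluated at mu = 1/m, rho = lam."""
    return m * lam / (2 * (1 - lam))

def w_tdm(lam, m):
    """Equation (6): FDM delay plus half a frame of slotting delay."""
    return w_fdm(lam, m) + m / 2

m, lam = 8, 0.5
# (5) is (2) with mu = 1/m, rho = lam:
assert abs(w_fdm(lam, m) - w_md1(lam, 1 / m)) < 1e-12
# (6) collapses to m / (2 * (1 - lam)):
assert abs(w_tdm(lam, m) - m / (2 * (1 - lam))) < 1e-12
```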
2.2. Queueing Delay for M/H2/1 Model
Generally, web objects are of two types: static and dynamic. The static object is the home page requested first. Dynamic objects (N of them) are embedded in the home page and requested after parsing it. We set the static object request rate to λ1 and the dynamic object request rate to λ2, respectively. Figure 2 visualizes this case.
Figure 2 represents the Hyper-exponential distribution [11], which chooses the i-th negative exponential distribution, with rate λi and mean 1/λi, with probability pi. That is, the density function is given by

f(S) = Σi=1..2 pi λi e^(−λi S),  S ≥ 0   (7)
The j-th moment is given by

E(S^j) = Σi=1..2 pi · j! / λi^j   (8)
Figure 2. Graphical representation of web object requests
E(S) and E(S²) are the first and second moments of the web object service time, respectively. They are obtained by (9).

E(S) = 2 / ((N+1)λ),  E(S²) = 2 / (Nλ²)   (9)
By substituting E(S) and E(S²) into (1), we obtain the mean queueing delay of the M/H2/1 model in (10).

WH = (N+1) / (λN(N−1))   (10)
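As a sanity check on (8)-(10), the sketch below evaluates the hyper-exponential moments and feeds them through the Pollaczek-Khinchin formula (1). The branch parameters (p1 = 1/(N+1) at rate λ, p2 = N/(N+1) at rate Nλ) are one reading of Figure 2 that reproduces (9); since the figure itself is not reproduced here, treat them as an assumption.

```python
from math import factorial

# Sanity check of equations (8)-(10).  The branch probabilities and
# rates below (p1 = 1/(N+1) at rate lam, p2 = N/(N+1) at rate N*lam)
# are an assumed reading of Figure 2 that reproduces the moments (9).

def moment(j, ps, lams):
    """Equation (8): j-th moment of a hyper-exponential service time."""
    return sum(p * factorial(j) / lam**j for p, lam in zip(ps, lams))

N, lam = 5, 0.1
ps = [1 / (N + 1), N / (N + 1)]
lams = [lam, N * lam]

es, es2 = moment(1, ps, lams), moment(2, ps, lams)
assert abs(es - 2 / ((N + 1) * lam)) < 1e-12    # E(S) in (9)
assert abs(es2 - 2 / (N * lam**2)) < 1e-12      # E(S^2) in (9)

# Pollaczek-Khinchin formula (1) then yields the M/H2/1 delay (10):
rho = lam * es
w_h = lam * es2 / (2 * (1 - rho))
assert abs(w_h - (N + 1) / (lam * N * (N - 1))) < 1e-9
```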
3. MEAN WAITING TIME FOR MULTIPLE USERS
We first find the number of simultaneous users for which the M/H2/1 queueing delay (WH) is equal to the queueing delay in TDM (WTDM). From (6) and (10),

WH = WTDM  →  m = 2(1−λ)(N+1) / (λN(N−1))   (11)
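Equation (11) can be checked by back-substitution into (6) and (10); N and λ below are illustrative choices, not values from the paper.

```python
# Equation (11): the number of simultaneous users m at which the TDM
# delay (6) equals the M/H2/1 delay (10).  N and lam are illustrative.

def users(N, lam):
    return 2 * (1 - lam) * (N + 1) / (lam * N * (N - 1))

N, lam = 2, 0.01
m = users(N, lam)                        # 297.0 for this choice
delay_tdm = m / (2 * (1 - lam))          # equation (6)
delay_h = (N + 1) / (lam * N * (N - 1))  # equation (10)
assert abs(delay_tdm - delay_h) < 1e-9
```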
Now, we consider the mean waiting time when m concurrent users request access to a web object on a web server.
The web object is divided into multiple packets of maximum segment size (MSS) by TCP in the transport layer. Letting the web object size be θ and the MSS be mss, the relation between the number of packets (n) and the web object size is given by (12).

n = ⌈θ / mss⌉   (12)
When multiple clients (m) request the same object, each user's expected service time E(S) is the same. However, the exact finish time can vary due to queueing delay; clients must wait for the completion of service.
We assume asynchronous, packet-based time division multiplexing for the web service. When a client requests an object from the server, n packets are included in the object, and E(S) is the total response time that each client expects. Now, we assume that a packet-based round-robin scheduling policy is used for multiple web access. This situation is depicted in Figure 3.
Figure 3. Scheduling for multiple web service
In Figure 3, τij represents the j-th packet service time of the i-th user. In order to simplify the modeling, we let τij = τ (∀i, j), and then we can derive the mean waiting time as (13) [12].

WR = (1/m) Σi=1..m [(m−i)τ + (m−1)(n−1)τ] = (m−1)(2n−1)τ / 2   (13)
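A small check of (13): the closed form (m−1)(2n−1)τ/2 against the explicit average over users. The per-user offset term in the sum is my reading of the derivation (any per-user offset summing to m(m−1)/2 slots yields the same closed form), so it is flagged as an assumption in the comments.

```python
# Equation (13): mean waiting time under packet-based round-robin with
# m users and n packets per object, every packet time equal to tau.
# The closed form is checked against the explicit average over users.

def w_r_closed(m, n, tau=1.0):
    return (m - 1) * (2 * n - 1) * tau / 2

def w_r_sum(m, n, tau=1.0):
    # Assumed reading of the sum in (13): user i's offset within a
    # frame is (m - i) slots, and each of its remaining (n - 1)
    # packets waits behind (m - 1) foreign slots.
    return sum((m - i) * tau + (m - 1) * (n - 1) * tau
               for i in range(1, m + 1)) / m

assert abs(w_r_closed(4, 3) - w_r_sum(4, 3)) < 1e-12
assert abs(w_r_closed(10, 7) - w_r_sum(10, 7)) < 1e-12
```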
Now, by assuming that the service time of a single packet (τ) is equal to one time slot in the steady state of the system, we can infer that Figure 1 and Figure 3 are approximately equivalent. Therefore, we can obtain the number of packets (n) satisfying that the mean waiting time in (13) is equal to the mean queueing delay of the M/D/1 with vacations model in TDM in (6), as (14). Here, m is the number of users obtained by (11).
WR = WTDM  →  (m−1)(2n−1)/2 = m / (2(1−λ))  →  n = [m + (1−λ)(m−1)] / [2(1−λ)(m−1)]   (14)
In the M/H2/1 model, τ = E(S)/n = 2/[n(N+1)λ] when every τ is the same. Thus, the number of packets (n) satisfying that the mean waiting time in (13) is equal to the mean queueing delay of M/H2/1 in (10) is given by (15).

WR = WH  →  [(m−1)(2n−1)/2] × 2/[n(N+1)λ] = (N+1) / (λN(N−1))  →  n = (m−1)N(N−1) / [2(m−1)N(N−1) − (N+1)²]   (15)
From (12), (14) and (15), we can obtain the web object size (θ) satisfying that the mean waiting time in the multiple web access environment is equal to the mean queueing delay of the M/D/1 with vacations model and of the M/H2/1 model, respectively.
θ = [m + (1−λ)(m−1)] / [2(1−λ)(m−1)] × mss,              for WR = WTDM
θ = (m−1)N(N−1) / [2(m−1)N(N−1) − (N+1)²] × mss,         for WR = WH      (16)

In (16), for WR = WTDM, m ≥ 2 and λ < 1. For WR = WH, m > 1 + (N+1)² / [2N(N−1)].
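Combining (11) and (16), the mean object size can be computed directly; as an illustration, the sketch below uses N = 2, λ = 0.01 and mss = 1460 B, which reproduces the first row of Table 1 (1470 B and 736 B).

```python
# Equation (16): mean web object size theta under both delay
# constraints, with m taken from equation (11).

def users(N, lam):                       # equation (11)
    return 2 * (1 - lam) * (N + 1) / (lam * N * (N - 1))

def theta_tdm(m, lam, mss):              # WR = WTDM branch of (16)
    return (m + (1 - lam) * (m - 1)) / (2 * (1 - lam) * (m - 1)) * mss

def theta_h(m, N, mss):                  # WR = WH branch of (16)
    return ((m - 1) * N * (N - 1)
            / (2 * (m - 1) * N * (N - 1) - (N + 1)**2) * mss)

N, lam, mss = 2, 0.01, 1460
m = users(N, lam)
# First row of Table 1 (N = 2, rho = 0.01, mss = 1460):
assert round(theta_tdm(m, lam, mss)) == 1470
assert round(theta_h(m, N, mss)) == 736
```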
4. PERFORMANCE EVALUATION
We first compute the number of users (m) for varying N by using (11). Figure 4 shows the number of users when mss = 1460 B for various ρ and N.
In Figure 4, when ρ and N are both small, the number of users is very large, but as N increases, it converges to 1. That is, the number of embedded objects included in a web page can affect the number of simultaneously accessing users, because the number of users and the number of embedded objects in a page must be balanced in order to satisfy the given mean queueing delay. Although it is not presented in the figure, as ρ increases, the number of simultaneous users decreases to 1, so we cannot find the web object size because a denominator in (16) vanishes.
Figure 4. The number of users (m) when mss=1460 B for varying N
Now, we compute the mean web object size (θ) satisfying WR=WTDM and WR=WH, respectively, for varying N given ρ and mss. Table 1 shows the mean object size when ρ=0.01. Given mss=1460 B, the mean object size is 1482 B for WR=WTDM and 742 B for WR=WH. When mss=536 B, the mean object size is 544 B for WR=WTDM and 273 B for WR=WH. Therefore, the mean object size for WR=WH is smaller than that for WR=WTDM regardless of mss.
Table 2 and Table 3 show the mean object size when ρ=0.05 and ρ=0.1, respectively. In both tables, we again find that the mean object size for WR=WH is smaller than that for WR=WTDM regardless of mss. That is, when the web service time follows a hyper-exponential distribution, a smaller web object size is required.
Table 1. Mean object size (θ) satisfying WR=WTDM and WR=WH for varying N when ρ=0.01

   N       mss=1460 B            mss=536 B
        WR=WTDM   WR=WH      WR=WTDM   WR=WH
   2      1470      736         540      270
   3      1473      738         541      271
   4      1476      739         542      271
   5      1480      741         543      272
   6      1483      743         545      273
   7      1487      745         546      274
   8      1491      747         547      274
   9      1495      749         549      275
 mean     1482      742         544      273
We define the ratio of the mean object size satisfying WR=WTDM to the mean object size satisfying WR=WH as (17).
r = θ(WR=WTDM) / θ(WR=WH)   (17)
Figure 5 depicts the ratio (r) given by (17) for varying ρ. The size of mss does not affect the ratio, but ρ does: the ratio is about 2 when ρ=0.01, but decreases toward 1 as ρ increases to 0.2.
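The low-load behaviour of the ratio can be sketched as follows: as ρ decreases, the number of users m grows large (Figure 4), so n for WR=WTDM tends to 1 while n for WR=WH tends to 1/2, and r tends to 2, consistent with the value observed at ρ=0.01. The values below are illustrative assumptions.

```python
# Sketch of the low-load limit of the ratio r in (17): for large m and small
# lam, n_TDM -> 1 and n_H2 -> 1/2, so r -> 2. Values are illustrative.
def ratio(m, lam, N):
    n_tdm = (m + (m - 1) * (1 - lam)) / (2 * (m - 1) * (1 - lam))
    n_h2 = (m - 1) * N * (N - 1) / (2 * (m - 1) * N * (N - 1) - (N + 1) ** 2)
    return n_tdm / n_h2    # mss cancels in (17)

assert abs(ratio(10**6, 1e-6, 5) - 2.0) < 1e-3
```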
Table 2. Mean object size (θ) satisfying WR=WTDM and WR=WH for varying N when ρ=0.05

   N       mss=1460 B            mss=536 B
        WR=WTDM   WR=WH      WR=WTDM   WR=WH
   2      1512      761         555      279
   3      1529      771         561      283
   4      1550      784         569      288
   5      1568      795         576      292
   6      1594      813         585      298
   7      1608      819         590      301
   8      1626      830         597      305
   9      1652      848         607      311
 mean     1580      803         580      295
Table 3. Mean object size (θ) satisfying WR=WTDM and WR=WH for varying N when ρ=0.1

   N       mss=1460 B            mss=536 B
        WR=WTDM   WR=WH      WR=WTDM   WR=WH
   2      1572      799         577      293
   3      1615      831         593      305
   4      1657      858         608      315
   5      1703      890         625      327
   6      1744      917         640      337
   7      1811      979         665      359
   8      1947     1143         715      420
   9      1947     1118         715      411
 mean     1750      942         642      346
Figure 5. The ratio of mean object size satisfying WR=WTDM over the mean object size satisfying WR=WH
for varying ρ
5. CONCLUSIONS
Mean object size in multiple access environments is one of the essential parameters for designing and maintaining a web service. Controlling the web object size is an easy and inexpensive way to satisfy the delay requirements of end-users. In this paper, we presented an analytical model to estimate the web object size for which the mean waiting time for multiple web service equals the mean queueing delay of the M/D/1 with vacations model in a TDM system and of the M/H2/1 model, respectively. We first find the number of users accessing the web server simultaneously, and then derive the web object size models. Performance evaluation shows that the mean object size satisfying the M/D/1 with vacations model in the TDM system is larger than that satisfying the M/H2/1 model; however, the two become nearly the same as the utilization factor increases. Future work includes a more exact model applicable to integrated wired and wireless networks.
REFERENCES
[1] S. Ross, Introduction to Probability Models, Academic Press, New York, USA, 2010, p. 538.
[2] W. Shi, E. Collins, and V. Karamcheti, “Modeling Object Characteristics of Dynamic Web Content,” Journal of Parallel and Distributed Computing, Elsevier Science, pp. 963-980, 1998.
[3] R. Khayari, R. Sadre and B. R. Haverkort, “Fitting world-wide web request traces with the EM-algorithm,” Performance Evaluation, Vol. 52, pp. 175-191, 2003.
[4] A. Riska, V. Diev and E. Smirni, “Efficient fitting of long-tailed data sets into hyper-exponential distributions,” Proc. of IEEE Global Telecommunications Conference (GLOBECOM 2002), Vol. 3, pp. 2513-2517, 2002.
[5] Y. Lee, “Mean waiting delay for web service perceived by end-user in multiple access environment,” Natural Science, Vol. 2, Natural Science Institute of KNUE, pp. 55-58, 2012.
[6] S. K. Bose, “M/G/1 with vacations,” http://www.iitg.ernet.in/skbose/qbook/Slide_Set_7.PDF, pp. 1-7, 2002.
[7] N. Tian and Z. G. Zhang, Vacation Queueing Models, Springer Science and Business Media, pp. 10-11, 2006.
[8] E. Modiano, “Communication systems engineering,” MIT OpenCourseWare, http://ocw.mit.edu, pp. 1-19, 2009.
[9] S. W. Fuhrmann, “Technical Note—A Note on the M/G/1 Queue with Server Vacations,” Operations Research, Vol. 32, No. 6, pp. 1368-1373, 1984.
[10] D. Bertsekas and R. Gallager, Data Networks, Prentice Hall, New Jersey, pp. 186-195, 2007.
[11] M. S. Obaidat and N. A. Boudriga, Fundamentals of Performance Evaluation of Computer and Telecommunication Systems, Wiley, pp. 156-157, 2010.
[12] Y. Lee, “Mean waiting time of an end-user in the multiple web access environment,” Proc. of the Sixth International Conference on Communication Theory, Reliability, and Quality of Service (CTRQ 2013), pp. 1-4, 2013.