The SIR model is used extensively in epidemiology, in particular for the analysis of communicable diseases. One problem with SIR and other existing models is that they are tailored to random (Erdős-type) networks, since they do not consider the varying probabilities of infection or immunity per node. In this paper, we present the application and simulation results of the pSEIRS model, which takes these probabilities into account and is thus suitable for more realistic scale-free networks. In the pSEIRS model, the death rate and the excess death rate are constant for infective nodes. Latent and immune periods are assumed to be constant, and the infection rate is assumed to be proportional to I(t)/N(t), where N(t) is the size of the total population and I(t) is the size of the infected population. A node recovers from an infection temporarily with probability p and dies from the infection with probability (1 - p).
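The dynamics summarized above can be sketched as a minimal discrete-time SEIRS simulation. This is an illustration only: the rate values and the simple Euler update are assumptions, not the paper's pSEIRS formulation.

```python
# Minimal discrete-time SEIRS sketch (illustrative; all parameter values
# are assumed). The infection rate is proportional to I(t)/N(t); a node
# recovers temporarily with probability p and dies with probability 1 - p.

def simulate_seirs(S, E, I, R, beta=0.4, sigma=0.2, gamma=0.1, omega=0.05,
                   p=0.9, steps=100, dt=1.0):
    """sigma = 1/latent period, gamma = 1/infectious period,
    omega = 1/immune period (all constant, as in the model)."""
    history = []
    for _ in range(steps):
        N = S + E + I + R
        new_exposed = beta * S * I / N    # proportional to I(t)/N(t)
        new_infective = sigma * E         # end of latent period
        leaving_I = gamma * I             # end of infectious period
        recovered = p * leaving_I         # temporary immunity
        deaths = (1 - p) * leaving_I      # excess deaths of infectives
        resusceptible = omega * R         # immunity wanes
        S += dt * (resusceptible - new_exposed)
        E += dt * (new_exposed - new_infective)
        I += dt * (new_infective - leaving_I)
        R += dt * (recovered - resusceptible)
        history.append((S, E, I, R))
    return history

hist = simulate_seirs(S=990.0, E=0.0, I=10.0, R=0.0)
```

Because a fraction (1 - p) of infectives dies, the total population shrinks over the run, which is the qualitative behavior the abstract describes.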
A general stochastic information diffusion model in social networks based on ...IJCNCJournal
Social networks are an important infrastructure for the propagation of information, viruses, and innovations. Since users' behavior is influenced by the activity of other users, groups of people form according to the similarity of their interests. Many real-world events can likewise be modeled on social networks; the spreading of disease is one instance. People's behavior and infection severity are the more important parameters in the dissemination of diseases, and together they determine whether the diffusion leads to an epidemic or not. SIRS is a hybrid of the SIR and SIS disease-spread models: a person in this model can return to the susceptible state after being removed. Based on the communities established on the social network, we use a compartmental variant of the SIRS model. In this paper, a general compartmental information diffusion model is proposed, and several useful parameters are extracted to analyze it. To adapt the model to realistic behavior, we use a Markovian model, which yields a stochastic version of the proposed model. In the stochastic case, we can calculate the transition probabilities between states and predict the value of each state. A comparison between the two modes of the model shows that the population prediction is verified in each state.
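The per-node stochastic behavior described above can be sketched as a small three-state Markov chain over S, I, and R. The transition probabilities below are assumptions for illustration, not parameters from the paper.

```python
# Illustrative per-node SIRS Markov chain (assumed per-step probabilities):
# S -> I (infection), I -> R (removal), R -> S (loss of immunity).
beta, gamma, xi = 0.3, 0.2, 0.1
P = [
    [1 - beta, beta,      0.0   ],  # from S
    [0.0,      1 - gamma, gamma ],  # from I
    [xi,       0.0,       1 - xi],  # from R
]

state = [0.99, 0.01, 0.0]  # initial probability of being in S, I, R
for _ in range(500):
    # one Markov step: state <- state * P (predicted value of each state)
    state = [sum(state[i] * P[i][j] for i in range(3)) for j in range(3)]
```

Iterating the chain gives the predicted probability of each state, which is the kind of per-state prediction the abstract compares against the deterministic mode of the model.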
On Semantics and Deep Learning for Event Detection in Crisis SituationsCOMRADES project
In this paper, we introduce Dual-CNN, a semantically-enhanced deep learning model to target the problem of event detection in crisis situations from
social media data. A layer of semantics is added to a traditional Convolutional Neural Network (CNN) model to capture the contextual information that is generally scarce in short, ill-formed social media messages. Our results show that
our methods are able to accurately identify the existence of events and event types (hurricanes, floods, etc.) (> 79% F-measure), but the performance of the model drops significantly (61% F-measure) when identifying fine-grained event-related information (affected individuals, damaged infrastructure, etc.).
These results are competitive with more traditional Machine Learning models, such as SVM.
http://oro.open.ac.uk/49639/1/event_detection.pdf
Probabilistic models for anomaly detection based on usage of network trafficAlexander Decker
This document discusses probabilistic models for anomaly detection based on network traffic usage. It introduces several probabilistic methods and statistical models that can be used for network traffic anomaly detection, including Bayesian theorem, mean and standard deviation models, point and interval estimations, multivariate regression models, Markov processes, and time series models. As an example, it describes modeling the spread of computer worms using epidemiological models such as linear, exponential, logistic, and differential equation models. It also discusses the different possible scenarios an intrusion detection system can encounter and how to calculate probabilities of outcomes using Bayesian theorem.
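As a small illustration of the Bayesian-theorem scenario calculation mentioned above, the sketch below computes the posterior probability of an intrusion given an alarm. All rates are invented for illustration and are not taken from the document.

```python
# Bayes' theorem applied to intrusion-detection outcomes.
# The rates below are illustrative assumptions, not measurements.
p_intrusion = 0.001               # prior probability of an intrusion
p_alarm_given_intrusion = 0.95    # detection rate (true positive)
p_alarm_given_normal = 0.01       # false-alarm rate (false positive)

# Total probability of an alarm, over both scenarios.
p_alarm = (p_alarm_given_intrusion * p_intrusion
           + p_alarm_given_normal * (1 - p_intrusion))

# Posterior: probability that an alarm really signals an intrusion.
p_intrusion_given_alarm = p_alarm_given_intrusion * p_intrusion / p_alarm
```

Even with a 95% detection rate the posterior stays below 9% here, because intrusions are rare relative to false alarms (the base-rate effect that makes such calculations important for IDS design).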
A unified prediction of computer virus spread in connected networksUltraUploader
This document presents two models for predicting the spread of computer viruses in connected networks: a discrete Markov model and a continuous differential equation model. The Markov model captures short-term behavior dependent on initial conditions, providing extinction probabilities. The differential equation model predicts long-term behavior by defining a boundary between parameter values where the virus will survive or die out. Simulations show good agreement between the models. The models provide new insights into controlling computer virus persistence on networks.
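The short-term/long-term split the summary describes can be illustrated with a simple birth-death (branching-process) approximation: extinction probabilities depend on the initial conditions, while long-term survival depends only on whether the infection rate exceeds the cure rate. This is a generic stand-in, not the document's actual Markov or differential equation model.

```python
# Hedged sketch: branching-process approximation of virus extinction.
# Each infected machine infects others at rate b and is cleaned at rate d;
# the values used below are assumptions for illustration.

def extinction_probability(b, d, initial_infected=1):
    """Starting from one infected node, the virus dies out with probability
    min(1, d/b); independence of lineages gives the power for several
    initial infections (the dependence on initial conditions)."""
    q = min(1.0, d / b)
    return q ** initial_infected

# Long-term boundary: the virus can persist only when b > d (q < 1).
p_die = extinction_probability(b=0.5, d=0.2, initial_infected=3)
```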
Avian Influenza (H5N1) Expert System using Dempster-Shafer TheoryAndino Maseleno
According to the Cumulative Number of Confirmed Human Cases of Avian Influenza (H5N1) reported to the World Health Organization (WHO) in 2011 from 15 countries, Indonesia had the largest number of deaths caused by avian influenza, with 146 deaths. In this research, we built an Avian Influenza (H5N1) Expert System for identifying avian influenza disease and displaying the result of the identification process. In this paper, we describe five major symptoms: depression; bluish comb, wattle, and face region; swollen face region; narrowness of the eyes; and balance disorders. We use chickens as the research object. Our approach uses Dempster-Shafer theory as the inference engine of the expert system to quantify the degree of belief: it combines beliefs under conditions of uncertainty and ignorance, and allows quantitative measurement of the belief and plausibility of our identification result. The results reveal that the Avian Influenza (H5N1) Expert System successfully identifies the existence of avian influenza and displays the result of the identification process.
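The evidence-combination step described above can be sketched with Dempster's rule of combination over a two-element frame of discernment. The frame, the symptom sources, and the mass values are illustrative assumptions, not the expert system's actual knowledge base.

```python
from itertools import product

# Dempster's rule of combination for two basic mass assignments over the
# frame {flu, not_flu}. Masses below are made-up illustrations.

def combine(m1, m2):
    """Combine two mass assignments (dicts of frozenset -> mass)."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # non-empty intersection: compatible evidence
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:      # empty intersection: conflicting evidence
            conflict += x * y
    k = 1.0 - conflict  # normalization factor (Dempster's K)
    return {s: v / k for s, v in combined.items()}

FLU = frozenset({"flu"})
THETA = frozenset({"flu", "not_flu"})      # total ignorance
m_sym1 = {FLU: 0.6, THETA: 0.4}            # evidence from one symptom
m_sym2 = {FLU: 0.7, THETA: 0.3}            # evidence from another symptom
m = combine(m_sym1, m_sym2)                # belief in flu rises to 0.88
```

Combining two partial pieces of evidence raises the belief committed to "flu" above either source alone, which is the effect the abstract attributes to the inference engine.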
Parallel Programming Approaches for an Agent-based Simulation of Concurrent P...Subhajit Sahu
This document describes parallel programming approaches for an agent-based simulation model of concurrent pandemic and seasonal influenza outbreaks. The simulation was initially parallelized using OpenMP to iterate locations and reduce runtime from 1060 seconds to 301 seconds for a population of 1 million, achieving a speedup of 3.52. A proposed CUDA implementation is presented to further improve performance by assigning agent interactions to GPU blocks representing population centers.
Massively Parallel Simulations of Spread of Infectious Diseases over Realisti...Subhajit Sahu
Highlighted notes while studying for project work:
Massively Parallel Simulations of Spread of Infectious Diseases over Realistic Social Networks
Abhinav Bhatele†
Jae-Seung Yeom†
Nikhil Jain†
Chris J. Kuhlman∗
Yarden Livnat‡
Keith R. Bisset∗
Laxmikant V. Kale§
Madhav V. Marathe∗
†Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, Livermore, California 94551 USA
∗Biocomplexity Institute & Department of Computer Science, Virginia Tech, Blacksburg, Virginia 24061 USA
‡Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah 84112 USA
§Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 USA
E-mail: †{bhatele, yeom2, nikhil}@llnl.gov, ∗{ckuhlman, kbisset, mmarathe}@vbi.vt.edu
Abstract—Controlling the spread of infectious diseases in large populations is an important societal challenge. Mathematically, the problem is best captured as a certain class of reaction-diffusion processes (referred to as contagion processes) over appropriate synthesized interaction networks. Agent-based models have been successfully used in the recent past to study such contagion processes. We describe EpiSimdemics, a highly scalable parallel code written in Charm++ that uses agent-based modeling to simulate disease spread over large, realistic, co-evolving interaction networks. We present a new parallel implementation of EpiSimdemics that achieves unprecedented strong and weak scaling on different architectures (Blue Waters, Cori and Mira). EpiSimdemics achieves five times greater speedup than the second fastest parallel code in this field. This unprecedented scaling is an important step to support the long-term vision of real-time epidemic science. Finally, we demonstrate the capabilities of EpiSimdemics by simulating the spread of influenza over a realistic synthetic social contact network spanning the continental United States (∼280 million nodes and 5.8 billion social contacts).
A potency relation for worms and next generation attack toolsUltraUploader
This document discusses analyzing and comparing the potency of attack tools like worms. It proposes a methodology using two spaces: attributes of attack tools and metrics to measure potency. Attributes include mobility, modularity, coordination, goal-directedness, and being policy-constrained. Relating these attributes to potency metrics through a "potency relation" could help predict threats and improve defenses. The document outlines this methodology and highlights its potential uses.
Priority based bandwidth allocation in wireless sensor networksIJCNCJournal
Most sensor network applications need real-time communication, and the need for deadline-aware real-time communication is becoming evident in these applications. These applications also have different deadline requirements. The real-time applications of wireless sensor networks are bandwidth sensitive and need a higher share of bandwidth for higher-priority data to meet deadline requirements. In this paper we focus on MAC-layer modifications to meet the real-time requirements of different priority data. Bandwidth partitioning among different priority transmissions is implemented through MAC-layer modifications. The MAC layer implements a queuing model that supports a lower transfer rate for lower-priority packets and a higher transfer rate for higher-priority real-time packets, minimizing the end-to-end delay. The performance of the algorithm is evaluated with varying node distributions.
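The priority-based queuing idea above can be sketched as a weighted two-queue scheduler. The two-queue structure and the 3:1 service ratio are illustrative assumptions, not the paper's exact MAC design.

```python
from collections import deque

# Sketch: serve up to `high_share` high-priority (real-time) packets per
# low-priority packet, giving the higher transfer rate to real-time data.

def schedule(high, low, slots, high_share=3):
    """Drain the two queues for `slots` transmission slots."""
    sent = []
    while slots > 0 and (high or low):
        for _ in range(high_share):          # favored: real-time queue
            if high and slots > 0:
                sent.append(high.popleft()); slots -= 1
        if low and slots > 0:                # then one low-priority packet
            sent.append(low.popleft()); slots -= 1
    return sent

hi = deque(f"H{i}" for i in range(6))
lo = deque(f"L{i}" for i in range(6))
order = schedule(hi, lo, slots=8)
# order interleaves 3 high-priority packets per low-priority one
```

High-priority packets spend fewer slots waiting behind low-priority traffic, which is how such partitioning reduces end-to-end delay for real-time data.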
NETWORK-AWARE DATA PREFETCHING OPTIMIZATION OF COMPUTATIONS IN A HETEROGENEOU...IJCNCJournal
Rapid development of diverse computer architectures and hardware accelerators means that the design of parallel systems faces new problems resulting from their heterogeneity. Our implementation of a parallel system called KernelHive allows applications to be run efficiently in a heterogeneous environment consisting of multiple collections of nodes with different types of computing devices. The execution engine of the system is open to optimizer implementations focusing on various criteria. In this paper, we propose a new optimizer for KernelHive that utilizes distributed databases and performs data prefetching to optimize the execution time of applications which process large input data. Employing a versatile data management scheme, which allows various distributed data providers to be combined, we propose using NoSQL databases for our purposes. We support our solution with results of experiments with real executions of our OpenCL implementation of a regular expression matching application in various hardware configurations. Additionally, we propose a network-aware scheduling scheme for selecting hardware for the proposed optimizer and present simulations that demonstrate its advantages.
IMPROVEMENT OF LTE DOWNLINK SYSTEM PERFORMANCES USING THE LAGRANGE POLYNOMIAL...IJCNCJournal
The document describes research on improving the performance of LTE downlink systems using Lagrange polynomial interpolation for channel estimation. It presents the MIMO-OFDM transmission scheme used in LTE and discusses various channel estimation techniques including linear, sinus cardinal, Newton polynomial, and Lagrange polynomial interpolation. Simulation results show that Lagrange polynomial interpolation outperforms other methods in terms of block error rate, throughput, and error vector magnitude versus signal-to-noise ratio. The optimal order of the Lagrange polynomial is determined by evaluating performance for different orders.
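The interpolation at the heart of the channel estimation technique above can be sketched directly. The pilot positions and channel-gain values below are made-up illustrations; the paper's LTE setting would use complex channel coefficients at standard pilot locations.

```python
# Lagrange polynomial interpolation between pilot subcarriers
# (pilot values here are assumed for illustration).

def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange polynomial through points (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if i != j:
                term *= (x - xj) / (xi - xj)  # Lagrange basis factor
        total += term
    return total

# Channel gains "measured" at pilot subcarriers 0, 4, 8 (assumed values);
# estimate the gain at an intermediate data subcarrier.
pilots_x = [0.0, 4.0, 8.0]
pilots_y = [1.00, 0.80, 0.50]
h_est = lagrange_interpolate(pilots_x, pilots_y, 2.0)
```

The polynomial passes exactly through every pilot point, and its order grows with the number of pilots used, which is why the paper evaluates performance for different polynomial orders.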
Performance analysis and monitoring of various advanced digital modulation an...IJCNCJournal
To achieve better computational performance in optical fiber communication, and for simplicity of implementation, different digital modulation, detection, and multiplexing techniques are used. These techniques maximize spectral efficiency. This paper presents a tabular comparative analysis, with 3D graphical representation, of different optical digital modulation formats and multiplexing techniques at and beyond 400 Gb/s. In this article we survey the different parameters related to digital fiber-optic communication.
On the migration of a large scale network from IPv4 to IPv6 environmentIJCNCJournal
This document discusses the design of migrating a large-scale network from IPv4 to a dual-stack IPv4/IPv6 environment. It focuses on using dual-stack mechanisms, which allow both IPv4 and IPv6 protocol stacks to run simultaneously. The design considers aspects such as topology, addressing plans, routing protocols, and performance statistics. A dual-stack approach is proposed that involves transitioning the network core, perimeter routers, and other devices in stages to support both IPv4 and IPv6. Addressing plans are provided for infrastructure, loopbacks, and customer networks that aim to support future growth while allowing for aggregation.
ENERGY EFFICIENT COOPERATIVE SPECTRUM SENSING IN COGNITIVE RADIOIJCNCJournal
Sensing in cognitive radio (CR) protects the primary user (PU) from harmful interference and is therefore assumed to be a requirement. However, sensing faces two main challenges: first, the CR is required to sense the PU under very low signal-to-noise ratios, which takes a longer sensing time; second, some CR nodes may suffer from deep fading and shadowing effects. Cooperative spectrum sensing (CSS) is supposed to solve these challenges. However, CSS adds extra energy consumption, because CRs send the sensing result to the fusion center and receive the final decision from it, in addition to the sensing energy itself. CSS may therefore consume considerable energy from the battery of the CR node. In this paper, we try to jointly find the sensing time required from each CR node and the number of CR nodes that should perform sensing, such that the energy and the energy efficiency (i.e., the ratio of throughput to energy consumed) are optimized. Simulation results show that the joint optimization performs better in terms of energy efficiency than approaches that perform separate optimization.
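The joint optimization described above can be sketched as a grid search over sensing time and the number of cooperating nodes, maximizing energy efficiency (throughput divided by energy). Every formula and number below is a simplified stand-in for illustration, not the paper's models.

```python
# Hedged sketch: jointly search sensing time and node count for the best
# energy efficiency. Detection, throughput, and energy models are toys.

def energy_efficiency(t_sense, n_nodes, frame=0.1, p_sense=0.1,
                      p_report=0.2, p_tx=1.0, rate=1.0e6):
    # Toy cooperative detection: each extra node improves detection.
    detect = 1.0 - (1.0 - min(1.0, 5.0 * t_sense)) ** n_nodes
    # Time spent sensing is lost to data transmission.
    throughput = rate * (frame - t_sense) / frame * detect
    # Sensing + reporting energy per node, plus transmission energy.
    energy = (n_nodes * (p_sense * t_sense + p_report * 0.001)
              + p_tx * (frame - t_sense))
    return throughput / energy

best = max(((t, n) for t in (0.005, 0.01, 0.02, 0.05)
            for n in range(1, 11)),
           key=lambda tn: energy_efficiency(*tn))
```

The search captures the trade-off in the abstract: more nodes and longer sensing improve detection but cost energy, so the optimum balances the two jointly rather than separately.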
OPTIMIZING SMART THINGS ADDRESSING THROUGH THE ZIGBEE-BASED INTERNET OF THINGSIJCNCJournal
Devices are becoming increasingly interconnected, linked with each other and with humans. The Internet of Things (IoT) concept is currently used in machine-to-machine (M2M) applications such as power, gas, and oil utilities, transmission, and transport. The most profound challenge that the IoT faces is how to connect many very different devices into a network of things. In this regard, the standard for sending information between devices supporting the IoT is called ZigBee, also known as the IEEE 802.15.4-2006 standard; ZigBee is indispensable to the functioning of the IoT. In this paper, OPNET has been used to simulate two quite differently scaled wireless sensor network environments. The two environments had quite different ZigBee topologies; thus, the performance of each topology could be analyzed. We propose ZigBee as an optional addressing method for the smart things making up the smart world, which facilitates the transmission and analysis of data automatically.
Zigbee technology and its application inIJCNCJournal
Wireless home automation systems have drawn considerable attention from researchers for more than a decade. The major technologies used to implement these systems include Z-Wave, Insteon, Wavenis, Bluetooth, WiFi, and ZigBee. Among these technologies, ZigBee-based systems have become very popular because of their low cost and low power consumption. This paper addresses ZigBee-based wireless home automation systems. There are two main parts: in the first, a brief introduction to the ZigBee technology is presented, and in the second, a survey of ZigBee-based wireless home automation systems. The performance of ZigBee-based systems is also compared with that of systems based on other competing technologies. In addition, some future opportunities and challenges of ZigBee-based systems are listed.
The nature of wireless networks itself creates new vulnerabilities that do not exist in classical wired networks. This results in an evolving requirement to implement new, sophisticated security mechanisms in the form of Intrusion Detection and Prevention Systems. This paper deals with security issues of small office and home office wireless networks. The goal of our work is to design and evaluate a wireless IDPS using a packet injection method. A 95% decrease in the attacker's traffic was observed compared to the attacker's traffic without deployment of the proposed IDPS.
Virtual backbone trees for most minimalIJCNCJournal
Virtual backbone trees have been used for efficient communication between the sink node and any other node in the deployed area. But the virtual backbone trees proposed so far are not fully energy efficient, and EVBTs have a few flaws associated with them. In this paper two such virtual backbones are proposed. The motive behind the first algorithm, Most Minimal Energy Virtual Backbone Tree (MMEVBT), is to minimise the energy consumption when packets are transmitted between the sink and a target sensor node. It is shown why the energy consumption is always minimal during any packet transfer between any node and the sink. For every node, the route with the most minimal energy consumption is identified, and a new tree node is elected only when a route with better minimal energy consumption is identified for that node to communicate with the sink, and vice versa. By moving the sink periodically, it is ensured that the batteries of the nodes near the sink are not completely drained. Another backbone construction algorithm is proposed which maximises the network lifetime by increasing the lifetime of all tree nodes. Simulations are done in NS2 to test the algorithms, and the results are discussed in detail.
Performance analysis of routing protocols and tcp variants under http and ftp...IJCNCJournal
MANET stands for mobile ad-hoc network, a multi-hop and dynamic network in which each station changes its location frequently and configures itself automatically. In this paper, four routing protocols, OLSR, GRP, DSR, and AODV, are discussed along with three TCP variants: SACK, New Reno, and Reno. The main focus of this paper is to study the impact of scalability, mobility, and traffic load on routing protocols and TCP variants. The results show that the proactive protocols OLSR and GRP outperform the reactive protocols AODV and DSR for the same node count, node speed, and traffic load. On the other hand, the TCP variant experiments reveal the superiority of TCP SACK over the other two variants in adapting to varying network sizes, while TCP Reno acts more robustly under varying mobility speeds and traffic loads.
Solving bandwidth guaranteed routing problem using routing dataIJCNCJournal
This paper introduces a traffic engineering routing algorithm that aims to accept as many routing demands as possible, on the condition that a certain amount of bandwidth is reserved for each accepted demand. The novel idea is to select routes based not only on network states but also on information derived from routing data, such as the probabilities of the ingress-egress pairs and the usage frequencies of the links. Experiments on acceptance ratio and computation time have been conducted against various test sets. Results indicate that the proposed algorithm performs better than existing popular algorithms, including the Minimum Interference Routing Algorithm (MIRA) and the Random Race based Algorithm for Traffic Engineering (RRATE).
Performance improvement for papr reduction in lte downlink system with ellipt...IJCNCJournal
This paper is concerned with improving the PAPR reduction of the orthogonal frequency division multiplexing (OFDM) signal using an amplitude clipping and filtering based design. OFDM is a well-established multi-carrier transmission scheme that has been implemented in the long term evolution (LTE) downlink. Nonetheless, the peak-to-average power ratio (PAPR) is the most serious problem with OFDM; consequently, this paper proposes a procedure for reducing the PAPR using amplitude clipping and filtering. We use an IIR bandpass elliptic filter after amplitude clipping to reduce the PAPR. The performance of the system in terms of bit error rate (BER) is also analyzed for the new filter-based clipping method. Our results show that the proposed clipping method with the IIR elliptic bandpass filter significantly reduces the PAPR value.
A NOVEL ROUTING PROTOCOL FOR TARGET TRACKING IN WIRELESS SENSOR NETWORKSIJCNCJournal
Wireless sensor networks (WSNs) are large-scale systems consisting of hundreds, thousands, or more sensor nodes. The nodes are tiny and low cost, with limited battery, storage, and processing power, and have wireless capabilities to monitor physical or environmental conditions. This paper compares the performance of some existing routing protocols for a target tracking application against a proposed hierarchical binary tree structure for storing routing information. The sensed information is stored in a controlled way at multiple sensor nodes (e.g., node, parent node, and grandparent node) deployed using a complete binary tree data structure. This reduces traffic implosion and geographical overlapping. Simulation results show network lifetime improved by 20%, target detection probability improved by 25%, and error rate reduced by 20%, along with gains in energy efficiency, fault tolerance, and routing efficiency. We evaluated our proposed algorithm using NS2.
A PROPOSAL FOR IMPROVING THE LIFETIME OF WIRELESS SENSOR NETWORKIJCNCJournal
The document proposes a new routing protocol for wireless sensor networks that aims to improve network lifetime. The protocol is based on LEACH, an existing energy-efficient clustering protocol, but improves on it by electing cluster heads based on both remaining node energy and distance to the base station. Simulation results show the proposed protocol extends network lifetime by up to 75% compared to LEACH alone by distributing energy usage more evenly across nodes.
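The cluster-head election criterion described above (remaining energy plus distance to the base station) can be sketched as a simple scoring function. The weighting, the normalization, and the score formula are assumptions for illustration, not the protocol's actual election rule.

```python
import math
import random

# Hedged sketch: elect cluster heads by scoring nodes on residual energy
# and closeness to the base station (weights and formula are assumed).

def elect_cluster_heads(nodes, base=(0.0, 0.0), k=2, w_energy=0.7):
    """nodes: list of dicts with 'pos' (x, y) and 'energy' in [0, 1].
    Score = w * normalized energy + (1 - w) * normalized closeness."""
    d_max = max(math.dist(n["pos"], base) for n in nodes) or 1.0
    e_max = max(n["energy"] for n in nodes) or 1.0

    def score(n):
        closeness = 1.0 - math.dist(n["pos"], base) / d_max
        return w_energy * n["energy"] / e_max + (1 - w_energy) * closeness

    return sorted(nodes, key=score, reverse=True)[:k]

random.seed(1)
nodes = [{"pos": (random.uniform(0, 100), random.uniform(0, 100)),
          "energy": random.random()} for _ in range(20)]
heads = elect_cluster_heads(nodes, base=(50.0, 150.0), k=2)
```

Folding distance into the score discourages electing low-energy or far-away heads, which is how such a rule spreads energy usage more evenly than energy-agnostic LEACH rotation.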
A STUDY ON IMPACTS OF RTT INACCURACY ON DYNAMIC BANDWIDTH ALLOCATION IN PON A...IJCNCJournal
This document discusses the impacts of round trip time (RTT) inaccuracy on dynamic bandwidth allocation (DBA) algorithms in passive optical networks (PONs). It analyzes how RTT inaccuracy can lead to collisions or wasted bandwidth in PONs by causing misalignments between the expected and actual transmission times of optical network units (ONUs). The document reports on simulations conducted to evaluate this impact for different levels of RTT deviation. It finds that higher RTT inaccuracy leads to significantly higher collision rates and more wasted bandwidth. The document proposes developing a method to mitigate the performance degradation caused by RTT inaccuracy.
A survey of models for computer networks managementIJCNCJournal
The virtualization concept along with its underlyin
g technologies has been warmly adopted in many fiel
ds
of computer science. In this direction, network vir
tualization research has presented considerable res
ults.
In a parallel development, the convergence of two d
istinct worlds, communications and computing, has
increased the use of computing server resources (vi
rtual machines and hypervisors acting as active
network elements) in network implementations. As a
result, the level of detail and complexity in such
architectures has increased and new challenges need
to be taken into account for effective network
management. Information and data models facilitate
infrastructure representation and management and
have been used extensively in that direction. In th
is paper we survey available modelling approaches a
nd
discuss how these can be used in the virtual machin
e (host) based computer network landscape; we prese
nt
a qualitative analysis of the current state-of-the-
art and offer a set of recommendations on adopting
any
particular method.
Modeling and Threshold Sensitivity Analysis of Computer Virus EpidemicIOSR Journals
This document presents a mathematical model for analyzing the spread of computer viruses through a network. It modifies an existing epidemic model by incorporating a range of parameters rather than constant values. The model classifies computers into four groups: susceptible, exposed, infected, and immunized. It uses differential equations to model how the population in each group changes over time and analyzes the epidemic threshold. The document estimates parameter values based on studies of real computer viruses. It aims to help understand virus spreading and determine measures to control viral epidemics.
This document presents a mathematical model for analyzing the spread of computer viruses through a network. It modifies an existing epidemic model by incorporating a range of parameters rather than constant values. The model classifies computers into four groups: susceptible, exposed, infected, and immunized. It uses differential equations to model how the population in each group changes over time and analyzes the epidemic threshold. The document estimates parameter values based on studies of real computer viruses. It aims to help understand virus spreading and determine measures to control viral epidemics.
The Susceptible-Infectious Model of Disease Expansion Analyzed Under the Scop...cscpconf
This paper presents a model to approach the dynamics of infectious diseases expansion. Our
model aims to establish a link between traditional simulation of the Susceptible-Infectious (SI)
model of disease expansion based on ordinary differential equations (ODE), and a very simple
approach based on both connectivity between people and elementary binary rules that define
the result of these contacts. The SI deterministic compartmental model has been analysed and
successfully modelled by our method, in the case of 4-connected neighbourhood.
THE SUSCEPTIBLE-INFECTIOUS MODEL OF DISEASE EXPANSION ANALYZED UNDER THE SCOP...csandit
This paper presents a model to approach the dynamics of infectious diseases expansion. Our model aims to establish a link between traditional simulation of the Susceptible-Infectious (SI) model of disease expansion based on ordinary differential equations (ODE), and a very simple approach based on both connectivity between people and elementary binary rules that define the result of these contacts. The SI deterministic compartmental model has been analysed and successfully modelled by our method, in the case of 4-connected neighbourhood.
This document outlines an epidemiological modeling project on modeling the impact of childhood vaccination strategies on the spread of measles. It introduces key epidemiological concepts like the basic reproductive number R0 and discusses measles as a highly contagious disease. It then presents the SIR compartmental model and its extensions, including incorporating births and deaths, age stratification, and different pediatric vaccination strategies. The document will use these mathematical models to simulate different vaccination scenarios and their effect on measles transmission dynamics.
Priority based bandwidth allocation in wireless sensor networks (IJCNCJournal)
Most sensor network applications need real-time communication, and the need for deadline-aware real-time communication is becoming evident in these applications. These applications also have different deadline requirements. The real-time applications of wireless sensor networks are bandwidth sensitive and need a higher share of bandwidth for higher-priority data to meet the deadline requirements. In this paper we focus on MAC-layer modifications to meet the real-time requirements of different priority data. Bandwidth partitioning among different priority transmissions is implemented through MAC-layer modifications. The MAC layer implements a queuing model that supports a lower transfer rate for lower-priority packets and a higher transfer rate for real-time packets with higher priority, minimizing the end-to-end delay. The performance of the algorithm is evaluated with varying node distributions.
NETWORK-AWARE DATA PREFETCHING OPTIMIZATION OF COMPUTATIONS IN A HETEROGENEOU... (IJCNCJournal)
Rapid development of diverse computer architectures and hardware accelerators means that the design of parallel systems faces new problems resulting from their heterogeneity. Our implementation of a parallel system called KernelHive allows applications to run efficiently in a heterogeneous environment consisting of multiple collections of nodes with different types of computing devices. The execution engine of the system is open to optimizer implementations focusing on various criteria. In this paper, we propose a new optimizer for KernelHive that utilizes distributed databases and performs data prefetching to optimize the execution time of applications that process large input data. Employing a versatile data management scheme, which allows combining various distributed data providers, we propose using NoSQL databases for our purposes. We support our solution with results of experiments with real executions of our OpenCL implementation of a regular expression matching application in various hardware configurations. Additionally, we propose a network-aware scheduling scheme for selecting hardware for the proposed optimizer and present simulations that demonstrate its advantages.
IMPROVEMENT OF LTE DOWNLINK SYSTEM PERFORMANCES USING THE LAGRANGE POLYNOMIAL... (IJCNCJournal)
The document describes research on improving the performance of LTE downlink systems using Lagrange polynomial interpolation for channel estimation. It presents the MIMO-OFDM transmission scheme used in LTE and discusses various channel estimation techniques including linear, sinus cardinal, Newton polynomial, and Lagrange polynomial interpolation. Simulation results show that Lagrange polynomial interpolation outperforms other methods in terms of block error rate, throughput, and error vector magnitude versus signal-to-noise ratio. The optimal order of the Lagrange polynomial is determined by evaluating performance for different orders.
Performance analysis and monitoring of various advanced digital modulation an... (IJCNCJournal)
To achieve better computational performance in optical fiber communication, and for simplicity of implementation, different digital modulation, detection, and multiplexing techniques are used. These techniques maximize the spectral efficiency. This paper presents a tabular comparative analysis, with 3D graphical representation, of different optical digital modulation formats and multiplexing techniques within and beyond 400 Gb/s. In this article we survey the different parameters related to digital fiber-optic communication.
On the migration of a large scale network from IPv4 to IPv6 environment (IJCNCJournal)
This document discusses the design of migrating a large-scale network from IPv4 to a dual-stack IPv4/IPv6 environment. It focuses on using dual-stack mechanisms, which allow both IPv4 and IPv6 protocol stacks to run simultaneously. The design considers aspects such as topology, addressing plans, routing protocols, and performance statistics. A dual-stack approach is proposed that involves transitioning the network core, perimeter routers, and other devices in stages to support both IPv4 and IPv6. Addressing plans are provided for infrastructure, loopbacks, and customer networks that aim to support future growth while allowing for aggregation.
ENERGY EFFICIENT COOPERATIVE SPECTRUM SENSING IN COGNITIVE RADIO (IJCNCJournal)
Sensing in cognitive radio (CR) protects the primary user (PU) from bad interference. Therefore, it is
assumed to be a requirement. However, sensing has two main challenges; first the CR is required to sense
the PU under very low signal to noise ratios which will take longer sensing time, and second, some CR
nodes may suffer from deep fading and shadowing effects. Cooperative spectrum sensing (CSS) is supposed
to solve these challenges. However, CSS adds extra energy consumption because the CRs send the sensing result to the fusion center and receive the final decision from it, in addition to the sensing energy itself. Therefore, CSS may consume considerable energy from the battery of the CR node. In this paper we therefore try to jointly find the sensing time required from each CR node and the number of CR nodes that should perform sensing such that the energy and the energy efficiency (i.e., the ratio of throughput to energy consumed) are optimized. Simulation results show that the joint optimization performs better in terms of energy efficiency than other approaches that perform separate optimization.
OPTIMIZING SMART THINGS ADDRESSING THROUGH THE ZIGBEE-BASED INTERNET OF THINGS (IJCNCJournal)
Devices are becoming increasingly interconnected, linked with each other and with humans. The Internet of Things (IoT) concept is currently used in machine-to-machine (M2M) applications such as power, gas, and oil utilities transmission and transport. The most profound challenge that the IoT faces is how to connect many very different devices into a network of things. In this regard, the standard for sending information between devices supporting the IoT is called ZigBee, also known as the IEEE 802.15.4-2006 standard; ZigBee is indispensable to the functioning of the IoT. In this paper, OPNET has been used to simulate two quite differently scaled Wireless Sensor Network environments. The two environments had quite different ZigBee topologies; thus, the performance of each topology could be analyzed. We propose ZigBee as an optional addressing method for the smart things making up the smart world, which facilitates the transmission and analysis of data automatically.
Zigbee technology and its application in... (IJCNCJournal)
Wireless home automation systems have drawn considerable attention from researchers for more than a decade. The major technologies used to implement these systems include Z-Wave, Insteon, Wavenis, Bluetooth, WiFi, and ZigBee. Among these technologies, ZigBee-based systems have become very popular because of their low cost and low power consumption. In this paper, ZigBee-based wireless home automation systems are addressed. There are two main parts to this paper. The first part presents a brief introduction to the ZigBee technology, and the second part presents a survey of ZigBee-based wireless home automation systems. The performance of ZigBee-based systems is also compared with that of systems based on other competing technologies. In addition, some future opportunities and challenges of ZigBee-based systems are listed in this paper.
The nature of wireless networks itself creates new vulnerabilities that do not exist in classical wired networks. This results in an evolving requirement to implement new, sophisticated security mechanisms in the form of Intrusion Detection and Prevention Systems. This paper deals with the security issues of small office and home office wireless networks. The goal of our work is to design and evaluate a wireless IDPS using a packet injection method. A decrease in the attacker's traffic of 95% was observed compared to the attacker's traffic without deployment of the proposed IDPS system.
Virtual backbone trees for most minimal... (IJCNCJournal)
Virtual backbone trees have been used for efficient communication between the sink node and any other node in the deployed area. However, the proposed virtual backbone trees are not all fully energy efficient, and EVBTs have a few flaws associated with them. In this paper two such virtual backbones are proposed. The motive behind the first algorithm, Most Minimal Energy Virtual Backbone Tree (MMEVBT), is to minimise the energy consumption when packets are transmitted between the sink and a target sensor node. The energy consumption is minimal and optimal, and it is shown why it always remains minimal during any transfer of packets between every node and the sink node. For every node, the route path with the most minimal energy consumption is identified, and a new tree node is elected only when a route with better minimal energy consumption is identified for a node to communicate with the sink, and vice versa. By moving the sink periodically it is ensured that the battery of the nodes near the sink is not completely drained. Another backbone construction algorithm is proposed which maximises the network lifetime by increasing the lifetime of all tree nodes. Simulations are done in NS2 to test the algorithms in practice, and the results are discussed in detail.
Performance analysis of routing protocols and tcp variants under http and ftp... (IJCNCJournal)
MANET stands for mobile ad-hoc network, a network with a multi-hop and dynamic nature where each station changes its location frequently and configures itself automatically. In this paper, four routing protocols, OLSR, GRP, DSR, and AODV, are discussed along with three TCP variants: SACK, New Reno, and Reno. The main focus of this paper is to study the impact of scalability, mobility, and traffic load on routing protocols and TCP variants. The results show that the proactive protocols OLSR and GRP outperform the reactive protocols AODV and DSR for the same node count, node speed, and traffic load. On the other hand, the TCP variants research reveals the superiority of the TCP SACK variant over the other two variants in adapting to varying network sizes, while the TCP Reno variant acts more robustly under varying mobility speeds and traffic loads.
Solving bandwidth guaranteed routing problem using routing data (IJCNCJournal)
This paper introduces a traffic engineering routing algorithm that aims to accept as many routing demands
as possible on the condition that a certain amount of bandwidth resource is reserved for each accepted
demand. The novel idea is to select routes based on not only network states but also information derived
from routing data such as probabilities of the ingress egress pairs and usage frequencies of the links.
Experiments with respect to acceptance ratio and computation time have been conducted against various
test sets. Results indicate that the proposed algorithm has better performance than existing popular algorithms including the Minimum Interference Routing Algorithm (MIRA) and the Random Race based Algorithm for Traffic Engineering (RRATE).
Performance improvement for papr reduction in lte downlink system with ellipt... (IJCNCJournal)
This paper is concerned with improving PAPR reduction for orthogonal frequency division multiplexing (OFDM) signals using an amplitude clipping and filtering based design. OFDM is a well-established multi-carrier multiplexing transmission scheme that has been implemented in the long term evolution (LTE) downlink. Nonetheless, the peak-to-average power ratio (PAPR) is the most troublesome problem with OFDM; consequently, this paper proposes a procedure for reducing the PAPR using amplitude clipping and filtering. Here we use an IIR bandpass elliptic filter after amplitude clipping to reduce the PAPR. The performance of the system in terms of bit error rate (BER) is also analyzed for the new filter-based clipping method. Our results show that the proposed clipping method with the IIR elliptic bandpass filter significantly reduces the PAPR value.
A NOVEL ROUTING PROTOCOL FOR TARGET TRACKING IN WIRELESS SENSOR NETWORKS (IJCNCJournal)
Wireless sensor networks (WSNs) are large-scale deployments consisting of hundreds, thousands, or more sensor nodes. The nodes are tiny, low-cost, lightweight devices with limited battery, storage, and processing power, and have wireless capabilities to monitor physical or environmental conditions. This paper compares the performance of some existing routing protocols for a target tracking application against a proposed hierarchical binary tree structure for storing routing information. The sensed information is stored in a controlled way at multiple sensor nodes (e.g. the node, its parent node, and its grandparent node), which are deployed using a complete binary tree data structure. This reduces traffic implosion and geographical overlapping. Simulation results show network lifetime improved by 20%, target detection probability by 25%, and error rate reduced by 20%, along with gains in energy efficiency, fault tolerance, and routing efficiency. We evaluated our proposed algorithm using NS2.
A PROPOSAL FOR IMPROVE THE LIFETIME OF WIRELESS SENSOR NETWORK (IJCNCJournal)
The document proposes a new routing protocol for wireless sensor networks that aims to improve network lifetime. The protocol is based on LEACH, an existing energy-efficient clustering protocol, but improves on it by electing cluster heads based on both remaining node energy and distance to the base station. Simulation results show the proposed protocol extends network lifetime by up to 75% compared to LEACH alone by distributing energy usage more evenly across nodes.
A STUDY ON IMPACTS OF RTT INACCURACY ON DYNAMIC BANDWIDTH ALLOCATION IN PON A... (IJCNCJournal)
This document discusses the impacts of round trip time (RTT) inaccuracy on dynamic bandwidth allocation (DBA) algorithms in passive optical networks (PONs). It analyzes how RTT inaccuracy can lead to collisions or wasted bandwidth in PONs by causing misalignments between the expected and actual transmission times of optical network units (ONUs). The document reports on simulations conducted to evaluate this impact for different levels of RTT deviation. It finds that higher RTT inaccuracy leads to significantly higher collision rates and more wasted bandwidth. The document proposes developing a method to mitigate the performance degradation caused by RTT inaccuracy.
A survey of models for computer networks management (IJCNCJournal)
The virtualization concept along with its underlying technologies has been warmly adopted in many fields of computer science. In this direction, network virtualization research has presented considerable results. In a parallel development, the convergence of two distinct worlds, communications and computing, has increased the use of computing server resources (virtual machines and hypervisors acting as active network elements) in network implementations. As a result, the level of detail and complexity in such architectures has increased and new challenges need to be taken into account for effective network management. Information and data models facilitate infrastructure representation and management and have been used extensively in that direction. In this paper we survey available modelling approaches and discuss how these can be used in the virtual machine (host) based computer network landscape; we present a qualitative analysis of the current state-of-the-art and offer a set of recommendations on adopting any particular method.
Modeling and Threshold Sensitivity Analysis of Computer Virus Epidemic (IOSR Journals)
This document presents a mathematical model for analyzing the spread of computer viruses through a network. It modifies an existing epidemic model by incorporating a range of parameters rather than constant values. The model classifies computers into four groups: susceptible, exposed, infected, and immunized. It uses differential equations to model how the population in each group changes over time and analyzes the epidemic threshold. The document estimates parameter values based on studies of real computer viruses. It aims to help understand virus spreading and determine measures to control viral epidemics.
The Susceptible-Infectious Model of Disease Expansion Analyzed Under the Scop... (cscpconf)
This paper presents a model to approach the dynamics of infectious diseases expansion. Our
model aims to establish a link between traditional simulation of the Susceptible-Infectious (SI)
model of disease expansion based on ordinary differential equations (ODE), and a very simple
approach based on both connectivity between people and elementary binary rules that define
the result of these contacts. The SI deterministic compartmental model has been analysed and
successfully modelled by our method, in the case of 4-connected neighbourhood.
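The contact-rule idea the SI abstract above describes can be sketched as a binary lattice update in which a susceptible cell catches the infection from any 4-connected infectious neighbour. This is only an illustrative sketch: the grid size, transmission probability, and periodic (wrap-around) boundary below are assumptions, not the paper's actual parameters.

```python
import numpy as np

def si_step(grid, p_transmit, rng):
    """One synchronous update of a binary SI lattice: 0 = susceptible,
    1 = infectious. A susceptible cell with at least one infectious
    4-connected (von Neumann) neighbour becomes infectious with
    probability p_transmit. np.roll gives a periodic boundary."""
    nbrs = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
            np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
    exposed = (grid == 0) & (nbrs > 0)
    newly = exposed & (rng.random(grid.shape) < p_transmit)
    return grid | newly.astype(np.int8)

def run_si(n=50, steps=100, p_transmit=0.3, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.zeros((n, n), dtype=np.int8)
    grid[n // 2, n // 2] = 1          # single initial infectious cell
    for _ in range(steps):
        grid = si_step(grid, p_transmit, rng)
    return grid
```

Because an SI model has no recovery, the update is monotone: counting `grid.sum()` per step gives the growing infectious fraction that the ODE version of the model tracks.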
This document outlines an epidemiological modeling project on modeling the impact of childhood vaccination strategies on the spread of measles. It introduces key epidemiological concepts like the basic reproductive number R0 and discusses measles as a highly contagious disease. It then presents the SIR compartmental model and its extensions, including incorporating births and deaths, age stratification, and different pediatric vaccination strategies. The document will use these mathematical models to simulate different vaccination scenarios and their effect on measles transmission dynamics.
An epidemiological model of virus spread and cleanup (UltraUploader)
This document presents an epidemiological model for simulating the spread of computer viruses and the effectiveness of signature-based antivirus technologies in containing outbreaks. The model divides machines into susceptible, infectious, detected, and removed states. It can be used to analyze how outbreak size and duration are affected by factors like virus spread rate, signature distribution rate, and cleanup rate. Initial simulations using the model demonstrate its ability to predict the overall dynamics of a virus outbreak both before and after an antivirus signature is released.
A computational model of computer virus propagation (UltraUploader)
1) A computational model is developed to simulate the propagation of computer viruses and warning messages within organizational social and computer networks.
2) The model represents the networks as graphs and incorporates mechanisms of virus propagation, node state transitions, and the dissemination of warning messages.
3) Experiments show that random graphs with similar characteristics to real-world networks can model social networks, and isolating organizations may prevent virus infection but also limit receipt of important warning messages.
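The graph-based mechanism summarized above (viruses and warning messages spreading over the same network) can be sketched roughly as follows. The adjacency structure, state names, and probabilities are illustrative assumptions, not the paper's actual model.

```python
import random

def spread_step(adj, state, p_virus, p_warn):
    """One synchronous step of virus vs. warning propagation on a graph.
    adj: dict node -> list of neighbours.
    state: dict node -> 'ok', 'infected', or 'warned'
    (a warned node ignores the virus in this simple sketch)."""
    new_state = dict(state)
    for node, st in state.items():
        if st not in ('infected', 'warned'):
            continue
        for nb in adj[node]:
            if st == 'infected' and new_state[nb] == 'ok' and random.random() < p_virus:
                new_state[nb] = 'infected'           # virus hops to a neighbour
            elif st == 'warned' and new_state[nb] == 'ok' and random.random() < p_warn:
                new_state[nb] = 'warned'             # warning message spreads too
    return new_state
```

Iterating `spread_step` while varying `p_warn` relative to `p_virus` reproduces the qualitative trade-off the experiments describe: isolating parts of the graph blocks the virus but also blocks the warnings.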
Mathematics Model Development Deployment of Dengue Fever Diseases by Involve ... (Dr. Amarjeet Singh)
Dengue fever is a deadly disease caused by the dengue virus. The virus is transmitted through the bite of female Aedes aegypti mosquitoes, which acquire the infection by feeding on the blood of infected humans and then transmit the pathogen to susceptible humans. Suppressing the spread and growth of dengue fever is important to prevent an increase in dengue virus sufferers and casualties. This problem can be addressed by studying, via sensitivity indices, the important factors that affect the spread and distribution of the disease. The purpose of this research was to modify a mathematical model of the spread of dengue fever into an SEIRS-ASEI type, to determine the equilibrium points, to determine the basic reproduction number, to analyze the stability of the equilibrium points, to calculate and analyze sensitivity indices, and to run numerical simulations on the modified model. Analysis of the model yields a disease-free equilibrium (DFE) point and an endemic equilibrium point. The numerical simulation results show that the DFE is stable if the basic reproduction number is less than one, and the endemic equilibrium point is stable if the basic reproduction number is greater than one.
This document discusses epidemiological modeling of infectious diseases. It describes several common deterministic compartmental models, including SIR, SIS, and SEIR models. These models divide the population into compartments based on disease status, such as susceptible, infected, and recovered. The models are formulated using systems of differential equations to capture the flows between compartments over time. The basic reproduction number is used to determine when herd immunity is achieved in a population through vaccination. More complex models incorporate additional factors like latent periods, temporary immunity, and age structure.
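As a concrete illustration of the deterministic compartmental approach described above, the classic SIR system can be integrated numerically with a few lines of code. The rate values `beta` and `gamma` below are arbitrary illustrative choices, not taken from any of the listed papers.

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of the classic SIR equations:
    dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I.
    S, I, R are fractions of a closed population, so S + I + R stays 1."""
    new_inf = beta * s * i * dt     # susceptibles newly infected this step
    new_rec = gamma * i * dt        # infectives newly recovered this step
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def run_sir(beta=0.3, gamma=0.1, i0=0.01, days=200, dt=0.1):
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(int(days / dt)):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
    return s, i, r

# The basic reproduction number is R0 = beta / gamma;
# an epidemic grows when R0 > 1 (here R0 = 3).
```

Extending this loop with an exposed compartment (SEIR) or with a flow from R back to S (SIRS, temporary immunity) follows the same pattern of per-step transfers between compartments.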
1) The document estimates that prior to travel restrictions on January 23rd in China, 86% of COVID-19 infections were undocumented and asymptomatic or mildly symptomatic.
2) These undocumented infections were estimated to be about half as contagious as documented cases, but due to their greater numbers, were the source of 79% of documented infections.
3) After travel restrictions and other control measures, the estimated proportion of documented infections rose to 65%, reproductive number fell below 1, and transmission rates were reduced, suggesting control measures were effective at identifying previously undocumented cases.
This document presents an agent-based simulation model of the 2009 H1N1 influenza outbreak at Bates College. It begins with an overview of the traditional SIR modeling approach and its limitations in fitting the irregular infection patterns observed at Bates and other small colleges. It then introduces agent-based modeling as an alternative approach that can incorporate changes in individual interactions over time. The document proceeds to build increasingly complex agent-based models, starting with a simple SIR-inspired model and culminating in a "clinic day model" that attempts to replicate Bates' infection spikes following large mixing events like vaccination clinics. The results of these models are analyzed to evaluate their ability to better explain the outbreak dynamics.
A SEIR MODEL FOR CONTROL OF INFECTIOUS DISEASES (SOUMYADAS835019)
This document presents a SEIR model for controlling infectious diseases with constraints. It begins with an introduction to SEIR models and their use in modeling disease transmission and testing control strategies. It then motivates the study by discussing the COVID-19 pandemic. The document outlines the basic ideas of the SEIR model and describes the compartments and parameters. It presents the optimal control problem formulated to determine vaccination strategies over time. Potential application areas and future research scope are discussed before concluding with references.
Simulation rules are almost the same to what I coded using Python bu.pdf (snewfashion)
Simulation rules are almost the same as what I coded using Python, but with one interesting addition: a partial immunity (see details below). The main difference between the Python version and the R version of the code would be the use of vectorization. You should use as few loops as possible. In fact, there should be just one loop for the day count. If anyone tries to create an exact copy of my Python code but in R, the grade will be quite low. Below are rules that should help you build a simulation model:
1. We assume that there are N_population citizens in a population. N_population is an input parameter. During development and testing you can keep it small, but your code should be able to handle a reasonably large value for the population size.
2. Every citizen has a health state: healthy, sick, or dead (or you can code these as 0, 1, 2). When we start, all citizens are alive and healthy.
3. To start the pandemic, you randomly mark a small number of citizens as sick; there can be a parameter for the number of initially infected citizens. You can start with 1 or 2 initial sick cases.
4. One iteration is one day. During the day, every citizen can meet a random number (say, between 0 and 20 inclusive) of randomly selected citizens. You don't really need to track all citizens, as we are interested in the meetings of sick people only. We don't care how many healthy people meet each other.
5. Every sick citizen stays sick and infectious for 10 days, hence you should keep a counter for each sick citizen. After 10 days a sick citizen becomes healthy and stops spreading the virus.
6. Obviously, dead citizens cannot become sick; they don't meet anyone and, as a result, cannot infect anyone.
7. During the day, every sick citizen has a probability to die (mortality rate) from the disease of 0.5% (quite a low probability).
8. If a sick citizen does not die, then they can meet other people as per rule 4 above, and if they meet a healthy person, that person might become sick too and start infecting other people from the next day (that is, the infection day is day 0 of their sickness). The probability of a citizen becoming sick (infection rate) after a contact is 30% (this is quite high).
9. (New rule!) After surviving the infection and getting healthy, the person becomes immune. This is a partial immunity. It does not make the person invincible but reduces the chance of infection in future meetings with sick people. Take the immunity coefficient as 0.1. That is, the probability of being infected for an immune person is ten times lower than for a person without immunity. The infection rate for an immune person is the original infection rate multiplied by the immunity coefficient, e.g. (0.3*0.1).
For the test, you can set the immunity coefficient equal to 1, which means no benefit of immunity, and the result should be the same as in the Python example I presented.
10. You should run this simulation for a number of days (iterations) and store each da.
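Rules 1-9 above can be sketched in NumPy with a single day loop plus a small inner loop over the currently sick citizens only. This is a sketch under the stated rules, not the instructor's reference code; names such as `n_pop` and `immunity_coef` are illustrative.

```python
import numpy as np

HEALTHY, SICK, DEAD = 0, 1, 2

def simulate(n_pop=1000, n_days=60, n_seed=2, p_infect=0.3,
             p_die=0.005, sick_days=10, immunity_coef=0.1, rng_seed=42):
    rng = np.random.default_rng(rng_seed)
    state = np.zeros(n_pop, dtype=np.int8)      # rule 2: all start healthy
    days_sick = np.zeros(n_pop, dtype=np.int32)
    immune = np.zeros(n_pop, dtype=bool)
    state[rng.choice(n_pop, n_seed, replace=False)] = SICK   # rule 3
    history = []
    for _ in range(n_days):                     # the single day loop
        sick = np.flatnonzero(state == SICK)
        # rule 7: each sick citizen may die today (0.5% mortality)
        dies = rng.random(sick.size) < p_die
        state[sick[dies]] = DEAD
        sick = sick[~dies]
        # rules 4 + 8 + 9: survivors meet 0..20 random citizens; healthy
        # contacts get infected at p_infect (reduced tenfold if immune)
        for s in sick:
            met = rng.choice(n_pop, rng.integers(0, 21))
            healthy_met = met[state[met] == HEALTHY]
            rate = np.where(immune[healthy_met],
                            p_infect * immunity_coef, p_infect)
            newly = healthy_met[rng.random(healthy_met.size) < rate]
            state[newly] = SICK                  # day 0 of their sickness
        # rule 5: recover after 10 days; rule 9: partial immunity afterwards
        days_sick[state == SICK] += 1
        recovered = (state == SICK) & (days_sick >= sick_days)
        state[recovered] = HEALTHY
        immune[recovered] = True
        days_sick[recovered] = 0
        history.append(np.bincount(state, minlength=3).tolist())
    return history
```

Because the sick index array is computed before any new infections, citizens infected today only start infecting others on the following day, as rule 8 requires; setting `immunity_coef=1` removes the benefit of immunity for the comparison test described above.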
Modeling and qualitative analysis of malaria epidemiologyIOSR Journals
We develop and analyze a mathematical model on malaria dynamics using ordinary differential equations, in order to investigate the effect of certain parameters on the transmission and spread of the disease. We introduce dimensionless variables for time, fraction of infected human and fraction of infected mosquito and solve the resulting system of ordinary differential equations numerically. Equilibrium points are established and stability criteria analyzed. The results reveal that the parameters play a crucial role in the interaction between human and infected mosquito.
Modelisation of Ebola Hemoragic Fever propagation in a modern cityJean-Luc Caut
The study of epidemic disease has always been a topic where biological issues mix with social ones.
The aim of this presentation was to modelize in Python language the propagation of Ebola Hemoragic Fever in a modern city thus using SIR model based on Ordinary Differential Equations system and also to produce an amazing Cellular Automaton.
ICPSR - Complex Systems Models in the Social Sciences - Bonus Content - Profe...Daniel Katz
This document discusses computational models of infectious disease transmission. It begins by introducing the objectives and importance of modeling infectious diseases. It then describes the classical Susceptible-Infected-Recovered (SIR) model, including its assumptions, predictions regarding the basic reproductive number (R0), and extensions adding vital dynamics and vaccination. Variations including stochastic models, modeling emotion diffusion, and an example NetLogo spatial SIR outbreak model are also covered. The document discusses analyzing model assumptions, writing more efficient code, and exploring alternative models of recovery times and additional disease states.
This document describes an adaptive model for the spread of infection on networks. It begins by introducing network analysis and percolation models. It then presents the basic SIS (Susceptible-Infected-Susceptible) model for modeling infection spread and derives the steady-state infection rates. It introduces the concept of network adaptation and reviews models of financial contagion that lack adaptation. It then presents a new popularity-based network model and develops an adaptive SIS model that incorporates network changes in response to infection spread. Computational analysis shows the existence of a phase boundary for this adaptive model.
A CLOUD-BASED PROTOTYPE IMPLEMENTATION OF A DISEASE OUTBREAK NOTIFICATION SYS...IJCSEA Journal
This paper describes the design, prototype implementation and performance characteristics of a Disease Outbreak Notification System (DONS). The prototype was implemented in a hybrid cloud environment as an online/real-time system. It detects potential outbreaks of both listed and unknown diseases. It uses data mining techniques to choose the correct algorithm to detect outbreaks of unknown diseases. Our experiments showed that the proposed system has very high accuracy rate in choosing the correct detection algorithm. To our best knowledge, DONS is the first of its kind to detect outbreaks of unknown diseases using data mining techniques.
A Cloud-Based Prototype Implementation of a Disease Outbreak Notification SystemIJCSEA Journal
This paper describes the design, prototype implementation and performance characteristics of a Disease Outbreak Notification System (DONS). The prototype was implemented in a hybrid cloud environment as an online/real-time system. It detects potential outbreaks of both listed and unknown diseases. It uses data mining techniques to choose the correct algorithm to detect outbreaks of unknown diseases. Our experiments showed that the proposed system has very high accuracy rate in choosing the correct detection algorithm. To our best knowledge, DONS is the first of its kind to detect outbreaks of unknown diseases using data mining techniques.
Similar to A COMPUTER VIRUS PROPAGATION MODEL USING DELAY DIFFERENTIAL EQUATIONS WITH PROBABILISTIC CONTAGION AND IMMUNITY. (20)
Rendezvous Sequence Generation Algorithm for Cognitive Radio Networks in Post...IJCNCJournal
Recent natural disasters have inflicted tremendous damage on humanity, with their scale progressively increasing and leading to numerous casualties. Events such as earthquakes can trigger secondary disasters, such as tsunamis, further complicating the situation by destroying communication infrastructures. This destruction impedes the dissemination of information about secondary disasters and complicates post-disaster rescue efforts. Consequently, there is an urgent demand for technologies capable of substituting for these destroyed communication infrastructures. This paper proposes a technique for generating rendezvous sequences to swiftly reconnect communication infrastructures in post-disaster scenarios. We compare the time required for rendezvous using the proposed technique against existing methods and analyze the average time taken to establish links with the rendezvous technique, discussing its significance. This research presents a novel approach enabling rapid recovery of destroyed communication infrastructures in disaster environments through Cognitive Radio Network (CRN) technology, showcasing the potential to significantly improve disaster response and recovery efforts. The proposed method reduces the time for the rendezvous compared to existing methods, suggesting that it can enhance the efficiency of rescue operations in post-disaster scenarios and contribute to life-saving efforts.
Blockchain Enforced Attribute based Access Control with ZKP for Healthcare Se...IJCNCJournal
The relationship between doctors and patients is reinforced through the expanded communication channels provided by remote healthcare services, resulting in heightened patient satisfaction and loyalty. Nonetheless, the growth of these services is hampered by security and privacy challenges they confront. Additionally, patient electronic health records (EHR) information is dispersed across multiple hospitals in different formats, undermining data sovereignty. It allows any service to assert authority over their EHR, effectively controlling its usage. This paper proposes a blockchain enforced attribute-based access control in healthcare service. To enhance the privacy and data-sovereignty, the proposed system employs attribute-based access control, zero-knowledge proof (ZKP) and blockchain. The role of data within our system is pivotal in defining attributes. These attributes, in turn, form the fundamental basis for access control criteria. Blockchain is used to keep hospital information in public chain but EHR related data in private chain. Furthermore, EHR provides access control by using the attributed based cryptosystem before they are stored in the blockchain. Analysis shows that the proposed system provides data sovereignty with privacy provision based on the attributed based access control.
EECRPSID: Energy-Efficient Cluster-Based Routing Protocol with a Secure Intru...IJCNCJournal
A revolutionary idea that has gained significance in technology for Internet of Things (IoT) networks backed by WSNs is the " Energy-Efficient Cluster-Based Routing Protocol with a Secure Intrusion Detection" (EECRPSID). A WSN-powered IoT infrastructure's hardware foundation is hardware with autonomous sensing capabilities. The significant features of the proposed technology are intelligent environment sensing, independent data collection, and information transfer to connected devices. However, hardware flaws and issues with energy consumption may be to blame for device failures in WSN-assisted IoT networks. This can potentially obstruct the transfer of data. A reliable route significantly reduces data retransmissions, which reduces traffic and conserves energy. The sensor hardware is often widely dispersed by IoT networks that enable WSNs. Data duplication could occur if numerous sensor devices are used to monitor a location. Finding a solution to this issue by using clustering. Clustering lessens network traffic while retaining path dependability compared to the multipath technique. To relieve duplicate data in EECRPSID, we applied the clustering technique. The multipath strategy might make the provided protocol more dependable. Using the EECRPSID algorithm, will reduce the overall energy consumption, minimize the End-to-end delay to 0.14s, achieve a 99.8% Packet Delivery Ratio, and the network's lifespan will be increased. The NS2 simulator is used to run the whole set of simulations. The EECRPSID method has been implemented in NS2, and simulated results indicate that comparing the other three technologies improves the performance measures.
Analysis and Evolution of SHA-1 Algorithm - Analytical TechniqueIJCNCJournal
A 160-bit (20-byte) hash value, sometimes called a message digest, is generated using the SHA-1 (Secure Hash Algorithm 1) hash function in cryptography. This value is commonly represented as 40 hexadecimal digits. It is a Federal Information Processing Standard in the United States and was developed by the National Security Agency. Although it has been cryptographically cracked, the technique is still in widespread usage. In this work, we conduct a detailed and practical analysis of the SHA-1 algorithm's theoretical elements and show how they have been implemented through the use of several different hash configurations.
Optimizing CNN-BiGRU Performance: Mish Activation and Comparative AnalysisIJCNCJournal
Deep learning is currently extensively employed across a range of research domains. The continuous advancements in deep learning techniques contribute to solving intricate challenges. Activation functions (AF) are fundamental components within neural networks, enabling them to capture complex patterns and relationships in the data. By introducing non-linearities, AF empowers neural networks to model and adapt to the diverse and nuanced nature of real-world data, enhancing their ability to make accurate predictions across various tasks. In the context of intrusion detection, the Mish, a recent AF, was implemented in the CNN-BiGRU model, using three datasets: ASNM-TUN, ASNM-CDX, and HOGZILLA. The comparison with Rectified Linear Unit (ReLU), a widely used AF, revealed that Mish outperforms ReLU, showcasing superior performance across the evaluated datasets. This study illuminates the effectiveness of AF in elevating the performance of intrusion detection systems.
An Hybrid Framework OTFS-OFDM Based on Mobile Speed EstimationIJCNCJournal
The Future wireless communication systems face the challenging task of simultaneously providing high-quality service (QoS) and broadband data transmission, while also minimizing power consumption, latency, and system complexity. Although Orthogonal Frequency Division Multiplexing (OFDM) has been widely adopted in 4G and 5G systems, it struggles to cope with a significant delay and Doppler spread in high mobility scenarios. To address these challenges, a novel waveform named Orthogonal Time Frequency Space (OTFS). Designers aim to outperform OFDM by closely aligning signals with the channel behaviour. In this paper, we propose a switching strategy that empowers operators to select the most appropriate waveform based on an estimated speed of the mobile user. This strategy enables the base station to dynamically choose the waveform that best suits the mobile user’s speed. Additionally, we suggest retaining an Integrated Sensing and Communication (ISAC) radar approach for accurate Doppler estimation. This provides precise information to facilitate the waveform selection procedure. By leveraging the switching strategy and harnessing the Doppler estimation capabilities of an ISAC radar.Our proposed approach aims to enhance the performance of wireless communication systems in high mobility cases. Considering the complexity of waveform processing, we introduce an optimized hybrid system that combines OTFS and OFDM, resulting in reduced complexity while still retaining performance benefits.This hybrid system presents a promising solution for improving the performance of wireless communication systems in higher mobility.The simulation results validate the effectiveness of our approach, demonstrating its potential advantages for future wireless communication systems. The effectiveness of the proposed approach is validated by simulation results as it will be illustrated.
Enhanced Traffic Congestion Management with Fog Computing - A Simulation-Base...IJCNCJournal
Accurate latency computation is essential for the Internet of Things (IoT) since the connected devices generate a vast amount of data that is processed on cloud infrastructure. However, the cloud is not an optimal solution. To overcome this issue, fog computing is used to enable processing at the edge while still allowing communication with the cloud. Many applications rely on fog computing, including traffic management. In this paper, an Intelligent Traffic Congestion Mitigation System (ITCMS) is proposed to address traffic congestion in heavily populated smart cities. The proposed system is implemented using fog computing and tested in a crowdedCairo city. The results obtained indicate that the execution time of the simulation is 4,538 seconds, and the delay in the application loop is 49.67 seconds. The paper addresses various issues, including CPU usage, heap memory usage, throughput, and the total average delay, which are essential for evaluating the performance of the ITCMS. Our system model is also compared with other models to assess its performance. A comparison is made using two parameters, namely throughput and the total average delay, between the ITCMS, IOV (Internet of Vehicle), and STL (Seasonal-Trend Decomposition Procedure based on LOESS). Consequently, the results confirm that the proposed system outperforms the others in terms of higher accuracy, lower latency, and improved traffic efficiency.
Rendezvous Sequence Generation Algorithm for Cognitive Radio Networks in Post...IJCNCJournal
Recent natural disasters have inflicted tremendous damage on humanity, with their scale progressively increasing and leading to numerous casualties. Events such as earthquakes can trigger secondary disasters, such as tsunamis, further complicating the situation by destroying communication infrastructures. This destruction impedes the dissemination of information about secondary disasters and complicates post-disaster rescue efforts. Consequently, there is an urgent demand for technologies capable of substituting for these destroyed communication infrastructures. This paper proposes a technique for generating rendezvous sequences to swiftly reconnect communication infrastructures in post-disaster scenarios. We compare the time required for rendezvous using the proposed technique against existing methods and analyze the average time taken to establish links with the rendezvous technique, discussing its significance. This research presents a novel approach enabling rapid recovery of destroyed communication infrastructures in disaster environments through Cognitive Radio Network (CRN) technology, showcasing the potential to significantly improve disaster response and recovery efforts. The proposed method reduces the time for the rendezvous compared to existing methods, suggesting that it can enhance the efficiency of rescue operations in post-disaster scenarios and contribute to life-saving efforts.
Vehicle Ad Hoc Networks (VANETs) have become a viable technology to improve traffic flow and safety on the roads. Due to its effectiveness and scalability, the Wingsuit Search-based Optimised Link State Routing Protocol (WS-OLSR) is frequently used for data distribution in VANETs. However, the selection of MultiPoint Relays (MPRs) plays a pivotal role in WS-OLSR's performance. This paper presents an improved MPR selection algorithm tailored to WS-OLSR, designed to enhance the overall routing efficiency and reduce overhead. The analysis found that the current OLSR protocol has problems such as redundancy of HELLO and TC message packets or failure to update routing information in time, so a WS-OLSR routing protocol based on improved-MPR selection algorithm was proposed. Firstly, factors such as node mobility and link changes are comprehensively considered to reflect network topology changes, and the broadcast cycle of node HELLO messages is controlled through topology changes. Secondly, a new MPR selection algorithm is proposed, considering link stability issues and nodes. Finally, evaluate its effectiveness in terms of packet delivery ratio, end-to-end delay, and control message overhead. Simulation results demonstrate the superior performance of our improved MR selection algorithm when compared to traditional approaches.
May 2024, Volume 16, Number 3 - The International Journal of Computer Network...IJCNCJournal
The International Journal of Computer Networks & Communications (IJCNC) is a bi monthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of Computer Networks & Communications. The journal focuses on all technical and practical aspects of Computer Networks & data Communications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establishing new collaborations in these areas.
Vehicle Ad Hoc Networks (VANETs) have become a viable technology to improve traffic flow and safety on the roads. Due to its effectiveness and scalability, the Wingsuit Search-based Optimised Link State Routing Protocol (WS-OLSR) is frequently used for data distribution in VANETs. However, the selection of MultiPoint Relays (MPRs) plays a pivotal role in WS-OLSR's performance. This paper presents an improved MPR selection algorithm tailored to WS-OLSR, designed to enhance the overall routing efficiency and reduce overhead. The analysis found that the current OLSR protocol has problems such as redundancy of HELLO and TC message packets or failure to update routing information in time, so a WS-OLSR routing protocol based on improved-MPR selection algorithm was proposed. Firstly, factors such as node mobility and link changes are comprehensively considered to reflect network topology changes, and the broadcast cycle of node HELLO messages is controlled through topology changes. Secondly, a new MPR selection algorithm is proposed, considering link stability issues and nodes. Finally, evaluate its effectiveness in terms of packet delivery ratio, end-to-end delay, and control message overhead. Simulation results demonstrate the superior performance of our improved MR selection algorithm when compared to traditional approaches.
A Novel Medium Access Control Strategy for Heterogeneous Traffic in Wireless ...IJCNCJournal
So far, Wireless Body Area Networks (WBANs) have played a pivotal role in driving the development of intelligent healthcare systems with broad applicability across various domains. Each WBAN consists of one or more types of sensors that can be embedded in clothing, attached directly to the body, or even implanted beneath an individual's skin. These sensors typically serve asingle application. However, the traffic generated by each sensor may have distinct requirements. This diversity necessitates a dual approach: tailored treatment based on the specific needs of each traffic typeand the fulfillment of application requirements, such asreliability and timeliness. Never the less, the presence of energy constraints and the unreliable nature of wireless communications make QoS provisioning under such networks a non-trivial task. In this context, the current paper introduces a novel Medium AccessControl (MAC) strategy for the regular traffic applications of WBANs, designed to significantly enhance efficiency when compared to the established MAC protocols IEEE 802.15.4 and IEEE 802.15.6, with a particular focus on improving reliability, timeliness, and energy efficiency.
May_2024 Top 10 Read Articles in Computer Networks & Communications.pdfIJCNCJournal
The International Journal of Computer Networks & Communications (IJCNC) is a bi monthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of Computer Networks & Communications. The journal focuses on all technical and practical aspects of Computer Networks & data Communications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establishing new collaborations in these areas.
A Topology Control Algorithm Taking into Account Energy and Quality of Transm...IJCNCJournal
The efficient use of energy in wireless sensor networks is critical for extending node lifetime. The network topology is one of the factors that have a significant impact on the energy usage at the nodes and the quality of transmission (QoT) in the network. We propose a topology control algorithm for software-defined wireless sensor networks (SDWSNs) in this paper. Our method is to formulate topology control algorithm as a nonlinear programming (NP) problem with the objective to optimizing two metrics, maximum communication range, and desired degree. This NP problem is solved at the SDWSN controller by employing the genetic algorithm (GA) to determine the best topology. The simulation results show that the proposed algorithm outperforms the MaxPower algorithm in terms of average node degree and energy expansion ratio.
Multi-Server user Authentication Scheme for Privacy Preservation with Fuzzy C...IJCNCJournal
The integration of artificial intelligence technology with a scalable Internet of Things (IoT) platform facilitates diverse smart communication services, allowing remote users to access services from anywhere at any time. The multi-server environment within IoT introduces a flexible security service model, enabling users to interact with any server through a single registration. To ensure secure and privacy preservation services for resources, an authentication scheme is essential. Zhao et al. recently introduced a user authentication scheme for the multi-server environment, utilizing passwords and smart cards, claiming resilience against well-known attacks. This paper conducts cryptanalysis on Zhao et al.'s scheme, focusing on denial of service and privacy attacks, revealing a lack of user-friendliness. Subsequently, we propose a new multi-server user authentication scheme for privacy preservation with fuzzy commitment over the IoT environment, addressing the shortcomings of Zhao et al.'s scheme. Formal security verification of the proposed scheme is conducted using the ProVerif simulation tool. Through both formal and informal security analyses, we demonstrate that the proposed scheme is resilient against various known attacks and those identified in Zhao et al.'s scheme.
Advanced Privacy Scheme to Improve Road Safety in Smart Transportation SystemsIJCNCJournal
In -Vehicle Ad-Hoc Network (VANET), vehicles continuously transmit and receive spatiotemporal data with neighboring vehicles, thereby establishing a comprehensive 360-degree traffic awareness system. Vehicular Network safety applications facilitate the transmission of messages between vehicles that are near each other, at regular intervals, enhancing drivers' contextual understanding of the driving environment and significantly improving traffic safety. Privacy schemes in VANETs are vital to safeguard vehicles’ identities and their associated owners or drivers. Privacy schemes prevent unauthorized parties from linking the vehicle's communications to a specific real-world identity by employing techniques such as pseudonyms, randomization, or cryptographic protocols. Nevertheless, these communications frequently contain important vehicle information that malevolent groups could use to Monitor the vehicle over a long period. The acquisition of this shared data has the potential to facilitate the reconstruction of vehicle trajectories, thereby posing a potential risk to the privacy of the driver. Addressing the critical challenge of developing effective and scalable privacy-preserving protocols for communication in vehicle networks is of the highest priority. These protocols aim to reduce the transmission of confidential data while ensuring the required level of communication. This paper aims to propose an Advanced Privacy Vehicle Scheme (APV) that periodically changes pseudonyms to protect vehicle identities and improve privacy. The APV scheme utilizes a concept called the silent period, which involves changing the pseudonym of a vehicle periodically based on the tracking of neighboring vehicles. The pseudonym is a temporary identifier that vehicles use to communicate with each other in a VANET. By changing the pseudonym regularly, the APV scheme makes it difficult for unauthorized entities to link a vehicle's communications to its real-world identity. 
The proposed APV is compared to the SLOW, RSP, CAPS, and CPN techniques. The data indicates that the efficiency of APV is a better improvement in privacy metrics. It is evident that the AVP offers enhanced safety for vehicles during transportation in the smart city.
April 2024 - Top 10 Read Articles in Computer Networks & CommunicationsIJCNCJournal
The International Journal of Computer Networks & Communications (IJCNC) is a bi monthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of Computer Networks & Communications. The journal focuses on all technical and practical aspects of Computer Networks & data Communications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establishing new collaborations in these areas.
DEF: Deep Ensemble Neural Network Classifier for Android Malware DetectionIJCNCJournal
Malware is one of the threats to security of computer networks and information systems. Since malware instances are available sufficiently, there is increased interest among researchers on usage of Artificial Intelligence (AI). Of late AI-enabled methods such as machine learning (ML) and deep learning paved way for solving many real-world problems. As it is a learning-based approach, accumulated training samples help in improving thequality of training and thus leveraging malware detection accuracy. Existing deep learning methods are focusing on learning-based malware detection systems. However, there is need for improving the state of the art through ensemble approach. Towards this end, in this paper we proposed a framework known as Deep Ensemble Framework (DEF) for automatic malware detection. The framework obtains features from training samples. From given malware instance a grayscale image is generated. There is another process to extract the opcode sequences. Convolutional Neural Network (CNN) and Long Short Term Memory (LSTM) techniques are used to obtain grayscale image and opcode sequence respectively. Afterwards, a stacking ensemble is employed in order to achieve efficient malware detection and classification. Malware samples collected fromthe Internet sources and Microsoft are used for theempirical study. An algorithm known as Ensemble Learning for Automatic Malware Detection (EL-AML) is proposed to realize our framework. Another algorithm named Pre-Process is proposed to assist the EL-AML algorithm for obtaining intermediate features required by CNN and LSTM.Empirical study reveals that our framework outperforms many existing methods in terms of speed-up and accuracy.
High Performance NMF Based Intrusion Detection System for Big Data IOT TrafficIJCNCJournal
With the emergence of smart devices and the Internet of Things (IoT), millions of users connected to the network produce massive network traffic datasets. These vast datasets of network traffic, Big Data are challenging to store, deal with and analyse using a single computer. In this paper we developed parallel implementation using a High Performance Computer (HPC) for the Non-Negative Matrix Factorization technique as an engine for an Intrusion Detection System (HPC-NMF-IDS). The large IoT traffic datasets of order of millions samples are distributed evenly on all the computing cores for both storage and speedup purpose. The distribution of computing tasks involved in the Matrix Factorization takes into account the reduction of the communication cost between the computing cores. The experiments we conducted on the proposed HPC-IDS-NMF give better results than the traditional ML-based intrusion detection systems. We could train the HPC model with datasets of one million samples in only 31 seconds instead of the 40 minutes using one processor), that is a speed up of 87 times. Moreover, we have got an excellent detection accuracy rate of 98% for KDD dataset.
A Novel Medium Access Control Strategy for Heterogeneous Traffic in Wireless ...IJCNCJournal
A COMPUTER VIRUS PROPAGATION MODEL USING DELAY DIFFERENTIAL EQUATIONS WITH PROBABILISTIC CONTAGION AND IMMUNITY
International Journal of Computer Networks & Communications (IJCNC) Vol.6, No.5, September 2014
M. S. S. Khan
Department of Computer Science, College of Information & Computer Technology
Sullivan University, Louisville, KY 40205.
ABSTRACT
The SIR model is used extensively in the field of epidemiology, in particular, for the analysis of communal diseases. One problem with SIR and other existing models is that they are tailored to random or Erdős-type networks, since they do not consider the varying probabilities of infection or immunity per node. In this paper, we present the application and the simulation results of the pSEIRS model, which takes these probabilities into account and is thus suitable for more realistic scale free networks. In the pSEIRS model, the death rate and the excess death rate are constant for infective nodes. Latent and immune periods are assumed to be constant, and the infection rate is assumed to be proportional to I(t)/N(t), where N(t) is the size of the total population and I(t) is the size of the infected population. A node recovers from an infection temporarily with a probability p and dies from the infection with probability (1−p).
KEYWORDS
SIR model; SEIRS model; Delay differential equations; Epidemic threshold; Scale free network.
1. INTRODUCTION
The growth of the internet has created several challenges and one of these challenges is cyber
security. A reliable cyber defense system is therefore needed to safeguard the valuable
information stored on a system and the information in transit. To achieve this goal, it becomes
essential to understand and study the nature of the various forms of malicious entities such as
viruses, trojans, and worms, and to do so on a wider scale. It also becomes essential to understand
how they spread throughout computer networks.
A computer virus is a malicious computer code that can be of several types, such as a trojan,
worm, and so on [1, 14]. Although each type of malicious entity has a different way of spreading
over the network, they all have common properties such as infectivity, invisibility, latency,
destructibility and unpredictability [10].
More recently, we have also started witnessing viruses that can spread on social networks. These
viruses spread by infecting the accounts of social network users, who click on any option that
may trigger the virus’s takeover of the user’s sharing capabilities, resulting in the spreading of
these malicious programs without the knowledge of the user. While their inner coding might be
different and while their triggering and spreading mechanisms (on physical vs. virtual social
networks) may be different, both traditional viruses such as worms and social network viruses
share, in common, the critical property of proliferation via spreading through a network (physical
or social).
DOI: 10.5121/ijcnc.2014.6508
These malicious programs behave similarly to an infection in a human population. This, in turn,
allows us to draw comparisons between the study of epidemiology, in particular the mathematical
aspect of infectious diseases[2] and the behavior of a computer virus in a computer network. This
is generally studied via mathematical models of the spread of a virus or disease. Any
mathematical model’s ability to mimic the behavior of the infection largely depends on the
assumptions made during the modeling process.
Most of the existing epidemiology models are modified versions of the classical Kermack and
McKendrick’s [15] model, more commonly known as the SIR (Susceptible/Infected/Recovered)
model. For example, Hethcote [9] proposed a version of the SIR model, in which it was assumed
that the total population was constant. But in real world scenarios, the population will change in
time. Thus, this model was later improved by Diekmann and Heersterbeek[8] by assuming that:
I. The population size changes according to an exponential demographic trend.
II. The infected individuals cannot reproduce.
III. The individuals acquire permanent immunity to further infection when removed from the infected class.
The eigenvalue approach has recently emerged as one of the popular techniques to analyze virus
or malicious entity propagation in a computer network. Wang et al. [17] have associated the
epidemic threshold parameter for a network with the largest eigenvalue of its adjacency matrix.
This technique works only with the assumption that the eigenvalues exist. There are also certain
restrictions on the size of the adjacency matrix, since calculating eigenvalues may not be easy or
even possible for large matrices.
In this paper, we apply the malicious object transmission model [12] in complete and scale free networks, which assumes a variable population with constant latent and immune periods. The
current model extends the classical SIR model proposed in [11] to a probabilistic SEIRS [12] type
model in several directions:
I. It includes an Exposed class in addition to the Susceptible, Infected and Recovered
classes.
II. It furthermore includes a constant exposition period ω and a constant latency period τ.
III. When a node is removed from the infected class, it recovers with a temporary immunity with a probability p and dies due to the attack of the malicious object with probability (1−p).
The rest of this paper is organized as follows. In section 2, we introduce the classical SIR model
and implement it on a small network. After presenting simulation and statistical results, we
conclude that the SIR model does not capture the realistic nature of the virus propagation in a
computer network. Thus, we consider the more realistic scale free network in section 3 and give
some historical background about such networks. A system of delay differential equations is then
presented that includes a probabilistic temporary immunity with probability p. A complete
mathematical justification follows the introduction of pSEIRS model. Section 4 discusses the
mathematical conditions for an infection free equilibrium. We also discuss the stability analysis of
the proposed model using the phase plane plots. We then present simulation experiments to
validate the scalability aspect of the proposed model and compare it with the classical SEIRS
model. Section 5 presents our conclusion, then discusses some limitations of compartment type
mathematical models, and finally closes with some possible directions for future work in this
area.
2. The Classical SIR Model
The SIR (Susceptible/Infected/Recovered) model [15] is used extensively in the field of
epidemiology, in particular, for the analysis of communal diseases, which spread from an infected
individual to a population. So it is natural to divide the population into the separate groups of
susceptible (S), infected (I) and those who have recovered or become immune to a type of
infection or a disease (R). These subdivisions of the population, which are also different stages in
the epidemic cycle, are sometimes called compartments. A total population of size N is thus
divided into three stages or compartments:
N = S + I + R.
Since the population is going to change with time (t), we have to express these stages as functions of time, i.e., S(t), I(t) and R(t). Here β (infection rate) and α (recovery rate) are the transition rates between S → I and I → R, respectively, and are both in [0, 1].
The underlying assumption in the traditional SIR model is that the population is homogenous; that
is, everybody makes contact with each other randomly. So from a graph theoretical point of view,
this population represents a complete graph, which would be the case of a computer network that
is totally connected. In this situation, the classical SIR model may be sufficient to
understand the malicious object’s propagation. The dynamics of the epidemic evolution can be
captured with the following homogenous differential equations:
dS/dt = −β I S,
dI/dt = β I S − α I,
dR/dt = α I.    (1)
Based on (1), we define a basic reproduction rate as

R0 = β/α.
R0 is a useful parameter to determine if the infection will spread through the population or not. That is, if R0 > 1 and I(0) ≥ 1, the infection will be able to spread in a population, to a stage that is called Endemic Equilibrium and is given by

lim_{t→∞} (S(t), I(t), R(t)) → ( N/R0, (N/β)(R0 − 1), (αN/β)(R0 − 1) ).    (2)

If R0 < 1, the infection will die out in the long run. This is called disease free equilibrium and is represented as

lim_{t→∞} (S(t), I(t), R(t)) → (N, 0, 0).    (3)
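The threshold behavior in (1)-(3) is easy to demonstrate numerically. The sketch below integrates the normalized SIR system (S, I, R taken as population fractions, so that R0 = β/α governs the outcome) with a forward-Euler scheme; the step size, time horizon and parameter values are illustrative assumptions of ours, not taken from the simulations reported in this paper.

```python
def simulate_sir(beta, alpha, i0=0.01, dt=0.01, days=400):
    """Forward-Euler integration of the normalized SIR system (1):
    dS/dt = -beta*I*S, dI/dt = beta*I*S - alpha*I, dR/dt = alpha*I,
    with S, I, R expressed as fractions of the population (S + I + R = 1)."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak_i = i
    for _ in range(int(days / dt)):
        ds = -beta * i * s
        di = beta * i * s - alpha * i
        dr = alpha * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        peak_i = max(peak_i, i)
    return s, i, r, peak_i

# R0 = beta/alpha < 1: the infection decays toward the disease free equilibrium (3).
s1, i1, r1, peak1 = simulate_sir(beta=0.06, alpha=0.1)
# R0 = beta/alpha > 1: the infection first spreads, so I rises above its initial value.
s2, i2, r2, peak2 = simulate_sir(beta=0.3, alpha=0.1)
```

Note that the Euler step conserves S + I + R exactly, since the three derivatives sum to zero.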
2.1: Simulation Results of the SIR Model with a Small Complete Network

The simulation results of the SIR model in a K12 network (complete graph with 12 nodes) are presented in Fig. 1(a) and 1(b) with different values of β and α.

Fig. 1(a): SIR model based on β = 0.06, α = 0.1.

In Fig. 1(a), R0 = 0.6 < 1. So, the infection will not spread through the entire network.

Fig. 1(b): SIR model based on β = 0.1, α = 0.2.

In Fig. 1(b), R0 = 2 > 1. In this case, infection will spread to most of the nodes and the network will be in an endemic stage.
We glean useful information about the virus propagation from Fig. 1(a) and 1(b) in a K12 type network and make an analysis based on these simulations. Two obvious observations are the long-term behavior of the infection and the speed at which the population gets infected. Also, we can determine the rate at which the population is getting immune to the infection, i.e., the removal rate from the susceptible group. Information such as the maximum, minimum, and mean rate of the infected population can also be determined in each class. A statistical analysis of the cases in Fig. 1(a) and 1(b) is presented in Table 1 and Table 2.
Table 1: Statistical Analysis of Fig. 1(a)
       Susceptible  Infected  Recovered
Min    0.008        0.1297    0
Max    11           7.189     11
Mean   3.619        2.522     5.859

Table 2: Statistical Analysis of Fig. 1(b)
       Susceptible  Infected  Recovered
Min    0            0.090     0
Max    11           9.954     11
Mean   1.965        4.832     5.203
This analysis is based on some crucial assumptions, such as:
• Once a node is removed or recovered from the infection, it develops immunity and it
cannot be infected again.
• The population is homogenous and well mixed.
• There is no latency, i.e., there is no time delay between exposure to the infection and
actually getting infected.
The above assumptions make it clear that in order to capture the essence of virus propagation in a
computer network, in a more realistic sense, the SIR model is not sufficient to study virus
propagation. In order to address this limitation, we must assume a network with a scale free
distribution of the node connectivity degrees. We also must take into account the real phenomena of possible re-infection and the time delay incurred during incubation (between exposure and infection). For these reasons, in the following sections, we will first briefly review scale free
networks and then present the pSEIRS[12] model that addresses the limitations of the classical
model.
3. Application of Probabilistic SEIRS Model
3. 1: Scale Free Networks
Scale free networks were first observed by Derek de Solla Price in 1965, when he noticed a
heavy-tailed distribution following a Pareto distribution or Power law in his analysis of the
network of citations between scientific papers. Later, in 1999, the work of Barabasi and Albert [3]
reignited interest in scale free networks.
Fig. 2: Example of a scale free network.
For example, Lloyd and May [11] generated the scale free network, depicted in Fig. 2, using the
network generating algorithm of Barabasi and Albert [3]. There are 110 nodes that are colored
according to their connectivity degree in red, green, blue, and yellow, with the most highly
connected nodes colored in blue. It is worth noting that there are only a few highly connected
nodes, while the majority of the nodes have only a few connections. Thus, in a classical scale free
network, the connectivity of a node follows a power law distribution [11].
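The preferential attachment mechanism that produces such degree distributions can be sketched compactly. The generator below is a simplified version of the Barabási and Albert procedure [3]; the function name, the fully connected seed core and the stub-list sampling trick are implementation choices of this sketch, not details from the paper.

```python
import random

def barabasi_albert(n, m, seed=42):
    """Grow a scale free graph: each new node attaches m edges to existing
    nodes chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    # Start from a small fully connected core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # 'stubs' lists every edge endpoint, so uniform sampling from it
    # is exactly degree-proportional sampling.
    stubs = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.append((new, t))
            stubs.extend((new, t))
    return edges

edges = barabasi_albert(n=110, m=2)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
# A few hubs dominate, while most nodes stay close to the minimum degree m.
```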
3.2: Probabilistic SEIRS Model: pSEIRS
In the time delay model, the parameter γ represents the probability of spreading the infection in one contact. It is obvious that the rate of propagation is proportional to the connectivity of the node. The propagation rate does not change in time throughout the network.
In order to capture more realistic dynamics within real world scale free networks, several more variables are used compared to the classical SIR model, and, to capture more realistic behavior, a delay is considered on the network. An additional stage (Exposed) in the model represents the phenomenon of incubation, leading to a delay between susceptibility to infection and actual infection. The Exposed stage makes the model a SEIR model instead of a SIR model.
Furthermore, the pSEIRS model is different from the classical SEIRS model (the latter is obtained
for an immunity probability p = 1). The resulting model is described in terms of the following
variables and constants:
N(t): Total population size.
S(t): Susceptible population.
E(t): Exposed population.
I(t): Infected population.
R(t): Recovered population.
b: Birth rate.
μ: Death rate due to causes other than an infection by a virus.
ε: Death rate due to an infection by a virus; it is constant.
α: Recovery rate, which is constant.
γ: Average number of contacts of a node, also equal to the probability of spreading the virus in one contact.
ω: Latency period or time delay, which is a constant, i.e., the time between the exposed and infected stages.
τ: Period of temporary immunity, which is a positive constant.
p: Probability of temporary immunity of a node after recovery.
Once an infection is introduced into a network, its nodes will become susceptible to the infection
and, in due course, will get infected. Once a node is exposed, an incubation period is observed,
which is captured by the new time delay parameter, which therefore models reality better: any
infection goes through an incubation period before it propagates. After infection, anti-virus
software may be executed to treat an infected node, thus providing it with temporary immunity. It
is important to realize that there is no permanent immunity in a real network, thus an immune
node may revert to the susceptible stage again. All these stages of infection from susceptible to
recovered, and the phases in between, are shown in Fig. 3.
Fig. 3: Flow of infection in a SEIRS model.
The pSEIRS model in Fig. 3 works under the following assumptions:
• Any new node entering the network is susceptible.
• The death rate of a node, when the death is due to other reasons that are separate
from virus infection, is constant throughout the network.
• The death rate of a node due to infection is also constant.
• The latency period and immune period are constant.
• The incubation period for the exposed, infected and recovered nodes is
exponentially distributed.
• Once infected, a node can either (i) recover and become immune with probability
of temporary immunity p, or (ii) die from the infection with probability (1-p).
Once a network is attacked (some initial nodes become infective), any node that is in contact with
the infective nodes becomes exposed, i.e., infected but not infectious. Such nodes remain in the
incubation period before becoming infective and have a constant period of temporary immunity,
once an effective anti-virus treatment is run. The total population size is expressed as

N(t) = S(t) + I(t) + E(t) + R(t).    (4)
Also, it is important to understand that the infection remains in the network for at least max(ω, τ), so that we have an initial perturbation period. The model in Fig. 3 has the following form for t ≥ max(ω, τ):

dS(t)/dt = b N(t) − μ S(t) − γ S(t)I(t)/N(t) + α I(t−τ) e^(−μτ),    (5)

E(t) = γ ∫_{t−ω}^{t} [S(x)I(x)/N(x)] e^(−μ(t−x)) dx,    (6)

dI(t)/dt = γ [S(t−ω)I(t−ω)/N(t−ω)] e^(−μω) − (μ + ε + α) I(t),    (7)

R(t) = pα ∫_{t−τ}^{t} I(x) e^(−μ(t−x)) dx.    (8)
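Since the system depends on the delayed states I(t−τ) and S(t−ω)I(t−ω)/N(t−ω), a numerical solution must carry a history of past values. The sketch below integrates the differentiated form of the system (the S and I equations together with the derivatives of the E and R integrals) with a fixed-step Euler scheme and ring-buffer histories. All parameter values, the constant initial history and the step size are illustrative assumptions of ours, not the paper's Matlab setup; the demo run uses p = 1, the classical SEIRS case, where the return-to-susceptible flow matches the recovery flow.

```python
import math
from collections import deque

def simulate_pseirs(b=0.01, mu=0.01, eps=0.02, alpha=0.1, gamma=0.5,
                    omega=2.0, tau=10.0, p=1.0, dt=0.01, days=100.0,
                    n0=1000.0, i0=10.0):
    """Fixed-step Euler integration of the delayed SEIRS-type system:
    S'(t) = b*N - mu*S - f(t) + alpha*I(t-tau)*exp(-mu*tau)
    E'(t) = f(t) - f(t-omega)*exp(-mu*omega) - mu*E
    I'(t) = f(t-omega)*exp(-mu*omega) - (mu+eps+alpha)*I
    R'(t) = p*alpha*I - alpha*I(t-tau)*exp(-mu*tau) - mu*R
    where f(t) = gamma*S(t)*I(t)/N(t) is the force of infection."""
    lag_w = int(round(omega / dt))   # delay omega in steps
    lag_t = int(round(tau / dt))     # delay tau in steps
    s, e, i = n0 - i0, 0.0, i0
    # R(0) chosen to match a constant infective history I = i0 on [-tau, 0]:
    r = p * alpha * i0 * (1.0 - math.exp(-mu * tau)) / mu
    i_hist = deque([i0] * lag_t, maxlen=lag_t)   # oldest entry is I(t - tau)
    f_hist = deque([0.0] * lag_w, maxlen=lag_w)  # oldest entry is f(t - omega)
    for _ in range(int(days / dt)):
        n = s + e + i + r
        force = gamma * s * i / n
        i_tau, f_omega = i_hist[0], f_hist[0]
        ds = b * n - mu * s - force + alpha * i_tau * math.exp(-mu * tau)
        de = force - f_omega * math.exp(-mu * omega) - mu * e
        di = f_omega * math.exp(-mu * omega) - (mu + eps + alpha) * i
        dr = p * alpha * i - alpha * i_tau * math.exp(-mu * tau) - mu * r
        i_hist.append(i)          # becomes I(t - tau) after lag_t steps
        f_hist.append(force)      # becomes f(t - omega) after lag_w steps
        s, e, i, r = s + ds * dt, e + de * dt, i + di * dt, r + dr * dt
    return s, e, i, r

s, e, i, r = simulate_pseirs()
```

With b = μ the total population declines only through infection deaths, which gives a simple sanity check on the trajectory.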
Equations (5)-(8) are called an integro-differential equation system. On differentiating (6) and (8), we get the following:

dE(t)/dt = γ S(t)I(t)/N(t) − γ [S(t−ω)I(t−ω)/N(t−ω)] e^(−μω) − μ E(t),    (9)

dR(t)/dt = pα I(t) − α I(t−τ) e^(−μτ) − μ R(t).    (10)
The system of delay differential equations (5), (7), (9) and (10) is collectively called a differential difference equation system, and we will refer to it as M1. That is,

(M1):
dS(t)/dt = b N(t) − μ S(t) − γ S(t)I(t)/N(t) + α I(t−τ) e^(−μτ),
dE(t)/dt = γ S(t)I(t)/N(t) − γ [S(t−ω)I(t−ω)/N(t−ω)] e^(−μω) − μ E(t),
dI(t)/dt = γ [S(t−ω)I(t−ω)/N(t−ω)] e^(−μω) − (μ + ε + α) I(t),
dR(t)/dt = pα I(t) − α I(t−τ) e^(−μτ) − μ R(t).

It is important for the continuity of the solution to M1 to have

E(0) = γ ∫_{−ω}^{0} [S(x)I(x)/N(x)] e^(μx) dx,    (11)

R(0) = pα ∫_{−τ}^{0} I(x) e^(μx) dx.    (12)
It is worth noting that M1 is different from the model by Cooke and Driessche [5] because they do not discuss the probability of immunity when a node recovers from the infection. M1, on the other hand, is a probabilistic model; here, a node acquires a temporary immunity with probability p and dies with probability 1−p. M1 is also different from Yan and Liu [16] because their model assumes that a recovered node acquires permanent immunity, which may not be the case in real networks.
Theorem 1 [15]: A solution of the integro-differential equation system (5)-(8) with N(t) given by (4) satisfies (9) and (10). Conversely, let S(t), E(t), I(t) and R(t) be a solution of the delay differential equation system M1, with N(t) given by (4) and the initial condition on [−τ, 0]. If, in addition,

E(0) = γ ∫_{−ω}^{0} [S(x)I(x)/N(x)] e^(μx) dx and R(0) = pα ∫_{−τ}^{0} I(x) e^(μx) dx,

then this solution satisfies the integro-differential system (5)-(8).

4. Infection Free Equilibrium
Let D be a closed and bounded region such that

D = {(S, E, I, R) : S, E, I, R ≥ 0, S + E + I + R = 1}.    (13)

In (13), we consider the equilibrium of M1 when the infection I = 0; then E = R = 0 and S = 1. This is the only equilibrium on the boundary of D, and we also have the threshold parameter for the existence of the interior equilibrium:

R0 = γ e^(−bω) / (ε + b + α).    (14)

The threshold parameter R0 is a measure of the strength of the infection. The quantity 1/(b + ε + α) is the mean waiting time in the infection class, thus R0 is the mean number of contacts of an infection during the mean time in the infection class. Asymptotically, as time t → ∞, we get an infection free equilibrium [5].

The system of equations in M1 will always reach a disease free equilibrium if R0 < 1; thus all the solutions starting in D will approach the disease free equilibrium. On the other hand, when R0 > 1, we have an endemic equilibrium, meaning that the infection will not die off in the long run.
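Formula (14) makes the regime of a given parameter set easy to check. The helper below simply evaluates (14); the function name and the example parameter values are our own illustrative choices, not parameter sets taken from the paper.

```python
import math

def r0_threshold(gamma, b, eps, alpha, omega):
    """Evaluate the threshold parameter (14): R0 = gamma*exp(-b*omega)/(eps + b + alpha)."""
    return gamma * math.exp(-b * omega) / (eps + b + alpha)

# A longer latency period omega shrinks R0, since fewer exposed nodes
# survive to become infective; a higher contact rate gamma raises it.
low = r0_threshold(gamma=0.3, b=0.01, eps=0.02, alpha=0.4, omega=2.0)   # disease free regime
high = r0_threshold(gamma=1.2, b=0.01, eps=0.02, alpha=0.1, omega=2.0)  # endemic regime
```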
4.1. Simulation Results with the pSEIRS Model
The dynamic behavior of the system M1 is presented in Fig. 4, with the following values: b = 0.330, μ = 0.006, ε = 0.060, α = 0.040, γ = 0.308, ω = 0.15, τ = 30, p = 1.
Fig. 4: pSEIRS model.
Under the above assumption, we find that the threshold parameter is R0 = 7.77. Hence, the system
is in the endemic state. We also have p=1, thus assuming that all the recovered nodes have
permanent immunity. In Fig. 5, we present the overall impact on the population with the above-mentioned
parameters.
Fig. 5: Population propagation under the pSEIRS model with p = 1.
We notice from Table 3 that there are 33 nodes which will get infected, and that the mean rate of infection is 12.37 nodes, while the number of recovered nodes is 20.00, with a mean recovery rate of 7.5 nodes.

Table 3: Statistical Analysis of Fig. 4.
       Susceptible  Exposed  Infected  Recovered
Min    4.4          1.64     4.1       1.027
Max    70           10       33.46     19.94
Mean   16.59        6.16     12.37     7.5
Stability is an important issue with any dynamical system. In Fig. 6, we present the phase plane
portrait of our proposed model, which shows that we have an endemic equilibrium; hence our
modeled system is asymptotically stable at the equilibrium (4.511, 0.9362, 4.161).
Fig. 6: Phase Plane Portrait for temporary immunity probability p = 1 and latency period ω = 0.15.
We now look at the dynamics with the temporary immunity probability p = 0.4. We can clearly notice that the recovery R(t) has declined in Fig. 7 compared to Fig. 4, since we have chosen a lower probability for temporary immunity.

Fig. 7: pSEIRS model.

We also notice from Table 4 that 34 nodes will get infected and the mean rate of infection is 13.00 nodes, while the number of recovered nodes is 14.00, with a mean recovery rate of 7.255 nodes.

Table 4: Statistical Analysis of Fig. 7.
       Susceptible  Exposed  Infected  Recovered
Min    4.216        1.66     4.283     2.027
Max    70           10       33.94     13.94
Mean   17.71        6.405    13.19     7.255

Because of the lower temporary immunity rate, we expected the population to decline further, and this is confirmed in Fig. 8.
Fig. 8: Population propagation under the pSEIRS model with p = 0.4.
In Fig. 9, we again see an asymptotically stable system at the equilibrium (7.054, 0.9407, 4.05).
Fig. 9: Phase Plane Portrait for temporary immunity probability p = 0.4 and latency period ω = 0.15.
Now consider the case when the latency period is increased. We also consider the temporary
immunity probability to be p = 1 and all the other parameters remain the same as before. The
latency effect on the virus propagation dynamics can be clearly seen in Fig. 10.
Fig. 10: pSEIRS model with ω = 30.
Infection is less likely to become endemic in this case because R0 = 0.3703 < 1. Instead, we expect to have an asymptotically stable system at a disease free equilibrium (1.118e-006, 20.63, 11.65), as shown in Fig. 11.

Fig. 11: Phase Plane Portrait for temporary immunity probability p = 1 and a longer latency period (ω = 30).
4.2. Scalability of the pSEIRS model
So far, we have used a relatively small scale free network of 110 nodes.
We now consider a larger network of 5000 nodes, as shown in Fig. 12. The network was
generated in the Matlab environment using the Barabasi-Albert graph generation algorithm [3].
Fig. 12: Scale free network of 5000 nodes.
It is important to realize that it is very typical for most of the networks arising from social networks, World Wide Web links, biological networks, computer networks and many other phenomena to follow a power law in their connectivity. Thus, such networks are conjectured to be scale free [4]. Here, the immunity probability is p = 0.5 and the rest of the parameters remain the same as before. So, under these assumptions, R0 ≈ 0.862 < 1, meaning that the infection is less likely to become endemic. The global behavior of the proposed model is
presented in Fig. 13.
Fig. 13: pSEIRS Model with 5000 nodes.
Moreover, we have an asymptotically stable system at a disease free equilibrium of (5.778, 2.51,
4.158). This can be seen in Fig. 14.
Fig. 14: Phase Plane Portrait.
The overall population propagation is presented in Fig. 15, with p = 0.5.
Fig. 15: Population propagation under the pSEIRS model with p = 0.5 and 5000 nodes.
We now compare the pSEIRS model and the classical SEIRS model (the latter is obtained for an
immunity probability p = 1). In Fig. 16 and Fig. 17, we can clearly see the difference between the
recovered populations R(t) under the different models.
Fig. 16: pSEIRS model with p = 0.25 and p =0.75.
Fig. 17: Comparison between the classical SEIRS (where p=1) and the pSEIRS model.
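The qualitative gap between the classical SEIRS (p = 1) and the pSEIRS variants can be sketched with a toy Euler integration. The equations and all parameter values below are illustrative assumptions for a non-delayed SEIRS-type system, not the paper's exact delayed model; the one paper-specific ingredient is that a removed node recovers with probability p and dies with probability 1 - p:

```python
def pseirs(p, days=200, dt=0.1):
    """Euler-integrate an illustrative SEIRS-type system; all rates are
    assumed values, not the paper's parameters."""
    b, d = 0.01, 0.01          # birth / natural death rates (assumed)
    beta, sigma = 0.5, 0.2     # transmission / incubation rates (assumed)
    gamma, alpha = 0.1, 0.02   # removal rate / excess death rate (assumed)
    eps = 0.05                 # loss-of-immunity rate (assumed)
    S, E, I, R = 4000.0, 500.0, 500.0, 0.0
    hist = []
    for _ in range(int(days / dt)):
        N = S + E + I + R
        dS = b * N - beta * S * I / N - d * S + eps * R
        dE = beta * S * I / N - (d + sigma) * E
        dI = sigma * E - (d + alpha + gamma) * I
        # removal: a fraction p recovers; the rest (1 - p) dies and
        # simply leaves the population, as in the pSEIRS assumption
        dR = p * gamma * I - (d + eps) * R
        S += dS * dt; E += dE * dt; I += dI * dt; R += dR * dt
        hist.append((S, E, I, R))
    return hist

r_full = pseirs(1.0)[-1][3]    # classical SEIRS: every removal recovers
r_part = pseirs(0.25)[-1][3]   # pSEIRS: only a quarter of removals recover
print(r_full, r_part)
```

Even in this crude sketch, lowering p depresses the recovered population, which is the qualitative pattern Figs. 16 and 17 report for the full model.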
In Table 5, we present a comparison of the model under different choices of the immunity
probability p. The pSEIRS model gives a more realistic picture of the 5000-node scale free
network, since it is unrealistic for every node to develop permanent immunity against viruses,
trojans and similar malicious objects.
Table 5: Statistical Analysis of Fig. 16 and Fig. 17.

Compartment   Probability   Min      Max    Mean
Susceptible   p = 0.25      8.008    2000   345.9
              p = 0.5       5.379    2000   327.1
              p = 0.75      8.41     2000   346.4
              p = 1         5.797    2000   318.3
Exposed       p = 0.25      2.509    1000   296.6
              p = 0.5       2.515    1000   322
              p = 0.75      2.504    1000   320.2
              p = 1         2.506    1000   349.1
Infected      p = 0.25      0.5927   1039   144.5
              p = 0.5       1.799    1040   163
              p = 0.75      1.212    1038   159.4
              p = 1         3.047    1039   175
Recovered     p = 0.25      8.63     1027   364.7
              p = 0.5       3.497    1000   253.9
              p = 0.75      9.491    1075   324.4
              p = 1         0.154    1018   196.5
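The per-compartment statistics in Table 5 are straightforward to compute from a simulated trajectory. A minimal sketch, using hypothetical placeholder values rather than the paper's data:

```python
def compartment_stats(trajectory):
    """Min, max and mean of one compartment's values over time,
    as reported per row in Table 5."""
    return {
        "min": min(trajectory),
        "max": max(trajectory),
        "mean": sum(trajectory) / len(trajectory),
    }

infected = [3.0, 120.5, 1039.0, 400.2, 80.7]  # hypothetical I(t) samples
stats = compartment_stats(infected)
print(stats["min"], stats["max"], round(stats["mean"], 1))  # → 3.0 1039.0 328.7
```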
5. Conclusions and Discussion of Future Directions of Research
We have applied a variable-population malicious object transmission model to complete and scale-
free networks, with constant latent and immune periods. The applied model extends the classical
SIR model proposed in [11] to a probabilistic SEIR-type model in several directions: (i) it includes
an Exposed class in addition to the Susceptible, Infected and Recovered classes; (ii) it furthermore
includes a constant exposure period w and a constant latency period t; (iii) when a node is
removed from the infected class, it recovers with temporary immunity with probability p and
dies due to the attack of the malicious object with probability (1-p). This is in contrast with Yan
and Liu’s model [16] which assumed that the node recovers with permanent immunity.
The model will be in an endemic state if the threshold parameter R0 > 1. As w increases, R0
decreases, and the condition for permanent infection becomes less likely to be satisfied. Thus,
the longer the exposure period of a system, the less likely it is to become endemic in the
long run.
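The closed form of R0 is not restated here, but in delayed SEIRS-type models (e.g., [5]) R0 typically carries a factor of exp(-d w), which is what makes it decrease as the latency period w grows. The functional form and parameter values below are illustrative assumptions only:

```python
import math

def r0(w, beta=0.6, d=0.02, gamma=0.1, alpha=0.02):
    """Illustrative R0 with an exp(-d*w) discount: a node must survive
    the latency period w before it can become infective."""
    return beta * math.exp(-d * w) / (d + gamma + alpha)

# R0 falls monotonically as the latency period w grows
for w in (5, 15, 30):
    print(w, round(r0(w), 3))
```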
Another important piece of information that we can glean from the proposed model is the
maximum number of nodes that will be infected. Knowing this number, we can assume that the
most connected nodes are the most vulnerable to an attack; thus, special attention should be paid to
these nodes. This information is of great importance to network administrators. Moreover, one
can get the global dynamic behavior of the network under certain assumptions, as illustrated in
our simulations.
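The administrator use-case described above reduces to ranking nodes by connectivity. A minimal sketch, with a hypothetical adjacency list:

```python
def most_connected(adj, top=3):
    """Return the `top` node ids with the highest degree, i.e. the nodes
    that should be patched or monitored first."""
    return sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:top]

# hypothetical adjacency list: node 0 is the hub
adj = {
    0: [1, 2, 3, 4],
    1: [0, 2],
    2: [0, 1, 3],
    3: [0, 2],
    4: [0],
}
print(most_connected(adj, top=2))  # → [0, 2]
```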
The SEIRS and other related compartmental models are good tools for studying the global behavior
of an epidemic in a network or population. They are relatively simple to implement, yet they can
still give valuable information about the network dynamics. One of the major issues with these
types of models is choosing the right parameter values, which strongly constrains their behavior.
To make these models more realistic, it is important to make them as robust as possible.
One possible direction for future work is to consider dynamic rates of change, i.e., a dynamic
death rate, transmission rate, and latency period. Demirci et al. [7] considered only one such
parameter. Özalp and Demirci [13] also considered a non-constant population within their SEIRS
model, which is a good starting point for improving the existing SEIRS model [13]. Choosing a
nonlinear incidence rate in an SEIRS model is another direction; though Cui and Li [6] have done
some work on this, their approach can be improved upon.
REFERENCES
[1] J.L. Aron et al., The Benefits of a Notification Process in Addressing the Worsening Computer Virus
Problem: Results of a Survey and a Simulation Model, Computers & Security 21 (2002), pp. 142-163.
Available at http://www.sciencedirect.com/science/article/pii/S0167404802002109.
[2] N.T. Bailey, The Mathematical Theory of Infectious Diseases, 2nd ed., Oxford University Press, 1987.
[3] A.-L. Barabási, and R. Albert, Emergence of Scaling in Random Networks, Science 286 (1999), pp.
509-512. Available at http://www.sciencemag.org/content/286/5439/509.abstract.
[4] A. Clauset, C.R. Shalizi, and M.E.J. Newman, Power-Law Distributions in Empirical Data, SIAM
Review 51 (2009), pp. 661-703. Available at http://link.aip.org/link/?SIR/51/661/1
[5] K.L. Cooke, and P. van den Driessche, Analysis of an SEIRS epidemic model with two delays,
Journal of Mathematical Biology 35 (1996), pp. 240-260. Available at
http://dx.doi.org/10.1007/s002850050051.
[6] N. Cui, and J. Li, An SEIRS Model with a Nonlinear Incidence Rate, Procedia Engineering 29 (2012),
pp. 3929-3933. Available at http://www.sciencedirect.com/science/article/pii/S1877705812006066.
[7] E. Demirci, A. Unal, and N. Özalp, A fractional order SEIR model with density dependent death rate,
Hacet. J. Math. Stat. 40 (2011), pp. 287-295.
[8] O. Diekmann, and J.A.P. Heesterbeek, Mathematical epidemiology of infectious diseases: Model
building, analysis and interpretation, Wiley Series in Mathematical and Computational Biology,
John Wiley & Sons Ltd., Chichester, 2000.
[9] H.W. Hethcote, Qualitative analyses of communicable disease models, Mathematical Biosciences 28
(1976), pp. 335-356. Available at
http://www.sciencedirect.com/science/article/pii/0025556476901322.
[10] Y. Kafai, Understanding Virtual Epidemics: Children’s Folk Conceptions of a Computer Virus,
Journal of Science Education and Technology 17 (2008), pp. 523-529. Available at
http://dx.doi.org/10.1007/s10956-008-9102-x.
[11] A.L. Lloyd, and R.M. May, How Viruses Spread Among Computers and People, Science 292 (2001),
pp. 1316-1317. Available at http://www.sciencemag.org/content/292/5520/1316.short.
[12] B.K. Mishra, and D.K. Saini, SEIRS epidemic model with delay for transmission of malicious objects
in computer network, Applied Mathematics and Computation 188 (2007), pp. 1476-1482. Available at
http://www.sciencedirect.com/science/article/pii/S0096300306015001.
[13] N. Özalp, and E. Demirci, A fractional order SEIR model with vertical transmission, Mathematical
and Computer Modelling 54 (2011), pp. 1-6. Available at
http://www.sciencedirect.com/science/article/pii/S0895717710006497.
[14] R. Perdisci, A. Lanzi, and W. Lee, Classification of packed executables for accurate computer virus
detection, Pattern Recognition Letters 29 (2008), pp. 1941-1946. Available at
http://www.sciencedirect.com/science/article/pii/S0167865508002110.
[15] W.O. Kermack, and A.G. McKendrick, A Contribution to the Mathematical Theory of Epidemics,
Proc. R. Soc. 115 (1927), pp. 700-721.
[16] P. Yan, and S. Liu, SEIR epidemic model with delay, The ANZIAM Journal 48 (2006), pp. 119-134.
Available at http://dx.doi.org/10.1017/S144618110000345X.
[17] W. Yang et al., Epidemic spreading in real networks: an eigenvalue viewpoint, in Reliable Distributed
Systems, 2003. Proceedings. 22nd International Symposium on, 2003, pp. 25-34.
Dr. Mohammad S. Khan received his M.Sc. and Ph.D. in Computer Science and
Computer Engineering from the University of Louisville, Kentucky, USA, in 2011 and
2013, respectively. He is currently an Assistant Professor of Computer Science at Sullivan
University. His primary area of research is ad-hoc networks and network tomography.
His research interests span several fields, including ad-hoc networks, mobile wireless mesh
networks, sensor networks, statistical modeling, ODEs, wavelets and ring theory. He has served on the
technical program committees of various international conferences and as a technical reviewer for
various international journals in his field. He is a member of the IEEE.