The topology of mobile ad hoc networks (MANETs) changes dynamically with time. Connected dominating sets (CDS) are considered an effective topology for network-wide broadcasts in MANETs, as only the nodes that are part of the CDS need to broadcast the message; the rest of the nodes merely receive it. However, with node mobility, a CDS does not exist for the entire duration of the network session and has to be regularly refreshed (CDS transition). In an earlier work, we proposed a benchmarking algorithm to determine a sequence of CDSs (Maximum Stable CDS) such that the number of transitions is the global minimum. In this research, we study the performance (CDS Lifetime and CDS Node Size) of the Maximum Stable CDS when a certain fraction of the nodes in the network are static, and compare its performance with that of the degree-based CDS. We observe the lifetime of the Maximum Stable CDS to increase only moderately (by a factor of 2.3) as we increase the percentage of static nodes in the network; on the other hand, the lifetime of the degree-based CDS increases significantly (by as much as 13 times) as we increase the percentage of static nodes from 0 to 80.
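As a concrete illustration of the degree-based construction the abstract compares against, here is a minimal greedy sketch (an assumption for illustration, not the paper's exact heuristic): start from the highest-degree node, then repeatedly add the already-dominated node that covers the most uncovered neighbors, so the set stays connected.

```python
def degree_based_cds(adj):
    """Greedy degree-based connected dominating set on a connected
    undirected graph given as {node: set_of_neighbors}.
    Illustrative sketch only."""
    start = max(adj, key=lambda n: len(adj[n]))  # highest-degree seed
    cds = {start}
    covered = {start} | adj[start]
    while covered != set(adj):
        # Every node in covered - cds is adjacent to the current CDS,
        # so adding one of them keeps the dominating set connected.
        best = max(covered - cds, key=lambda n: len(adj[n] - covered))
        cds.add(best)
        covered |= adj[best]
    return cds
```

On a 5-node path graph a-b-c-d-e this yields the interior nodes {b, c, d}: every node is either in the set or adjacent to it, and the set induces a connected subgraph.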
This document summarizes a research paper that analyzes how pause time and static nodes impact the stability of connected dominating sets (CDS) in mobile ad hoc networks (MANETs). The paper studies two CDS algorithms: a degree-based CDS heuristic and a maximum stable CDS algorithm. Simulations were conducted by varying node velocity, pause time, and percentage of static nodes. The results show that increasing pause time leads to more stable CDSs than having static nodes, especially for the degree-based CDS. Having static nodes only marginally increases the lifetime of the maximum stable CDS.
This document analyzes and compares the performance of five wireless sensor network clustering protocols: LEACH, EEHC, the protocol by Indranil et al., HEED, and DWEHC. It discusses the operation and key aspects of each protocol, defines quantitative and qualitative performance metrics for analyzing clustering protocols, and presents the results of simulations comparing the protocols based on metrics like total residual energy, number of dead nodes, aggregated data sent to the base station, and overhead. The analysis found that DWEHC performed the best overall by efficiently distributing energy usage through multi-level clustering and minimizing direct communication between nodes and the base station.
A PROPOSAL FOR IMPROVE THE LIFETIME OF WIRELESS SENSOR NETWORK (IJCNCJournal)
The document proposes a new routing protocol for wireless sensor networks that aims to improve network lifetime. The protocol is based on LEACH, an existing energy-efficient clustering protocol, but improves on it by electing cluster heads based on both remaining node energy and distance to the base station. Simulation results show the proposed protocol extends network lifetime by up to 75% compared to LEACH alone by distributing energy usage more evenly across nodes.
A study of localized algorithm for self organized wireless sensor network and... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
MuMHR: Multi-path, Multi-hop Hierarchical Routing (M H)
This document proposes a new routing protocol called MuMHR for wireless sensor networks. MuMHR aims to improve energy efficiency and robustness over LEACH in three ways: 1) By enabling nodes to select cluster heads based on the number of hops away to reduce transmission distances. 2) By using a back-off timer and multiple paths for transmissions to reduce setup overhead and add reliability. 3) By electing backup cluster heads to substitute when primary heads fail. The protocol operates in setup and data transfer phases, with the setup phase selecting cluster heads and forming clusters in an energy-efficient multi-hop manner.
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. This journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Communication synchronization in cluster based wireless sensor network a re... (eSAT Journals)
Abstract: Wireless sensor networks are gaining popularity in many different sectors. Scalability, low latency and energy efficiency are key challenges that a wireless sensor network should meet. Clustering permits sensors to communicate systematically within and among clusters; a cluster-based sensor network addresses these challenges by providing flexibility, energy savings and QoS. Communication efficiency and network performance degrade if the interaction between inter-cluster and intra-cluster communication is not managed properly. The proposed work uses two approaches to solve this problem. Aiming at low packet delay and high throughput, the first approach uses cycle-based synchronous scheduling. By completely removing the need for communication synchronization, the second approach sends packets with no synchronization delay. A combined scheme can take benefit of both approaches. Keywords: wireless sensor network, clustering, communication synchronization, QoS.
The document discusses Time Division Multiple Access (TDMA) in clustered wireless networks. It proposes a solution to optimize TDMA scheduling that can achieve high power efficiency, zero conflict, and reduced end-to-end delay. A cross-layer optimization model is developed to minimize energy consumption and derive optimal flows on each link. Then an algorithm is presented to obtain TDMA schedules with the minimum frame length based on the optimal flows, guaranteeing minimum delay. Numerical results show the proposed approach significantly reduces energy consumption and delay compared to existing solutions.
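For intuition about the scheduling problem this summary describes, a conflict-free TDMA schedule can be sketched with a simple greedy slot assignment (illustrative only, not the paper's minimum-frame-length algorithm): each link takes the earliest slot unused by any conflicting link.

```python
def tdma_schedule(links, conflicts):
    """Greedy conflict-free slot assignment. `conflicts` maps each link to
    the links it may not share a slot with (e.g. links that share a node
    or interfere). Returns {link: slot}; frame length = max slot + 1."""
    slot = {}
    for link in links:
        used = {slot[other] for other in conflicts.get(link, ()) if other in slot}
        s = 0
        while s in used:  # take the earliest free slot
            s += 1
        slot[link] = s
    return slot
```

For three links where b conflicts with both a and c, this packs a and c into the same slot, giving a frame of length 2 instead of 3.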
A survey on Energy Efficient Protocols: LEACH, Fuzzy-based approach and Neural ... (IJEEE)
Wireless Sensor Networks (WSNs) play a very important role in transmitting data from source to destination, but energy consumption is one of the major challenges in these networks. A WSN consists of hundreds to thousands of nodes which consume energy while transmitting information; over time the available energy is exhausted and the network lifetime is reduced. Clustering and cluster head (CH) selection are important mechanisms used to enhance the lifetime of a WSN. Clustering uses two methods: rotating the CH periodically in every round to distribute the energy consumption among nodes, and making the node with the most residual energy the CH. This paper focuses on the performance of techniques used to enhance energy efficiency in WSNs; Low-Energy Adaptive Clustering Hierarchy (LEACH), fuzzy-based and neural-network approaches are some of the important techniques considered, with MATLAB as the simulation tool.
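The periodic CH rotation mentioned above is commonly implemented with the standard LEACH election threshold: in round r, a node that has not yet served as CH in the current epoch elects itself with probability T(n) = p / (1 - p * (r mod 1/p)), where p is the desired fraction of cluster heads. A minimal sketch:

```python
import random

def leach_threshold(p, r):
    """Standard LEACH election threshold for round r and desired CH
    fraction p; applies only to nodes not yet CH in the current epoch."""
    return p / (1 - p * (r % int(1 / p)))

def elects_itself(p, r, rng=random.random):
    # A node becomes cluster head this round if a uniform draw
    # falls below the threshold.
    return rng() < leach_threshold(p, r)
```

By round 1/p - 1 the threshold reaches 1, so every node that has not yet served becomes a cluster head before the epoch resets, which is what spreads the energy drain evenly.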
Coverage and Connectivity Aware Neural Network Based Energy Efficient Routing... (graphhoc)
There are many challenges when designing and deploying wireless sensor networks (WSNs). One of the key challenges is how to make full use of the limited energy to prolong the lifetime of the network, because energy is a valuable resource in WSNs. The status of energy consumption should be continuously monitored after network deployment. In this paper, we propose coverage and connectivity aware neural network based energy efficient routing in WSN with the objective of maximizing the network lifetime. In the proposed scheme, the problem is formulated as linear programming (LP) with coverage and connectivity aware constraints. Cluster head selection is proposed using adaptive learning in neural networks followed by coverage and connectivity aware routing with data transmission. The proposed scheme is compared with existing schemes with respect to the parameters such as number of alive nodes, packet delivery fraction, and node residual energy. The simulation results show that the proposed scheme can be used in wide area of applications in WSNs.
A Novel Clustering Algorithm For Coverage A Large Scale In Wireless Sensor Ne... (ijcsa)
These applications require coverage of the whole monitored area for long periods of time. Clustering is a way to reduce communications, minimize energy consumption and organize messages among the cluster heads and their members. The message exchange for communication and data transmission between the different sensor nodes must be minimized to preserve and extend the lifetime of the network, because of the limited energy resources of the sensors. In this paper, we consider the problem of isolated nodes that are far from any cluster head (CH) and consequently out of its reach. To solve this problem, we propose O-LEACH (Orphan Low Energy Adaptive Clustering Hierarchy), a routing protocol that takes the orphan nodes into account: a cluster member is able to play the role of a gateway, allowing orphan nodes to join the cluster. Our contribution is to elect a cluster head that has enough energy to coordinate with these member nodes and maintain full coverage for applications that require useful data from the entire area. The simulation results show that O-LEACH performs better than LEACH in terms of connectivity rate, energy, scalability and coverage.
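The gateway mechanism described above can be sketched as follows (a hypothetical minimal model; the names and the attachment rule are assumptions for illustration, not taken from the O-LEACH paper): a node that hears no cluster head attaches through the first cluster member within its radio range, which then relays for it.

```python
def attach_orphans(nodes, cluster_heads, members, in_range):
    """Attach each orphan (a node reaching no CH) to a reachable cluster
    member acting as its gateway. Returns {orphan: gateway}."""
    links = {}
    for node in nodes:
        if node in cluster_heads or any(in_range(node, ch) for ch in cluster_heads):
            continue  # covered directly by a cluster head
        gateways = [m for m in members if in_range(node, m)]
        if gateways:
            links[node] = gateways[0]  # join via the first reachable member
    return links
```

With nodes on a line at positions 0 (CH), 1 (member) and 2 (orphan) and a radio range of 1, the orphan cannot hear the CH and so attaches through the member.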
Analysis of Cluster Based Anycast Routing Protocol for Wireless Sensor Network (IJMER)
A wireless sensor network is a collection of nodes organized into a cooperative network. Each node has processing capability, may contain multiple types of memory, has an RF transceiver and a power source, and accommodates various sensors and actuators. The nodes communicate wirelessly and often self-organize after being deployed in an ad hoc fashion. Routing protocols for wireless sensor networks are responsible for maintaining the routes in the network and have to ensure reliable multi-hop communication. The performance of the network is greatly influenced by the routing technique, whose task is to find a path to route the sensed data to the base station. In this paper the features of WSNs are introduced and routing protocols for wireless sensor networks are reviewed.
Congestion Control through Load Balancing Technique for Mobile Networks: A Cl... (IDES Editor)
An Optimal Routing Path (ORP) scheme for mobile cellular networks is proposed in this paper, with the introduction of a cluster-based approach. An improved dynamic selection procedure is used to elect the cluster head, which alone is responsible for computing the least congested path. Hence the delay is reduced, along with a significant reduction in the amount of backtracking.
This document summarizes and categorizes different weight-based clustering algorithms that have been designed for mobile ad hoc networks (MANETs). It discusses:
1) The basic concept of clustering in MANETs and the roles of standard nodes, cluster heads, and cluster gateways.
2) Two categories of clustering algorithms - simple (e.g. lowest ID, connectivity-based) and enhanced (e.g. k-cluster, hierarchical) approaches.
3) Weighted clustering, where each node calculates a weight based on attributes like speed, degree, power, and energy to elect cluster heads.
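The weight computation in 3) follows the WCA pattern; a minimal sketch (the factor set matches WCA's degree difference, distance sum, mobility and cumulative CH serving time, but the specific weight values here are illustrative assumptions):

```python
def node_weight(degree, ideal_degree, dist_sum, mobility, ch_time,
                w=(0.7, 0.2, 0.05, 0.05)):
    """WCA-style combined weight; within a neighborhood, the node with the
    smallest weight is elected cluster head. The factors w should sum to 1."""
    degree_diff = abs(degree - ideal_degree)  # deviation from the ideal degree
    return (w[0] * degree_diff + w[1] * dist_sum
            + w[2] * mobility + w[3] * ch_time)
```

A node closer to the ideal degree, with shorter distances to its neighbors, lower mobility and less accumulated CH time gets a smaller weight and is therefore preferred as cluster head.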
An Improved Energy Efficient Wireless Sensor Networks Through Clustering In C... (Editor IJCATR)
One of the major reasons for performance degradation in wireless sensor networks is the overhead due to control packets and the resulting degradation in packet delivery. Clustering in a cross-layer network operation is an efficient way to manage control-packet overhead and ultimately improve the lifetime of a network; these overheads are crucial in scalable networks. However, clustering always suffers from cluster-head failure, which needs to be handled effectively in a large network. As the focus is on improving the average lifetime of the sensor network, the cluster head is selected based on the battery life of the nodes. The cross-layer operation model optimizes the overheads across multiple layers, and the use of clustering reduces the major overheads identified, thereby improving the energy consumption and throughput of the wireless sensor network. The proposed model operates on two layers of the network, i.e., the network layer and the transport layer, with clustering applied in the network layer. The simulation results show that the integration of the two layers reduces energy consumption and increases the throughput of wireless sensor networks.
A survey on weighted clustering techniques in MANETs (IAEME Publication)
This document summarizes existing weighted clustering techniques for mobile ad hoc networks (MANETs). It discusses clustering approaches that assign weights to nodes based on parameters like degree, transmission power, mobility, and battery power. The Weighted Clustering Algorithm (WCA) and Connectivity-Based Distributed Mobility-aware clustering algorithm (CBDM) are highlighted as approaches that use weighted factors to select cluster heads in a distributed manner. WCA uses four parameters - degree, transmission power, mobility, and battery power - while CBDM extends WCA by additionally considering connectivity and a more accurate battery power metric. Both aim to form stable clustering structures with low overhead through periodic re-evaluation of node weights and selection of cluster heads.
This document summarizes and compares various clustering protocols for wireless sensor networks. It discusses clustering parameters like number of clusters and node mobility. It also classifies clustering algorithms into two main categories: probabilistic (e.g. LEACH) and non-probabilistic (e.g. weight-based and graph-based). Popular probabilistic protocols like LEACH, EEHC and HEED are described. Non-probabilistic protocols discussed include those based on node proximity, weights, and biologically inspired approaches. Overall, the document provides an overview of different clustering algorithm types and compares their advantages and disadvantages.
Efficient Broadcast Authentication with Highest Life Span in Wireless Sensor ... (IRJET Journal)
This document discusses efficient broadcast authentication to maximize the lifetime of wireless sensor networks. It proposes using a greedy heuristics algorithm with the X-TESLA broadcast authentication protocol. X-TESLA uses short key chains to authenticate broadcast messages but has limitations regarding sleep modes, network failures, idle sessions, extended lifetime, and denial-of-service attacks. The proposed approach combines X-TESLA with greedy heuristics to address these issues and significantly improve network lifetime compared to using each approach alone. It aims to find a broadcast tree that maximizes the minimum residual energy of nodes after broadcasting to prolong the ability of the network to complete future broadcasts.
Tentative route selection approach for unequal clustered wireless sensor networks (IAES IJEECS)
Wireless Sensor Networks (WSNs) play a crucial role in automation and control, where sensing information is the first step before any automated job can be performed. To support such long-running tasks with a lower energy-consumption ratio, clustering is widely incorporated to extend the network lifetime. Unequal Cluster-based Routing (UCR) [7] is one of the most effective solutions for prolonging the network lifetime and solving the hotspot problem commonly found in equal-clustering methods. In this paper, we propose a Tentative Route Selection (TRS) approach for unequally clustered wireless sensor networks that helps choose an efficient next relay to forward the data aggregated by cluster heads to the base station. Simulation analysis is carried out using a network simulator to demonstrate the effectiveness of the TRS method.
This presentation summarizes Mithileysh Sathiyanarayanan's master's research on proposing a new multi-channel scheduling algorithm called Multi-Channel Deficit Round Robin (MCDRR) for hybrid TDM/WDM optical networks. The MCDRR aims to provide fairness and quality of service guarantees while efficiently utilizing network resources. It extends the existing Deficit Round Robin scheduling to allow overlapping rounds across multiple channels using tunable transmitters and fixed receivers. Simulation results showed the MCDRR achieves near perfect fairness even with uneven traffic flows.
The document proposes a clustering method for wireless sensor networks that uses hybrid compressive sensing to reduce data transmissions. Sensor nodes are organized into clusters, with each cluster having a cluster head. Nodes transmit data to the cluster head without compressive sensing. Cluster heads then use compressive sensing to generate projections from the collected data and transmit them to the sink node along a backbone tree, reducing the total number of transmissions. The authors analyze the relationship between cluster size and transmission count, determining an optimal size that minimizes transmissions when using this hybrid approach.
A Survey Paper on Cluster Head Selection Techniques for Mobile Ad-Hoc Network (IOSR Journals)
This document summarizes several cluster head selection techniques for mobile ad-hoc networks (MANETs). It discusses techniques that select the cluster head based on attributes like node ID, degree of connectivity, mobility, load balancing, and power consumption. Some techniques aim to improve stability and reduce overhead by minimizing cluster changes. Each technique has advantages like simplicity or load balancing, and disadvantages like additional messaging or inability to eliminate ties between nodes. The survey provides a comparison of the techniques on their selection criteria and merits and demerits.
Modelling Clustering of Wireless Sensor Networks with Synchronised Hyperedge ... (M H)
This paper proposes Synchronised Hyperedge Replacement (SHR) as a suitable modelling framework for Wireless Sensor Networks (WSNs). SHR facilitates explicit modelling of the environmental conditions of WSN applications (which significantly affect application performance) while providing a sufficiently high level of abstraction for specifying the underlying coordination mechanisms. Because clustering is an intractable problem to solve in a distributed manner, and distribution is important, we propose a new Nutrient-flow-based Distributed Clustering (NDC) algorithm as a working example. The key contribution of this work is to demonstrate that SHR is sufficiently expressive to describe WSN algorithms and their behaviour at a suitable level of abstraction to allow onward analysis.
The Appropriateness of the Factual Density as an Informativeness Measure for ... (csandit)
This document discusses using factual density as a measure of informativeness for online news articles. It conducted an experiment comparing the factual density of daily news reports on Malaysia Airlines Flight 370 from Fox News and Thompson Reuters over 33 days. The factual density was calculated both with and without a confidence threshold for extracted facts. Statistical analysis found little correlation between the factual densities of the two news sources, suggesting factual density may not be an appropriate measure of informativeness for online news articles on events of widespread public interest.
The document proposes a mobile phone-based approach to identify the monetary value of Iranian cash from a photo, using image processing and machine vision techniques. It can recognize the value with 95% accuracy despite challenges like rotation, scaling, background noise, lighting changes, and partial images. The approach first detects the orientation of the cash using face detection. It then finds and counts the zero digits to localize the monetary value. Finally, it identifies the remaining nonzero digit using a neural network to determine the total value.
In this paper, a modified invasive weed optimization (IWO) algorithm is presented for the optimization of multiobjective flexible job shop scheduling problems (FJSSPs), with the criteria of minimizing the maximum completion time (makespan), the total workload of machines and the workload of the critical machine. IWO is a bio-inspired metaheuristic that mimics the ecological behaviour of weeds in colonizing and finding a suitable place for growth and reproduction. IWO was developed to solve continuous optimization problems, which is why the Smallest Position Value (SPV) heuristic rule is used to convert the continuous position values into discrete job sequences. The computational experiments show that the proposed algorithm is highly competitive with the state-of-the-art methods in the literature, since it is able to find the optimal and best-known solutions on the instances studied.
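The SPV rule just mentioned is, in effect, an argsort: the job whose continuous position value is smallest is scheduled first, the next smallest second, and so on. A minimal sketch:

```python
def spv_job_sequence(positions):
    """Smallest Position Value rule: map a weed's continuous position
    vector to a discrete job sequence by sorting job indices by their
    position values (smallest value scheduled first)."""
    return sorted(range(len(positions)), key=lambda j: positions[j])
```

For example, position values [0.8, 0.1, 0.5] yield the job sequence [1, 2, 0]: job 1 has the smallest value and is scheduled first.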
A survey on Energy Efficient ProtocolsLEACH, Fuzzy-based approach and Neural ...IJEEE
Wireless Sensor Networks (WSN) plays a very important role in transmitting the data from source to destination but energy consumption is one of the major challenges in these networks. WSN consists of hundreds to thousands of nodes which consume energy while transmitting the information and with a span of time whole energy get consumed and network life time gets reduced. Clustering and Cluster head (CH) selection are important parameters used to enhance the lifetime of the WSN. Clustering use two methods: rotating CH periodically in every round to distribute the energy consumption among nodes and the node with more residual energy becomes CH.This research paper is focused on the performance of the techniques used to enhance the energy efficiency in Wireless Sensor Networks (WSNs). Low- Energy Adaptive Clustering Hierarchy (LEACH), Fuzzy- Based and Neural Network are some of the important techniques used. MATLAB simulation tool is considered in this paper.
Coverage and Connectivity Aware Neural Network Based Energy Efficient Routing...graphhoc
There are many challenges when designing and deploying wireless sensor networks (WSNs). One of the key challenges is how to make full use of the limited energy to prolong the lifetime of the network, because energy is a valuable resource in WSNs. The status of energy consumption should be continuously monitored after network deployment. In this paper, we propose coverage and connectivity aware neural network based energy efficient routing in WSN with the objective of maximizing the network lifetime. In the proposed scheme, the problem is formulated as linear programming (LP) with coverage and connectivity aware constraints. Cluster head selection is proposed using adaptive learning in neural networks followed by coverage and connectivity aware routing with data transmission. The proposed scheme is compared with existing schemes with respect to the parameters such as number of alive nodes, packet delivery fraction, and node residual energy. The simulation results show that the proposed scheme can be used in wide area of applications in WSNs.
A Novel Clustering Algorithm For Coverage A Large Scale In Wireless Sensor Ne...ijcsa
The applications require coverage of the whole monitored area for long periods of time. Clustering is a
way to reduce communications, minimize energy consumption and organize messages among the cluster
head and their members. The message exchange of communication and data transmission between the
different sensor nodes must be minimized to keep and extended the lifetime of the network because of
limited energy resources of the sensors. In this paper, we take into consideration the problem isolated
nodes that are away from the cluster head (CH) and by consequence or CH is not within the reach from
these nodes. To solve this problem, we propose O-LEACH (Orphan Low Energy Adaptive Clustering
Hierarchy) a routing protocol that takes into account the orphan nodes. Indeed, a cluster member will be
able to play the role of a gateway which allows the joining of orphan nodes. Our contribution is to election
a cluster head that has enough energy for a better now to coordinate with these member nodes and
maintain the full coverage for applications which requires of useful data for the entire area to be covered.
The simulation results show that O-LEACH performs better than LEACH in terms of connectivity rate,
energy, scalability and coverage.
Analysis of Cluster Based Anycast Routing Protocol for Wireless Sensor NetworkIJMER
A wireless sensor network is a collection of nodes organized into a cooperative network.
Each node consists of processing capability, may contain multiple types of memory, have a RF
transceiver, have a power source, and accommodate various sensors and actuators. The nodes
communicate wirelessly and often self-organize after being deployed in an ad hoc fashion.
Routing protocols for wireless sensor networks are responsible for maintaining the routes in the
network and have to ensure reliable multi-hop communication .The performance of the network is
greatly influenced by the routing techniques. Routing is to find out the path to route the sensed data to
the base station. In this paper the features of WSNs are introduced and routing protocols are reviewed
for Wireless Sensor Network.
Congestion Control through Load Balancing Technique for Mobile Networks: A Cl...IDES Editor
The Optimal Routing Path (ORP) for mobile
cellular networks is proposed in this paper with the
introduction of cluster-based approach. Here an improved
dynamic selection procedure is used to elect cluster head.
The cluster head is only responsible for the computation of
least congested path. Hence the delay is reduced with the
significant reduction on the number of backtrackings.
This document summarizes and categorizes different weight-based clustering algorithms that have been designed for mobile ad hoc networks (MANETs). It discusses:
1) The basic concept of clustering in MANETs and the roles of standard nodes, cluster heads, and cluster gateways.
2) Two categories of clustering algorithms - simple (e.g. lowest ID, connectivity-based) and enhanced (e.g. k-cluster, hierarchical) approaches.
3) Weighted clustering, where each node calculates a weight based on attributes like speed, degree, power, and energy to elect cluster heads.
An Improved Energy Efficient Wireless Sensor Networks Through Clustering In C...Editor IJCATR
A major reason for performance degradation in wireless sensor networks is control-packet overhead and the resulting drop in packet delivery. Clustering combined with cross-layer operation is an efficient way to manage this overhead and ultimately improve network lifetime; these overheads become crucial in scalable networks. Clustering, however, suffers from cluster-head failure, which must be handled effectively in a large network. Since the focus is on improving the average lifetime of the sensor network, the cluster head is selected based on the battery life of the nodes. The cross-layer model optimizes overheads across multiple layers, so clustering reduces the major overheads identified, thereby lowering energy consumption and improving throughput. The proposed model operates on two layers, the Network Layer and the Transport Layer, with clustering applied at the network layer. Simulation results show that integrating the two layers reduces energy consumption and increases the throughput of the wireless sensor network.
A survey on weighted clustering techniques in manetsIAEME Publication
This document summarizes existing weighted clustering techniques for mobile ad hoc networks (MANETs). It discusses clustering approaches that assign weights to nodes based on parameters like degree, transmission power, mobility, and battery power. The Weighted Clustering Algorithm (WCA) and Connectivity-Based Distributed Mobility-aware clustering algorithm (CBDM) are highlighted as approaches that use weighted factors to select cluster heads in a distributed manner. WCA uses four parameters - degree, transmission power, mobility, and battery power - while CBDM extends WCA by additionally considering connectivity and a more accurate battery power metric. Both aim to form stable clustering structures with low overhead through periodic re-evaluation of node weights and selection of cluster heads.
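The combined-weight election described in the survey can be sketched as a simple scoring function. The weighting factors and attribute names below are illustrative assumptions, not the exact values or metrics used by WCA or CBDM:

```python
def wca_weight(degree, ideal_degree, dist_sum, speed, uptime,
               w1=0.7, w2=0.2, w3=0.05, w4=0.05):
    """WCA-style combined weight (sketch); lower is a better head candidate.

    The four factors: degree difference from an ideal degree, sum of
    distances to neighbours, mobility (speed), and cumulative time
    already served as head (a proxy for battery drain). The weighting
    factors w1..w4 sum to 1 and are illustrative.
    """
    delta = abs(degree - ideal_degree)
    return w1 * delta + w2 * dist_sum + w3 * speed + w4 * uptime

def elect_head(nodes):
    """Pick the node with the smallest combined weight."""
    return min(nodes, key=lambda n: wca_weight(**n["attrs"]))
```

Raising w3 penalises fast-moving nodes, which is the stability lever these weighted schemes tune through periodic re-evaluation.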
This document summarizes and compares various clustering protocols for wireless sensor networks. It discusses clustering parameters like number of clusters and node mobility. It also classifies clustering algorithms into two main categories: probabilistic (e.g. LEACH) and non-probabilistic (e.g. weight-based and graph-based). Popular probabilistic protocols like LEACH, EEHC and HEED are described. Non-probabilistic protocols discussed include those based on node proximity, weights, and biologically inspired approaches. Overall, the document provides an overview of different clustering algorithm types and compares their advantages and disadvantages.
Efficient Broadcast Authentication with Highest Life Span in Wireless Sensor ...IRJET Journal
This document discusses efficient broadcast authentication to maximize the lifetime of wireless sensor networks. It proposes using a greedy heuristics algorithm with the X-TESLA broadcast authentication protocol. X-TESLA uses short key chains to authenticate broadcast messages but has limitations regarding sleep modes, network failures, idle sessions, extended lifetime, and denial-of-service attacks. The proposed approach combines X-TESLA with greedy heuristics to address these issues and significantly improve network lifetime compared to using each approach alone. It aims to find a broadcast tree that maximizes the minimum residual energy of nodes after broadcasting to prolong the ability of the network to complete future broadcasts.
34 9141 it ns2-tentative route selection approach for edit septianIAESIJEECS
Wireless Sensor Networks (WSNs) assume a crucial part in the field of mechanization and control where detecting of data is the initial step before any automated job could be performed. So as to encourage such perpetual assignments with less vitality utilization proportion, clustering is consolidated everywhere to upgrade the system lifetime. Unequal Cluster-based Routing (UCR) [7] is a standout amongst the most productive answers for draw out the system lifetime and to take care of the hotspot issue that is generally found in equivalent clustering method. In this paper, we propose Tentative Route (TRS) Selection approach for irregular Clustered Wireless Sensor Networks that facilitates in decision an efficient next relay to send the data cumulative by Cluster Heads to the Base Station. Simulation analysis is achieved using the network simulator to demonstrate the effectiveness of the TRS method.
This presentation summarizes Mithileysh Sathiyanarayanan's master's research on proposing a new multi-channel scheduling algorithm called Multi-Channel Deficit Round Robin (MCDRR) for hybrid TDM/WDM optical networks. The MCDRR aims to provide fairness and quality of service guarantees while efficiently utilizing network resources. It extends the existing Deficit Round Robin scheduling to allow overlapping rounds across multiple channels using tunable transmitters and fixed receivers. Simulation results showed the MCDRR achieves near perfect fairness even with uneven traffic flows.
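For context, the base single-channel Deficit Round Robin scheduler that MCDRR extends can be sketched in a few lines; the thesis's multi-channel, overlapping-round extension with tunable transmitters is not reproduced here:

```python
from collections import deque

def drr_schedule(flows, quantum, rounds):
    """Single-channel Deficit Round Robin (sketch).

    flows: list of deques of packet sizes, one deque per flow.
    Returns the order in which (flow_index, packet_size) are sent.
    """
    deficits = [0] * len(flows)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(flows):
            if not q:
                deficits[i] = 0          # an idle flow carries no credit
                continue
            deficits[i] += quantum       # one quantum of credit per round
            while q and q[0] <= deficits[i]:
                pkt = q.popleft()        # send while credit covers the head packet
                deficits[i] -= pkt
                sent.append((i, pkt))
    return sent
```

Because unspent deficit carries over, a flow with large packets eventually accumulates enough credit to send, which is the fairness property DRR (and MCDRR) relies on.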
The document proposes a clustering method for wireless sensor networks that uses hybrid compressive sensing to reduce data transmissions. Sensor nodes are organized into clusters, with each cluster having a cluster head. Nodes transmit data to the cluster head without compressive sensing. Cluster heads then use compressive sensing to generate projections from the collected data and transmit them to the sink node along a backbone tree, reducing the total number of transmissions. The authors analyze the relationship between cluster size and transmission count, determining an optimal size that minimizes transmissions when using this hybrid approach.
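The cluster-head compression step described above amounts to multiplying the cluster's readings by a short, wide measurement matrix. This sketch uses a ±1 Bernoulli matrix as an illustrative choice; the paper's actual measurement matrix and its optimal cluster-size analysis are not reproduced:

```python
import random

def cs_projections(readings, m, seed=0):
    """Hybrid-CS sketch: compress a cluster's n readings into m << n
    random projections y_i = sum_j phi[i][j] * x[j], using a +/-1
    Bernoulli measurement matrix (illustrative choice)."""
    rng = random.Random(seed)            # fixed seed: sink can regenerate phi
    n = len(readings)
    phi = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(m)]
    return [sum(phi[i][j] * readings[j] for j in range(n)) for i in range(m)]
```

Only the m projections travel up the backbone tree, which is where the transmission savings over forwarding all n raw readings come from.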
A Survey Paper on Cluster Head Selection Techniques for Mobile Ad-Hoc NetworkIOSR Journals
This document summarizes several cluster head selection techniques for mobile ad-hoc networks (MANETs). It discusses techniques that select the cluster head based on attributes like node ID, degree of connectivity, mobility, load balancing, and power consumption. Some techniques aim to improve stability and reduce overhead by minimizing cluster changes. Each technique has advantages like simplicity or load balancing, and disadvantages like additional messaging or inability to eliminate ties between nodes. The survey provides a comparison of the techniques on their selection criteria and merits and demerits.
Modelling Clustering of Wireless Sensor Networks with Synchronised Hyperedge ...M H
This paper proposes Synchronised Hyperedge Replacement (SHR) as a suitable modelling framework for Wireless Sensor Networks (WSNs). SHR facilitates explicit modelling of the environmental conditions of WSN applications (which significantly affect their performance) while providing a sufficiently high level of abstraction for specifying the underlying coordination mechanisms. Because clustering is intractable to solve in a distributed manner, and distribution is important, we propose a new Nutrient-flow-based Distributed Clustering (NDC) algorithm as a working example. The key contribution of this work is to demonstrate that SHR is sufficiently expressive to describe WSN algorithms and their behaviour at a level of abstraction suitable for onward analysis.
The Appropriateness of the Factual Density as an Informativeness Measure for ...csandit
This document discusses using factual density as a measure of informativeness for online news articles. It conducted an experiment comparing the factual density of daily news reports on Malaysia Airlines Flight 370 from Fox News and Thompson Reuters over 33 days. The factual density was calculated both with and without a confidence threshold for extracted facts. Statistical analysis found little correlation between the factual densities of the two news sources, suggesting factual density may not be an appropriate measure of informativeness for online news articles on events of widespread public interest.
The document proposes a mobile phone-based approach to identify the monetary value of Iranian cash from a photo, using image processing and machine vision techniques. It can recognize the value with 95% accuracy despite challenges like rotation, scaling, background noise, lighting changes, and partial images. The approach first detects the orientation of the cash using face detection. It then finds and counts the zero digits to localize the monetary value. Finally, it identifies the remaining nonzero digit using a neural network to determine the total value.
In this paper, a modified invasive weed optimization (IWO) algorithm is presented for
multiobjective flexible job shop scheduling problems (FJSSPs), with the criteria of minimizing
the maximum completion time (makespan), the total workload of machines, and the workload of
the critical machine. IWO is a bio-inspired metaheuristic that mimics the ecological behaviour
of weeds in colonizing and finding a suitable place for growth and reproduction. Because IWO
was developed for continuous optimization problems, the Smallest Position Value (SPV) heuristic
rule is used to convert continuous position values into discrete job sequences. Computational
experiments show that the proposed algorithm is highly competitive with state-of-the-art methods
in the literature, finding the optimal or best-known solutions on the instances studied.
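The SPV rule mentioned in the abstract reduces to an argsort: the dimension with the smallest position value becomes the first job in the sequence, the next smallest the second, and so on. A minimal sketch:

```python
def spv_sequence(positions):
    """Smallest Position Value rule (sketch): map a weed's continuous
    position vector to a discrete job permutation by sorting the
    dimension indices in ascending order of their position values."""
    return sorted(range(len(positions)), key=lambda j: positions[j])
```

For example, the position vector [0.8, -0.2, 1.5, 0.1] yields the job sequence [1, 3, 0, 2].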
Collaborative learning is an educational method in which students work in small groups and are responsible for their own learning and that of others. This fosters solidarity, cooperation, critical thinking, and other soft skills. Teamwork makes it possible to integrate different ideas and perspectives to produce more complete work and to improve communication among students.
STATE SPACE GENERATION FRAMEWORK BASED ON BINARY DECISION DIAGRAM FOR DISTRIB...csandit
This paper proposes a new framework based on Binary Decision Diagrams (BDDs) for the graph distribution problem in the context of explicit model checking. BDDs are already used to represent the state space in symbolic model checking; here, their high compression ratio is exploited to encode not only the state space but also the place where each state will be stored. A fitness function is used to achieve a well-balanced load of states over the nodes of a homogeneous network. Furthermore, a detailed explanation is given of how to
calculate the inter-site edges between different nodes based on the adapted data structure.
Polymers are giant molecules formed by the union of thousands of smaller molecules called monomers. They can be natural or artificially produced, and they have diverse properties and important uses because they adopt different structures. They are classified by origin, composition, structure, and behaviour when heated.
The document provides biographical information about author Roald Dahl. It notes that he was born in Wales in 1916 and had various careers including fighter pilot, screenwriter, and children's book author. He was married to actress Patricia Neal but later divorced and married Felicity Crosland. Roald Dahl died in 1990. The second half of the document describes the characters and setting of his short story "Man From the South", which takes place in a Jamaican hotel and involves a mysterious old South American man betting a young American that he can perform a feat.
Twitter is a popular microblogging service where users create status messages (called
“tweets”). These tweets sometimes express opinions about different topics and are presented to
the user in chronological order. This format is useful because the latest tweets are rich in
recent news, which is generally more interesting than tweets about an event that occurred long
ago. However, merely presenting tweets in chronological order may overwhelm the user, especially
if he follows many users. There is therefore a need to separate tweets into different categories
and then present the categories to the user. Text Categorization (TC) is becoming increasingly
significant, especially for Arabic, which is one of the most complex languages.
In this paper, in order to improve the accuracy of tweet categorization, a system based on
Rough Set Theory is proposed to enrich the document representation. The effectiveness of our
system was evaluated and compared in terms of the F-measure of the Naïve Bayesian classifier
and the Support Vector Machine classifier.
5 Movies That Capture The Adventurous SpiritMack Prioleau
There is nothing better than a good travel film that gives you the inspiration and motivation to take on a new adventure. These movies truly capture what the adventurous spirit is all about. They also do a great job of showing how travel and adventure can help shape who you are. If you are looking for a little travel inspiration, I recommend that you check out one of these five films.
This document discusses types and manifestations of heart failure as well as circulatory shock. It describes high-output versus low-output heart failure, systolic versus diastolic failure, and right-sided versus left-sided failure. Manifestations of heart failure include effects of impaired pumping, decreased renal blood flow, and effects of the sympathetic nervous system. Types of shock discussed include cardiogenic, hypovolemic, obstructive, distributive, and septic shock. Causes and complications of shock like acute respiratory distress syndrome, acute renal failure, disseminated intravascular coagulation, and multiple organ dysfunction syndrome are also summarized.
Leave a comment or like my presentation. It motivates me. Namaste!
The IPP Lesson plan format is used in most Jesuit schools especially in the Basic Education department.
Adaptive Trilateral Filter for In-Loop Filteringcsandit
High Efficiency Video Coding (HEVC) has achieved significant coding-efficiency improvement
beyond the existing video coding standard by employing several new coding tools. The Deblocking
Filter, Sample Adaptive Offset (SAO) and Adaptive Loop Filter (ALF) are currently introduced
for in-loop filtering in the HEVC standard. However, these filters operate in the spatial
domain despite the temporal correlation within video sequences. To reduce artifacts and better
align object boundaries in video, an in-loop filtering algorithm is proposed and implemented in
the HM-11.0 software. The proposed algorithm yields an average bitrate reduction of about 0.7%
and improves the PSNR of the decoded frame by 0.05%, 0.30% and 0.35% in the luminance and
chroma components.
We have special offers on several products this week. Our customers can save up to 30% on furniture, clothing and home appliances. We also offer two-for-one deals on our selection of sports and leisure items.
This document covers the writing of business letters. It explains that business letters are an important means of communication for building lasting relationships with customers. The objective is to teach students to write business letters correctly by selecting, classifying and organizing the appropriate information.
A GENERIC ALGORITHM TO DETERMINE CONNECTED DOMINATING SETS FOR MOBILE AD HOC ...csandit
This document summarizes a research paper that presents a generic algorithm for determining connected dominating sets (CDS) in mobile ad hoc networks. The algorithm can be used to find three types of CDS: maximum density-based, node ID-based, and stability-based. The performance of these three CDS algorithms is evaluated under different mobility models and metrics like CDS node size and lifetime. Key findings include that the performance of a CDS depends on the criteria used to form it and the mobility model, and certain combinations of CDS algorithm and mobility model can maximize lifetime and node size with minimal tradeoff.
A GENERIC ALGORITHM TO DETERMINE CONNECTED DOMINATING SETS FOR MOBILE AD HOC ...cscpconf
The high-level contributions of this paper are: (1) A generic algorithm to determine connected dominating sets (CDS) for mobile ad hoc networks and its use to find CDS of different
categories: maximum density-based (MaxD-CDS), node ID-based (ID-CDS) and stability-based (minimum velocity-based, MinV-CDS); (2) Performance comparison of the above three
categories of CDS algorithms with respect to two categories of mobility models: random node mobility models (Random Waypoint model) and the grid-based vehicular ad hoc network
(VANET) mobility models (City Section and Manhattan mobility models), with respect to the CDS Node Size and Lifetime. The performance of a CDS is observed to depend on the criteria
used to form the CDS (i.e., the node selection criteria of the underlying algorithm) and the mobility model driving the topological changes. For each category of CDS, we identify the
mobility model under which one can simultaneously maximize the lifetime and node size, with minimal tradeoff. For the two VANET mobility models, we also evaluate the impact of the grid
block length on the CDS lifetime and node size.
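As an illustration of the density-based category (MaxD-CDS), a minimal greedy construction can be sketched as follows. The graph representation, function name and tie-breaking are assumptions for the sketch; the paper's generic algorithm covers the ID-based and stability-based variants as well:

```python
def max_density_cds(adj):
    """Greedy maximum-density CDS heuristic (sketch).

    adj: dict mapping node -> set of neighbour nodes (connected graph).
    Returns a set of nodes that is connected and dominates the graph.
    """
    nodes = set(adj)
    start = max(nodes, key=lambda v: len(adj[v]))   # seed: highest-degree node
    cds = {start}
    covered = {start} | adj[start]
    while covered != nodes:
        # candidates: covered nodes adjacent to the CDS (keeps the set connected)
        candidates = [v for v in covered - cds if adj[v] & cds]
        # greedily add the candidate covering the most still-uncovered nodes
        best = max(candidates, key=lambda v: len(adj[v] - covered))
        cds.add(best)
        covered |= adj[best]
    return cds
```

Under node mobility the returned set must be recomputed whenever a CDS edge breaks, which is exactly the transition cost the stability-based variant is designed to reduce.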
Constructing Minimum Connected Dominating Set in Mobile Ad Hoc NetworksGiselleginaGloria
One of the most important challenges of a Mobile Ad Hoc Network (MANET) is to ensure efficient routing among its nodes. A Connected Dominating Set (CDS) is a widely used concept by many protocols for broadcasting and routing in MANETs. Those existing protocols require significant message overhead in construction of CDS. In this paper, we propose a simple, inexpensive and novel algorithm of computing a minimum CDS. The proposed algorithm saves time and message overhead in forming a CDS while supporting node mobility efficiently. Simulation results show that the proposed algorithm is efficient in terms of both message complexity and the size of the CDS.
MULTI-HOP DISTRIBUTED ENERGY EFFICIENT HIERARCHICAL CLUSTERING SCHEME FOR HET...ijfcstjournal
Wireless sensor networks (WSNs) are networks of Sensor Nodes (SNs) with inherent sensing,
processing and communicating abilities. A current concern in wireless sensor networks is
developing a stable clustered heterogeneous protocol that prolongs network lifetime with minimum
battery consumption. In recent times, many routing protocols have been proposed to increase
network lifetime and stability, in short, to provide a reliable and robust routing protocol. In
this paper we study the impact of a hierarchically clustered network with sensor nodes of
two-level heterogeneity. The main approach in this research is to develop an enhanced multi-hop
DEEC routing protocol, unlike DEEC. Simulation results show the proposed protocol outperforms
DEEC in terms of FDN (First Dead Node), energy consumption and packet transmission.
K dag based lifetime aware data collection in wireless sensor networksijwmn
Wireless sensor networks need to be organized for efficient data collection and lifetime
maximization. In this paper, we propose a novel routing structure, namely the k-DAG, to balance
the load of the base station's neighbours while providing a worst-case latency guarantee for data
collection, and a distributed algorithm for constructing a k-DAG based on an SPD (Shortest Path
DAG). In a k-DAG, the lengths of the longest and shortest paths from each sensor node to the
base station differ by at most k. By adding sibling edges to an SPD, our distributed algorithm
gives critical nodes more routing choices. Simulation results show that our approach
significantly outperforms SPD-based data collection in both network lifetime and load balance.
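A much-simplified, centralised sketch of the construction for k = 1: a BFS from the base station yields the SPD (edges to strictly closer neighbours), and sibling edges to same-depth neighbours give critical nodes extra parents while keeping each node's longest and shortest path lengths within one hop of each other. The paper's distributed algorithm and the general-k case are not reproduced:

```python
from collections import deque

def build_k_dag(adj, base, k=1):
    """Centralised k-DAG sketch for k = 1.

    adj: dict node -> set of neighbours; base: the base station node.
    Returns (depth, parents): BFS hop distances and, per node, the set
    of next-hop choices (SPD parents plus same-depth siblings).
    """
    depth = {base: 0}
    order = deque([base])
    while order:                       # plain BFS from the base station
        u = order.popleft()
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                order.append(v)
    # SPD edges: forward only to strictly closer neighbours
    parents = {u: {v for v in adj[u] if depth[v] < depth[u]} for u in adj}
    if k >= 1:                         # sibling edges add routing choices
        for u in adj:
            parents[u] |= {v for v in adj[u] if v != u and depth[v] == depth[u]}
    return depth, parents
```

Routing via a sibling then upward lengthens a path by at most one hop over the BFS distance, which is where the k = 1 latency bound comes from.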
Connected Dominating Set Construction Algorithm for Wireless Sensor Networks ...ijsrd.com
Energy efficiency plays an important role in wireless sensor networks, where all nodes are energy constrained. Clustering is one kind of energy-efficient algorithm. To organize the nodes better, a virtual backbone can be used: although there is no physical backbone infrastructure, a virtual backbone can be formed by constructing a Connected Dominating Set (CDS). The CDS has a significant impact on the energy-efficient design of routing algorithms in WSNs; it should first and foremost be small, and it should be robust to node failures. In this paper, we present a general classification of CDS construction algorithms. This survey covers different CDS formation algorithms for WSNs.
AN INVERTED LIST BASED APPROACH TO GENERATE OPTIMISED PATH IN DSR IN MANETS –...Editor IJCATR
In this paper, we design and formulate an inverted-list-based approach for providing a safer path
and effective communication in the DSR protocol. Some nodes in the network participate much more
frequently than others, so an approach is required that makes an intelligent decision about
sharing bandwidth or resources with a node or node group. Dynamic Source Routing (DSR) is an
on-demand source routing protocol in which all routing information is maintained (continually
updated) at the mobile nodes.
Analysis of Packet Loss Rate in Wireless Sensor Network using LEACH ProtocolIJTET Journal
Abstract: A wireless sensor network (WSN) is used to collect and send various kinds of messages to a base station (BS). Wireless sensor nodes are deployed randomly and densely in a target region, especially where the physical environment is so harsh that macro-sensor counterparts cannot be deployed. The Low Energy Adaptive Clustering Hierarchy (LEACH) routing protocol reduces the packet loss rate from 100% to 55%. Simulations are carried out using the NS2 simulator.
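For reference, LEACH's stochastic cluster-head election uses the well-known threshold T(n) = p / (1 − p·(r mod 1/p)) for nodes that have not served as head in the last 1/p rounds; a minimal sketch:

```python
import random

def leach_threshold(p, r):
    """LEACH election threshold T(n) for round r, where p is the
    desired fraction of cluster heads per round."""
    return p / (1 - p * (r % int(1 / p)))

def is_cluster_head(p, r, rng=random.random):
    """A node that has not served as head in the last 1/p rounds
    becomes a cluster head when its uniform draw falls below T(n)."""
    return rng() < leach_threshold(p, r)
```

At round r = 1/p − 1 the threshold reaches 1, guaranteeing every remaining eligible node serves as head exactly once per cycle, which rotates the energy burden.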
Distributed Path Computation Using DIV AlgorithmIOSR Journals
This document summarizes research on distributed path computation algorithms that aim to prevent routing loops. It introduces the Distributed Path Computation with Intermediate Variables (DIV) algorithm, which can operate with any routing algorithm to guarantee loop-freedom. DIV generalizes previous loop-free algorithms and provably outperforms them by reducing synchronous updates and helping maintain paths during network changes. The document also reviews link-state routing, distance-vector routing, and existing loop-prevention techniques like the Diffusing Update Algorithm and Loop Free Invariance algorithms.
This document discusses a Cluster Based Distributed Spanning Tree (CBDST) routing technique for mobile ad hoc networks. CBDST organizes nodes into a hierarchy of clusters to allow instant creation of spanning trees. It aims to improve network connectivity, lifetime, energy efficiency, and load balancing compared to AODV routing. Simulation results show CBDST achieves higher packet delivery ratios and connectivity than AODV, especially with increased pause time when nodes are more static.
Energy Efficient LEACH protocol for Wireless Sensor Network (I-LEACH)ijsrd.com
In wireless sensor networks (WSNs), the sensor nodes (called motes) are usually scattered in a sensor field, the area in which they are deployed. Motes are small and have limited processing power, memory and battery life. In WSNs, conservation of energy, which is directly related to network lifetime, is considered relatively more important, so the use of energy-efficient routing algorithms is one way to reduce energy consumption. In general, routing algorithms in WSNs can be divided into flat, hierarchical and location-based routing. There are two reasons the hierarchical Low Energy Adaptive Clustering Hierarchy (LEACH) protocol is explored: first, sensor networks are dense and a lot of redundancy is involved in communication; second, clustering increases the scalability of the sensor network while keeping the security aspects of communication in mind. Cluster-based routing holds great promise for the many-to-one and one-to-many communication paradigms that are prevalent in sensor networks.
Energy efficient routing in wireless sensor networksSpandan Spandy
The document summarizes several energy efficient multicast routing protocols for wireless sensor networks. It begins with an introduction to wireless sensor networks and routing challenges. It then summarizes the following protocols: MAODV, TEEN, APTEEN, SPEED, MMSPEED, RPAR, and LEACH. For each protocol, it provides a brief overview of the protocol's design, objectives, components, and how it aims to improve energy efficiency in wireless sensor network routing. The document concludes that providing energy-efficient multicast routing is important for wireless sensor network applications and that the protocols presented aim to achieve lower energy requirements through approaches like clustering, adaptive thresholding, and congestion control.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
ENERGY OPTIMISATION SCHEMES FOR WIRELESS SENSOR NETWORKcscpconf
A sensor network is composed of a large number of sensor nodes, which are densely
deployed either inside the phenomenon or very close to it. Sensor nodes have
sensing, processing and transmitting capability . They however have limited energy
and measures need to be taken to make op- timum usage of their energy and save
them from task of only receiving and transmitting data without processing. Various
techniques for energy utilization optimisation have been proposed Ma jor players are
however clustering and relay node placement. In the research related to relay node
placement, it has been proposed to deploy some relay nodes such that the sensors
can transmit the sensed data to a nearby relay node, which in turn delivers the data
to the base stations. In general, the relay node placement problems aim to meet
certain connectivity and/or survivabil- ity requirements of the network by deploying a
minimum number of relay nodes. The other approach is grouping sensor nodes into
clusters with each cluster having a cluster head (CH). The CH nodes aggregate the
data and transmit them to the base station (BS). These two approaches has been
widely adopted by the research community to satisfy the scala- bility objective and generally achieve high energy efficiency and prolong network lifetime in large-scale WSN environments and hence are discussed here along with single hop and multi hop characteristic of sensor node
Similar to Performance of the Maximum Stable Connected Dominating Sets in the Presence of Static Nodes in a Mobile Adhoc Network (20)
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
2. 38 Computer Science & Information Technology (CS & IT)
neighborhood, then every node receives a copy of the message from each of its neighbors
(referred to as flooding). Though flooding guarantees that the broadcast message reaches every
node in the network, each node loses energy to receive message broadcast by each of its
neighbors (in addition to the energy lost at each node to transmit/broadcast the message once in
its neighborhood). It is not required for every node in the network to broadcast the message in its
neighborhood. Instead, if we use a CDS like communication topology for broadcast, then it would
be sufficient if only the constituent nodes of the CDS broadcast (retransmit) and the rest of the
nodes only spend energy to merely receive the message. The lower the number of nodes
constituting the CDS (referred to as CDS Node Size), the lower the number of retransmissions.
Unfortunately, the problem of determining a minimum node size CDS is NP-hard [8]. Several
heuristics have been proposed to determine CDSs with approximation ratios for the CDS Node
Size as low as possible. A common thread among these heuristics is to prefer nodes with a larger degree (i.e., a larger number of neighbors) for inclusion in the CDS so that the CDS Node Size is as low as possible.
In [9], we observed that the degree-based heuristics are quite unstable in the presence of node
mobility in MANETs. We had then proposed a benchmarking algorithm to determine a sequence
of connected dominating sets over the duration of a network session such that the number of CDS
transitions is the global minimum. Referred to as the Maximum Stable CDS algorithm, the
benchmarking algorithm assumes the entire sequence of future topology changes is known a
priori and determines a sequence of long-living stable CDSs as follows: Whenever a CDS is
required at time instant t, we determine a connected graph of the network (called a mobile graph)
whose constituent edges exist for the longest possible time starting from time instant t; we then
simply run a CDS algorithm/heuristic on the mobile graph and use the CDS for the duration of the
mobile graph and repeat the above procedure for the entire network session. The algorithm can be
used to arrive at a sequence of long-living stable CDSs such that the number of transitions (from
one CDS to another) during a network session is the global minimum and the average CDS
lifetime is the global maximum (benchmarks).
As in many MANET simulation studies, the performance comparison of the Maximum Stable CDS proposed in [9] vis-a-vis the degree-based CDS was conducted only when all the nodes in the network are mobile, and not when a certain percentage of the nodes in the network are static. This formed the motivation for our research in this paper. Our hypothesis is that the stability of the degree-based CDS would significantly increase with an increase in the percentage of static nodes in the network (compared to the CDS lifetime incurred when all nodes were mobile), as there is a good chance that an appreciable fraction of the static nodes are also part of the degree-based CDS, contributing to its stability and covering the non-CDS nodes that could be mobile. On the other hand, the lifetime of the Maximum Stable CDS would only marginally increase, as it would be difficult to find a connected mobile graph that exists for a longer time even in the presence of a certain fraction of static nodes.
The rest of the paper is organized as follows: Section 2 presents a heuristic to determine degree-
based CDS and evaluates its run-time complexity. Section 3 presents the Maximum Stable CDS
algorithm to determine a sequence of stable CDSs, explains its proof of correctness and illustrates
its working with an example. Section 4 presents the simulation results evaluating the lifetime and
node size of the Maximum Stable CDS vis-a-vis the degree-based CDS under an extensive set of
mobility scenarios varying the maximum node velocity and the percentage of static nodes for
each level of node mobility. Section 5 differentiates our work from related work. Section 6
concludes the paper. Throughout the paper, the terms 'node' and 'vertex' as well as 'link' and 'edge' are used interchangeably.
2. HEURISTIC FOR DEGREE-BASED CONNECTED DOMINATING SET
In this section, we first describe a heuristic that could be used to determine the CDS for a network
graph based on the degree (the number of neighbors of a vertex) in the graph. We then illustrate
examples to show the execution of the CDS construction heuristic. We finally describe an
algorithm that can be used to validate the existence of a CDS at any time instant.
2.1. Generic Heuristic to Determine a CDS
The overall idea is to give preference (for inclusion to the CDS) for vertices that have a larger
degree. As indicated in the pseudo code of Figure 1, we maintain three lists: CDS Node List -
vertices that are part of the CDS; Uncovered Node List - vertices that are yet to be covered by a
node in the CDS Node List; Candidate Node List - vertices that are covered by a node in the CDS
Node List, but not yet considered for inclusion to the CDS Node List. Initially, the CDS Node
List is empty and the Uncovered Node List is the set of all the vertices in the graph; to start with,
the Candidate Node List has a single entry corresponding to the vertex that has the largest degree.
The Candidate Node List is implemented as a priority queue and the vertices in the Candidate
Node List are stored in the decreasing order of their degree (the vertex with the largest degree is
the first vertex to be removed from this list). In each iteration, we remove a vertex from the
Candidate Node List and if it has one or more uncovered neighbor nodes, then the vertex is added
to the CDS Node List and the newly covered neighbor nodes are removed from the Uncovered
Node List and included in the Candidate Node List. If a vertex removed from the Candidate Node
List has no uncovered neighbors, then the vertex is not included in the CDS Node List. We repeat
the above procedure until the Uncovered Node List gets empty or there are no more vertices in
the Candidate Node List. If the Candidate Node List gets empty while the Uncovered Node List is
not yet empty, then it implies the underlying graph is not connected (i.e., all the vertices are not in
one component). If the underlying graph is connected, the iterations stop when there are no more
vertices in the Uncovered Node List (this implies, all the vertices in the graph are covered by at
least one node in the CDS Node List).
--------------------------------------------------------------------------------------------------------------------
Input: Graph G = (V, E); Degree Score Vector D(V) for all the vertices
Output: CDS Node List
Initialization: CDS Node List = ϕ; ∀ x∈V, Uncovered Node List = {x; D(x)}
Candidate Node List = {s; D(s)} where s ∈ V and D(s) = Max {D(i) | i ∈ V}
Begin CDS Construction
while (Candidate Node List ≠ ϕ AND Uncovered Node List ≠ ϕ) do
Vertex candidateNode = Dequeue(Candidate Node List)
uncoveredNodes = {x | x ∈Neighbors(candidateNode) AND
x ∈Uncovered Node List}
if (uncoveredNodes ≠ ϕ) then
CDS Node List = CDS Node List ∪ {candidateNode}
∀ x ∈ uncoveredNodes,
Uncovered Node List = Uncovered Node List - {x; D(x)}
Candidate Node List = Candidate Node List ∪ {x; D(x)}
end if
end while
if (Uncovered Node List = ϕ) then
return CDS Node List
end if
End CDS Construction
--------------------------------------------------------------------------------------------------------------------
Figure 1. Pseudo Code for a CDS Construction Heuristic based on Node Degree
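As a concrete companion to Figure 1, here is a minimal, runnable Python sketch of the heuristic; the adjacency-set input format and the name `degree_cds` are our own choices for illustration, not part of the paper. Note that the pseudo code leaves implicit that a vertex added to the CDS Node List also covers itself; the sketch removes it from the Uncovered Node List explicitly.

```python
import heapq

def degree_cds(adj):
    """Degree-based CDS heuristic (sketch of Figure 1).
    adj: dict mapping each vertex to the set of its neighbours.
    Returns the CDS Node List, or None if the graph is not connected."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    cds = []
    uncovered = set(adj)                  # Uncovered Node List
    start = max(adj, key=degree.get)      # vertex with the largest degree
    heap = [(-degree[start], start)]      # Candidate Node List as a max-heap
    while heap and uncovered:
        _, cand = heapq.heappop(heap)
        newly = adj[cand] & uncovered     # uncovered neighbours of cand
        if newly:
            cds.append(cand)
            uncovered.discard(cand)       # a CDS node also covers itself
            uncovered -= newly
            for x in newly:
                heapq.heappush(heap, (-degree[x], x))
    return cds if not uncovered else None
```

On a four-node path 1-2-3-4, for instance, the sketch returns the two interior vertices [2, 3]; on a disconnected graph the Candidate Node List empties first and None is returned.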
Note that the degree score for a node is determined offline (prior to the execution of the heuristic)
and is not updated during the execution of the heuristic. In other words, the ordering of the nodes
in the Candidate Node List in each iteration is based on the initial (static) degree values input to
the heuristic. Thus, for example, if two nodes u and v with degree scores of 4 and 6 respectively
are in the Candidate Node List, but node v has only two uncovered neighbors whereas node u has
three uncovered neighbors, node v with a relatively larger degree score would still be ahead of
node u in the Candidate Node List.
The time complexity for the CDS construction heuristic is O(VlogV + ElogV), where V and E are
respectively the number of vertices and edges in the network graph; O(logV) is the time
complexity for a dequeue operation in a priority queue (Candidate Node List) maintained as a
Max-Heap [8] as well as the time complexity to include a vertex in the Candidate Node List by
visiting the neighbors of the vertex that is added to the CDS Node List. We could run the while
loop at most V times, once for each vertex, and across all such iterations, the entire set of edges
are traversed and the end vertices of the edges are considered for inclusion to the Candidate Node
List. Hence, the overall time complexity of the generic heuristic to construct a degree-based CDS
is O((V+E) logV).
2.2. Example to Construct Degree-based CDS
In this section, we show the execution of the degree heuristic on a sample network graph. For
each vertex in Figure 2, the ID is indicated inside the circle and the degree of the vertex is
indicated outside the circle. The last graph in each of these figures is a CDS graph comprising
only edges between two CDS nodes (indicated as solid lines) and edges between a non-CDS node
and a CDS node (indicated as dotted lines).
Figure 2. Example to Illustrate the Construction of a Degree-based CDS
Figure 2 illustrates the execution of the heuristic to determine degree-based CDS: in iteration 3,
even though vertices 3 and 7 have the same larger degree (3), vertex 3 is not considered for
inclusion to the CDS because all of its three neighbors are already covered (i.e., none of the
neighbors of vertex 3 are in the Uncovered Node List); on the other hand vertex 7 has one
uncovered neighbor and is hence included to the CDS Node List. The sequence of vertices
included in the Degree-based CDS are 2, 4, 7 and 6. One significant observation could be made in
the toy example shown in Figure 2: Vertex 3 with a relatively higher degree (3) is connected to
two high-degree vertices (vertices 2 and 7), but does not contribute much in terms of covering
nodes that are still in the Uncovered Node List once vertices 2 and 7 get into the CDS Node List.
Thus, vertex 3 is a highly enclosed vertex that is not exposed much to vertices outside its own
community. On the other hand, even if vertex 2 gets into the CDS Node List, vertex 7 could also
get into the CDS Node List, because they are connected to two different vertices/communities
(vertices 8 and 6) that only either of them could cover, but not both.
2.3. Algorithm to Validate the Existence of a CDS
We now present the algorithm that we use to validate the existence of a CDS at any time instant.
This algorithm would be very useful in discrete-event simulations when we tend to use a CDS
that existed at an earlier time instant (say, t-1) at a later time instant (t). That is, given a CDS
Node List that was connected (i.e., the CDS nodes are reachable from one another directly or
through one or more intermediate CDS nodes) and covered the non-CDS nodes in the network
graph at time instant t-1, we want to validate whether the same holds true at time instant t. We do
so by first constructing a CDS graph based on the CDS Node List at time instant t. All the
vertices in the network are part of the CDS graph and the edges in the CDS graph are those that
exist between any two CDS nodes as well as between a CDS node and a non-CDS node at time
instant t. We check if the CDS Node List is connected at time instant t by running the Breadth
First Search (BFS) [8] algorithm, starting from an arbitrarily chosen CDS node and see if every
other CDS node could be reached as part of the BFS traversal only on the CDS-CDS edges. If all
the CDS nodes could be visited, the CDS Node List is considered to be connected. We then check
whether every non-CDS node has an edge to at least one CDS node at time t. If both the
validations (connectivity of the CDS Node List and coverage of every non-CDS node by at least
one CDS node) are true, then we consider the CDS to exist at time instant t.
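The two-step validation above can be sketched as follows; the adjacency-set representation and the name `cds_exists` are illustrative assumptions, not from the paper.

```python
from collections import deque

def cds_exists(adj, cds_nodes):
    """Validate a previously computed CDS against the current snapshot.
    adj: dict mapping each vertex to its current neighbour set.
    cds_nodes: the CDS Node List determined at an earlier time instant."""
    cds = set(cds_nodes)
    # Check 1: connectivity of the CDS Node List, via BFS from an
    # arbitrary CDS node restricted to CDS-CDS edges only.
    start = next(iter(cds))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u] & cds:        # traverse only CDS-CDS edges
            if v not in seen:
                seen.add(v)
                queue.append(v)
    if seen != cds:
        return False
    # Check 2: every non-CDS node must have an edge to at least one CDS node.
    return all(adj[v] & cds for v in adj if v not in cds)
```

In a discrete-event simulation, this check is run at each sampling instant; the CDS from the previous instant is reused as long as `cds_exists` remains true.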
3. BENCHMARKING ALGORITHM FOR MAXIMUM STABILITY CDS
In this section, we describe a benchmarking algorithm [9] proposed to determine a sequence of
longest-living connected dominating sets for MANETs such that the number of transitions is the
global minimum and the average lifetime of the CDS is the maximum. We refer to the algorithm
as the Maximum Stable CDS algorithm; it works on the basis of graph intersections and assumes
the availability of information about topology changes for the entire duration of the network
session. The algorithm introduces the notion of a mobile graph G(t, ...., t+k) = G(t) ∩ G(t+1)
∩ G(t+2) ∩ ... ∩ G(t+k), wherein G(t), G(t+1), G(t+2), ..., G(t+k) are static graphs (snapshots)
of the network at time instants t, t+1, t+2, ..., t+k respectively. A mobile graph G(t, ...., t+k) is thus
a graph that essentially captures the vertices and edges that exist in the network for all the time
instants t, t+1, t+2, ..., t+k. The Maximum Stable CDS algorithm works as follows: Whenever a
CDS is required at time instant t, we determine a mobile graph G(t, ...., t+k) such that G(t, ...., t+k)
is connected and G(t, ..., t+k+1) is not connected. We determine a CDS on such a longest-living
connected mobile graph G(t, ...., t+k) and repeat the above procedure for the duration of the
network session to obtain a sequence of longest-living CDSs such that the number of transitions
needed to change from one CDS to another is the optimum (minimum). We say the optimum
number of transitions incurred is the global minimum because the algorithm assumes the
knowledge of the entire topology changes and always chooses the longest-living mobile graph
since the time instant starting from which a CDS is needed. Later, we also prove that the number
of transitions accomplished by any other CDS construction algorithm cannot be below the value
obtained for the Maximum Stable CDS algorithm. Accordingly, the average of the lifetimes of the
longest-living connected dominating sets in the sequence determined by the Maximum Stable
CDS algorithm for the duration of the network session is the global maximum and can serve as a
benchmark to compare the average lifetime incurred with any other sequence of connected
dominating sets (like the degree-based CDSs) under identical conditions.
As seen in the pseudo code for the Maximum Stable CDS algorithm (Figure 3), it does not matter
what CDS construction heuristic we use to determine a CDS in each of the longest-living mobile
graphs G(i, ..., j), because the CDS would not exist in the mobile graph G(i, ..., j+1) as the latter
would not be a connected graph. In this research, we determine the connected dominating sets in
the longest-living mobile graph G(i, ..., j) based on the degree of the vertices in the mobile graph
G(i, ..., j). The average lifetime of the Maximum Stable CDS would not be affected because of
this; however, the size of the Maximum Stable CDS is likely to be affected. As degree-based
CDSs are likely to incur a lower CDS Node Size, it would be only appropriate to determine the
connected dominating sets in each of the longest-living mobile graphs G(i, ..., j) using the degree
of the vertices in the mobile graph so that the Maximum Stable CDS would incur a lower CDS
Node Size and at the same time incur the maximum lifetime. The time complexity of the
Maximum Stable CDS algorithm depends on the time complexity of the underlying heuristic used
to determine the CDSs in each of the mobile graphs. As one can see in the pseudo code (Figure
3), there could be at most T mobile graphs (where T is the number of static graph snapshots, i.e., the number of graph samples) on which a CDS heuristic needs to be run; if we use a Θ(V^2) heuristic to determine degree-based CDSs, then the overall time complexity of the Maximum Stable CDS algorithm is O(V^2 T).
--------------------------------------------------------------------------------------------------------------------
Input: Static graphs G1, G2, ..., GT, where T is the duration of the network session
Output: Maximum Stable CDS
Auxiliary Variables: i, j
Initialization: Maximum Stable CDS = ϕ; i = 1; j = 1
Begin Maximum Stable CDS Construction
while (i ≤ T) do
while G(i, ..., j) is connected AND j ≤ T do
j ← j + 1
end while
if (i < j) then
j ← j - 1
Maximum Stable CDS = Maximum Stable CDS ∪ {CDS in G(i, ..., j)}
i ← j + 1
else
i ← i + 1
end if
j ← i
end while
return Maximum Stable CDS
End Maximum Stable CDS Construction
--------------------------------------------------------------------------------------------------------------------
Figure 3. Pseudo Code for the Benchmarking Algorithm to Determine Maximum Stable CDS
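A minimal Python sketch of the benchmarking loop in Figure 3 follows; it uses 0-based snapshot indices and returns the spans (i, j) of the longest-living connected mobile graphs, on each of which any CDS heuristic (e.g., the degree-based one) could then be run. The adjacency-set representation and all names are our own illustrative choices.

```python
from collections import deque

def intersect(g1, g2):
    """Edge-wise intersection of two static graphs (adjacency dicts of sets)."""
    return {v: g1.get(v, set()) & g2.get(v, set()) for v in g1}

def connected(g):
    """BFS connectivity check on an adjacency dict."""
    start = next(iter(g))
    seen, queue = {start}, deque([start])
    while queue:
        for v in g[queue.popleft()] - seen:
            seen.add(v)
            queue.append(v)
    return seen == set(g)

def stable_spans(snapshots):
    """Greedily grow each mobile graph G(i, ..., j) while it stays
    connected, record the span, and restart at j + 1 (sketch of Figure 3)."""
    spans = []
    i, T = 0, len(snapshots)
    while i < T:
        if not connected(snapshots[i]):
            i += 1                 # no CDS exists at this instant; move on
            continue
        mobile, j = snapshots[i], i
        while j + 1 < T:
            candidate = intersect(mobile, snapshots[j + 1])
            if not connected(candidate):
                break
            mobile, j = candidate, j + 1
        spans.append((i, j))       # run a CDS heuristic on this mobile graph
        i = j + 1
    return spans
```

The number of spans returned, minus one, is the global minimum number of CDS transitions for the given sequence of snapshots.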
3.1. Proof of Correctness
We now present an informal proof of correctness of the Maximum Stable CDS algorithm. For a
more formal proof, the interested reader is referred to [9]. To prove that the Maximum Stable
CDS algorithm finds a sequence of long-living CDSs that incur the minimum number of
transitions (say m), assume the contrary - i.e., there exists a hypothetical algorithm that
determines a sequence of CDSs that undergo n transitions such that n < m. We will now explore
whether this is possible. For n < m to exist, there has to be an epoch of time instants [p, ..., s] that
is a superset of an epoch of time instants [q, .., r] such that p < q < r < s and that the hypothetical
algorithm was able to find a CDS that existed during time instants [p, ..., s], but the Maximum
Stable CDS algorithm can only find a CDS that existed during time instants [q, ..., r] and had to
undergo a transition at time instant r + 1. This implies the Maximum Stable CDS algorithm could not
find a connected mobile graph G(q, ..., r+1) and could only find a connected mobile graph G(q,
..., r). This means there existed no connected mobile graph from time instants [q, ..., s] and hence
there is no connected mobile graph from time instants [p, ..., s]. If that is the case, it would not be
possible to find a CDS that exists from time instants [p, ..., s]; thus, the sequence of connected
dominating sets determined by the hypothetical algorithm has to undergo at least as many
transitions as those determined by the Maximum Stable CDS algorithm, which is a contradiction
to our initial assumption that n < m. Hence, the number of transitions (m) incurred by the
sequence of long-living CDSs determined by the Maximum Stable CDS algorithm serves as a
lower bound for the number of transitions (n) incurred by any other algorithm/heuristic to
determine a sequence of CDSs.
Figure 4. Example to Illustrate the Execution of the Maximum Stable CDS Benchmarking Algorithm and a
Comparison of Maximum Stable CDS vs. Degree-based CDS
3.2. Example
Figure 4 presents an example to illustrate the execution of the Maximum Stable CDS algorithm
on a sequence of five static graphs G1, ..., G5 and a comparison of the number of transitions
(changes from one CDS to another) and CDS Node Size incurred for Maximum Stable CDS vis-
a-vis a degree-based CDS on the same sequence of static graphs. We see that there exists a
connected mobile graph G(1, 2, 3) and a connected mobile graph G(4, 5). Hence, it would suffice
to use one CDS determined on the mobile graph G(1, 2, 3) and another CDS determined on the
mobile graph G(4, 5) - resulting in only one CDS transition across the five static graphs. On the
other hand, the degree-based CDS determined on an individual static graph does not exist in the
subsequent static graph. Hence, we end up determining a new degree-based CDS on each of the
five static graphs - resulting in a total of four CDS transitions across the five static graphs. In
Figure 4, we use a degree-based heuristic to determine a CDS on each of the two mobile graphs.
We do observe a tradeoff between CDS Node Size and the number of CDS transitions. The
average CDS Node Size incurred for the Maximum Stable CDS is (3*5 + 2*7) / 5 = 5.8, whereas
the average CDS Node Size incurred for degree-based CDS is (3*4 + 2*5) / 5 = 4.4. The mobile
graphs G(1, 2, 3) and G(4, 5) have relatively fewer links than the individual static graphs. Hence,
it is not possible to cover a larger number of nodes with the inclusion of few nodes in the CDS.
As a result, the Maximum Stable CDS incurs a relatively larger CDS Node Size compared to that
of the degree-based CDS.
4. SIMULATIONS
We conducted the simulations in a discrete-event simulator implemented in Java. We assume a
network of dimensions 1000m x 1000m; the number of nodes is fixed as 100 and the transmission
range of the nodes is fixed at 250m; for these values, on average, there are
π*250*250*100/(1000*1000) ≈ 20 neighbors per node. The overall connectivity of the network
for the above conditions is observed to be more than 99.5%. Initially, the nodes are uniform-
randomly distributed throughout the network. The number of static nodes in the network is varied
with values of 0, 20, 40, 60 and 80. Since the total number of nodes in the network is 100, the
above values for the number of static nodes also represent the percentage of static nodes in the
network. A node designated to be static in the beginning of the simulation does not move at all for
the duration of the simulation; likewise, a node designated as mobile - keeps moving for the entire
simulation.
The node mobility model used is the Random Waypoint Model [10], wherein the maximum
velocity of each mobile node is vmax and it is varied with values of 5 m/s (low mobility), 25 m/s
(moderate mobility) and 50 m/s (high mobility). According to the Random Waypoint Model, each
mobile node starts from its initial location and chooses to move to an arbitrary location (located
within the network boundaries) with a velocity that is uniform-randomly selected from the range
[0, ..., vmax]; after moving to the chosen location, the node chooses another arbitrary location
(within the network) and moves to the chosen location with a new velocity that is uniform-
randomly chosen from the range [0, ..., vmax]. In other words, a node moves in a straight line from
one location to another chosen location with a particular velocity and after reaching the targeted
location, the node chooses another target location to move with a velocity that could be different
from the one used before. A mobile node moves like this for the duration of the simulation
session and generates a mobility profile for a particular combination of values for the number of
static nodes and maximum node velocity. The collection of the mobility profiles of all the nodes
is stored in a mobility profile file and is fed into the heuristic/algorithm for determining a
degree-based CDS and the Maximum Stable CDS. We generate 100 such mobility profile files for
each of the combinations of the values for the number of static nodes (0, 20, 40, 60 and 80) and
the maximum node velocity (5 m/s, 25 m/s and 50 m/s). The decision of whether a node will be
static or mobile is made prior to creating each of the mobility profile files.
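The Random Waypoint movement just described can be sketched as follows. This is a minimal illustration, not the paper's simulator: the function name and parameters (network dimensions, the 0.25 s sampling interval) are our own choices, and pause time is zero, matching the model as used here.

```python
import random

def random_waypoint_profile(num_samples, vmax, width=1000.0, height=1000.0,
                            dt=0.25, static=False, seed=None):
    """Generate one (x, y) position per sampling instant for a single node
    following the Random Waypoint model with zero pause time."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, width), rng.uniform(0, height)
    if static:
        return [(x, y)] * num_samples  # a static node never moves
    positions = [(x, y)]
    tx, ty = rng.uniform(0, width), rng.uniform(0, height)  # first target
    v = rng.uniform(0, vmax)                                # first velocity
    for _ in range(num_samples - 1):
        dx, dy = tx - x, ty - y
        dist = (dx * dx + dy * dy) ** 0.5
        step = v * dt
        if step >= dist:
            # Target reached: snap to it, then pick a new target and velocity.
            x, y = tx, ty
            tx, ty = rng.uniform(0, width), rng.uniform(0, height)
            v = rng.uniform(0, vmax)
        else:
            # Move in a straight line toward the current target.
            x += step * dx / dist
            y += step * dy / dist
        positions.append((x, y))
    return positions
```

A mobility profile file, in this sketch, would simply be the collection of such position lists, one per node, with the `static` flag fixed per node before generation.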
We run the simulations for each mobility profile file and average the results observed for the CDS
Lifetime and CDS Node Size for the degree-based CDS as well as the Maximum Stable CDS. We
start the simulations at time 0 sec (based on the initial distribution of the nodes in a particular
mobility profile file) and run the simulations for a period of 1000 seconds, sampling the network
every 0.25 seconds. We take a snapshot of the network at each of these sampling time
instants (referred to as static graphs). We use the following approach to run the simulations
for the degree-based CDS: whenever a CDS is needed at a particular time instant t, we
determine the degree scores of the nodes based on the static graph snapshot of the network at time
instant t and feed these scores to the CDS construction heuristic (described in Section 2.1). The
CDS determined at time instant t is used for the subsequent sampling time instants as long as it
exists (validated using the algorithm presented in Section 2.3). When the currently known CDS is
observed to no longer exist at a particular time instant, we repeat the above procedure. We
continue like this for the duration of the simulation and measure the lifetime and node size for the
sequence of degree-based connected dominating sets used for the particular mobility profile file.
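The refresh strategy just described can be sketched as follows. The helper names, the graph representation (a dict mapping each node to its neighbor set), and the greedy construction are all illustrative: the greedy routine is only a plausible stand-in for the Section 2.1 heuristic, whose exact rules are not reproduced in this section.

```python
def is_cds(graph, cds):
    """Check that cds is a connected dominating set of graph
    (graph: dict node -> set of neighbor nodes)."""
    nodes = set(graph)
    if not cds or not cds <= nodes:
        return False
    # Domination: every node is in the CDS or adjacent to a CDS node.
    covered = set(cds)
    for u in cds:
        covered |= graph[u]
    if covered != nodes:
        return False
    # Connectivity of the sub-graph induced by the CDS (depth-first search).
    start = next(iter(cds))
    seen, stack = {start}, [start]
    while stack:
        for w in graph[stack.pop()] & cds:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == cds

def degree_based_cds(graph):
    """Greedy sketch of a degree-based CDS heuristic: seed the CDS with the
    highest-degree node, then repeatedly add the covered node that reaches
    the most uncovered neighbors (assumes the graph is connected)."""
    nodes = set(graph)
    start = max(nodes, key=lambda u: len(graph[u]))
    cds, covered = {start}, {start} | graph[start]
    while covered != nodes:
        best = max(covered - cds, key=lambda u: len(graph[u] - covered))
        cds.add(best)
        covered |= graph[best]
    return cds

def cds_lifetimes(snapshots):
    """Refresh strategy over a sequence of static-graph snapshots: keep the
    current CDS while it remains valid, rebuild otherwise. Returns the
    lifetime (in sampling instants) of each CDS used."""
    lifetimes, cds, life = [], None, 0
    for g in snapshots:
        if cds is None or not is_cds(g, cds):
            if cds is not None:
                lifetimes.append(life)
            cds, life = degree_based_cds(g), 0
        life += 1
    lifetimes.append(life)
    return lifetimes
```

The CDS Lifetime metric then corresponds to the average of the returned lifetimes, and the CDS Node Size to the average size of the sets constructed along the way.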
We average the results for these two metrics (Figures 5-7) observed for all the mobility profile
files generated for a particular combination of values for the number of static nodes and vmax. In
the case of the Maximum Stable CDS algorithm, for a particular simulation run under a mobility
profile file, we determine a sequence of connected mobile graphs for the duration of the
simulation session and run the degree-based CDS construction heuristic to determine a sequence
of connected dominating sets. We repeat this procedure for all the mobility profile files generated
and determine the average of the lifetime and node size for the sequence of Maximum Stable
CDSs for the particular combination of values of the number of static nodes and vmax.
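The Maximum Stable CDS procedure rests on the notion of a mobile graph: the edge-intersection of a window of consecutive static-graph snapshots. If the mobile graph of a window is connected, a CDS built on it is a valid CDS at every instant of that window. A hedged sketch of this window-partitioning idea follows; the paper's actual benchmarking algorithm is not reproduced here, and the function names and the greedy window extension are our own illustrative choices.

```python
def mobile_graph(snapshots):
    """Edge-intersection graph of a window of static-graph snapshots
    (each snapshot: dict node -> set of neighbors, same node set)."""
    return {u: set.intersection(*(s[u] for s in snapshots))
            for u in snapshots[0]}

def is_connected(graph):
    """Connectivity check over all nodes of graph (depth-first search)."""
    nodes = set(graph)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for w in graph[stack.pop()] - seen:
            seen.add(w)
            stack.append(w)
    return seen == nodes

def stable_windows(snapshots):
    """Greedily extend each window of snapshots for as long as its mobile
    graph stays connected; one CDS (built on that mobile graph) then serves
    the whole window, so each window boundary is a CDS transition. This is
    an illustrative sketch of the idea, not the paper's algorithm."""
    windows, start = [], 0
    while start < len(snapshots):
        end = start + 1
        while (end < len(snapshots)
               and is_connected(mobile_graph(snapshots[start:end + 1]))):
            end += 1
        windows.append((start, end))  # CDS valid for instants [start, end)
        start = end
    return windows
```

In this sketch, the degree-based CDS construction heuristic would be run once per window, on the window's mobile graph, to obtain the sequence of connected dominating sets whose lifetimes and node sizes are averaged.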
As expected, the Maximum Stable CDS exhibited a longer CDS lifetime than the
degree-based CDS. However, for any given level of node mobility (vmax), the difference in the
CDS lifetimes between the Maximum Stable CDS and degree-based CDS significantly decreases
with increase in the percentage of static nodes in the network. For vmax = 5 m/s, the ratio of the
lifetime of the Maximum Stable CDS to that of the degree-based CDS decreases from a factor of
about 10 to 2.5 as we increase the percentage of static nodes from 0 to 80. As the value of vmax
increases, the reduction in the difference is even more prominent. For vmax = 25 m/s, the ratio of
the lifetime of the Maximum Stable CDS to that of the degree-based CDS decreases from a factor
of about 7.5 to 1.8; and for vmax = 50 m/s, the ratio of the lifetimes decreases from a factor of
about 7 to 1.4. Thus, the lifetime of degree-based CDS becomes very comparable to that of the
Maximum Stable CDS when 80% of the nodes in the network are static and only 20% are mobile.
In general, when more than half of the nodes in the network are static, the lifetime of the degree-
based CDS starts to swiftly approach the lifetime of the Maximum Stable CDS. Thus, as
predicted in our hypothesis, the presence of static nodes in the network has a very positive effect
on the stability of the degree-based CDSs, but only a marginal effect on the lifetime of the
Maximum Stable CDS.
Figure 5. Average CDS Lifetime and Average CDS Node Size: Maximum Node Velocity, vmax = 5 m/s
Figure 6. Average CDS Lifetime and Average CDS Node Size: Maximum Node Velocity, vmax = 25 m/s
Figure 7. Average CDS Lifetime and Average CDS Node Size: Maximum Node Velocity, vmax = 50 m/s
The lifetime of the degree-based CDSs increases by a factor of 5 to 13 as we increase the
percentage of static nodes from 0 to 80; the increase grows with the level of node mobility. At
vmax = 5 m/s, the lifetime of the degree-based CDSs increases by a factor of 5 as we increase the
percentage of static nodes from 0 to 80; however, at vmax = 25 m/s and 50
m/s, the lifetime of the degree-based CDSs increases by a factor of 10 to 13. At the same time, the
node size for the degree-based CDSs remains slightly higher when the percentage of static nodes is
low (0, 20 or 40) and then decreases by at most 20% when the percentage of static nodes
increases to 60 and 80. Thus, we observe the presence of static nodes in the network
to significantly increase the lifetime and even marginally reduce the node size for the degree-
based CDSs, thereby reducing the stability-tradeoff appreciably. We observe the lifetime of the
Maximum Stable CDS to increase at most by a factor of 2.3 with increase in the percentage of
static nodes. As observed for the degree-based CDSs, the increase is only about 40%
when vmax = 5 m/s and is higher for vmax = 25 m/s and 50 m/s; similarly, the node size for the
Maximum Stable CDSs decreases by about 20% with increase in the percentage of static nodes.
We observe the node size for the Maximum Stable CDS to converge towards the node size of the
degree-based CDS with increase in the percentage of the static nodes.
5. RELATED WORK
MANET simulation studies have typically used the Random Waypoint model as the mobility
model. As per this mobility model, a mobile node moving in a particular direction could pause for
a certain time after reaching a targeted location and then continue to move. The pause time is one
of the parameters of the Random Waypoint mobility model. The influence of static nodes on the
performance of a communication protocol has so far been evaluated only on the basis of this
pause time. Also, such studies have typically been conducted to analyze the performance of
unicast routing protocols (e.g., [11]) and multicast routing protocols (e.g., [12]) for MANETs.
Even in such works, the focus has been on the impact of pause time on metrics such as delay,
throughput, energy consumption, etc., and not explicitly on the lifetime of communication
topologies such as paths and trees. To the best of our knowledge, there have been no
studies that consider the influence of static nodes of any form (including pause time) on the
performance of network-wide communication topologies, such as connected dominating sets,
for MANETs. In this paper, we chose to designate certain nodes as static throughout the
simulation and let the other nodes move (rather than letting all nodes alternately move and
pause). We wanted to see whether the presence of static nodes (nodes that do not move at all) could
improve the overall stability of connected dominating sets (as seen in the simulations), so that
we could devise a network design strategy wherein, even though certain nodes in a network are
bound to be mobile all the time, one may deploy a certain percentage of static nodes that could
significantly increase the stability of network-wide communication topologies such as connected
dominating sets.
6. CONCLUSIONS
The simulation results confirm our hypothesis: We observe that for a given level of node
mobility, the lifetime of the Maximum Stable CDS does not increase substantially with increase
in the percentage of static nodes; on the other hand, the lifetime of the degree-based CDS
increases significantly (and approaches the lifetime of the Maximum Stable CDS) with increase
in the percentage of static nodes. As we increase vmax from 5 m/s to 50 m/s, the average lifetime
incurred for the Maximum Stable CDS increases by a factor of 1.4-2.3 as we increase the
percentage of static nodes from 0 to 80. This could be attributed to the fact that the mobility of
even a certain percentage of the nodes is sufficient to disrupt the connectivity of the mobile graphs
spanning several time instants. On the other hand, as we increase vmax from 5 m/s to 50 m/s, the
lifetime of the degree-based CDS increases substantially (by factors as large as 13) with increase
in the percentage of the static nodes from 0 to 80. Thus, even if a certain fraction of nodes move
faster, the presence of a significant fraction of static nodes helps to boost the lifetime of the
degree-based CDS. In this paper, the degree-based CDS heuristic did not take into account the
presence of static nodes while considering nodes for inclusion in the CDS. With the results
observed in this paper, we anticipate that if the heuristic is adapted to take into account both the
node velocity (or some form of node mobility) and node degree while forming a CDS, the
stability of such CDSs could be significantly higher than that incurred with CDSs determined
using a degree-based heuristic without any appreciable increase in the CDS node size. We thus
anticipate the results of this paper to open new avenues of research for improving the stability of
degree-based heuristics for MANETs.
REFERENCES
[1] C. Siva Ram Murthy and B. S. Manoj, Ad Hoc Wireless Networks: Architectures and Protocols,
Prentice Hall, 1st edition, March 2012.
[2] D. B. Johnson and D. A. Maltz, "Dynamic Source Routing in Ad Hoc Wireless Networks," Mobile
Computing - The Kluwer International Series in Engineering and Computer Science, vol. 353, pp.
153-181, 1996.
[3] C. E. Perkins and E. M. Royer, "Ad Hoc On-demand Distance Vector Routing," Proceedings of the
2nd IEEE Workshop on Mobile Computing Systems and Applications, pp. 90-100, New Orleans, LA,
USA, Feb 25-26, 1999.
[4] C. W. Wu and Y. C. Tay, "AMRIS: A Multicast Protocol for Ad hoc Wireless Networks,"
Proceedings of the IEEE Military Communications Conference, vol. 1, pp. 25-29, Oct 31-Nov 3,
1999.
[5] A. B. Mnaouer, L. Chen, C. H. Foh and J. W. Tantra, "OPHMR: An Optimized Polymorphic Hybrid
Multicast Routing Protocol for MANET," IEEE Transactions on Mobile Computing, vol. 6, no. 5, pp.
551-562, May 2007.
[6] F. Dai and J. Wu, "Performance Analysis of Broadcast Protocols in Ad Hoc Networks Based on Self-
Pruning," IEEE Transactions on Parallel and Distributed Systems, vol. 15, no. 11, pp. 1-13,
November 2004.
[7] S. Saha, S. R. Hussain and A. K. M. Ashikur Rahman, "RBP: Reliable Broadcast Protocol in Large
Scale Mobile Ad Hoc Networks," Proceedings of the 24th IEEE International Conference on
Advanced Information Networking and Applications, pp. 526-532, Perth, WA, USA, April 20-23,
2010.
[8] T. H. Cormen, C. E. Leiserson, R. L. Rivest and C. Stein, Introduction to Algorithms, 3rd ed., MIT
Press, July 2009.
[9] N. Meghanathan and A. Farago, "On the Stability of Paths, Steiner Trees and Connected Dominating
Sets in Mobile Ad Hoc Networks," Ad Hoc Networks, vol. 6, no. 5, pp. 744-769, July 2008.
[10] C. Bettstetter, H. Hartenstein and X. Perez-Costa, "Stochastic Properties of the Random Waypoint
Mobility Model," Wireless Networks, vol. 10, no. 5, pp. 555-567, September 2004.
[11] S. R. Das, C. E. Perkins, E. M. Royer, "Performance Comparison of Two On-demand Routing
Protocols for Ad hoc Networks," Proceedings of the IEEE International Conference on Computers
and Communications, vol. 1, pp. 3-12, Tel Aviv, Israel, March 26-30, 2000.
[12] K. Kavitha and K. Selvakumar, "Analyzing Multicast Routing Protocols with Different Mobility
Models," International Journal of Emerging Technology and Advanced Engineering, vol. 3, no. 1, pp.
33-41, January 2013.