In the past, ad-hoc networks were used in limited settings that require secure group communication without Internet access, such as military operations or emergency response. Today, however, ad-hoc networks are widely used in a variety of applications such as group chat, smart applications, and research testbeds. An ad-hoc network is essentially a group-based network operating without an access point, so a group key approach is commonly used to prevent information leakage. When adopting a group key approach, we must consider which group key management method best suits the architecture, because the cost and frequency of the rekeying operation remain an unresolved issue. In this paper, we analyze existing group key management solutions for ad-hoc networks and propose a new approach that reduces the frequency of the rekeying operation.
Congestion Control through Load Balancing Technique for Mobile Networks: A Cl... (IDES Editor)
An Optimal Routing Path (ORP) scheme for mobile cellular networks is proposed in this paper, introducing a cluster-based approach. An improved dynamic selection procedure is used to elect the cluster head, which alone is responsible for computing the least congested path. Delay is therefore reduced, along with a significant reduction in the number of backtrackings.
A SECURE DIGITAL SIGNATURE SCHEME WITH FAULT TOLERANCE BASED ON THE IMPROVED ... (csandit)
Fault tolerance and data security are two important issues in modern communication systems. In this paper, we propose a secure and efficient digital signature scheme with fault tolerance based on an improved RSA system. The proposed scheme uses three prime numbers in the RSA cryptosystem and resists several attacks possible on RSA. By using the Chinese Remainder Theorem (CRT), the proposed scheme speeds up RSA decryption while also providing high security.
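For context, CRT-accelerated RSA decryption replaces one full-size modular exponentiation with two half-size ones. The sketch below shows the textbook two-prime case with toy parameters; the paper's three-prime variant is not specified in the abstract, so this is only an illustration of the CRT speed-up, not the proposed scheme.

```python
# Illustrative sketch of CRT-accelerated RSA decryption (textbook two-prime
# case; the paper's three-prime variant is not described in the abstract).
def rsa_decrypt_crt(c, p, q, d):
    """Decrypt ciphertext c with private exponent d and primes p, q."""
    dp, dq = d % (p - 1), d % (q - 1)    # reduced exponents
    q_inv = pow(q, -1, p)                # q^{-1} mod p
    mp = pow(c, dp, p)                   # half-size exponentiation mod p
    mq = pow(c, dq, q)                   # half-size exponentiation mod q
    h = (q_inv * (mp - mq)) % p          # Garner's recombination
    return mq + h * q

# Toy parameters (far too small for real use):
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)
c = pow(42, e, n)
print(rsa_decrypt_crt(c, p, q, d))  # 42
```

The two exponentiations work on operands half the size of n, which is where the decryption-side speed-up comes from.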
Graphical Visualization of MAC Traces for Wireless Ad-hoc Networks Simulated ... (idescitation)
Many network simulators (e.g., ns-2) are already used to perform wired and wireless network simulations. However, with the graphical visualization support currently built into ns-2, it is difficult to understand node status, packet status, and MAC-level events, particularly for ad-hoc networks. In this paper, we extend the visualization support in ns-2 to help the wireless networking research community analyze MAC-level events efficiently. In particular, we have developed two types of visualization: temporal and spatial. Temporal visualization helps analyze the success or failure of a packet over time, while spatial visualization helps in understanding effects due to node proximity. The trace is made highly configurable in terms of attributes such as specific nodes and time duration.
Load balancing in public cloud combining the concepts of data mining and netw... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in engineering and technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields.
Ripple Algorithm to Evaluate the Importance of Network Nodes (rahulmonikasharma)
In this paper, a ripple algorithm to evaluate the importance of network nodes is proposed. Its principle is based on the direct influence of adjacent nodes, which in turn affect farther nodes indirectly through closer ones, much like ripples on water. We then define two criteria, the discrimination of node importance and the accuracy of key-node selection, to verify its efficiency: greater discrimination and higher accuracy mean a more effective algorithm. Finally, we performed experiments on the ARPA network to compare the efficiency of different algorithms, including closeness centrality, node deletion, node contraction, the algorithm of Zhou Xuan et al., and the ripple method. Results show that the ripple algorithm outperforms the other measures in both the discrimination of node importance and the accuracy of key-node selection.
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDM O... (ijgca)
The ever-increasing status of the cloud computing hypothesis and the budding concept of federated cloud computing have enthused research efforts towards intellectual cloud service selection aimed at developing techniques for enabling cloud users to gain maximum benefit from cloud computing by selecting services which provide optimal performance at the lowest possible cost. Cloud computing is a novel paradigm for the provision of computing infrastructure, which aims to shift the location of the computing infrastructure to the network in order to reduce the maintenance costs of hardware and software resources. Cloud computing systems vitally provide access to large pools of resources. Resources provided by cloud computing systems hide a great deal of services from the user through virtualization. In this paper, the cloud data center is modelled as a queuing system with single task arrivals and a task request buffer of infinite capacity.
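The M/G/1 abstraction named in the title has a closed-form mean waiting time given by the Pollaczek-Khinchine formula. The sketch below evaluates it with hypothetical arrival and service parameters, not values from the paper.

```python
# Mean waiting time in an M/G/1 queue via the Pollaczek-Khinchine formula:
# W = lam * E[S^2] / (2 * (1 - rho)),  with utilization rho = lam * E[S].
def mg1_mean_wait(lam, es, es2):
    """lam: arrival rate, es: mean service time, es2: 2nd moment of service time."""
    rho = lam * es
    assert rho < 1, "queue is unstable when rho >= 1"
    return lam * es2 / (2 * (1 - rho))

# Example: exponential service with mean 0.5 s (so E[S^2] = 2 * 0.5**2)
lam, es = 1.2, 0.5
w = mg1_mean_wait(lam, es, 2 * es**2)
print(round(w, 3))  # 0.75 (seconds of mean waiting)
```

Because only the first two moments of the service distribution enter, the same formula applies to any general service-time distribution plugged into the model.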
Throughput and Handover Latency Evaluation for Multicast Proxy Mobile IPv6 (journalBEEI)
The objective of this paper is to present a performance analysis of a new, enhanced mobile multicast network mobility management scheme. The initial network mobility management scheme, Proxy Mobile IPv6 (PMIPv6), is based on unicast support only. This paper enables multicast support in network mobility management, naming the result MPMIPv6. The enhancement also provides better network performance through new context transfer and fast reroute operations. The paper briefly surveys other current mobile multicast schemes as well. The new scheme is evaluated using mathematical analysis and the NS-3.19 simulator. Theoretically, the scheme reduces service recovery time, total signalling cost, handover latency, and packet loss for multicast communication; in this paper, the parameters analysed are throughput and handover latency. Both mathematical and simulation results exhibit better network performance for the multicast environment compared to the standard benchmark scheme.
A study of localized algorithm for self organized wireless sensor network and... (eSAT Publishing House)
IMPROVING SCHEDULING OF DATA TRANSMISSION IN TDMA SYSTEMS (csandit)
In an era where communication plays a central role in modern societies, designing efficient algorithms for data transmission is of the utmost importance. TDMA is a technology used in many communication systems, such as satellites and cell phones. To transmit data in such systems, we need to cluster it into packets. To achieve faster transmission, we are allowed to preempt the transmission of any packet and resume it at a later time; each preemption, however, incurs a delay to set up the next transmission. In this paper we propose an algorithm, which we call MGA, that yields improved transmission scheduling. We have proven an approximation ratio for MGA and run experiments establishing that it performs even better in practice. To conclude that MGA is a very helpful tool for constructing improved schedules for packet routing using preemption with a setup cost, we compare its results to two other efficient algorithms designed by researchers in the past.
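The abstract does not specify MGA itself, but the trade-off it optimizes, preemption versus setup delay, is easy to make concrete. The sketch below is only a generic accounting of schedule length under preemption with a setup cost, using made-up segment data, not the MGA algorithm.

```python
# Accounting for preemption setup cost in a TDMA-style schedule
# (generic illustration; MGA itself is not described in the abstract).
def schedule_length(pieces, setup):
    """pieces: ordered list of (packet_id, duration) transmission segments.
    Each time the transmitter switches to a different packet it pays `setup`."""
    total, prev = 0.0, None
    for pkt, dur in pieces:
        if pkt != prev:          # switching packets costs a setup delay
            total += setup
        total += dur
        prev = pkt
    return total

# Packet B preempts A once: A(2) -> B(3) -> A(1), setup = 0.5 per switch
print(schedule_length([("A", 2), ("B", 3), ("A", 1)], 0.5))  # 7.5
```

A scheduler like MGA must decide when the gain from preempting (e.g., letting an urgent packet through) outweighs the extra setup delays it introduces.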
The peer-reviewed International Journal of Engineering Inventions (IJEI) was started with a mission to encourage contributions to research in science and technology, and to encourage and motivate researchers in challenging areas of the sciences and technology.
Enforcing end-to-end proportional fairness with bounded buffer overflow proba... (ijwmn)
In this paper, we present a distributed flow-based access scheme for slotted-time protocols that provides proportional fairness in ad-hoc wireless networks under constraints on the buffer overflow probabilities at each node. The proposed scheme requires local information exchange at the link layer and end-to-end information exchange at the transport layer, and is cast as a nonlinear program. A medium access control protocol is said to be proportionally fair with respect to individual end-to-end flows in a network if the product of the end-to-end flow rates is maximized. A key contribution of this work lies in the construction of a distributed dual approach that comes with low computational overhead. We discuss the convergence properties of the proposed scheme and present simulation results to support our conclusions.
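Maximizing the product of flow rates, the fairness criterion above, is equivalent to maximizing the sum of their logarithms; on a single bottleneck link this optimum is simply an equal split of capacity. A minimal sketch with an assumed capacity:

```python
# Proportional fairness maximizes sum(log(x_i)); for n flows sharing one
# link of capacity C, the optimum is the equal split x_i = C / n.
import math

def pf_equal_split(capacity, n):
    """Proportionally fair rates for n flows on one bottleneck link."""
    return [capacity / n] * n

rates = pf_equal_split(12.0, 3)
print(rates)                                   # [4.0, 4.0, 4.0]
# Any other feasible split has a smaller product of rates:
assert math.prod(rates) >= math.prod([3.0, 4.0, 5.0])
```

With multiple links and routes the problem becomes the nonlinear program mentioned in the abstract, which is why a distributed dual decomposition is needed rather than a closed-form split.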
STUDY OF TASK SCHEDULING STRATEGY BASED ON TRUSTWORTHINESS (ijdpsjournal)
MapReduce is a distributed computing model used in cloud computing to process massive data; it simplifies the writing of distributed parallel programs. Under the fault-tolerance mechanism of the MapReduce programming model, tasks may be allocated to nodes with low reliability, causing tasks to be re-executed and wasting time and resources. This paper proposes a reliability-aware task scheduling strategy with a failure recovery mechanism, evaluates the trustworthiness of resource nodes in the cloud environment, and builds a trustworthiness model. Using the CloudSim simulation platform, the stability of the task scheduling algorithm and scheduling model is verified.
International Journal of Engineering Research and Applications (IJERA) is an open-access, online, peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology and Science, Power Electronics, Electronics and Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing and Low-Power VLSI Design, etc.
OpenFlow is one of the most commonly used protocols for communication between the controller and the forwarding element in a software-defined network (SDN). A model based on M/M/1 queues is proposed in [1] to capture the communication between the forwarding element and the controller. Although the model provides useful insight, it is accurate only when the probability of a new flow arriving is small. Moreover, it is not straightforward to extend the model in [1] to more than one forwarding element in the data plane. In this work we propose a model that addresses both challenges. The model is based on the Jackson assumption, with corrections tailored to the OpenFlow-based SDN network. Performance analysis using the proposed model indicates that it remains accurate even when the probability of a new flow is quite large. Further, we show by a toy example that the model can be extended to more than one node in the data plane.
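For reference, the baseline M/M/1 model used in [1] admits closed-form expressions for utilization, queue length, and delay. The sketch below evaluates them with hypothetical controller load figures, not measurements from the paper.

```python
# Closed-form M/M/1 metrics: utilization rho, mean number in system L,
# and mean sojourn time W (consistent with Little's law, L = lam * W).
def mm1_metrics(lam, mu):
    """lam: packet-in arrival rate at the controller, mu: its service rate."""
    rho = lam / mu
    assert rho < 1, "controller queue unstable for rho >= 1"
    L = rho / (1 - rho)          # mean number of requests in system
    W = 1 / (mu - lam)           # mean time a request spends in system
    return rho, L, W

rho, L, W = mm1_metrics(lam=800.0, mu=1000.0)
print(round(rho, 2), round(L, 2), round(W * 1000, 2))  # 0.8 4.0 5.0 (ms)
```

Extending this to several forwarding elements feeding one controller is what motivates the Jackson-network treatment in the work above.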
The International Journal of Engineering and Science (The IJES) (theijes)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
A FLOATING POINT DIVISION UNIT BASED ON TAYLOR-SERIES EXPANSION ALGORITHM AND... (csandit)
Floating-point division, even though an infrequent operation in the traditional sense, is indispensable in a range of non-traditional applications such as K-Means clustering and QR decomposition, to name a few. In such applications, hardware support for floating-point division would boost the performance of the entire system. In this paper, we present a novel architecture for a floating-point division unit based on the Taylor-series expansion algorithm. We show that the Iterative Logarithmic Multiplier is very well suited for use as part of this architecture. We propose an implementation of the powering unit that can calculate an odd power and an even power of a number simultaneously, while incurring little hardware overhead compared to the Iterative Logarithmic Multiplier.
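The Taylor-series approach to division writes the divisor as b = y0(1 + e) for a coarse seed y0 and small e, so that 1/b = (1/y0)(1 - e + e² - e³ + ...). A software sketch of this idea follows; the seed choice and term count here are illustrative assumptions, not the paper's hardware design.

```python
# Division a/b via a truncated Taylor expansion of the reciprocal.
# Write b = y0 * (1 + e) with y0 a power of two near b, so |e| is small;
# then 1/b = (1/y0) * (1 - e + e^2 - e^3 + ...).
import math

def taylor_divide(a, b, terms=8):
    y0 = 2.0 ** round(math.log2(b))      # crude seed: nearest power of two
    e = b / y0 - 1.0                     # small relative error of the seed
    recip = sum((-e) ** k for k in range(terms)) / y0
    return a * recip

q = taylor_divide(355.0, 113.0)
print(abs(q - 355.0 / 113.0) < 1e-3)  # True
```

In hardware, the alternating powers of e are exactly what the paper's odd/even powering unit computes, with the multiplications handled by the Iterative Logarithmic Multiplier.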
CORRELATION OF EIGENVECTOR CENTRALITY TO OTHER CENTRALITY MEASURES: RANDOM, S... (csandit)
In this paper, we thoroughly investigate the correlation of eigenvector centrality with five other centrality measures: degree centrality, betweenness centrality, clustering coefficient centrality, closeness centrality, and farness centrality, across various types of network (random, small-world, and real-world). For each network, we compute all six centrality measures, from which correlation coefficients are determined. Our analysis suggests that degree centrality and eigenvector centrality are highly correlated, regardless of the type of network. Furthermore, eigenvector centrality also correlates highly with betweenness on random and real-world networks, but inconsistently on small-world networks, probably owing to their power-law distribution. Finally, eigenvector centrality is shown to be distinct from clustering coefficient centrality, closeness centrality, and farness centrality in all tested cases. The findings in this paper could lead to further correlation analysis on multiple centrality measures in the near future.
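The degree/eigenvector comparison above is easy to reproduce on a toy graph: compute degrees, obtain the leading eigenvector by power iteration, and take the Pearson correlation. The five-node adjacency matrix below is an arbitrary example, not data from the paper.

```python
# Degree vs. eigenvector centrality on a small undirected graph, and
# their Pearson correlation (the graph is an arbitrary toy example).
import numpy as np

A = np.array([[0, 1, 1, 1, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [1, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

degree = A.sum(axis=1)            # degree centrality (unnormalized)

x = np.ones(len(A))               # power iteration for the leading
for _ in range(200):              # eigenvector of A
    x = A @ x
    x /= np.linalg.norm(x)

r = np.corrcoef(degree, x)[0, 1]  # Pearson correlation coefficient
print(round(float(r), 3))         # close to 1 on this graph
```

On a connected graph the leading eigenvector is strictly positive (Perron-Frobenius), so this correlation is always well defined.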
Fast Data Collection with Interference and Life Time in Tree Based Wireless S... (IJMER)
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
Mean Object Size Considering Average Waiting Latency in M/BP/1 System (IJCNCJournal)
This paper deals with the web object size, which affects service time in multiple-access environments. The M/BP/1 model can be considered because packet arrivals and web service follow Poisson and Bounded Pareto (BP) distributions, respectively. We find the mean object size for which the average waiting latency of a deterministic model equals the mean queueing delay of the M/BP/1 model. Performance evaluation shows that the mean web object size is affected by the file size bounds and the shape parameter of the BP distribution; the impact of link capacity, however, is not significant. When the system load is low, the web object size converges to half the maximum segment size (MSS). Our results can be applied to find the mean web object size in economic web service design.
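The mean queueing delay of such an M/G/1-type system follows from the first two moments of the Bounded Pareto service distribution plugged into the Pollaczek-Khinchine formula. The bounds, shape parameter, and arrival rate below are illustrative assumptions, not the paper's values.

```python
# Moments of a Bounded Pareto(L, H, alpha) variable and the resulting
# M/BP/1 mean queueing delay via the Pollaczek-Khinchine formula.
def bp_moment(L, H, alpha, k):
    """k-th moment of Bounded Pareto with bounds L < H, shape alpha != k."""
    c = alpha * L**alpha / (1 - (L / H) ** alpha)   # normalizing constant
    return c * (L ** (k - alpha) - H ** (k - alpha)) / (alpha - k)

def mbp1_mean_delay(lam, L, H, alpha):
    es, es2 = bp_moment(L, H, alpha, 1), bp_moment(L, H, alpha, 2)
    rho = lam * es
    assert rho < 1, "unstable for rho >= 1"
    return lam * es2 / (2 * (1 - rho))              # Pollaczek-Khinchine

d = mbp1_mean_delay(lam=0.1, L=1.0, H=100.0, alpha=1.5)
print(round(d, 3))
```

The heavy upper tail of the BP distribution inflates the second moment, which is why the file-size bound H has such a strong effect on the delay.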
A study of localized algorithm for self organized wireless sensor network and... (eSAT Journals)
Abstract: Communication between two nodes is a big issue nowadays, and wireless networks play a big role in achieving it. Wireless sensor networks can be composed of a massive number of sensor nodes with limited energy, memory, and computation power, where the sensors' lifetime depends on their energy. Like an autonomous system in a LAN, a cluster can be defined for a wireless sensor network, where the cluster head plays the prime role in energy conservation. Thus, optimizing the cluster head within a cluster, along with the number of nodes, is a significant research issue. Here I have theoretically analyzed an advanced optimization algorithm for sensor network clustering. I have tried to test the proposed method as a clustering algorithm and compared it with another recent sensor network clustering algorithm, the Algorithm for Cluster Establishment (ACE). Index Terms: ACE; Data sets; Localized algorithm; Migration; Spawning threshold function
A New Security Level for Elliptic Curve Cryptosystem Using Cellular Automata ... (Editor IJCATR)
Elliptic curve cryptography (ECC) is an effective approach to protecting the privacy and security of information. Encryption alone provides only one level of security during transmission over the channel, so there is a need for stronger encryption that is very hard to break. To achieve better results and improve security, information has to pass through several levels of encryption. The aim of this paper is to provide two levels of security: the first level encrypts the plaintext with an ECC-based technique using a compressed block as the security key, and the second level applies a scrambling method with compression using 2D cellular automata rules. In particular, we propose an efficient ECC-based encryption algorithm using cellular automata, termed the Elliptic Curve Cryptosystem based on Cellular Automata (ECCCA). This paper presents the implementation of ECCCA for communication over an insecure channel. Results are provided to show the encryption performance of the proposed method.
EEIT2-F: energy-efficient aware IT2-fuzzy based clustering protocol in wirel...IJECEIAES
Improving the network lifetime is still a vital challenge because most wireless sensor networks (WSNs) run in an unreached environment and offer almost impossible human access and tracking. Clustering is one of the most effective methods for ensuring that the relevant device process takes place to improve network scalability, decrease energy consumption and maintain an extended network lifetime. Many researches have been developed on the numerous effective clustering algorithms to address this problem. Such algorithms almost dominate on the cluster head (CH) selection and cluster formation; using the intelligent type1 fuzzy-logic (T1-FL) scheme. In this paper, we suggest an interval type2 FL (IT2-FL) methodology that assumes uncertain levels of a decision to be more efficient than the T1-FL model. It is the so-called energy-efficient interval type2 fuzzy (EEIT2-F) low energy adaptive clustering hierarchical (LEACH) protocol. The IT2-FL system depends on three inputs of the residual energy of each node, the node distance from the base station (sink node), and the centrality of each node. Accordingly, the simulation results show that the suggested clustering protocol outperforms the other existing proposals in terms of energy consumption and network lifetime.
International Journal of Computational Engineering Research(IJCER) is an intentional online Journal in English monthly publishing journal. This Journal publish original research work that contributes significantly to further the scientific knowledge in engineering and Technology.
The hierarchical routing of data in WSNs is a specific class of routing protocols it encompasses solutions that take a restructuring of the physical network in a logical hierarchy system for the optimization of the consum-ption of energy. Several hierarchical routing solutions proposed, namely: the protocol LEACH (Low Energy Adaptive Clustering Hierarchy) consist of dividing the network in distributed clusters at one pop in order of faster data delivery and PEGASIS protocol (Power-Efficient Gathering in Sensor Information Systems) which uses the principle of constructing a chain’s sensor node. Our contribution consists of a hierarchical routing protocol, which is the minimization of the energy consumption by reducing the transmission distance of data and reducing the data delivery time. Our solution combines the two hierarchical routing approaches: chain based approach and the cluster based approach. Our approach allows for multi-hop communications, intra- and intercluster, and a collaborative aggregation of data in each Cluster, and a collaborative aggregation of data at each sensor node.
Design of a Reliable Wireless Sensor Network with Optimized Energy Efficiency...paperpublications3
Abstract: Data gathering in wireless sensor network (WSN) is a crucial field of study and it can be optimized various algorithms like clustering, aggregation, and cryptographic technique in order to reliably transfer data between sensor and sink. But these techniques do not provide an optimized data gathering wireless sensor network because of the fact that they do not leverage the advantages of various techniques. Our problem definition is to create a reliable data gathering wireless sensor network which ensures good energy efficiency and lower delay as compared to existing techniques.
Keywords: Aggregation, Clustering, Data Gathering, Cryptography, Data Compression, Run Length Encoding.
Title: Design of a Reliable Wireless Sensor Network with Optimized Energy Efficiency and Delay
Author: Neelam Ashok Meshram
ISSN 2349-7815
International Journal of Recent Research in Electrical and Electronics Engineering (IJRREEE)
Paper Publications
Stochastic Computing Correlation Utilization in Convolutional Neural Network ...TELKOMNIKA JOURNAL
In recent years, many applications have been implemented in embedded systems and mobile Internet of Things (IoT) devices that typically have constrained resources, smaller power budget, and exhibit "smartness" or intelligence. To implement computation-intensive and resource-hungry Convolutional Neural Network (CNN) in this class of devices, many research groups have developed specialized parallel accelerators using Graphical Processing Units (GPU), Field-Programmable Gate Arrays (FPGA), or Application-Specific Integrated Circuits (ASIC). An alternative computing paradigm called Stochastic Computing (SC) can implement CNN with low hardware footprint and power consumption. To enable building more efficient SC CNN, this work incorporates the CNN basic functions in SC that exploit correlation, share Random Number Generators (RNG), and is more robust to rounding error. Experimental results show our proposed solution provides significant savings in hardware footprint and increased accuracy for the SC CNN basic functions circuits compared to previous work.
Distributed Spatial Modulation based Cooperative Diversity Schemeijwmn
: In this paper, a distributed spatial modulation based cooperative diversity scheme for relay
wireless networks is proposed. Where, the space-time block code is exploited to integrate with distributed
spatial modulation. Therefore, the interested transmission scheme achieves high diversity gain. By using
Monte-Carlo simulation based on computer, we showed that our proposed transmission scheme outperforms
state-of-the-art cooperative relaying schemes in terms bit error rate (BER) performance.
Improving The Performance of Viterbi Decoder using Window System IJECEIAES
An efficient Viterbi decoder is introduced in this paper; it is called Viterbi decoder with window system. The simulation results, over Gaussian channels, are performed from rate 1/2, 1/3 and 2/3 joined to TCM encoder with memory in order of 2, 3. These results show that the proposed scheme outperforms the classical Viterbi by a gain of 1 dB. On the other hand, we propose a function called RSCPOLY2TRELLIS, for recursive systematic convolutional (RSC) encoder which creates the trellis structure of a recursive systematic convolutional encoder from the matrix “H”. Moreover, we present a comparison between the decoding algorithms of the TCM encoder like Viterbi soft and hard, and the variants of the MAP decoder known as BCJR or forward-backward algorithm which is very performant in decoding TCM, but depends on the size of the code, the memory, and the CPU requirements of the application.
FAST ALGORITHMS FOR UNSUPERVISED LEARNING IN LARGE DATA SETScsandit
The ability to mine and extract useful information automatically, from large datasets, is a
common concern for organizations (having large datasets), over the last few decades. Over the
internet, data is vastly increasing gradually and consequently the capacity to collect and store
very large data is significantly increasing.
Existing clustering algorithms are not always efficient and accurate in solving clustering
problems for large datasets.
However, the development of accurate and fast data classification algorithms for very large
scale datasets is still a challenge. In this paper, various algorithms and techniques especially,
approach using non-smooth optimization formulation of the clustering problem, are proposed
for solving the minimum sum-of-squares clustering problems in very large datasets. This
research also develops accurate and real time L2-DC algorithm based with the incremental
approach to solve the minimum
Comparing ICH-leach and Leach Descendents on Image Transfer using DCT IJECEIAES
The development and miniaturization of CMOS (microphones and cameras) in the last years have allowed the creation of WMSN (Wireless Multimedia Sensor Network). Therefore, transferring multimedia content through the network has become an important field of research. It transmits the recorded multimedia data wirelessly from a node to another to reach the Sink (base station). Thus, routing protocols make a big contribution in this process, because they participate in optimizing the node's resource usage. Since Leach protocol was designed only to minimize energy consumption of the network. The goal of this paper is to compare our protocol in tnansfering images with other Leach protocol descendants. By using the application layer, we applied the jpeg compression using the frequency domain on images before sending them to the network. In this paper, readers will find statistics concerning the lifetime of the network, the energy consumption and most importantly statistics about received images. Also, we used Castalia framework to simulate real conditions of transmission simulation results proved the efficiency of our protocol by prolonging the lifetime of the network and transmitting more images with better quality compared to other protocols.
A Good Performance OTP Encryption Image based on DCT-DWT SteganographyTELKOMNIKA JOURNAL
The security aspect is very important in data transmission. One way to secure data is with
steganography and cryptography. Surely research on this should continue to be developed to improve
security. In this paper, we proposed a combination of steganographic and cryptographic algorithms for
double protection during data transmission. The selected steganographic algorithm is the use of a
combination of DCT and DWT domain transformations. Because the Imperceptibility aspect is a very
important aspect of steganographic techniques, this aspect needs to be greatly improved. In the proposed
method of DCT transformation first, proceed with DWT transformation. From the experimental results
obtained better imperceptibility quality, compared with existing methods. To add OTP message security
applied algorithm to encrypt the message image, before it is inserted. This is evidenced by experiments
conducted on 20 grayscale images measuring 512x512 with performance tests using MSE, PSNR, and
NC. Experimental results prove that DCT-DWT-OTP generates PNSR more than 50 dB, and NC of all
images is 1.
2. 64 Computer Science & Information Technology (CS & IT)
incomplete, so the solution cannot cope with dynamic and scalable groups. In an ad-hoc network, very frequent rekeying operations will eventually cause a broadcast storm.
Figure 1. Ad-hoc Group Key Management
The second category of research focuses on reducing the frequency of rekeying [2, 4]. In particular, Kronos and Iolus adopt a time-driven method, which performs the rekeying operation periodically instead of whenever a membership change occurs [2, 4]. The authors demonstrated that the time-driven method can considerably reduce the overhead of rekeying, despite limitations such as no guarantee of forward secrecy and delays in adding new members. However, for highly dynamic groups, the cost of time-driven rekeying can still be O(n) [2]. In ad-hoc networks, an O(n) cost usually causes a "broadcast storm" because of the centralized architecture (i.e., a GC/KS must generate n messages in the worst case). For these reasons, this method cannot be applied directly to GKM in ad-hoc networks.
In this paper, we suggest combining the time-driven method with GKMPAN [3], one of the main GKM approaches for ad-hoc networks, to eliminate the O(n) problem of the time-driven method. GKMPAN is a distributed GKM method for ad-hoc networks, so it removes the bottleneck and O(n) problems of the time-driven method. We modify the rekeying process of GKMPAN to facilitate applying the time-driven model in ad-hoc environments and to solve the key exhaustion problem of GKMPAN. With this approach, we achieve the following:
• Reduce the frequency and cost of the rekeying operation while maintaining reliable security strength
• Conduct the rekeying operation without a "broadcast storm," thanks to the distributed architecture
• Solve the key exhaustion problem of GKMPAN
The remainder of this paper is organized as follows. In Section 2, we explain the basic Local Key Hierarchy (LKH) [1], GKMPAN [3], and the cost analysis of the time-driven LKH method. In Section 3, we propose the extended GKMPAN. Section 4 evaluates the extended GKMPAN, and Section 5 presents related work. Finally, Section 6 presents our conclusions and future work.
2. BACKGROUND
2.1. Local Key Hierarchy (LKH)
In the LKH method [1], a GC/KS constructs and manages a binary key tree T, illustrated in Fig. 2. The root of T corresponds to the TEK. All the remaining nodes hold KEKs (= Kn), and each leaf node is associated with a group member (= Ui). A member Ui holds the TEK and all the KEKs associated with the nodes on the path from Ki to the root. For instance, U8 holds K8, K78, K5678, and the TEK.
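This key assignment can be sketched with heap-style indexing. The following is a minimal sketch under an assumed 1-based heap layout; the function name and indices are illustrative, and the labels K8, K78, etc. follow Fig. 2:

```python
def path_keys(leaf: int, n: int) -> list:
    """Heap indices of the keys held by member U_leaf in a complete
    binary LKH tree with n leaves: its leaf key, every KEK on the
    path upward, and the TEK at the root (index 1).
    Indexing is an assumption for illustration: root = 1,
    children of node k are 2k and 2k + 1."""
    node = n - 1 + leaf          # heap index of U_leaf's leaf node
    keys = []
    while node >= 1:
        keys.append(node)
        node //= 2               # move one level up toward the root
    return keys                  # leaf key first, TEK last

# In the 8-member tree of Fig. 2, U8's four keys correspond to
# K8, K78, K5678, and the TEK:
print(path_keys(8, 8))           # -> [15, 7, 3, 1]
```

Each member thus stores exactly log n + 1 keys, which is what makes the O(log n) rekeying cost below possible.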
Figure 2. LKH Leaving Example
If U8 leaves the group, then K8, K78, K5678, and the TEK must be changed to ensure forward secrecy. We call the nodes whose keys must be changed "affected nodes." To change these keys, the GC/KS only needs to send three different rekeying messages to the group: {TEK}K1234 for the members (U1, U2, U3, and U4), {TEK}K56 for the members (U5 and U6), and {TEK}K7 for the member (U7).
If U8 joins the group, then K8, K78, K5678, and a new TEK must be generated to ensure backward secrecy. The GC constructs and sends rekey messages to the users. A rekey message contains one or more encrypted new key(s), and a user must decrypt it with the appropriate keys to obtain the new keys.
By using a binary key tree, the LKH scheme effectively reduces the cost of rekeying from O(n) to O(log n). We apply this cost-effective LKH scheme to adapt the time-driven method to ad-hoc networks and evaluate its cost.
2.1.1. Cost Evaluation of Time-Driven LKH
In Fig. 2, we count K1234, K56, and K7 to calculate the cost of the rekeying process, because each group member can securely obtain a new group key through messages encrypted/decrypted with the keys of the root nodes of the sub-trees composed of non-affected nodes: K1234, K56, and K7. Similarly, we can calculate the rekeying cost of the time-driven method with LKH by counting the root nodes of the sub-trees composed of non-affected nodes using breadth-first search (BFS). These root nodes are referred to as roots of subtrees (RS). For the time-driven case, we denote the number of members whose state changed within a period as m and the size of the group as n. For simplicity, we assume that leaving and entering events occur with the same frequency, and that m and n are powers of two, e.g., 2, 8, and 64.
Fig. 3 shows how to calculate the maximum number of rekeying messages in the time-driven method. Because many leaving and entering events occur during a cycle, the tree has multiple affected branches. If a member Ui leaves the group, log n nodes are affected, because Ui holds all the keys along the path from its leaf to the root. To obtain the maximum rekeying cost, we consider the case in which the paths of the leaving nodes overlap as little as the tree allows. In this situation, each row within height log m of the root contains at most m nodes and therefore contains no unaffected nodes, because m nodes leave the group and each
leaving node follows one path from its leaf to the root of the tree T. In other words, all the nodes in the rows within a height of log m of the root are affected by leaving or entering events.
Figure 3. Maximum number of rekeying messages in the time-driven approach
Now, the RS nodes in each row can be found by subtracting both the affected nodes and the nodes already covered by parent RS nodes:
RS = Erow − Eaffected − Ecovered_list (E: set of node elements)
Once the RS nodes in a row have been found, their child nodes are inserted into the covered list, which is used to find the RS nodes of the next row. This sequence of operations is depicted along the lower-right edge of Fig. 3. For entering events, m additional messages are needed, because each newcomer must receive its own secure KEK. Hence, the total number of rekeying messages is
Cost = (log n − log m) × m + m.
With this formula, if n = 1024 and m = 16, the number of rekeying messages exceeds 100 during a cycle, which means that a change of only about 2% of the membership will cause a "broadcast storm" in the centralized model. The general equation for any m, not necessarily a power of 2, is
Cost = (log n − ⌊log m⌋) × m + m + 2 × (2^⌊log m⌋ − m).
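As a quick check of these formulas, the general cost can be computed directly. This is a minimal sketch; the helper name `rekey_cost` is ours, and n is assumed to be a power of two as in the analysis above:

```python
import math

def rekey_cost(n: int, m: int) -> int:
    """Maximum number of rekeying messages per time-driven cycle for a
    binary LKH tree with n members and m membership changes, using the
    general formula with floor(log m) for m not a power of two."""
    f = math.floor(math.log2(m))
    return (int(math.log2(n)) - f) * m + m + 2 * ((1 << f) - m)

# The example from the text: n = 1024, m = 16 gives 112 messages (> 100).
print(rekey_cost(1024, 16))  # -> 112
```

When m is a power of two, the correction term 2 × (2^⌊log m⌋ − m) vanishes and the result coincides with the simpler formula (log n − log m) × m + m.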
2.2. GKMPAN
In GKMPAN [3], each node obtains a distinct subset of keys from a large key pool maintained by the Key Server (KS), and these keys are used as Key Encryption Keys (KEKs) for delivering the group key (TEK). This method has the advantage that the process of securely sharing KEKs between the GC and a new member can be skipped, because the keys are already shared through an offline mechanism. Moreover, this model uses a hop-by-hop approach to eliminate the bottleneck problem.
When a leaving event occurs, the GC broadcasts the list of revoked keys possessed by the leaving node, so that these revoked keys are no longer used in the group. The GC then generates and sends a new TEK encrypted with Km, the key possessed by the maximum number of remaining nodes, to ensure forward secrecy. However, some nodes cannot decrypt this broadcast message because they do not hold Km in their private key lists. To fully deliver the new TEK to its child nodes, the GC must resend the new TEK encrypted with other keys that are shared between the GC and some of the excluded nodes, based on a delivery tree.
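The choice of Km can be illustrated as follows. This is a sketch under assumed data structures: `key_sets` maps node ids to their pre-distributed KEK ids, and the function name `choose_km` is ours:

```python
from collections import Counter

def choose_km(key_sets: dict, revoked: set):
    """Return the id of Km: among the keys not held by any revoked
    node, the key possessed by the largest number of remaining nodes."""
    revoked_keys = set()
    for r in revoked:
        revoked_keys |= key_sets[r]      # keys that must not be reused
    counts = Counter()
    for nid, keys in key_sets.items():
        if nid in revoked:
            continue
        for k in keys - revoked_keys:    # only still-usable keys count
            counts[k] += 1
    return counts.most_common(1)[0][0] if counts else None

# Toy pool: node 4 is revoked, so key 'a' becomes unusable; key 'b'
# reaches all 3 remaining nodes and is chosen as Km.
pool = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"b", "d"}, 4: {"a"}}
print(choose_km(pool, {4}))  # -> b
```

Nodes that do not hold the chosen Km are exactly the ones that must be covered by the follow-up messages along the delivery tree.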
When a node receives the new TEK encrypted with Km or other KEKs, it forwards the new TEK to its child nodes along the delivery tree in the same manner as the GC. Each node is responsible only for its own child nodes, like an independent key server. Thus, the maximum overhead of each node is the number of its child nodes on the delivery tree.
We can apply the time-driven method to this approach because it eliminates the bottleneck problem found in the centralized model [5]; however, the number of available keys (KEKs) decreases as the number of removed nodes increases. GKMPAN [3] asserts that this problem can be mitigated by using a large l and a small m. However, for performance, we should choose a small l and a large m, because then each key is shared by many nodes, so a node can distribute the new TEK with a very small number of messages.
The biggest difference between GKMPAN and our method is that GKMPAN uses the pre-allocated KEKs to share the new TEK, whereas we use the old TEK from the previous rekeying stage. The advantage of our approach is that the dependency between performance and key exhaustion is eliminated. We exclude the keys held by the revoked nodes only during the rekeying stage of one cycle: when node i leaves the group, we can use the remaining keys in the key pool for that rekeying stage, except for the keys that node i held. In the next rekeying stage, however, the excluded keys can be used again.
3. OUR APPROACH (EXTENDED GKMPAN)
Table 1. Terminology
GC/KS Group Controller/Key Server
KEK Key Encryption Key
TEK Transmission Encryption Key (group key)
l Size of a key pool
Km The key is possessed by the maximum number of remaining nodes in network
m The number of keys in the possession of a group member (= remaining node)
Kb The key owned by node b
Lleave List of leaving node at current stage
Ljoin List of joining node at current stage
To solve the O(n) and bottleneck problems of the time-driven method, we propose a new rekeying strategy based on GKMPAN [3], which uses a pre-distributed set of keys. The main idea of our approach is to securely pass a new TEK encrypted with the old TEK "hop-by-hop"; {TEK}old_TEK. Before we explain the details, Table 1 describes the terminology used in the rest of the paper.
3.1. Model Description (Extended GKMPAN)
As described above, our model applies the time-driven method to a GKMPAN-based approach to complement the weak point of the time-driven method. Our model is also a hop-by-hop process, so we assume that each node can easily acquire information about the nodes within its one-hop distance. We also assume that all group members have a pre-distributed set of keys drawn from a key pool of size l, which will be used as KEKs. With the GKMPAN method, it is possible to determine which keys a member possesses from the member's id alone; consequently, a node can figure out which keys its surrounding nodes hold. Moreover, an asymmetric key mechanism is used, so a sender only needs to know the public key of a key of the receiver. In this approach, we do not need to worry about which keys overlap between the sender and the receiver, because all members are able to encrypt a message with any key in the pool (i.e., a sender knows the public keys of all the KEKs).
Figure 4. Flow Chart of a Rekeying Stage in Extended GKMPAN
Figure 5. Hop-by-hop Encryption
After completing one cycle of the time-driven method, the GC obtains both a list of revoked members and a list of newly joined members. The rekeying process then starts, and the GC securely sends the new group key (TEK), the revoked list, and the joined list to its surrounding nodes. A node receiving the rekeying message becomes a distributor and starts the process represented in Fig. 4. The legitimate nodes comprise the remaining nodes and the members in Ljoin.
As depicted in Fig. 4, the first step of the secure rekeying process is to determine which keys can be used to distribute the new group key, by excluding the keys held by the nodes in Lleave. The second step is to find Km, the key possessed by the maximum number of legitimate one-hop nodes that have not yet received the new TEK. After finding Km, the distributor sends a rekeying message encrypted with both the old TEK and Km to all nodes within one hop: {Lleave, Ljoin, TEKnew}TEKold + Km. If all neighboring nodes have received the new TEK, the rekeying stage ends; if not, the distributor repeats from the second step, finding the next Km.
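The distributor's loop can be sketched as follows. This is a minimal Python sketch of the Fig. 4 flow under our own naming; the actual multicast transmission and encryption are abstracted away:

```python
def rekey_round(neighbors, key_sets, leave_set):
    """One distributor's rekeying stage (sketch of the Fig. 4 flow).

    neighbors: legitimate one-hop nodes that still lack the new TEK.
    key_sets:  node id -> set of KEK ids the node holds (derivable from
               the id under GKMPAN-style pre-distribution).
    leave_set: nodes revoked in the current cycle (Lleave).
    Returns (rounds, uncovered): the (Km, covered-nodes) pair per
    multicast round, plus any nodes that no usable key reaches.
    """
    # Step 1: exclude every KEK held by a revoked node.
    excluded = set()
    for node in leave_set:
        excluded |= key_sets.get(node, set())

    pending = set(neighbors)
    rounds = []
    while pending:
        # Step 2: choose Km, the usable key covering the most pending nodes.
        counts = {}
        for node in pending:
            for k in key_sets[node] - excluded:
                counts[k] = counts.get(k, 0) + 1
        if not counts:
            break  # remaining nodes hold no usable key: rekeying failure
        km = max(counts, key=counts.get)
        covered = {n for n in pending if km in key_sets[n]}
        # Step 3: multicast {Lleave, Ljoin, TEKnew} under TEKold + Km
        # (message construction and sending are abstracted away here).
        rounds.append((km, covered))
        pending -= covered
    return rounds, pending
```

In the best case one round covers every neighbor (a single multicast); the `break` branch corresponds to the failure case evaluated in Section 4, where all of a node's keys were excluded by revocations.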
The role of Km is to hide the new TEK from the nodes revoked at the current stage (i.e., Lleave), because they still know the old TEK. Km is selected from the keys possessed by legitimate nodes, excluding the keys held by revoked nodes, so it hides the new TEK from the revoked nodes.
The reason for encrypting the new TEK with the old TEK is to solve the key exhaustion problem of GKMPAN. Nodes without knowledge of the old TEK are unable to retrieve the new TEK; thus, it is unnecessary to consider whether previously eliminated nodes still hold some keys from the key pool, because they do not know the old TEK. We only need to care about the nodes revoked in the current cycle (i.e., the KEKs excluded from the current rekeying stage can be used again in the next cycle).
For instance, suppose that in the rekeying stage node v uses the old TEK and Kb to share the new group key, where Kb is owned by node b, which was revoked previously. Although the revoked node b knows Kb, it cannot decrypt the new group key encrypted with both Kb and the old TEK (i.e., {TEKnew}TEKold + Kb), because it does not know the old TEK used in the current time-driven cycle. From this point of view, we do not need to delete Kb from the key pool even though node b has left the group. It means that the lifetime of the keys in the key pool is semi-permanent.
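The double encryption {TEKnew}TEKold + Kb can be illustrated with a toy stream cipher (XOR against a SHA-256 counter keystream; a real deployment would use a proper authenticated cipher, and the key values here are placeholders). A node that knows Kb but not the old TEK removes only one layer:

```python
import hashlib

def _stream(key: bytes, n: int) -> bytes:
    """Toy SHA-256 counter keystream (illustration only, not secure)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, msg: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(msg, _stream(key, len(msg))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

tek_old, k_b, tek_new = b"old-tek", b"key-of-node-b", b"new-group-key!"
# {TEKnew}TEKold + Kb: two layers, so both keys are needed to recover it.
ct = encrypt(k_b, encrypt(tek_old, tek_new))
assert decrypt(tek_old, decrypt(k_b, ct)) == tek_new
# Revoked node b knows Kb but not the old TEK: one layer still remains.
assert decrypt(k_b, ct) != tek_new
```

This is why Kb never has to leave the key pool: without the current old TEK, knowledge of Kb alone is useless to node b.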
This approach has several advantages. First, in the best case each node of the group needs to send only one multicast message to propagate the new TEK, and in the worst case each node must send one message per surrounding legitimate node (i.e., remaining nodes + Ljoin). Second, this approach enables us to select a large l and a small m, because the key exhaustion problem is already solved: by encrypting the new TEK with the old TEK, the keys owned by each group member are used only to hide the new TEK from the nodes that left during the current rekeying stage.
The diagram in Fig. 5 illustrates a simple case. In short, the GC/KS sends a new TEK encrypted with the old TEK to its surrounding nodes, which then forward the new TEK to their neighbors securely by additionally encrypting it with Km (K2 in this example). Here we do not consider other processes, such as key distribution or message verification, because these properties can be referenced from GKMPAN [3].
4. PERFORMANCE EVALUATION
We evaluated our proposed method by calculating the number of messages a node v must send to share a new group key in four different cases. In Fig. 6, n, l, and m respectively denote the number of surrounding nodes of node v, the size of the key pool, and the number of keys possessed by a node. In the evaluation we assume there are no collisions, and each node is assigned a uniformly random leaving and joining probability (0.1 to 0.9). We count all messages in the group and compute the average message count per node. Moreover, in both our approach and GKMPAN, some nodes may fail to receive the new TEK, because all the keys possessed by a node could be excluded from the current rekeying stage due to the revoked nodes. We therefore also evaluate the failure ratio with respect to n, l, and m.
In Fig. 6, the red line shows the failure ratio on the right-side y-axis, and the black line shows the number of required messages on the left-side y-axis. Panels (a) and (b) depict the number of messages and the failure ratio when the number of neighbor nodes (n) and the size of the key pool (l) are fixed. When m is small, the number of necessary messages decreases as m increases; however, beyond a certain value of m, the number of required messages starts to increase. Panels (c) and (d) depict the number of messages and the failure ratio when the number of neighbor nodes (n) and the key-set size of each node (m) are fixed. The results of (c) and (d) are similar to those of (a) and (b): as l increases up to a certain value, the number of necessary messages decreases, but it increases thereafter.
For a large m and small l, the failure ratio is small; for a small m and large l, the failure ratio is large. A high failure ratio means that legitimate nodes are less likely to hold the same keys, and this effect can be seen in the experiment above.
The efficiency of the algorithm depends on how many legitimate nodes hold the same keys. If m is sufficiently small compared to l, the number of required messages is reduced because the number of overlapping keys among legitimate nodes increases. If m is large relative to l, each node holds many keys, so more keys in the pool must be excluded on account of the revoked nodes; this reduces the probability that legitimate nodes share a usable key, which worsens efficiency.
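As a rough sanity check on this tradeoff, assume each member's m keys form an independent uniform random m-subset of the pool (a simplification of the actual pre-distribution). Then the probability that two legitimate nodes share at least one key is 1 - C(l-m, m)/C(l, m):

```python
from math import comb

def p_share(l: int, m: int) -> float:
    """Probability that two members, each holding m keys drawn uniformly
    at random from a pool of size l, share at least one key.
    (Simplified model; real pre-distributed key sets need not be
    independent uniform subsets.)"""
    return 1 - comb(l - m, m) / comb(l, m)
```

Shrinking m relative to l lowers this overlap probability, which is consistent with the observed rise in both the message count and the failure ratio in that regime.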
In the same simulation environment, with a population of 100, the central node of time-driven LKH sends an average of 72 to 75 messages, and with a population of 200 it sends close to 150 messages. That is, roughly (3/4)n messages are required to share a TEK in time-driven LKH in normal cases. LKH is affected by the total number of nodes, whereas our approach is affected only by the number of neighbor nodes, which makes it much more effective.
5. RELATED WORKS
In Ad-hoc networks, GKMs can be categorized into three subgroups: the Centralized, Decentralized, and Distributed approaches. In the Centralized approach, there is only one GC/KS, which is responsible for the generation, distribution, and renewal of the group key (TEK); it suffers from the O(n) problem and the bottleneck problem. GKMPAN [3], CKDS [12], and the solution of Kaya et al. [13] belong to this group. The Decentralized approach divides the group into several subgroups to relieve the O(n) and bottleneck problems; however, it requires several decryption and re-encryption operations on a multicast message as it passes from one subgroup to another. Iolus [4], AKMP [9], and BLADE [5] belong to this group.
One of the Decentralized approaches is BLADE [5]. Its basic idea is to divide the multicast group dynamically into clusters, each managed by a local controller that shares a local cluster key with its local members. The multicast flow is encrypted by the source with the traffic encryption key (TEK) and multicast to all the group members. The source of the group and the local controllers form a multicast GLC (Group of Local Controllers) and share beforehand a session key called KEKCCL. Each new local controller has to join this group and receives the session key KEKCCL from the source of the group, encrypted with its public key. BLADE supports mobility and ensures energy efficiency.
In the Distributed approach, all group members cooperate to generate the TEK and establish secure communication among themselves. This approach eliminates the bottleneck problem, but it generally suffers from the O(n) problem. Our approach also belongs to this category, yet avoids the O(n) problem by means of the time-driven method. Several papers propose solutions in this category [9, 10, 14, 15, 16]. Chiang et al. [16] propose a GKM protocol for MANETs based on GPS measurements and on GDH (Group Diffie-Hellman) [11]. During protocol initialization, each node in the ad hoc network floods its GPS information and its public key to all other nodes, although the authors assume that the protocol does not rely on any certification authority. Using the GPS information received from the other nodes, each group member can build the network topology. When a source wants to multicast a data flow to the group members, it computes the minimum multicast tree based on the Prüfer algorithm and then sends the message with GDH keys.
6. CONCLUSIONS
The Ubiquitous Network Society suggests a world in which many ad-hoc environments can be applied. Moreover, the number of applications relying on the ad-hoc multicast technique, such as Vehicular ad hoc networks (VANETs) or Smartphone ad hoc networks (SPANs), is increasing. However, no efficient GKM method has been suggested yet, so in this paper we propose an improved GKM scheme with the aim of enhancing the efficiency of GKM in Ad-hoc networks. The dominant factor of our scheme is the reduced frequency of the rekeying operation, whereas other schemes were more concerned with its cost. Considering the feasibility of GKM, we believe our method is more practical than others. In the future, we will devise a modified version of extended GKMPAN to apply it to wired networks as a distributed GKM.
ACKNOWLEDGEMENTS
This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. 2016-0-00078, Cloud based Security Intelligence Technology Development for the Customized Security Service Provisioning).
This research was supported by Basic Science Research Program through the National Research
Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2010-0020210).
REFERENCES
[1] C.K. Wong, M. Gouda, and S. S. Lam, Secure Group Communications Using Key Graphs, ACM
SIGCOMM (1998).
[2] S. Setia, S. Koussih, S. Jajodia, and E. Harder, Kronos: A Scalable Group Re-Keying Approach for Secure Multicast, IEEE Symposium on Security and Privacy, 2000.
[3] Sencun Zhu, Sanjeev Setia, Shouhuai Xu, Sushil Jajodia, GKMPAN: An Efficient Group Rekeying
Scheme for Secure Multicast in Ad-Hoc Networks, IEEE, 2004.
[4] Suvo Mittra, Iolus: A Framework for Scalable Secure Multicasting, ACM Sigcomm, 1997.
[5] Mohaned-Salah Bouassida, Isabelle Chrisment, and Olivier Festor, Group Key Management in
MANETs, International Journal of Network Security, Vol.6, No.1, PP.67-79, Jan. 2008.
[6] Yacine Challal, Hamida Seba, Group Key Management Protocols: A Novel Taxonomy, International
Journal of Information Technology, Vol.2, 2005.
[7] Yan Sun, Wade Trappe, and K. J. Ray Liu, A Scalable Multicast Key Management Scheme for
Heterogeneous Wireless Networks, IEEE/ACM, Vol. 12, No 4, 2004
[8] W.T. Li, C.H. Ling, M.S. Hwang, Group Rekeying in Wireless Sensor Networks, International Journal of Network Security, Vol. 16, No. 6, 2014.
[9] H. Bettahar, A. Bouabdallah, and Y. Challal, “An adaptive key management protocol for secure
multicast,” in 11th International Conference on Computer Communications and Networks ICCCN,
Florida USA, Oct. 2002.
[10] W. Diffie and M.E. Hellman,“New directions in cryptography,” IEEE Transactions on Information
Theory, vol. 22, no. 6, pp. 644-654, 1976.
[11] I. Ingemarson, D. Tang, and C. Wong, “A conference key distribution system,” IEEE Transactions on
Information Theory, pp. 714-720, Sep. 1982.
[12] M. Moharrun, R. Mukkalamala, and M. Eltoweissy, “Ckds: An efficient combinatorial key
distribution scheme for wireless Ad Hoc networks,” in IEEE International Conference on
Performance, Computing and Communications (IPCCC’04), pp. 631-636, Apr. 2004.
[13] T. Kaya, G. Lin, G. Noubir, and A. Yilmaz, “Secure multicast groups on Ad Hoc networks,” in
Proceedings of the 1st ACM Workshop on Security of Ad Hoc and Sensor Networks, pp. 94-102,
2003.
[14] S. Rahul, and Hansdah. “An Efficient Distributed Group Key Management Algorithm”. In
Proceedings Tenth International Conference on Parallel and Distributed Systems, 2004. ICPADS
2004, pages 230 -237, California.
[15] L.R. Dondeti, S. Mukherjee, and A. Samal, "Comparison of Hierarchical Key Distribution Schemes", IEEE Globecom Global Internet Symposium, 1999.
[16] T. Chiang and Y. Huang, “Group keys and the multicast security in Ad Hoc networks,” in
Proceedings of the 2003 International Conference on Parallel Processing Workshops, pp. 385, 2003.