This document discusses two approaches to peer-to-peer data mining: local algorithms and the Newscast model of computation. Local algorithms perform computations using only local communications between neighbors. The majority voting problem is presented as an example of an exact local algorithm. An approximate local algorithm for K-means clustering over a P2P network is also described. The Newscast model is then introduced as an alternative approach based on a gossip protocol that continuously rewires network connections, allowing data mining primitives to be computed in a decentralized manner even as the network dynamically changes.
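The gossip-style averaging primitive that Newscast-like systems compute can be sketched as a minimal simulation (the node values, round count, and uniform peer sampling below are illustrative assumptions, not the document's algorithm):

```python
import random

def gossip_average(values, rounds=200, seed=0):
    """Push-pull gossip averaging: each round every node exchanges its
    current estimate with a uniformly random peer and both adopt the
    pair's mean, so all estimates converge to the global mean."""
    rng = random.Random(seed)
    est = list(values)
    n = len(est)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)               # random peer, rewired each round
            est[i] = est[j] = (est[i] + est[j]) / 2.0
    return est

# four peers holding local readings; every estimate approaches the mean, 5.0
estimates = gossip_average([10.0, 0.0, 4.0, 6.0])
```

Because each exchange replaces two estimates with their mean, the global sum is preserved, which is why the estimates converge to the true average even as connections are rewired.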
Analytical Modelling of Localized P2P Streaming Systems under NAT Consideration (IJCNCJournal)
This document summarizes an analytical model for localized peer-to-peer (P2P) streaming systems that considers the impact of network address translation (NAT). It introduces theoretical boundaries for the number of peers that may be expelled from the system due to NAT incompatibility. It also presents a mathematical model for startup delay in P2P streaming that accounts for peers' NAT types. The document proposes a new neighbor selection algorithm that considers both autonomous system numbers and NAT types to improve connectivity while reducing transit traffic and startup delays.
Scale-Free Networks to Search in Unstructured Peer-To-Peer Networks (IOSR Journals)
This document discusses using scale-free networks to improve search efficiency in unstructured peer-to-peer networks. It proposes the EQUATOR architecture, which creates an overlay network topology based on the scale-free Barabasi-Albert model. Simulation results show that EQUATOR achieves good lookup performance comparable to the ideal Barabasi-Albert network, with low message overhead even under node churn. The scale-free topology allows random walks to efficiently locate resources by directing searches to high-degree "hub" nodes with greater knowledge of the network.
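The Barabasi-Albert preferential-attachment process that EQUATOR's overlay approximates can be sketched as follows (graph size and the attachment parameter m are illustrative; this is the textbook model, not EQUATOR's own construction):

```python
import random

def barabasi_albert(n, m, seed=1):
    """Grow a scale-free graph: each new node links to m existing nodes
    chosen with probability proportional to their current degree, which
    produces the high-degree hubs that random-walk searches exploit."""
    rng = random.Random(seed)
    targets = list(range(m))        # the first new node links to all m seeds
    repeated = []                   # endpoint pool: each node appears once per edge end
    edges = []
    for new in range(m, n):
        for t in set(targets):
            edges.append((new, t))
            repeated.extend([new, t])
        # degree-weighted choice of the next node's m targets
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges
```

On a 200-node instance the degree distribution is heavily skewed: a few hub nodes collect far more links than m, which is what lets random walks locate resources quickly.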
A Distributed Approach to Solving Overlay Mismatching Problem (Zhenyun Zhuang)
This document proposes an algorithm called Adaptive Connection Establishment (ACE) to address the topology mismatch between the logical overlay network and the physical underlying network in unstructured peer-to-peer systems. ACE builds a minimum spanning tree among each source node and its neighbors within a certain diameter, then optimizes connections not on the tree to reduce redundant traffic while retaining the search scope. The tradeoff between topology optimization and information-exchange overhead is evaluated by varying the diameter. Simulation results show ACE can significantly reduce unnecessary P2P traffic by efficiently matching the overlay to the physical network topology.
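The minimum-spanning-tree step at the core of ACE can be illustrated with plain Prim's algorithm over a source node and its nearby neighbours (the link costs and node names below are hypothetical; ACE's actual probing and connection-cutting rules are in the paper):

```python
import heapq

def local_mst(nodes, cost):
    """Prim's algorithm over a source node and its nearby neighbours.
    cost[(a, b)] is the measured link delay (given in both directions);
    overlay links left off the tree are candidates for cutting."""
    start = nodes[0]
    in_tree = {start}
    frontier = [(c, a, b) for (a, b), c in cost.items() if a == start]
    heapq.heapify(frontier)
    tree = []
    while len(in_tree) < len(nodes) and frontier:
        c, a, b = heapq.heappop(frontier)   # cheapest edge leaving the tree
        if b in in_tree:
            continue
        in_tree.add(b)
        tree.append((a, b, c))
        for (x, y), w in cost.items():
            if x == b and y not in in_tree:
                heapq.heappush(frontier, (w, x, y))
    return tree
```

The edges kept by the tree are the low-delay paths; a scheme in the spirit of ACE would keep those and consider dropping the rest of the overlay connections.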
Hybrid Periodical Flooding in Unstructured Peer-to-Peer Networks (Zhenyun Zhuang)
This document proposes a new search mechanism called Hybrid Periodical Flooding (HPF) for unstructured peer-to-peer networks. HPF aims to reduce the unnecessary traffic caused by blind flooding while also addressing the "partial coverage problem" of some statistics-based search mechanisms. It introduces Periodical Flooding (PF), which controls the number of neighbors a query is forwarded to based on the time-to-live value, allowing the forwarding behavior to change periodically over the query's lifetime. HPF then combines PF with weighted selection of neighbors based on multiple metrics, guiding queries toward potentially relevant results while exploring more of the network.
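The TTL-dependent forwarding degree of Periodical Flooding, combined with weighted neighbour selection, might be sketched like this (all breakpoints and weights are illustrative assumptions, not the paper's tuned policy):

```python
def forward_degree(ttl, max_ttl=7, min_deg=2, max_deg=6):
    """Periodical Flooding: forward to more neighbours while the query
    is young (high TTL) and to fewer as it ages, instead of flooding
    every neighbour at every hop."""
    frac = ttl / max_ttl
    return max(min_deg, round(min_deg + frac * (max_deg - min_deg)))

def pick_neighbours(neighbours, weight, k):
    """Hybrid part: choose the k forwarding targets with the best
    weighted score (e.g. past hit rate, bandwidth, latency)."""
    return sorted(neighbours, key=lambda n: weight[n], reverse=True)[:k]
```

A fresh query (high TTL) fans out widely, then narrows as the TTL decays; at each hop the chosen targets are the best-scoring neighbours rather than a random subset.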
The document presents a compartmental model for characterizing the spread of malware in peer-to-peer (P2P) networks like Gnutella. The model partitions peers into compartments based on their state - those wishing to download (S), currently downloading (E), having downloaded (I), and no longer interested (R). Differential equations track changes between compartments over time. Simulation results show the model effectively captures the impact of parameters like peer online/offline switching rates and quarantine strategies on malware intensity. The model improves on prior work by incorporating user behavior dynamics and limiting malware spread to a node's time-to-live range.
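The compartmental dynamics can be reproduced with a small Euler integration of S-E-I-R equations (the rate constants are illustrative; the paper's model adds online/offline switching and TTL limits that are omitted here):

```python
def simulate_seir(beta=0.3, sigma=0.2, gamma=0.1, s0=0.99, e0=0.01,
                  steps=2000, dt=0.1):
    """Euler integration of a compartmental S-E-I-R model:
    S (wishing to download) -> E (currently downloading) ->
    I (downloaded, sharing malware) -> R (no longer interested)."""
    s, e, i, r = s0, e0, 0.0, 0.0
    history = []
    for _ in range(steps):
        new_exposed   = beta * s * i      # susceptible peer requests infected file
        new_infected  = sigma * e         # download completes
        new_recovered = gamma * i         # peer loses interest / cleans up
        s -= dt * new_exposed
        e += dt * (new_exposed - new_infected)
        i += dt * (new_infected - new_recovered)
        r += dt * new_recovered
        history.append((s, e, i, r))
    return history
```

Because each flow leaves one compartment and enters another, the population total is conserved; varying the rates shows the kind of sensitivity analysis (e.g. quarantine as a larger gamma) the paper reports.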
UTILIZING XAI TECHNIQUE TO IMPROVE AUTOENCODER BASED MODEL FOR COMPUTER NETWO... (IJCNCJournal)
Machine learning (ML) and deep learning (DL) methods are being adopted rapidly, especially in computer network security for tasks such as fraud detection, network anomaly detection, and intrusion detection. However, despite their strong results, the lack of transparency of ML- and DL-based models is a major obstacle to their deployment, and they are criticized for their black-box nature. Explainable Artificial Intelligence (XAI) is a promising area that can improve the trustworthiness of these models by providing explanations and interpreting their output. If the internal workings of ML and DL models are understandable, this can further help improve their performance. The objective of this paper is to show how XAI can be used to interpret the results of a DL model, in this case an autoencoder, and, based on that interpretation, to improve its performance for computer network anomaly detection. The kernel SHAP method, which is based on Shapley values, is used as a novel feature-selection technique: it identifies only those features that actually cause the anomalous behaviour of the set of attack/anomaly instances. These feature sets are then used to train and validate the autoencoder, but on benign data only. The resulting SHAP_Model outperformed the two other models proposed on the basis of the feature-selection method. The whole experiment is conducted on a subset of the CICIDS2017 network dataset. The overall accuracy and AUC of SHAP_Model are 94% and 0.969, respectively.
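The paper's detection principle, training on benign traffic only and flagging inputs that reconstruct poorly, can be illustrated with a deliberately simplified stand-in for the autoencoder: a per-feature mean/std profile. The feature values below are hypothetical, and this sketch does not reproduce the SHAP feature-selection step:

```python
import math

def fit_benign_profile(benign_rows):
    """Train on benign data only: record each selected feature's mean
    and standard deviation (a stand-in for the trained autoencoder)."""
    stats = []
    for col in zip(*benign_rows):
        mean = sum(col) / len(col)
        var = sum((x - mean) ** 2 for x in col) / len(col)
        stats.append((mean, math.sqrt(var) or 1.0))   # avoid zero std
    return stats

def reconstruction_error(stats, row):
    """Anomaly score: squared normalized deviation from the benign
    profile, analogous to the autoencoder's reconstruction error."""
    return sum(((x - m) / s) ** 2 for (m, s), x in zip(stats, row))

benign = [[10.0, 1.0], [12.0, 1.2], [11.0, 0.9], [9.0, 1.1]]
stats = fit_benign_profile(benign)
```

A flow resembling the benign profile scores low; a flow far from it scores high and would be flagged once the score crosses a chosen threshold.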
FUZZY LOGIC-BASED EFFICIENT MESSAGE ROUTE SELECTION METHOD TO PROLONG THE NET... (IJCNCJournal)
- The document discusses a fuzzy logic-based method for efficient message routing in wireless sensor networks to prolong the network lifetime. It aims to balance energy load across nodes by selectively tagging nodes at risk of energy exhaustion and rerouting messages around them.
- It proposes using fuzzy logic to evaluate nodes based on their potential importance, energy level, and event occurrence frequency to determine tagging. Tagged nodes avoid routing traffic but still detect and generate reports.
- The method was tested by applying it to a probabilistic voting-based filtering security scheme and was shown to improve energy efficiency, node survival rate, and report transmission success compared to not tagging nodes.
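The tagging decision above could be sketched with triangular membership functions and a min/max inference rule (the breakpoints, the rule, and the threshold are illustrative assumptions, not the paper's fuzzy rule base):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def tag_score(importance, energy, event_freq):
    """Mamdani-style sketch: tag a node (exclude it from routing) when
    its energy is low AND it is either important to the network or
    frequently generating event reports."""
    low_energy = tri(energy, -0.1, 0.0, 0.5)       # peaks at an empty battery
    important  = tri(importance, 0.3, 1.0, 1.7)
    busy       = tri(event_freq, 0.2, 1.0, 1.8)
    return min(low_energy, max(important, busy))   # AND = min, OR = max

def should_tag(importance, energy, event_freq, threshold=0.5):
    return tag_score(importance, energy, event_freq) >= threshold
```

An important, busy node with almost no energy left gets tagged and stops relaying; the same node with a full battery does not, which is the load-balancing behaviour described above.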
Privacy Preserving Reputation Calculation in P2P Systems with Homomorphic Enc... (IJCNCJournal)
This document discusses a method for privacy-preserving reputation calculation in peer-to-peer systems using homomorphic encryption. Specifically, it proposes:
1) Extending the EigenTrust reputation system to calculate node reputations in a distributed manner while preserving evaluator privacy. It does this by successively updating encrypted reputation values during the calculation so that they reflect trust values without disclosing the originals.
2) Improving calculation efficiency by offloading parts of the task to participating nodes and using different public keys during calculation to improve robustness against node churn.
3) Evaluating the performance of the proposed method, finding it reduces maximum circulation time for aggregating multiplication results by half, reducing computation time per round. The privacy preservation cost scales
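The additively homomorphic aggregation such schemes rely on can be demonstrated with textbook Paillier encryption, whose ciphertext product decrypts to the plaintext sum (toy primes, far too small for real use; this illustrates the primitive, not the paper's exact protocol):

```python
from math import gcd
import random

def paillier_keygen(p=1789, q=1861):
    """Textbook Paillier with toy primes (NOT secure sizes).
    With g = n + 1, the private key is lambda = lcm(p-1, q-1)."""
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)
    return n, lam

def encrypt(n, m, rng):
    n2 = n * n
    while True:
        r = rng.randrange(1, n)      # random blinding factor coprime to n
        if gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(n, lam, c):
    n2 = n * n
    L = lambda u: (u - 1) // n
    mu = pow(L(pow(n + 1, lam, n2)), -1, n)   # modular inverse (Python 3.8+)
    return (L(pow(c, lam, n2)) * mu) % n

# multiplying ciphertexts adds the hidden plaintexts
rng = random.Random(1)
n, lam = paillier_keygen()
c = (encrypt(n, 41, rng) * encrypt(n, 1337, rng)) % (n * n)
```

A node can thus multiply encrypted trust contributions into a running total that only the key holder can open, without ever seeing the individual evaluators' values.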
MEKDA: Multi-Level ECC based Key Distribution and Authentication in Internet ... (IJCNCJournal)
The Internet of Things (IoT) is an extensive, rapidly growing system of networks and connected devices with minimal human interaction. System constraints and device limitations pose several challenges, including security: billions of devices must be protected from attacks and compromise. The resource-constrained nature of IoT devices amplifies these security challenges, so standard data-communication and security measures are inefficient in the IoT environment. The ubiquity of IoT devices and their deployment in sensitive applications mean that any security breach can put lives at risk; hence IoT-related security challenges are of great concern. Authentication is the defence against malicious devices in the IoT environment. The proposed Multi-level Elliptic Curve Cryptography based Key Distribution and Authentication in IoT (MEKDA) enhances security through multi-level authentication when devices enter or leave a cluster in an IoT system. Generating and distributing keys with Elliptic Curve Cryptography decreases computation time and energy consumption, extending the availability of IoT devices. Performance analysis shows an improvement over the Fast Authentication and Data Transfer method.
Ontology-Based Routing for Large-Scale Unstructured P2P Publish/Subscribe System (theijes)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering & Science are selected through rigorous peer reviews to ensure originality, timeliness, relevance, and readability.
EFFECTIVE TOPOLOGY-AWARE PEER SELECTION IN UNSTRUCTURED PEER-TO-PEER SYSTEMS (ijp2p)
Peer-to-Peer systems form logical overlay networks on top of the Internet. Essentially, peers choose logical neighbours at random, without any knowledge of the underlying physical topology. This can make communication among peers inefficient, and the resulting topology mismatch problem may cause poor performance and scalability in Peer-to-Peer systems. One way to improve performance is to construct the overlay network using knowledge of the physical network topology. In this paper, we propose using the "Record Route" and "Timestamp" options of the IP protocol to explore the paths between peers. With topology-aware peer selection, our approach outperforms traditional P2P systems that use random peer selection, incurs only low overhead, and can be deployed easily in various P2P systems.
PUBLIC INTEGRITY AUDITING FOR SHARED DYNAMIC DATA STORAGE UNDER ONTIME GENERA... (paperpublications3)
Abstract: Verifying the result of remote computation plays a crucial role in addressing the issue of trust. Outsourced data collections come from multiple data sources; to diagnose the originator of errors, each data source is allotted a unique secret key, which requires inner-product verification to be performed under two parties' different keys. The proposed methods outperform the AISM technique in minimizing running time. In the multi-key setting, multiple data sources with different secret keys can upload their data streams along with their respective verifiable homomorphic tags. AISM comprises three join techniques, depending on ADS availability: (i) Authenticated Indexed Sort Merge Join (AISM), which utilizes a single ADS on the join attribute; (ii) Authenticated Index Merge Join (AIM), which requires an ADS (on the join attribute) for both relations; and (iii) Authenticated Sort Merge Join (ASM), which does not rely on any ADS. The client is allowed to choose any portion of the data streams for queries, and the communication between client and server is independent of the input size. Inner-product evaluation can be performed by any two sources, and the result can be verified using the corresponding tag.
Keywords: Outsourced Computation, Data Stream, Multiple Keys, Homomorphic Encryption.
Title: PUBLIC INTEGRITY AUDITING FOR SHARED DYNAMIC DATA STORAGE UNDER ONE-TIME GENERATED MULTIPLE KEYS
Author: C. NISHA MALAR, M. S. BONSHIA BINU
ISSN 2350-1049
International Journal of Recent Research in Interdisciplinary Sciences (IJRRIS)
Paper Publications
The peer-reviewed International Journal of Engineering Inventions (IJEI) was started with a mission to encourage contributions to research in Science and Technology, and to encourage and motivate researchers in challenging areas of Science and Technology.
SECURITY CONSIDERATION IN PEER-TO-PEER NETWORKS WITH A CASE STUDY APPLICATION (IJNSA Journal)
The wide adoption of Peer-to-Peer (P2P) overlay networks has also created vast dangers, because millions of users are not conversant with the potential security risks. The lack of centralized control poses great risks to P2P systems, mainly due to the inability to implement proper authentication approaches for threat management. The best available mitigations include encryption, administrative controls, cryptographic protocols, and avoiding personal file sharing and unauthorized downloads. Recently, a new non-DHT-based structured P2P system, based on a Linear Diophantine Equation (LDE) [1], has proved very suitable for designing secure communication protocols. P2P architectures based on this protocol offer simplified methods to integrate symmetric- and asymmetric-cryptography solutions into the architecture without needing Transport Layer Security (TLS) or its predecessor, Secure Sockets Layer (SSL).
A COOPERATIVE LOCALIZATION METHOD BASED ON V2I COMMUNICATION AND DISTANCE INF... (IJCNCJournal)
Relative positioning is a recent solution to the limited accuracy of GPS in urban environments. Vehicle positions obtained using V2I communication are more accurate because the known roadside unit (RSU) locations help predict measurement errors over time. Position accuracy depends strongly on the number of RSUs, but high installation costs limit this approach. It also depends on the nonlinear nature of localization, which several earlier studies neglected; in those studies, accumulated errors grew with time because the localization problem was linearized. In the present study, a cooperative localization method based on V2I communication and distance information in vehicular networks is proposed to improve the estimates of vehicles' initial positions. The method assumes that virtual RSUs, derived from mobility measurements, help reduce installation costs and ease handling of faulty environments. The extended Kalman filter is a well-known estimator for nonlinear problems, but it requires a good initial vehicle position vector and adaptive measurement noise. Using the proposed method, vehicles' initial positions can be estimated accurately. The experimental results confirm that the proposed method achieves better accuracy than existing methods, with a root mean square error of approximately 1 m. In addition, virtual RSUs are shown to assist in estimating initial positions in faulty environments.
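The predict/correct loop the extended Kalman filter applies, and the role of the initial position estimate, can be shown with its scalar linear special case (all numbers below are illustrative, not the paper's data):

```python
def kalman_1d(z_measurements, x0, p0, q=0.01, r=1.0):
    """Scalar Kalman filter (the linear core of an EKF): predict, then
    correct with each range-derived position measurement. x0 and p0
    are the initial position estimate and its uncertainty, whose
    quality the paper's method is designed to improve."""
    x, p = x0, p0
    track = []
    for z in z_measurements:
        p = p + q                 # predict: uncertainty grows with motion
        k = p / (p + r)           # Kalman gain: trust in the new measurement
        x = x + k * (z - x)       # correct the estimate
        p = (1 - k) * p           # uncertainty shrinks after the update
        track.append(x)
    return track

# noisy position measurements of a vehicle actually sitting at 5.0 m,
# filtered from a deliberately poor initial estimate of 0.0 m
track = kalman_1d([5.3, 4.8, 5.1, 4.9, 5.2, 5.0, 4.7, 5.1], x0=0.0, p0=10.0)
```

Even from a bad initial guess the estimate converges toward the true position; the better the initial estimate (and the measurement-noise setting r), the faster and smoother that convergence, which is the motivation for the proposed initialization.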
Mobile Hosts Participating in Peer-to-Peer Data Networks: Challenges and Solu... (Zhenyun Zhuang)
Wireless Networks (2010)
http://dl.acm.org/citation.cfm?id=1873504
Peer-to-peer (P2P) data networks dominate Internet traffic, accounting for over 60% of the overall traffic in a recent study. In this work, we study the problems that arise when mobile hosts participate in P2P networks. We primarily focus on the performance issues as experienced by the mobile host, but also study the impact on other fixed peers. Using BitTorrent as a key example, we identify several unique problems that arise due to the design aspects of P2P networks being incompatible with typical characteristics of wireless and mobile environments. Using the insights gained through our study, we present a wireless P2P (wP2P) client application that is backward compatible with existing fixed-peer client applications, but when used on mobile hosts can provide significant performance improvements.
P2P DOMAIN CLASSIFICATION USING DECISION TREE (ijp2p)
The increasing interest in Peer-to-Peer systems (such as Gnutella) has inspired many research activities in this area. Although many demonstrations have shown that the performance of a Peer-to-Peer system depends heavily on the underlying network characteristics, much of the evaluation of Peer-to-Peer proposals has used simplified models that omit a detailed model of the underlying network. This can be largely attributed to the complexity of experimenting with a scalable Peer-to-Peer system simulator built on top of a scalable network simulator. A major problem of unstructured P2P systems is their heavy network traffic; a challenging problem in the Peer-to-Peer context is how to find the appropriate peer to handle a given query without consuming excessive bandwidth. Different methods have proposed query-routing strategies tailored to the P2P network at hand. This paper considers an unstructured P2P system in which peers are organized around Super-Peers that are connected to a Super-Super-Peer according to their semantic domains, and integrates Decision Trees into the P2P architecture to produce query-suitable Super-Peers, each representing a community of peers in which at least one peer can answer a given query. By analyzing the query log file, a predictive model is constructed that avoids flooding queries into the P2P network: it predicts the appropriate Super-Peer, and hence the peer to answer the query. A challenging problem in a schema-based Peer-to-Peer (P2P) system is how to locate peers relevant to a given query. The proposed architecture, based on (Super-)Peers, focuses on query routing and groups together (Super-)Peers with similar interests for efficient routing. In such groups, called Super-Super-Peers (SSP), Super-Peers submit queries that are often processed by members of the group. An SSP is a specific Super-Peer that contains knowledge about (1) its Super-Peers and (2) the other SSPs. The knowledge is extracted using data mining techniques (e.g. decision tree algorithms) from the queries of peers transiting the network. The advantage of this distributed knowledge is that it avoids semantic mapping between the heterogeneous data sources owned by (Super-)Peers each time the system routes a query to other (Super-)Peers. The set of SSPs improves the robustness of the query-routing mechanism and the scalability of the P2P network. Compared with a baseline approach, the proposed architecture shows the benefit of data mining, with better performance in terms of response time and precision.
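The decision-tree idea can be miniaturized to a depth-1 tree (a stump) learned from a query log that maps keyword presence to the Super-Peer most likely to answer (the log, keywords, and SSP names below are hypothetical):

```python
from collections import Counter
import math

def entropy(labels):
    if not labels:
        return 0.0
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def learn_stump(queries, labels):
    """Depth-1 decision tree over keyword-presence features: pick the
    keyword whose presence best splits past queries by the Super-Peer
    that answered them, then route every new query on that one test."""
    vocab = sorted({w for q in queries for w in q})
    best = None
    for word in vocab:
        yes = [l for q, l in zip(queries, labels) if word in q]
        no = [l for q, l in zip(queries, labels) if word not in q]
        gain = entropy(labels) - (len(yes) * entropy(yes)
                                  + len(no) * entropy(no)) / len(labels)
        if best is None or gain > best[0]:
            yes_label = Counter(yes).most_common(1)[0][0] if yes else None
            no_label = Counter(no).most_common(1)[0][0] if no else None
            best = (gain, word, yes_label, no_label)
    _, word, yes_label, no_label = best
    return lambda query: yes_label if word in query else no_label
```

A full decision-tree learner would recurse on each branch; the stump is enough to show how a query log yields a predictive routing model instead of flooding.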
A Cooperative Peer Clustering Scheme for Unstructured Peer-to-Peer Systems (ijp2p)
This document summarizes a research paper that proposes a cooperative peer clustering scheme for unstructured peer-to-peer networks. The proposed scheme aims to improve search performance by identifying critical links between peers and allowing local reconfiguration while incorporating a retaliation rule to encourage cooperation. Simulation results indicate the proposed scheme improves search hit rates over previous schemes, and cooperative peers receive higher profits than selfish peers.
Online stream mining approach for clustering network traffic (eSAT Journals)
Abstract: A large body of research has been proposed on intrusion detection systems, leading to implementations of agent-based intelligent IDS (IIDS), non-intelligent IDS (NIDS), signature-based IDS, and others. In building such IDS models, the learning algorithms applied to flows of network traffic play a crucial role in accuracy. The proposed work implements a novel method to cluster network traffic that eliminates the limitations of existing online clustering algorithms, and demonstrates robustness and accuracy over large streams of network traffic arriving at extremely high rates. We compare the existing algorithm with the novel methods to analyse accuracy and complexity. Keywords: NIDS, Data Stream Mining, Online Clustering, RAH algorithm, Online Efficient Incremental Clustering algorithm
Transfer reliability and congestion control strategies in opportunistic netwo... (IEEEFINALYEARPROJECTS)
The document discusses transfer reliability and congestion control strategies in opportunistic networks. It begins by stating that opportunistic networks have unpredictable node contacts and rarely have complete end-to-end paths. It then discusses how modified TCP protocols are ineffective for these networks and they require different approaches than intermittently connected networks. The document surveys proposals for transfer reliability using hop-by-hop custody transfer and end-to-end receipts. It also categorizes storage congestion control based on single or multiple message copies. It identifies open research issues including replication management and drop policies for multiple copies.
JAVA 2013 IEEE NETWORKING PROJECT Transfer reliability and congestion control... (IEEEGLOBALSOFTTECHNOLOGIES)
The document discusses transfer reliability and congestion control strategies in opportunistic networks. It begins by stating that opportunistic networks have unpredictable node contacts and rarely have complete end-to-end paths. It notes that modified TCP protocols are ineffective for these networks. The document then surveys proposals for ensuring reliable data transfer and avoiding network congestion in opportunistic networks. It categorizes existing proposals and identifies mechanisms like hop-by-hop custody transfer and end-to-end receipts for reliability. For congestion control, it discusses replication management, drop policies, and considering message copy numbers. The document concludes by identifying open research challenges.
A NEW ALGORITHM FOR CONSTRUCTION OF A P2P MULTICAST HYBRID OVERLAY TREE BASED... (cscpconf)
In the last decade Peer to Peer technology has been thoroughly explored, because it overcomes many limitations compared to the traditional client server paradigm. Despite its advantages over a traditional approach, the ubiquitous availability of high speed, high bandwidth and low latency networks has supported the traditional client-server paradigm. Recently, however, the surge of streaming services has spawned renewed interest in Peer to Peer technologies. In addition, services like geolocation databases and browser technologies like Web-RTC make a hybrid approach attractive. In this paper we present algorithms for the construction and the maintenance of a hybrid P2P overlay multicast tree based on topological distances. The essential idea of these algorithms is to build a multicast tree by choosing neighbours close to each other. The topological distances can be easily obtained by the browser using the geolocation API. Thus the implementation of algorithms can be done web-based in a distributed manner. We present proofs of our algorithms as well as practical results and evaluations.
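One simple topological-distance heuristic for such a tree: let peers join in order of distance from the root and attach each to the closest already-joined node that still has spare fan-out. The distance function, fan-out limit, and coordinates below are assumptions for illustration, not the paper's algorithms:

```python
import math

def build_multicast_tree(root, peers, dist, max_children=4):
    """Greedy construction: peers join in order of distance from the
    root, and each attaches to the closest already-joined node that
    still has spare fan-out, keeping tree edges topologically short."""
    joined = [root]
    children = {root: []}
    parent = {}
    for p in sorted(peers, key=lambda q: dist(root, q)):
        candidates = [j for j in joined if len(children[j]) < max_children]
        best = min(candidates, key=lambda j: dist(j, p))
        parent[p] = best
        children[best].append(p)
        children[p] = []
        joined.append(p)
    return parent

# toy 2-D coordinates standing in for geolocation-derived positions
pos = {"root": (0, 0), "a": (1, 0), "b": (10, 0), "c": (11, 0)}
tree = build_multicast_tree("root", ["a", "b", "c"],
                            lambda u, v: math.dist(pos[u], pos[v]))
```

Distant peers attach to nearby intermediaries rather than directly to the root, which is the property that keeps the overlay edges short in topological terms.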
A P2P Job Assignment Protocol For Volunteer Computing Systems (Ashley Smith)
This document proposes a peer-to-peer job assignment protocol for volunteer computing systems. It introduces a distributed algorithm that aims to efficiently distribute jobs to workers in a decentralized manner. The key aspects are:
1) Jobs are described by job adverts that are distributed to multiple job assigners by a job manager.
2) Workers request jobs by querying job assigners, who match workers to available job adverts.
3) Input data is retrieved in a similar decentralized fashion through data queries and responses from data centers caching input files.
4) A simulation study shows this decentralized approach can improve performance metrics like overall job completion time and network load balancing, compared to a centralized job assignment system.
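Steps 1-3 can be sketched as a minimal advert-matching loop (job names and capability sets are hypothetical, and the real protocol distributes adverts across many assigners):

```python
class JobAssigner:
    """Decentralized matcher sketch: job managers publish job adverts
    to an assigner; workers poll the assigner and receive an advert
    whose requirements they can satisfy."""

    def __init__(self):
        self.adverts = []

    def publish(self, advert):
        self.adverts.append(advert)

    def request_job(self, worker_caps):
        for i, ad in enumerate(self.adverts):
            if ad["requires"] <= worker_caps:      # requirements are a subset of capabilities
                return self.adverts.pop(i)         # hand the job out exactly once
        return None

assigner = JobAssigner()
assigner.publish({"job": "render-42", "requires": {"gpu"}})
assigner.publish({"job": "sim-7", "requires": {"mpi", "x86"}})
job = assigner.request_job({"x86", "mpi", "ssd"})
```

Input-data retrieval (step 3) would follow the same query/response pattern against data centers caching the input files rather than against a single server.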
Algorithm selection for sorting in embedded and mobile systems (Jigisha Aryya)
Algorithm selection aims to solve problems like sorting using the most efficient algorithm by analyzing data characteristics. For resource-constrained embedded systems, this can improve energy efficiency by reducing computation time and eliminating unnecessary software components. The document proposes using algorithm selection with a sliding window approach for data stream mining to sample and analyze data on-board instead of transmitting all data over bandwidth-limited wireless networks. This allows for sorting and other computations to be performed locally, saving energy compared to sending all data for remote processing. Eliminating the random number generator used for sorting could also improve energy efficiency.
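The selection idea can be sketched by sampling a window of the stream, estimating how sorted it already is, and choosing between insertion sort (cheap on nearly sorted data) and a general-purpose sort (the threshold and sample size are illustrative):

```python
import random

def sortedness(sample):
    """Fraction of adjacent pairs already in order in a small sample."""
    pairs = list(zip(sample, sample[1:]))
    return sum(a <= b for a, b in pairs) / len(pairs)

def insertion_sort(xs):
    xs = list(xs)
    for i in range(1, len(xs)):
        j, key = i - 1, xs[i]
        while j >= 0 and xs[j] > key:
            xs[j + 1] = xs[j]
            j -= 1
        xs[j + 1] = key
    return xs

def adaptive_sort(window, threshold=0.8, sample_size=32, seed=0):
    """Sliding-window algorithm selection: sample the window, and if it
    is already mostly ordered use insertion sort; otherwise fall back
    to the general-purpose sort."""
    rng = random.Random(seed)
    k = min(sample_size, len(window))
    start = rng.randrange(len(window) - k + 1)
    if sortedness(window[start:start + k]) >= threshold:
        return insertion_sort(window)
    return sorted(window)
```

On a resource-constrained node this kind of on-board sampling is what lets the system sort locally with the cheaper algorithm instead of shipping raw data over the radio.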
IDENTIFICATION OF EFFICIENT PEERS IN P2P COMPUTING SYSTEM FOR REAL TIME APPLI... (ijp2p)
The Peer-to-Peer computing paradigm is emerging as an economical solution for large-scale
computation problems. However, because of the dynamic nature of peers, it is difficult to use
such systems for real-time applications, whose strict deadlines demand predictable performance.
We propose an algorithm that identifies a group of reliable peers, from the peers available on
the Internet, for processing a real-time application's tasks. The algorithm jointly evaluates
peer properties such as availability, credibility, computation time, and turnaround time with
respect to the task-distributor peer. We also define an application-level method to calculate
turnaround time (distance) on task-distributor peers.
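One way to combine such peer properties into a single reliability score is a weighted sum, as sketched below. The weights, field names, and the 0.7 acceptance threshold are illustrative assumptions; the paper's exact combination rule is not given in this summary.

```python
# Hypothetical weights -- not the paper's actual coefficients.
WEIGHTS = {"availability": 0.4, "credibility": 0.3,
           "compute": 0.2, "turnaround": 0.1}

def reliability(peer):
    """Combine normalized peer properties into one score in [0, 1].
    Higher availability/credibility is better; lower (normalized)
    computation and turnaround times are better, so those are inverted."""
    return (WEIGHTS["availability"] * peer["availability"]
            + WEIGHTS["credibility"] * peer["credibility"]
            + WEIGHTS["compute"] * (1 - peer["compute_time_norm"])
            + WEIGHTS["turnaround"] * (1 - peer["turnaround_norm"]))

peers = [
    {"id": "A", "availability": 0.99, "credibility": 0.9,
     "compute_time_norm": 0.2, "turnaround_norm": 0.1},
    {"id": "B", "availability": 0.60, "credibility": 0.5,
     "compute_time_norm": 0.8, "turnaround_norm": 0.9},
]
# Keep only peers scoring above an (assumed) reliability threshold.
reliable = [p for p in sorted(peers, key=reliability, reverse=True)
            if reliability(p) > 0.7]
print([p["id"] for p in reliable])  # peer A qualifies, B does not
```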
Privacy Preserving Reputation Calculation in P2P Systems with Homomorphic Enc...IJCNCJournal
This document discusses a method for privacy-preserving reputation calculation in peer-to-peer systems using homomorphic encryption. Specifically, it proposes:
1) Extending the EigenTrust reputation system to calculate node reputations in a distributed manner while preserving evaluator privacy. It does this by successively updating encrypted reputation values so that they reflect trust values without disclosing the original values.
2) Improving calculation efficiency by offloading parts of the task to participating nodes and using different public keys during calculation to improve robustness against node churn.
3) Evaluating the performance of the proposed method, finding it reduces maximum circulation time for aggregating multiplication results by half, reducing computation time per round. The privacy preservation cost scales
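The aggregation described above depends on homomorphic encryption, which lets nodes combine encrypted values without seeing the plaintexts. The paper uses its own scheme; as a generic illustration of the underlying idea, here is a toy Paillier-style example with tiny, insecure primes, where multiplying two ciphertexts yields an encryption of the *sum* of the plaintexts:

```python
from math import gcd

# Toy Paillier cryptosystem -- illustration only, NOT secure.
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse (Python 3.8+)

def enc(m, r):
    # r must be coprime to n; it randomizes the ciphertext.
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Two evaluators encrypt their local trust values; anyone can multiply
# the ciphertexts to obtain an encryption of the sum, without learning
# either input.
c = (enc(5, 7) * enc(7, 11)) % n2
print(dec(c))  # 12
```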
MEKDA: Multi-Level ECC based Key Distribution and Authentication in Internet ...IJCNCJournal
The Internet of Things (IoT) is an extensive, swiftly growing system of networks and connected devices with minimal human interaction. The constraints of the system and the limitations of its devices pose several challenges, including security; billions of devices must be protected from attacks and compromise. The resource-constrained nature of IoT devices amplifies these security challenges, so standard data communication and security measures are inefficient in the IoT environment. The ubiquity of IoT devices and their deployment in sensitive applications mean that any security breach can put lives at risk; IoT-related security challenges are therefore of great concern. Authentication is the defence against malicious devices in the IoT environment. The proposed Multi-level Elliptic Curve Cryptography based Key Distribution and Authentication in IoT enhances security through multi-level authentication when devices enter or exit a cluster in an IoT system. The reduced computation time and energy consumption achieved by generating and distributing keys using Elliptic Curve Cryptography extend the availability of the IoT devices. The performance analysis shows an improvement over the Fast Authentication and Data Transfer method.
Ontology-Based Routing for Large-Scale Unstructured P2P Publish/Subscribe Systemtheijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering& Science are selected through rigorous peer reviews to ensure originality, timeliness, relevance, and readability.
EFFECTIVE TOPOLOGY-AWARE PEER SELECTION IN UNSTRUCTURED PEER-TO-PEER SYSTEMSijp2p
Peer-to-Peer systems form logical overlay networks on top of the Internet. Essentially, peers randomly
choose logical neighbours without any knowledge about underlying physical topology. This may cause
inefficient communications among peers. This topology mismatch problem may result in poor
performance and scalability for Peer-to-Peer systems. A possible way to improve the performance of
Peer-to-Peer systems is the overlay network construction based on the knowledge of the physical network
topology. In this paper, we propose using the "Record Route" and "Timestamp" options
supported by the IP protocol to explore the paths between peers. With topology-aware peer
selection, our approach outperforms traditional P2P systems that use random peer selection,
incurs only a low overhead, and can be deployed easily in various P2P systems.
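The core selection step amounts to ranking candidate neighbours by a probed path metric instead of choosing them at random. The sketch below uses simulated RTT values as a stand-in; in the paper the path information would come from the IP "Record Route"/"Timestamp" options rather than a latency table, and all names and values here are illustrative.

```python
def select_topologically_close(candidates, probe, k=3):
    """Rank candidate peers by a probed path metric and keep the k
    closest, instead of choosing logical neighbours at random."""
    return sorted(candidates, key=probe)[:k]

# Simulated round-trip times in milliseconds (assumed values).
rtts = {"peer1": 12.0, "peer2": 180.0, "peer3": 35.0, "peer4": 9.0}
neighbours = select_topologically_close(rtts, rtts.get, k=2)
print(neighbours)  # ['peer4', 'peer1']
```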
PUBLIC INTEGRITY AUDITING FOR SHARED DYNAMIC DATA STORAGE UNDER ONE-TIME GENERA...paperpublications3
Abstract: Nowadays, verifying the result of a remote computation plays a crucial role in addressing the issue of trust. The outsourced data collection comes from multiple data sources; to diagnose the originator of errors, each data source is allotted a unique secret key, which requires inner-product verification to be performed under any two parties' different keys. The proposed method outperforms the AISM technique by minimizing running time. In the multi-key setting, given different secret keys, multiple data sources can upload their data streams along with their respective verifiable homomorphic tags. Three novel join techniques are considered, depending on ADS availability: (i) Authenticated Indexed Sort Merge Join (AISM), which utilizes a single ADS on the join attribute; (ii) Authenticated Index Merge Join (AIM), which requires an ADS (on the join attribute) for both relations; and (iii) Authenticated Sort Merge Join (ASM), which does not rely on any ADS. The client is allowed to choose any portion of the data streams for queries, and the communication between client and server is independent of the input size. The inner-product evaluation can be performed by any two sources, and the result can be verified using the corresponding tag.
Keywords: Outsourced computation, Data streams, Multiple keys, Homomorphic encryption.
Title: PUBLIC INTEGRITY AUDITING FOR SHARED DYNAMIC DATA STORAGE UNDER ONE-TIME GENERATED MULTIPLE KEYS
Author: C. NISHA MALAR, M. S. BONSHIA BINU
ISSN 2350-1049
International Journal of Recent Research in Interdisciplinary Sciences (IJRRIS)
Paper Publications
The peer-reviewed International Journal of Engineering Inventions (IJEI) was started with a mission to encourage contributions to research in science and technology, and to encourage and motivate researchers working in challenging areas of science and technology.
SECURITY CONSIDERATION IN PEER-TO-PEER NETWORKS WITH A CASE STUDY APPLICATIONIJNSA Journal
The wide adoption of Peer-to-Peer (P2P) overlay networks has also created vast dangers, because millions of users are not conversant with the potential security risks. The lack of centralized control creates great risks for P2P systems, mainly due to the inability to implement proper authentication approaches for threat management. The best available mitigations include encryption, administrative oversight, cryptographic protocols, and avoiding personal file sharing and unauthorized downloads. Recently, a new non-DHT-based structured P2P system, based on a Linear Diophantine Equation (LDE) [1], has proved well suited to designing secure communication protocols. P2P architectures based on this protocol offer simplified methods to integrate symmetric and asymmetric cryptographic solutions into the architecture without needing Transport Layer Security (TLS) or its predecessor, the Secure Sockets Layer (SSL) protocol.
A COOPERATIVE LOCALIZATION METHOD BASED ON V2I COMMUNICATION AND DISTANCE INF...IJCNCJournal
Relative positioning is a recent solution to the limited accuracy of GPS in urban environments.
Vehicle positions obtained using V2I communication are more accurate because the known roadside
unit (RSU) locations help predict measurement errors over time. The accuracy of vehicle positions
depends largely on the number of RSUs; however, the high installation cost limits the use of this
approach. It also depends on the nonlinear nature of the localization problem, which several
earlier studies neglected; in those studies, accumulated errors grew over time because
localization was treated as a linear problem. In the present study, a cooperative localization
method based on V2I communication and distance information in vehicular networks is proposed to
improve the estimates of vehicles' initial positions. The method uses virtual RSUs, derived from
mobility measurements, to reduce installation costs and to cope with fault environments. The
extended Kalman filter is a well-known estimator for nonlinear problems, but it requires a good
initial vehicle position vector and adaptive measurement noise. Using the proposed method,
vehicles' initial positions can be estimated accurately. The experimental results confirm that
the proposed method is more accurate than existing methods, with a root mean square error of
approximately 1 m. In addition, virtual RSUs are shown to assist in estimating initial positions
in fault environments.
Mobile Hosts Participating in Peer-to-Peer Data Networks: Challenges and Solu...Zhenyun Zhuang
Wireless Networks (2010)
http://dl.acm.org/citation.cfm?id=1873504
Peer-to-peer (P2P) data networks dominate
Internet traffic, accounting for over 60% of the overall
traffic in a recent study. In this work, we study the
problems that arise when mobile hosts participate in
P2P networks. We primarily focus on the performance
issues as experienced by the mobile host, but also study
the impact on other fixed peers. Using BitTorrent as a
key example, we identify several unique problems that
arise due to the design aspects of P2P networks being
incompatible with typical characteristics of wireless
and mobile environments. Using the insights gained
through our study, we present a wireless P2P (wP2P)
client application that is backward compatible with existing
fixed-peer client applications, but when used on
mobile hosts can provide significant performance improvements.
P2P DOMAIN CLASSIFICATION USING DECISION TREE ijp2p
The increasing interest in Peer-to-Peer systems (such as Gnutella) has inspired many research activities
in this area. Although many demonstrations have been performed that show that the performance of a
Peer-to-Peer system is highly dependent on the underlying network characteristics, much of the
evaluation of Peer-to-Peer proposals has used simplified models that fail to include a detailed model of
the underlying network. This can be largely attributed to the complexity in experimenting with a scalable
Peer-to-Peer system simulator built on top of a scalable network simulator. A major problem of
unstructured P2P systems is their heavy network traffic. In the Peer-to-Peer context, a
challenging problem is how to find the appropriate peer to handle a given query without
consuming excessive bandwidth. Various methods have proposed query-routing strategies tailored
to the P2P network at hand. This paper considers an unstructured P2P system in which peers are
organized around Super-Peers that are connected to Super-Super-Peers according to their semantic
domains, and integrates Decision Trees into the P2P architecture to identify query-suitable
Super-Peers, each representing a community of peers in which at least one peer can answer the
given query. By analyzing the query log file, a predictive model is constructed that avoids
flooding queries across the P2P network by predicting the appropriate Super-Peer, and hence the
peer, to answer the query. A challenging problem in a schema-based Peer-to-Peer (P2P) system is
how to locate peers that are relevant to a given query. This paper proposes an architecture
based on (Super-)Peers, focusing on query routing. The approach groups together (Super-)Peers
with similar interests for efficient query routing. In such groups, called Super-Super-Peers
(SSPs), Super-Peers submit queries that are often processed by members of the group. An SSP is
a specific Super-Peer that holds knowledge about its own Super-Peers and about the other SSPs.
This knowledge is extracted with data mining techniques (e.g., decision-tree algorithms) from
the queries of peers transiting the network. The advantage of this distributed knowledge is
that it avoids semantic mapping between the heterogeneous data sources owned by (Super-)Peers
each time the system routes a query to other (Super-)Peers. The set of SSPs improves the
robustness of the query-routing mechanism and the scalability of the P2P network. Compared with
a baseline approach, the proposed architecture shows that data mining yields better performance
with respect to response time and precision.
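The query-routing idea, learning from the query log which Super-Peer community answers which kind of query, can be sketched as below. The paper uses decision trees; this sketch substitutes the simplest possible predictive model, a per-keyword frequency table, and all query and community names are made up for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical query log: (query text, Super-Peer community that answered).
query_log = [
    ("jazz mp3", "SSP-music"), ("blues album", "SSP-music"),
    ("holiday photos", "SSP-images"), ("jazz concert", "SSP-music"),
]

# Learn which community answered queries containing each token.
model = defaultdict(Counter)
for query, answered_by in query_log:
    for token in query.split():
        model[token][answered_by] += 1

def route(query):
    """Predict the Super-Peer community for a query; fall back to
    flooding when no token has been seen before."""
    votes = Counter()
    for token in query.split():
        votes += model[token]
    return votes.most_common(1)[0][0] if votes else "flood"

print(route("jazz album"))       # routed to SSP-music
print(route("quantum physics"))  # unseen topic: flood
```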
A Cooperative Peer Clustering Scheme for Unstructured Peer-to-Peer Systemsijp2p
This document summarizes a research paper that proposes a cooperative peer clustering scheme for unstructured peer-to-peer networks. The proposed scheme aims to improve search performance by identifying critical links between peers and allowing local reconfiguration while incorporating a retaliation rule to encourage cooperation. Simulation results indicate the proposed scheme improves search hit rates over previous schemes, and cooperative peers receive higher profits than selfish peers.
Online stream mining approach for clustering network trafficeSAT Journals
Abstract A large number of research have been proposed on intrusion detection system, which leads to the implementation of agent based intelligent IDS (IIDS), Non – intelligent IDS (NIDS), signature based IDS etc. While building such IDS models, learning algorithms from flow of network traffic plays crucial role in accuracy of IDS systems. The proposed work focuses on implementing the novel method to cluster network traffic which eliminates the limitations in existing online clustering algorithms and prove the robustness and accuracy over large stream of network traffic arriving at extremely high rate. We compare the existing algorithm with novel methods to analyse the accuracy and complexity. Keywords— NIDS, Data Stream Mining, Online Clustering, RAH algorithm, Online Efficient Incremental Clustering algorithm
Transfer reliability and congestion control strategies in opportunistic netwo...IEEEFINALYEARPROJECTS
The document discusses transfer reliability and congestion control strategies in opportunistic networks. It begins by stating that opportunistic networks have unpredictable node contacts and rarely have complete end-to-end paths. It then discusses how modified TCP protocols are ineffective for these networks and they require different approaches than intermittently connected networks. The document surveys proposals for transfer reliability using hop-by-hop custody transfer and end-to-end receipts. It also categorizes storage congestion control based on single or multiple message copies. It identifies open research issues including replication management and drop policies for multiple copies.
JAVA 2013 IEEE NETWORKING PROJECT Transfer reliability and congestion control...IEEEGLOBALSOFTTECHNOLOGIES
The document discusses transfer reliability and congestion control strategies in opportunistic networks. It begins by stating that opportunistic networks have unpredictable node contacts and rarely have complete end-to-end paths. It notes that modified TCP protocols are ineffective for these networks. The document then surveys proposals for ensuring reliable data transfer and avoiding network congestion in opportunistic networks. It categorizes existing proposals and identifies mechanisms like hop-by-hop custody transfer and end-to-end receipts for reliability. For congestion control, it discusses replication management, drop policies, and considering message copy numbers. The document concludes by identifying open research challenges.
A NEW ALGORITHM FOR CONSTRUCTION OF A P2P MULTICAST HYBRID OVERLAY TREE BASED...csandit
In the last decade Peer to Peer technology has been thoroughly explored, because it overcomes many limitations compared to the traditional client server paradigm. Despite its advantages over a traditional approach, the ubiquitous availability of high speed, high bandwidth and low latency networks has supported the traditional client-server paradigm. Recently, however, the surge of streaming services has spawned renewed interest in Peer to Peer technologies. In addition, services like geolocation databases and browser technologies like Web-RTC make a hybrid approach attractive.
A NEW ALGORITHM FOR CONSTRUCTION OF A P2P MULTICAST HYBRID OVERLAY TREE BASED...cscpconf
In the last decade Peer to Peer technology has been thoroughly explored, because it overcomes many limitations compared to the traditional client server paradigm. Despite its advantages over a traditional approach, the ubiquitous availability of high speed, high bandwidth and low latency networks has supported the traditional client-server paradigm. Recently, however, the surge of streaming services has spawned renewed interest in Peer to Peer technologies. In addition, services like geolocation databases and browser technologies like Web-RTC make a hybrid approach attractive. In this paper we present algorithms for the construction and the maintenance of a hybrid P2P overlay multicast tree based on topological distances. The essential idea of these algorithms is to build a multicast tree by choosing neighbours close to each other. The topological distances can be easily obtained by the browser using the geolocation API. Thus the implementation of algorithms can be done web-based in a distributed manner. We present proofs of our algorithms as well as practical results and evaluations.
Cloud Camp Milan 2K9 Telecom Italia: Where P2P?Gabriele Bozzi
1. The document discusses the potential for peer-to-peer (P2P) computing as an alternative or complement to the traditional client-server model, especially in the context of cloud computing.
2. It notes challenges with P2P, such as the lack of centralized control (which makes it difficult to ensure reliability, performance, and security) and the potential for freeloading, but also advantages like harnessing unused resources.
3. Autonomic and cognitive networking approaches may help address these issues by enabling self-configuration, healing, optimization, and protection of distributed resources.
4. Future networking approaches like DirecNet envision high-speed mobile mesh networks that could further enable wide-scale distributed computing architectures.
This paper presents a technique for end hosts to detect if intermediaries like routers are applying compression to traffic flows without the end hosts' knowledge. The technique is non-intrusive and only uses packet inter-arrival times for detection, requiring no changes to or cooperation from intermediaries. Simulations and internet experiments show the approach can accurately detect compression applied by intermediaries. The technique could help end hosts optimize their own use of compression resources by avoiding redundant compression when intermediaries are already compressing traffic.
The document outlines a final project to design and implement a basic node-based networking system without internet service providers. It proposes using GPS coordinates to assign addresses and simulating data transfer between nodes to test performance. The key steps are:
1) Assign each node a unique address based on its GPS coordinates.
2) Specify node properties like transfer rate and range.
3) Simulate data transfer between random node pairs to determine latency and throughput.
4) Analyze the results to find worst-case performance and compare to typical internet speeds.
The goal is to evaluate if a decentralized mesh network could feasibly replace traditional internet infrastructure. The document describes addressing schemes, node specifications, and the process
In the last decade Peer to Peer technology has been thoroughly explored, becauseit overcomes many limitations compared to the traditional client server paradigm. Despite its advantages over a traditional approach, the ubiquitous availability of high speed, high bandwidth and low latency networks has supported the traditional client-server paradigm. Recently, however, the surge of streaming services has spawned renewed interest in Peer to Peer technologies. In addition, services like geolocation databases and browser technologies like Web-RTC make a hybrid approach attractive.
In this paper we present algorithms for the construction and the maintenance of a hybrid P2P overlay multicast tree based on topological distances. The essential idea of these algorithms is to build a multicast tree by choosing neighbours close to each other. The topological distances can be easily obtained by the browser using the geolocation API. Thus the implementation of algorithms can be done web-based in a distributed manner.
We present proofs of our algorithms as well as experimental results and evaluations.
Node selection in p2 p content sharing service in mobile cellular networks wi...Uvaraj Shan
This document discusses node selection algorithms for peer-to-peer content sharing over mobile cellular networks that consider downlink bandwidth limitations. It proposes two novel algorithms (DBaT-B and DBaT-N) that select peer nodes to maximize load balancing across cells while meeting the requesting peer's bandwidth needs. DBaT-B selects peers to satisfy the requesting peer's minimum bandwidth requirement, while DBaT-N selects a certain number of peers as requested. Both algorithms first choose peers in the least busy cell to improve load balancing.
Node selection in p2 p content sharing service in mobile cellular networks wi...Uvaraj Shan
The document discusses node selection algorithms for peer-to-peer content sharing over mobile cellular networks that consider downlink bandwidth limitations. It proposes two algorithms: DBaT-B selects peers to meet a minimum requested bandwidth sum, prioritizing load balancing across cells. DBaT-N selects a requested number of peers where the bandwidth sum exceeds the downlink limit, again balancing cell loads. Both aim to satisfy bandwidth demands while distributing traffic evenly across the network. The paper then evaluates the algorithms' performance through simulation.
Node selection in p2 p content sharing service in mobile cellular networks wi...Uvaraj Shan
This document discusses node selection algorithms for peer-to-peer content sharing over mobile cellular networks that consider downlink bandwidth limitations. It proposes two novel algorithms (DBaT-B and DBaT-N) that select peer nodes to maximize load balancing across cells while meeting the requesting peer's bandwidth needs. DBaT-B selects peers to satisfy the requesting peer's minimum bandwidth requirement, while DBaT-N selects a certain number of peers as requested. Both algorithms first choose peers in the least busy cell to improve load balancing.
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research(IJCER) is an intentional online Journal in English monthly publishing journal. This Journal publish original research work that contributes significantly to further the scientific knowledge in engineering and Technology
This document discusses big data mining and the Internet of Things. It first presents challenges with big data mining including modeling big data characteristics, identifying key challenges, and issues with statistical analysis of IoT data. It then describes an architecture called IOT-StatisticDB that provides a generalized schema for storing sensor data from IoT devices and a distributed system for parallel computing and statistical analysis of IoT big data. The system includes query operators for data retrieval and statistical analysis of IoT data in areas like transportation networks.
Similar to Exploring Peer-To-Peer Data Mining (20)
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMSIJNSA Journal
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. 
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
2. 166 Computer Science & Information Technology (CS & IT)
Distributed Data Mining deals with data analysis in environments in which data are distributed, such as peer-to-peer networks, and offers an alternative way to address this problem. Researchers have developed several approaches for computing primitive operations (sum, average, max) on P2P networks. In this report we introduce two different approaches: the first is based on the concept of local algorithms [1], i.e. algorithms that compute their results using only communications between immediate neighbors; the second is based on the Newscast model of computation [4][3], a probabilistic epidemic protocol for information and membership dissemination.
In the next sections we first give a brief overview of P2P data mining, its motivations and goals; then (Section 3) we introduce the concept of local algorithms [1] and give some examples; in Section 4 we introduce the Newscast model, give an idea of how it works and present some practical primitive implementations; finally, in the last section we draw some conclusions.
2. GOALS IN PEER-TO-PEER DATA MINING
One of the main goals of P2P data mining is to obtain results as close as possible to those of a centralized approach, without moving any data from its original location. Such algorithms must therefore be highly scalable, tolerant to crashes and to peer "churn"2, and above all able to calculate their results in-network, instead of loading all the data into a single system and then applying traditional data mining techniques.
As just said, there are several properties required of a peer-to-peer data mining algorithm:
scalability has already been mentioned and is the foremost requirement: algorithms for peer-to-peer networks must be independent of the size of the network, or at most depend on the logarithm of its size;
anytimeness means that, since in some applications the rate of data change may be higher than the rate of computation, the algorithm must be able to provide a good ad hoc solution at any time;
asynchronism is also a crucial requirement: P2P networks are often huge, which means that any attempt to synchronize the entire network is in vain due to network latency and limited bandwidth;
decentralization means that the computation must be done in-network, hence no centralized coordination must be used;
fault-tolerance is another issue we have already mentioned: in a large P2P network it can often happen that nodes crash, or that they leave or join the network. That is why P2P algorithms should be able to recover from these situations.
________________
2 By "churn" we mean the situation in which some nodes leave the network and are suddenly replaced by brand new ones.
In the next sections we present some primitives designed for peer-to-peer data mining that comply with the needs mentioned above.
3. LOCAL ALGORITHMS
Approaches to P2P data mining have focused on developing data mining primitive operations
over the network as well as more complicated data analysis algorithms.
Datta in [1] proposes algorithms for calculating primitives such as sum and average, based on the definition of local algorithms.
Given a constant k, independent of the network size, an algorithm is a local algorithm if, for part of the input, it terminates with a communication expenditure per peer no greater than k, while on the rest of the inputs the communication expended per peer is of the order of the network size. Local algorithms can be divided into two categories: exact local algorithms and approximate local algorithms. The former always terminate guaranteeing the same results that would be obtained with centralized methods; the latter cannot give that level of accuracy. Of course exact local algorithms give better results, but it is not possible to develop them for every kind of problem.
In the next subsections we give examples of both exact and approximate local algorithms.
3.1 Exact local algorithms (The Majority voting problem)
The majority voting problem is a typical example of an exact local algorithm. It represents a situation in which each peer Pi of a P2P network has a number bi which may be 0 or 1, and a threshold λ (0 < λ < 1, the same for every node). The peers want to collectively determine whether the sum of all the bi is greater than λn, where n is the number of peers in the network. Addressing this problem can serve as a primitive for several kinds of data mining algorithms and as a building block for more complicated exact local algorithms.
Let Pi be a generic node in the network, let Nei(i) be its set of neighbors, let Ci be an estimate of the number of nodes in the network and let Si be an estimate of the global sum. Each peer can communicate only with its neighbors, and it is through these exchanges that the estimates are updated. We also define the threshold belief of Pi as whether or not the peer believes the majority threshold Si > λCi is met3.
Hence this threshold belief depends on the exchange of information between neighbors. The crucial point of this approach lies in deciding whether Pi needs to send a message to a neighbor Pj from which it has just received information on C and S. Pi will not send such a message if and only if it can be certain that its information cannot modify the threshold belief of Pj. On the other hand, if it cannot be certain of this, a message must be sent. This decision is taken on the basis of the estimates Pi makes of the values of Cj and Sj, together with its own values Ci and Si. When a node Pi decides to send a message, it sends all of its information about S and C, excluding what was received from Pj.
This approach is considered robust to data and network change: when a peer Pi changes its data, Pi recomputes Ci and Si and re-applies the above conditions to all of its neighbors; if a peer Pj leaves the network, Pi recomputes Si and Ci without taking into account the information from Pj.
_________________
3 In the original paper [1] a slightly different notation is used.
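The send/suppress decision just described can be sketched in a few lines of Python. The condition shown here (compare the threshold belief implied by a peer's current estimates with the one implied by the data it last sent to a neighbor) is a deliberate simplification of the exact conditions in [1], and the names lam, S and C are illustrative:

```python
def threshold_belief(S: float, C: float, lam: float) -> bool:
    """A peer believes the majority threshold is met when its
    estimated global sum S exceeds lam times its estimated
    node count C (i.e. S > lam * C)."""
    return S > lam * C

def must_send(S_i, C_i, S_sent, C_sent, lam):
    """Pi must message Pj when the information previously sent to Pj
    yields a different threshold belief than Pi's current estimates;
    a simplification of the send conditions of [1]."""
    return threshold_belief(S_i, C_i, lam) != threshold_belief(S_sent, C_sent, lam)
```

In the simplified form above, a peer whose fresh estimates no longer agree with what its neighbor last saw must forward its information, which is the intuition behind the local termination of the algorithm.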
Discussion
Exact local algorithms can be very useful for solving data mining problems in P2P networks, but unfortunately they are very limited. The scope of such algorithms is restricted to functions that have a local representation in the given network, and they are limited to problems that can be reduced to threshold predicates (as in the majority voting problem). An example of application is given in [1], where an exact local algorithm (based on the majority voting problem) is used for monitoring a K-means clustering of data distributed over a peer-to-peer network. In this application K-means clustering is performed in the traditional way (on a centralized system). The results are then sent to the peers of the network and the local algorithm for monitoring the K-means clustering is executed: this algorithm simply raises an alert when the centroids need to be updated.
It is generally impossible to develop an exact local algorithm to compute the average of a set of data distributed over the network, which is why some data mining problems (such as P2P K-means clustering) cannot be solved with exact local algorithms. To address this, two different mechanisms have been proposed: the first (Section 3.2) is an approximate local algorithm that performs K-means clustering over a P2P network [2]; the second (Section 4) is the Newscast model of computation for calculating means over data distributed on P2P overlay networks.
3.2 Approximate local algorithms (P2P K-means clustering)
The P2P K-means clustering algorithm [2] is an iterative algorithm requiring only local communications and synchronization at each iteration: nodes exchange messages and synchronize only with their neighbors. The goal is for each node to converge on a set of centroids that are as close as possible to the centroids that would have been produced if the data from all nodes had first been centralized and K-means then run.
The algorithm is initiated with a set of starting centroids selected at random. P1, P2, ..., Pn denote the nodes in the network and Xi denotes the data set held by node Pi; the global data set X is the union of all the Xi, and the list of immediate neighbors of a generic node i is denoted Ni. Each node stores: a set of centroids (the local centroids) held by node i at the beginning of iteration l; a termination threshold γ; and, for each centroid j, a cluster count, i.e. the number of tuples in Xi for which centroid j is closer than any other centroid.
Each iteration of the algorithm is divided into two steps: the first is similar to centralized K-means, in which peer Pi assigns each of its points to the nearest centroid; in the second, peer Pi sends a poll message to its immediate neighbors and waits for a response. This request consists of a pair (id, current iteration number), which makes the neighbors respond with their local centroids and cluster counts for iteration l. Once they have all responded, Pi updates its j-th centroid at the beginning of iteration l + 1. The update is a weighted average4 of the local centroids and counts received from all immediate neighbors (for their iteration l). Pi then moves to the next iteration of the K-means algorithm and repeats the whole process. If the maximum change in position of the new centroids after an iteration remains above the defined threshold, Pi goes on to iteration l + 1.
The key point is how the peers respond to these requests. Suppose peer Pi, at its iteration l, receives a poll message from a node Pk for iteration l'. If Pi has already reached iteration l', it sends its local centroids and cluster counts for that iteration to Pk; otherwise Pi does not yet have local centroids for iteration l', and in that case the poll message is put into the poll table of Pi. Pi checks its poll table at each iteration: if it now holds local centroids and cluster counts for a pending request, they are sent to the requesting peer, otherwise the message stays in the table; in this way Pi responds to any message it can.
At the end of each iteration l, if no important changes are detected in the cluster centroids (the maximum change in position is below the user-defined threshold γ), a node may enter a terminated state. In that state the peer no longer updates its centroids or sends poll messages; however, it still responds to polling messages from its neighbors. Once all peers have entered the terminated state, the algorithm terminates.
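The weighted-average update of the second step, and the termination test on centroid movement, can be sketched as follows. The function names and the plain-list representation are illustrative assumptions, not the notation of [2]:

```python
import math

def update_centroid(local_centroids, cluster_counts):
    """Combine the j-th centroids reported by a node and its immediate
    neighbors into one new centroid, weighting each reported centroid
    by the corresponding cluster count."""
    total = sum(cluster_counts)
    dim = len(local_centroids[0])
    return [sum(w * c[k] for w, c in zip(cluster_counts, local_centroids)) / total
            for k in range(dim)]

def terminated(old_centroids, new_centroids, gamma):
    """Termination test: the maximum change in centroid position must
    fall below the user-defined threshold gamma."""
    return max(math.dist(a, b)
               for a, b in zip(old_centroids, new_centroids)) < gamma
```

For example, a neighbor reporting centroid (2, 2) with count 3 pulls the combined centroid three times harder than one reporting (0, 0) with count 1, which is exactly why sparsely populated local clusters do not dominate the update.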
Experimental results and discussion
The P2P K-means clustering algorithm [2] presented in the previous section is a very important example which shows how important it is to implement good primitives for data mining in distributed environments. The algorithm is in fact built on primitives developed for peer-to-peer systems, such as the one derived from the majority voting problem.
Datta, in his works [1][2], performed several experiments with the P2P K-means clustering algorithm, which achieves good results. Experiments were conducted in both static and dynamic environments with a network of 1000 nodes; in both cases the accuracy with respect to classic centralized K-means and the communication cost were measured. In the static environment high accuracy was found (less than 3% of points per node misclassified on average), and the method of assigning data points to nodes (uniformly or non-uniformly) had no significant impact on accuracy. It did, however, have a significant impact on communication cost: the number of bytes received per node increases slowly with network size for uniform assignment, and more sharply for non-uniform assignment.
Experiments were also conducted in a dynamic environment (with nodes leaving and joining the network) and here too good accuracy was found (less than 3.5% misclassified on average), remaining stable as the network evolves. Even increasing the network size did not change the accuracy significantly, which indicates that the algorithm is highly scalable.
______________________
4 Such a weighted average is calculated by primitives implemented with local algorithms.
4. DATA MINING THROUGH THE NEWSCAST MODEL OF COMPUTATION
As already said, researchers working on distributed data mining have focused on techniques for calculating primitives such as average, maximum etc. in distributed environments. Kowalczyk and Jelasity in [4] also focused on this, but their approach is slightly different from Datta's [2].
First of all they adopted two important constraints: the first is that every node (peer) stores as little as one single data instance (in [1] each peer held several data instances); the second is that there is practically no limit on the number of nodes (in [1] there was also no limit in principle, but experiments were always conducted on small networks of the order of a thousand nodes). Furthermore, as in [1], nodes can leave and join the network in a dynamic environment. For this reason, this approach needs resources that scale directly with the size of the network, a feature distinguishing it from local algorithms.
In the next subsections we first introduce the Newscast model of computation, then propose some primitives for distributed data mining within this model, and finally draw some conclusions.
4.1 The Newscast model (an overview)
Newscast [3] is a gossip-based topology manager protocol. Its aim is to continuously rewire the (logical) connections between hosts. The rewiring process is designed in such a way that the resulting overlay is very close to a random graph. The generated topology is thus very stable and provides robust connectivity.
As in any large P2P system, a node only knows about a small fixed set of other nodes, called neighbors (due to scalability constraints). In Newscast, the neighborhood is represented by a partial view of fixed size c, a list of node descriptors each composed of a node address and a logical time-stamp (e.g., the descriptor creation time).
The protocol performs the following actions: it first selects a neighbor from the local view, exchanges the view with that neighbor, and then both participants update their view according to the received one. The data actually sent over the network by any Newscast node is its own descriptor plus its local view.
In Newscast, neighbor selection is performed at random by the SELECTPEER() method. The UPDATE() method is the Newscast core behavior: it merges a received view (sent by a node using SENDSTATE()) with the current peer view in a temporary view list. Finally, Newscast trims this list to obtain the new c-size view. The discarded node descriptors are chosen among the "oldest" ones, according to the descriptor time-stamp. This approach continuously changes the node descriptors held in each node's view, which implies a continuous rewiring of the graph defined by the set of all node views.
The protocol always tends to inject new information into the system and allows an automatic elimination of old node descriptors through this aging approach. This feature is particularly desirable for removing crashed node descriptors, and thus for repairing the overlay with little effort.
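The SELECTPEER()/UPDATE() mechanics just described can be sketched as follows. Descriptors are modeled as (address, timestamp) pairs and the view size c = 5 is an arbitrary choice; this is an illustrative sketch, not the reference implementation of [3]:

```python
import random

C = 5  # view size; an illustrative choice, real deployments tune this

def newscast_update(own_descriptor, own_view, received_view):
    """UPDATE(): merge the received view with the local one (keeping
    the freshest copy of any duplicated address), then trim to the C
    freshest descriptors by time-stamp, discarding the oldest."""
    merged = {}
    for addr, ts in own_view + received_view + [own_descriptor]:
        if addr not in merged or ts > merged[addr]:
            merged[addr] = ts
    freshest = sorted(merged.items(), key=lambda kv: kv[1], reverse=True)
    return freshest[:C]

def select_peer(view):
    """SELECTPEER(): uniform random choice from the local view."""
    return random.choice(view)
```

Because trimming always discards the oldest time-stamps, descriptors of crashed nodes (which stop being refreshed) age out of all views automatically, which is the repair behavior described above.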
Newscast is also cheap in terms of network communication. The traffic generated by the protocol
involves the exchange of a few hundred bytes per cycle for each peer.
Figure 1: Conceptual model of the collective of agents and the news agency (from [4]).
At a higher level, the model is based on two main concepts: the collective of agents and the news agency (see Figure 1). The agents communicate through the news agency. Although the news agency plays the role of a server orchestrating the communication schedule, it is a virtual entity implemented as a fully distributed P2P solution. The communication schedule is organized into cycles, and in each cycle the news agency collects exactly one news item from every agent. At the same time it prepares for each agent a randomly chosen set of news items from the previous cycle and delivers these sets to the agents. In the next subsection we present an example of averaging primitives implemented within the model.
4.2 Basic Statistics
The ability to calculate the mean is central to implementing basic data mining algorithms in Newscast. As said, the Newscast communication schedule is organized into cycles, so what we want is an algorithm able to calculate, in a few cycles, the average of the values held by the nodes of the Newscast network. In this section we present three averaging algorithms developed on Newscast.
4.2.1 Basic Averaging (BA)
The Basic Averaging algorithm [4] is the simplest way to achieve this. During the first cycle each agent publishes its value, so that the news agency gets a copy of all the values to be averaged. From then on, whenever agents receive news they compute the average of the received values and publish it. An important observation must be made: in every cycle the news agency receives a set of values that on average has the same mean as the original set, but whose variance gets smaller and smaller with the number of cycles. This is the most important property of the algorithm. The experiments performed show that after k iterations of the "averaging operations" the variance drops exponentially in k from its initial value.
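The behavior of BA is easy to observe in a small simulation. Here the news agency is modeled implicitly: in each cycle every agent receives a random sample of the previous cycle's published values and publishes their mean. The fanout of 5 items per agent is an illustrative assumption, not a parameter of [4]:

```python
import random
import statistics

def basic_averaging(values, cycles, fanout=5, seed=0):
    """Simulate Basic Averaging: in each cycle every agent receives
    `fanout` randomly chosen news items from the previous cycle and
    publishes their mean."""
    rng = random.Random(seed)
    current = list(values)
    for _ in range(cycles):
        current = [statistics.mean(rng.sample(current, fanout))
                   for _ in current]
    return current

values = [0.0] * 50 + [1.0] * 50       # the "half-half" data set
result = basic_averaging(values, cycles=20)
```

Running this shows the key property described above: the published values keep a mean close to the original 0.5 while their variance collapses toward zero, although the randomness introduces a small drift away from the exact true mean.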
As already said, this is probably the simplest averaging algorithm conceivable on the Newscast model, but the price of its simplicity is a lack of adaptation. If we think of a network where nodes leave and join, change their values, and can temporarily or permanently crash, BA is not able to cope with these dynamics. To address this, the Systematic Averaging algorithm [4] is proposed (next section).
4.2.2 Systematic Averaging (SA)
The SA algorithm [4] achieves adaptation by constantly propagating the agents' current values and temporal averages through the news agency. Therefore, any change in the incoming data quickly affects the final result.
A small positive integer d is fixed and used to control the depth of the propagation process. The news items are vectors of d + 1 elements: the first element of a news item X is x0 and contains the agent's value (called the 0-order estimate of the mean); x1 is the mean of two 0-order estimates and is called a 1-order estimate of the mean; xd, the last element, is the average of two estimates of order d - 1 and is called an estimate of order d. In this way consecutive elements of X hold progressively better-mixed estimates, and the result of the propagation is represented by xd.
The SA algorithm also has the ability to reduce the variance, which decreases in an exponential way. Moreover, the system can react to changes in the input data within d iterations.
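One way to read the update rule above is the following sketch, in which an agent rebuilds its news item from its current value and one received item, pairing a local (k - 1)-order estimate with a received one at every level. The exact pairing of estimates in [4] may differ, so treat this as an illustration of the d-level propagation idea rather than the algorithm itself:

```python
def sa_update(received_item, value):
    """Rebuild a Systematic Averaging news item of d + 1 elements:
    element 0 is the agent's current value (the 0-order estimate) and
    element k is the mean of two (k - 1)-order estimates, one local
    and one received."""
    d = len(received_item) - 1
    item = [float(value)]
    for k in range(1, d + 1):
        item.append((item[k - 1] + received_item[k - 1]) / 2)
    return item
```

Because element 0 always carries the agent's fresh value, a change in the input data reaches the d-order estimate after d such updates, matching the reaction time stated above.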
4.2.3 Cumulative Averaging (CA)
The two algorithms we have just seen are able to reduce the variance very quickly but, due to the randomness characterizing the Newscast engine, their output value can differ from the true mean.
This problem is solved by the CA algorithm [4]. It runs two processes in parallel: in the first, each agent updates its estimate of the mean of the incoming data, while in the second the mean of these estimates is collectively calculated by a BA procedure.
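The first of the two processes (each agent's own estimate of the mean of its incoming data) can be sketched as an incremental running mean; the class and method names are illustrative, not the notation of [4]:

```python
class CumulativeAgent:
    """One Cumulative Averaging agent: it maintains a running mean of
    the data values it observes (the first process); a BA procedure,
    as in Section 4.2.1, would then be run collectively over these
    per-agent estimates (the second process)."""
    def __init__(self):
        self.count = 0
        self.estimate = 0.0

    def observe(self, value):
        # incremental update of the running mean, avoiding storing
        # the whole history of observed values
        self.count += 1
        self.estimate += (value - self.estimate) / self.count
        return self.estimate
```

Since every agent's estimate converges to the mean of the data it actually sees, averaging these estimates with BA removes the residual random deviation that BA alone would leave.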
4.3 Experiment configurations and results
The experiments described here [4] relate to the three averaging algorithms just mentioned. They are based on tests with different network sizes (from 10000 to 50000 nodes) and different data sets; for each configuration 100 independent runs were executed. Three different kinds of data sets were used: Gaussian, where the value of each agent was drawn independently from a Gaussian distribution; half-half, where half of the agents had value 0 and the other half 1; and peak, where all agents but one held the value 0.
With respect to convergence rate, the BA algorithm was the fastest (20-30 iterations), SA was slower (50 iterations) and CA was the slowest (100 iterations). With respect to accuracy the situation was inverted: BA was worst, SA better and CA best. The deviation from the true mean depends on the distribution used. The peak distribution is meant to show the "true power of the averaging algorithms"; indeed, from the results with this distribution, the mean and variance with BA are 0.935 and 0.656 respectively, while with SA they are 0.98 and 0.265.
4.4 Applications
As already seen with the primitives calculated by local algorithms, the averaging primitives implemented through the Newscast model can be used in several data mining tasks, for example in classification techniques. In [4] an example is reported in which the above-mentioned averaging algorithms, with a small modification, are used for finding the Naive Bayes classifier for data that are arbitrarily distributed among the nodes of a P2P network.
Kowalczyk et al. implemented the Naive Bayes algorithm using BA as the base averaging method. As they had to maintain estimates of several means, they represented news items by vectors of the same length as the number of means and ran BA on all coordinates at the same time. They then tested the performance of the algorithm in several experiments and compared the results with a classical centralized Naive Bayes algorithm. In most cases, although the model parameters were slightly different, no difference in the classification rate was found.
Most statistics used by other classification algorithms are defined in terms of ratios (or probabilities) of the same form as described above; consequently, they too can be implemented within the Newscast framework.
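The idea of running BA on all coordinates at once can be sketched as follows. The helper `vector_ba_round` and the count-vector layout are our own illustrative choices, not the code of [4]: each node's news item is a vector of counts, and because averaging scales every coordinate by the same factor 1/N, the ratios (conditional probabilities) recovered at each node match the centralized ones.

```python
import random

def vector_ba_round(vectors):
    # One BA round run on all coordinates at once: each agent picks a
    # random peer and both adopt the coordinate-wise pairwise average.
    n = len(vectors)
    for i in range(n):
        j = random.randrange(n)
        avg = [(a + b) / 2.0 for a, b in zip(vectors[i], vectors[j])]
        vectors[i] = list(avg)
        vectors[j] = list(avg)

random.seed(2)
N = 50
# Each node holds ten (feature, class) observations with one binary
# feature; its news item is the count vector
# [count(c=0), count(c=1), count(f=1, c=0), count(f=1, c=1)].
counts = []
pooled = [0.0] * 4
for _ in range(N):
    v = [0.0] * 4
    for _ in range(10):
        f, c = random.randint(0, 1), random.randint(0, 1)
        v[c] += 1
        v[2 + c] += f
    counts.append(v)
    pooled = [p + x for p, x in zip(pooled, v)]

central = pooled[3] / pooled[1]     # P(f=1 | c=1) from pooled counts

for _ in range(40):
    vector_ba_round(counts)

# Every node now recovers the centralized conditional probability from
# its own averaged counts, since the common 1/N factor cancels out.
local = counts[0][3] / counts[0][1]
```

This cancellation of the common scaling factor is exactly why statistics defined as ratios of counts fit naturally into the averaging framework.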
5. CONCLUDING DISCUSSION
In this paper we have briefly described two interesting approaches to data mining in peer-to-peer systems. The first one, from Datta et al. [1], is based on the concept of local algorithms, and the second one, from Kowalczyk et al. [4], uses the Newscast model of computation.
Both groups of authors aim to supply techniques for calculating primitives for P2P networks, primitives that form the basis of more complicated data mining algorithms. Both approaches are very interesting, even though they differ in some aspects.
Calculating primitives through the Newscast model of computation proved to be a winning strategy. The main task the authors wanted to address was finding a model for data spread over a number of agents; this was achieved through Newscast, which is based on an epidemic protocol for disseminating information and managing group membership. The fact that the model rests on such a robust and highly scalable protocol speaks for the model itself, which inherits all these good features.
One important thing the two approaches have in common is that in both cases peers communicate only with their immediate neighbors; the second one (Newscast), however, does this in an epidemic-style manner, so that results spread very quickly and all agents hear about the final solution in a short amount of time.
An important difference between the Newscast model and local algorithms, which follows from what we have just said, is that in the Newscast model termination is reached once all agents have heard about the final result: although there is no signal informing the agents that the result has been found, the theory of epidemic algorithms (which work as a broadcasting mechanism) guarantees that all agents will hear about the final solution very quickly. Local algorithms, instead, terminate once a certain threshold is met: when an agent reaches a user-defined threshold (in P2P K-means clustering it relates to the change in position of the centroids), it enters the terminated state; once all agents have reached that state, the whole process stops.
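The threshold test for the local-algorithm case can be sketched as follows. `Agent` and `check_termination` are hypothetical names we introduce for illustration; this is only the termination criterion described above, not the actual P2P K-means code of [2].

```python
import math
from dataclasses import dataclass

@dataclass
class Agent:
    centroids: list       # centroid positions after the current round
    old_centroids: list   # centroid positions after the previous round
    terminated: bool = False

def check_termination(agent, threshold):
    # The agent enters the terminated state once every centroid has
    # moved less than the user-defined threshold since the last round.
    movement = max(
        math.dist(old, new)
        for old, new in zip(agent.old_centroids, agent.centroids)
    )
    agent.terminated = movement < threshold
    return agent.terminated

# Centroids moved by at most 0.05, which is below a threshold of 0.1,
# so this agent terminates.
a = Agent(centroids=[(0.0, 0.0), (1.0, 1.0)],
          old_centroids=[(0.05, 0.0), (1.0, 1.02)])
check_termination(a, threshold=0.1)

# This agent's centroid moved by 1.0, so it keeps running.
b = Agent(centroids=[(0.0, 0.0)], old_centroids=[(1.0, 0.0)])
check_termination(b, threshold=0.1)
```

The whole computation stops only when every agent in the network has entered the terminated state.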
Another main difference between the two approaches is that the Newscast model requires resources that scale directly with the size of the network, so the resources required by the algorithm depend on the size of the system. Despite this, the model proves very scalable and robust. Local algorithms, instead, compute their results using information from a handful of nearby neighbors, which leads to a good level of scalability too. It has also been proved that they are very good at adjusting locally to failures and changes in the input (see Section 3.1). Even with the Newscast model it is possible to achieve these properties: we have seen the Systematic Average algorithm, which is able to adjust on-the-fly to changes in the values of the agents, and we have also seen that the tendency of the protocol to insert new information into the system, automatically eliminating old node descriptors, is particularly desirable for removing crashed node descriptors and thus repairing the overlay with little effort.
In light of this, we cannot say that one of the two approaches is better than the other. We can certainly state (based on the provided results) that both are able to meet the requirements of peer-to-peer networks: they are highly scalable, robust to node crashes and data set changes, decentralized and asynchronous. Once applied to real P2P data mining algorithms such as K-means clustering and Naive Bayes classification, they have also shown a good level of accuracy and convergence to the results obtained with traditional centralized techniques.
In spite of this, data analysis in P2P systems still offers many challenges for researchers. The experiments on the primitives and on the distributed data mining algorithms described in this report have shown good results, but they come from simulations run on P2P network testbeds, hence we have "no mathematical proofs" of their absolute validity. As future work, it would be interesting, and challenging at the same time, to test these approaches on a platform like PlanetLab (a reliable testbed for overlay networks)5, or, even better, on a real Peer-to-Peer Overlay Network.
REFERENCES
[1] Datta, S., Bhaduri, K., Giannella, C., Wolff, R. (2005) Distributed Data Mining in Peer-to-Peer Networks. Invited submission to the IEEE Internet Computing special issue on Distributed Data Mining.
[2] Datta, S., Giannella, C., Kargupta, H. (2006) K-Means Clustering Over a Large, Dynamic Network. Accepted paper at the SIAM 2006 Data Mining Conference.
[3] Jelasity, M., van Steen, M. (2002) Large-Scale Newscast Computing on the Internet. Internal Report IR-503, Vrije Universiteit Amsterdam, Department of Computer Science, Amsterdam, The Netherlands.
[4] Kowalczyk, W., Jelasity, M., Eiben, A. (2003) Towards Data Mining in Large and Fully Distributed Peer-to-Peer Overlay Networks. Technical Report IR-AI-003, Vrije Universiteit Amsterdam, Department of Computer Science, Amsterdam, The Netherlands.
_____________
5 http://www.planet-lab.org, Last visited March 2016