Peer-to-peer swarming protocols have proven very efficient for content replication over the Internet. This has motivated proposals to adapt these protocols to the requirements of on-demand streaming systems. The vast majority of these proposals focus on modifying the piece and peer selection policies of the original protocols, with more attention typically given to the piece selection policy than to the peer selection policy. Within this context, this article proposes a simple algorithm to serve as the basis for peer selection policies of BitTorrent-like protocols in interactive scenarios. To this end, we analyze the client's interactive behaviour when accessing real multimedia systems. This analysis examines workloads of real content providers and assesses three important metrics: temporal dispersion, spatial dispersion, and object position popularity. These metrics then serve as the main guidelines for designing the algorithm. To the best of our knowledge, this is the first time that the client's interactive behaviour has been specifically considered to derive an algorithm for peer selection policies. Finally, the article concludes with key challenges and possible future work in this research field.
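The three workload metrics above can be approximated with a short script. The sketch below is illustrative only: the request-tuple layout (client, object position, timestamp) and the use of a population standard deviation for temporal dispersion are assumptions, not the article's own definitions.

```python
from collections import Counter
from statistics import pstdev

def position_popularity(requests):
    """Object position popularity: how often each segment index is requested."""
    return Counter(pos for _, pos, _ in requests)

def temporal_dispersion(requests):
    """Per-position spread of request times (std-dev): how dispersed in
    time the accesses to each segment are."""
    times = {}
    for _, pos, t in requests:
        times.setdefault(pos, []).append(t)
    return {pos: pstdev(ts) if len(ts) > 1 else 0.0 for pos, ts in times.items()}

# toy workload: (client, position, timestamp)
reqs = [("a", 0, 1), ("b", 0, 50), ("a", 3, 2), ("c", 0, 99)]
pop = position_popularity(reqs)   # position 0 is requested 3 times
disp = temporal_dispersion(reqs)  # position 3 has a single request, so 0.0
```

A skewed `pop` distribution, for instance, would suggest prioritising peers holding the popular initial segments.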
IMPROVING BITTORRENT’S PEER SELECTION FOR MULTIMEDIA CONTENT ON-DEMAND DELIVERY (IJCNCJournal)
The great efficiency achieved by the BitTorrent protocol for the distribution of large amounts of data inspired its adoption for multimedia content on-demand delivery over the Internet. As it was not designed for this purpose, some adjustments have been proposed in order to meet the related QoS requirements, such as low startup delay and smooth playback continuity. Accordingly, this paper introduces a BitTorrent-like proposal named Quota-Based Peer Selection (QBPS). This proposal is mainly based on an adaptation of the original peer-selection policy of the BitTorrent protocol. It is validated by means of simulations and competitive analysis. The final results show that QBPS outperforms other recent proposals from the literature. For instance, it achieves a throughput optimization of up to 48.0% in low-provision-capacity scenarios where users are very interactive.
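The abstract does not detail how QBPS computes its quotas, so the following is only a hypothetical sketch of one quota-based allocation scheme: an uploader's capacity is split into per-peer quotas, with capacity unused by light requesters redistributed to heavier ones. The function name and the redistribution rule are assumptions, not the paper's algorithm.

```python
def allocate_quotas(capacity, demands):
    """Split upload capacity into per-peer quotas (hypothetical sketch).
    demands: {peer: requested bandwidth}. Each round grants every active
    peer an equal share capped by its demand; leftovers roll over."""
    quotas = {p: 0.0 for p in demands}
    remaining = dict(demands)
    left = float(capacity)
    while left > 1e-9 and remaining:
        share = left / len(remaining)
        for p in list(remaining):
            g = min(remaining[p], share)
            quotas[p] += g
            remaining[p] -= g
            left -= g
            if remaining[p] <= 1e-9:
                del remaining[p]  # demand fully satisfied
    return quotas

q = allocate_quotas(10, {"a": 2, "b": 20})  # "a" is satisfied, "b" absorbs the rest
```

The point of a quota, as opposed to BitTorrent's winner-take-most unchoking, is that no single interactive peer can monopolise an uploader.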
EFFECTIVE TOPOLOGY-AWARE PEER SELECTION IN UNSTRUCTURED PEER-TO-PEER SYSTEMS (ijp2p)
Peer-to-Peer systems form logical overlay networks on top of the Internet. Essentially, peers randomly choose logical neighbours without any knowledge of the underlying physical topology, which may cause inefficient communication among peers. This topology mismatch problem may result in poor performance and scalability for Peer-to-Peer systems. A possible way to improve performance is to construct the overlay network based on knowledge of the physical network topology. In this paper, we propose using the “Record Route” and “Timestamp” options of the IP protocol to explore the paths between peers. Through topology-aware peer selection, our approach outperforms traditional P2P systems that use random peer selection. Our approach incurs only low overhead and can be deployed easily in various P2P systems.
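Whatever the probing mechanism (Record Route, Timestamp, or plain RTT pings), the selection step reduces to ranking candidates by measured distance instead of choosing at random. A minimal sketch, where the per-peer RTT map is assumed to come from such probes:

```python
import random

def topology_aware_select(candidates, rtt_ms, k):
    """Prefer the k candidates with lowest measured RTT; fall back to a
    random pick for any candidate lacking a measurement, mimicking the
    traditional random selection only where no topology data exists."""
    measured = [p for p in candidates if p in rtt_ms]
    unmeasured = [p for p in candidates if p not in rtt_ms]
    chosen = sorted(measured, key=lambda p: rtt_ms[p])[:k]
    while len(chosen) < k and unmeasured:
        chosen.append(unmeasured.pop(random.randrange(len(unmeasured))))
    return chosen
```

For example, with measurements `{"a": 30, "b": 5, "c": 80}` and `k=2`, the nearest peers `b` and `a` are selected.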
P2P DOMAIN CLASSIFICATION USING DECISION TREE (ijp2p)
The increasing interest in Peer-to-Peer systems (such as Gnutella) has inspired many research activities in this area. Although many demonstrations have shown that the performance of a Peer-to-Peer system is highly dependent on the underlying network characteristics, much of the evaluation of Peer-to-Peer proposals has used simplified models that fail to include a detailed model of the underlying network. This can largely be attributed to the complexity of experimenting with a scalable Peer-to-Peer system simulator built on top of a scalable network simulator. A major problem of unstructured P2P systems is their heavy network traffic; a challenging problem in the Peer-to-Peer context is how to find the appropriate peer to handle a given query without consuming excessive bandwidth. Different methods have proposed query routing strategies tailored to the P2P network at hand. This paper considers an unstructured P2P system in which peers are organized around Super-Peers that are connected to Super-Super-Peers according to their semantic domains, and integrates Decision Trees into the P2P architecture to produce Query-Suitable Super-Peers, each representing a community of peers among which one is able to answer a given query. By analyzing the query log file, a predictive model that avoids flooding queries into the P2P network is constructed by predicting the appropriate Super-Peer, and hence the peer, to answer the query. A challenging problem in a schema-based Peer-to-Peer (P2P) system is how to locate peers that are relevant to a given query. In this paper, an architecture based on (Super-)Peers is proposed, focusing on query routing. The approach groups together (Super-)Peers that have similar interests for an efficient query routing method. In such groups, called Super-Super-Peers (SSP), Super-Peers submit queries that are often processed by members of the group. An SSP is a specific Super-Peer which contains knowledge about: 1. its Super-Peers and 2. the other SSPs. Knowledge is extracted using data mining techniques (e.g. Decision Tree algorithms) from the queries of peers that transit the network. The advantage of this distributed knowledge is that it avoids semantic mapping between the heterogeneous data sources owned by (Super-)Peers each time the system decides to route a query to other (Super-)Peers. The set of SSPs improves the robustness of the query routing mechanism and the scalability of the P2P network. Compared with a baseline approach, the proposed architecture shows the effect of the data mining, with better performance in terms of response time and precision.
Hybrid Periodical Flooding in Unstructured Peer-to-Peer Networks (Zhenyun Zhuang)
This document proposes a new search mechanism called Hybrid Periodical Flooding (HPF) for unstructured peer-to-peer networks. HPF aims to reduce the unnecessary traffic caused by blind flooding while also addressing the "partial coverage problem" of some statistics-based search mechanisms. It introduces the concept of Periodical Flooding (PF), which controls the number of neighbors a query is forwarded to based on the time-to-live value, allowing the forwarding behavior to change periodically over the query's lifetime. HPF then combines PF with weighted selection of neighbors based on multiple metrics, guiding queries towards potentially relevant results while exploring more of the network.
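The two halves of HPF described above can be sketched as follows. The fan-out schedule and the hits-per-latency weighting are illustrative assumptions; the paper's actual metrics and periods may differ.

```python
def pf_fanout(hop, num_neighbours, schedule=(1.0, 0.6, 0.3)):
    """Periodical Flooding half: the fraction of neighbours to forward to
    varies periodically with the hop count (derived from the TTL).
    The schedule values here are an illustrative assumption."""
    frac = schedule[hop % len(schedule)]
    return max(1, int(num_neighbours * frac))

def weighted_neighbours(stats, n):
    """Hybrid half: rank neighbours by a combined metric (here, past
    query hits per unit latency, an assumed weighting) and take the top n.
    Assumes every neighbour has a nonzero recorded latency."""
    ranked = sorted(stats, key=lambda p: stats[p]["hits"] / stats[p]["latency"],
                    reverse=True)
    return ranked[:n]

stats = {"a": {"hits": 4, "latency": 2}, "b": {"hits": 9, "latency": 3}}
```

At hop 0 a query floods all neighbours; by hop 2 only the best-scoring ~30% are used, which is what bounds the redundant traffic.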
A Distributed Approach to Solving the Overlay Mismatching Problem (Zhenyun Zhuang)
This document proposes an algorithm called Adaptive Connection Establishment (ACE) to address the topology mismatch between the logical overlay network and the physical underlying network in unstructured peer-to-peer systems. ACE builds a minimum spanning tree between each source node and its neighbors within a certain diameter, then optimizes connections not on the tree to reduce redundant traffic while retaining the search scope. It evaluates the trade-off between topology optimization and information-exchange overhead by varying the diameter. Simulation results show that ACE can significantly reduce unnecessary P2P traffic by efficiently matching the overlay to the physical network topology.
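The tree-building step is a standard minimum spanning tree over measured link costs. A minimal Prim's-algorithm sketch, assuming `cost` holds symmetric measured delays between a source and its nearby neighbours (ACE's actual probing and edge-dropping logic is more involved):

```python
def prim_mst(nodes, cost):
    """Minimum spanning tree (Prim) over a source node and its neighbours;
    cost[(u, v)] is the measured link delay. An ACE-like scheme would keep
    the tree edges and try to drop overlay connections off the tree."""
    def c(u, v):
        return cost.get((u, v), cost.get((v, u), float("inf")))
    in_tree = {nodes[0]}
    edges = []
    while len(in_tree) < len(nodes):
        # cheapest edge crossing the cut between tree and non-tree nodes
        u, v = min(((u, v) for u in in_tree for v in nodes if v not in in_tree),
                   key=lambda e: c(*e))
        edges.append((u, v))
        in_tree.add(v)
    return edges
```

With delays `s-a: 1`, `s-b: 5`, `a-b: 2`, the direct `s-b` link is off the tree and becomes a candidate for removal.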
A QUERY LEARNING ROUTING APPROACH BASED ON SEMANTIC CLUSTERS (ijait)
Peer-to-peer systems have recently achieved remarkable success in social, academic, and commercial communities. A fundamental problem in Peer-to-Peer systems is how to efficiently locate the appropriate peers to answer a specific query (the Query Routing Problem). Many approaches have been proposed to enhance search result quality and to reduce network overhead. Recently, research has focused on methods based on query-oriented routing indices. These methods use the historical information of past queries and query hits to build a local knowledge base per peer, which represents the user's interests or profile. When a peer forwards a given query, it evaluates the query against its local knowledge base in order to select a set of relevant peers to which the query will be routed. Often, an insufficient number of relevant peers can be selected from the current peer's local knowledge base, so a broadcast search is triggered, which badly affects efficiency. To tackle this problem, we introduce a novel method that clusters peers with similar interests. It exploits not only the current peer's knowledge base but also those of the other peers in the cluster to extract relevant peers. We implemented the proposed approach and tested (i) its retrieval effectiveness in terms of recall and precision, and (ii) its search cost in terms of message traffic and number of visited peers. Experimental results show that our approach improves recall and precision while dramatically reducing message traffic.
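The routing step above can be sketched as term-overlap scoring against per-peer knowledge bases, widening to cluster members' knowledge bases before resorting to broadcast. The term-set representation and overlap score are simplifying assumptions, not the paper's index structure.

```python
def score_peers(knowledge, query_terms):
    """Score each known peer by overlap between the query's terms and the
    terms of past queries that peer answered (the local knowledge base)."""
    q = set(query_terms)
    return {peer: len(q & terms) for peer, terms in knowledge.items()}

def route(local_kb, cluster_kbs, query_terms, min_peers=2):
    """Pick relevant peers from the local knowledge base; if too few match,
    consult cluster members' knowledge bases instead of broadcasting."""
    scores = score_peers(local_kb, query_terms)
    relevant = [p for p, s in scores.items() if s > 0]
    for kb in cluster_kbs:
        if len(relevant) >= min_peers:
            break
        extra = score_peers(kb, query_terms)
        relevant += [p for p, s in extra.items() if s > 0 and p not in relevant]
    return relevant

local = {"p1": {"jazz", "blues"}, "p3": {"cars"}}
cluster = [{"p2": {"jazz", "rock"}}]
```

Here a `"jazz"` query finds only `p1` locally, then borrows `p2` from a cluster member's knowledge base rather than flooding.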
A Survey of Free-Riding Behaviour in Peer-to-Peer Networks (ijiert)
This paper presents an extensive survey of P2P file-sharing networks and their protocols, and highlights one of the most prominent issues in such networks: free riding. Free riding is an important issue for any peer-to-peer system. An extensive analysis of user traffic on Gnutella shows a significant amount of free riding in the system. Researchers have analysed free riding extensively and reported various results: nearly 70% of Gnutella users share no files, and nearly 50% of all responses are returned by the top 1% of sharing hosts. This paper presents the BitTorrent protocol, which can overcome free riding to a great extent, along with several other methodologies, while noting that BitTorrent itself is known to remain vulnerable to selfish behaviour in the network.
A Study of Index Poisoning in Peer-to-Peer (IJCI Journal)
P2P file sharing systems are the most popular form of file sharing to date. Their distributed architecture attains faster file transfers; however, their peer anonymity and lack of authentication have made them a gold mine for malicious attacks. One of the leading sources of disruption in P2P file sharing systems is the index poisoning attack, which seeks to corrupt, with false data, the indexes used to reference files available for download. To protect users from these attacks it is important to find solutions that eliminate or mitigate their effects. This paper analyzes index poisoning attacks, their uses, and the solutions proposed to defend against them.
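One widely used mitigation (not necessarily one of the specific solutions this paper surveys) is content-hash verification: the index entry advertises a digest, and the client discards downloads whose bytes do not match it, so poisoned index entries yield no usable results.

```python
import hashlib

def verify_index_entry(entry, downloaded_bytes):
    """Recompute the digest of the downloaded bytes and compare it with
    the hash advertised in the index entry; a mismatch marks the entry
    as poisoned (or the download as corrupted)."""
    return hashlib.sha256(downloaded_bytes).hexdigest() == entry["sha256"]

good = {"sha256": hashlib.sha256(b"movie-bytes").hexdigest()}
```

This pushes trust from the (forgeable) index onto the hash itself, which is why hash-keyed systems like BitTorrent resist index poisoning better than keyword-indexed ones.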
SECURITY CONSIDERATION IN PEER-TO-PEER NETWORKS WITH A CASE STUDY APPLICATION (IJNSA Journal)
The wide adoption of Peer-to-Peer (P2P) overlay networks has also created vast dangers, because millions of users are not conversant with the potential security risks. The lack of centralized control creates great risks for P2P systems, mainly due to the inability to implement proper authentication approaches for threat management. The best available solutions include encryption, administrative controls, cryptographic protocols, and avoiding personal file sharing and unauthorized downloads. Recently, a new non-DHT-based structured P2P system, built on a Linear Diophantine Equation (LDE) [1], has proved very suitable for designing secure communication protocols. P2P architectures based on this approach offer simplified methods to integrate symmetric and asymmetric cryptographic solutions into the architecture without relying on Transport Layer Security (TLS) or its predecessor, the Secure Sockets Layer (SSL) protocol.
A Proximity-Aware Interest-Clustered P2P File Sharing System (1crore projects)
This document summarizes a thesis on developing a distributed architecture for web browsing troubleshooting. The architecture integrates network measurement and analysis techniques through real-time data sharing across peers. It applies a dynamic distributed K-means clustering algorithm to partition browsing data from different locations into clusters representing performance issues. Experiments using the architecture identified local and non-local network problems affecting different peers. Future work could include testing with more peers, optimizing resource usage, and developing a user-friendly interface.
ANALYSIS STUDY ON CACHING AND REPLICA PLACEMENT ALGORITHM FOR CONTENT DISTRIB... (ijp2p)
Recently there has been significant research focus on caching and replica placement problems for content distribution in globally distributed computing networks. A Peer-to-Peer (P2P) network provides dynamically decentralized, self-organized, and scalable object management in a distributed computing system. However, such networks suffer from high latency, heavy network traffic, and cache update problems. The existing caching and replica placement techniques for placing objects across a peer-to-peer network offer no complete solution to these problems. This paper presents an overview of the current challenges in P2P overlay networks, followed by a brief analysis of the existing algorithms and their merits and demerits. It also suggests a new popularity-based, QoS-aware (Quality of Service) smart replica placement algorithm for content distribution in peer-to-peer overlay networks, which overcomes the access latency, fault tolerance, network traffic, and redundancy problems at low cost. The suggested algorithm is based on the outcome of the analysis study.
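A popularity-based placement can be sketched as follows: replicate the most-requested objects first, assigning each replica to the node with the most spare capacity. This is a generic illustration of the idea, not the paper's QoS-aware algorithm, and the greedy capacity rule is an assumption.

```python
def place_replicas(access_counts, node_capacity, num_replicas):
    """Greedy popularity-based placement sketch: objects are processed in
    descending request-count order; each gets up to num_replicas copies on
    the distinct nodes with the most free capacity."""
    free = dict(node_capacity)
    placement = {}
    for obj, _ in sorted(access_counts.items(), key=lambda kv: kv[1], reverse=True):
        ranked = sorted(free, key=free.get, reverse=True)
        targets = [n for n in ranked if free[n] > 0][:num_replicas]
        for n in targets:
            free[n] -= 1  # one capacity unit per stored replica
        placement[obj] = targets
    return placement

placement = place_replicas({"x": 10, "y": 1}, {"A": 2, "B": 1}, 2)
```

Popular object `x` gets both replicas; the unpopular `y` receives only what capacity remains, reducing access latency where most requests actually land.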
Textual based retrieval system with Bloom in unstructured Peer-to-Peer networks (Uvaraj Shan)
This document summarizes a research article about a textual retrieval system using Bloom filters in unstructured peer-to-peer networks. It discusses how Bloom Cast replicates document content across the network using Bloom filters to encode documents. This allows for efficient full-text searches with guaranteed recall rates while reducing communication costs compared to replicating raw documents. The system samples nodes randomly using a lightweight distributed hash table to support searches in an unstructured P2P network where the network size is unknown.
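The space saving described above comes from the Bloom filter itself: a compact bit array that admits false positives but never false negatives, so recall on encoded documents is preserved. A minimal self-contained implementation (parameters `m` and `k` are illustrative):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hashed bit positions per term over an
    m-bit array. Membership tests have no false negatives, only rare
    false positives, which is the trade that lets document content be
    replicated far more cheaply than the raw text."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, term):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{term}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, term):
        for p in self._positions(term):
            self.bits |= 1 << p

    def __contains__(self, term):
        return all(self.bits >> p & 1 for p in self._positions(term))

bf = BloomFilter()
bf.add("distributed")
```

A node can now answer "might this document contain the term?" from the 1024-bit filter alone, contacting the holder only on a hit.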
Maximizing P2P File Access Availability in Mobile Ad Hoc Networks Through Replication for Efficient File Sharing (LeMeniz Infotech)
Heterogeneous Device-to-Device (D2D) mobile networks are characterised by frequent network disruption and the unreliability of peers delivering messages to destinations. Trust-based protocols have been widely used to mitigate the security and performance problems in D2D networks. Despite several efforts by previous researchers in the design of trust-based routing for efficient collaborative networks, few related studies focus on peers' neighbourhoods as a routing-metric element for a secure and efficient trust-based protocol. In this paper, we propose and validate a trust-based protocol that takes into account the similarity of peers' neighbourhood coefficients to improve routing performance in mobile HetNet environments. The results of this study demonstrate that peers' neighbourhood connectivity in the network is a characteristic that can influence peers' routing performance. Furthermore, our analysis shows that our proposed protocol only forwards messages to companions with a higher probability of delivering the packets, thus improving the delivery ratio, minimising latency, and mitigating the problem of malicious peers (using a packet-dropping strategy).
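One plausible reading of the "neighbourhood coefficient" similarity is a Jaccard overlap of neighbour sets, combined with a trust threshold to exclude suspected droppers. The sketch below is an illustration under that assumption, not the paper's exact metric.

```python
def neighbourhood_similarity(a_neigh, b_neigh):
    """Jaccard coefficient of two peers' neighbour sets: 1.0 for
    identical neighbourhoods, 0.0 for disjoint ones."""
    a, b = set(a_neigh), set(b_neigh)
    return len(a & b) / len(a | b) if a | b else 0.0

def choose_relay(my_neigh, candidates, trust, min_trust=0.5):
    """Forward only to candidates above the trust threshold, preferring
    the one whose neighbourhood most overlaps ours (a better-connected
    companion is likelier to deliver the packet)."""
    ok = [c for c in candidates if trust.get(c, 0) >= min_trust]
    return max(ok, key=lambda c: neighbourhood_similarity(my_neigh, candidates[c]),
               default=None)

neigh = {"p": {"x", "y"}, "q": {"w"}}  # candidate relays and their neighbour sets
```

With our own neighbours `{x, y, z}`, well-connected `p` wins while both are trusted; if `p`'s trust drops below the threshold, the message falls back to `q`.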
Trust Based Content Distribution for Peer-to-Peer Overlay Networks (IJNSA Journal)
In peer-to-peer content distribution, the lack of a central authority makes authentication difficult. Without authentication, adversary nodes can spoof identities and falsify messages in the overlay, enabling malicious nodes to launch man-in-the-middle or denial-of-service attacks. In this paper, we present a trust-based content distribution scheme for peer-to-peer overlay networks, built on a trust management scheme. The main concept is that, before sending or accepting traffic, the trust of the peer must be validated. Based on the success of data delivery and the searching time, we calculate the trust index of a node. Peers whose aggregated trust index falls below a threshold value are considered distrusted, and the corresponding traffic is blocked. Simulation results show that our proposed scheme achieves an increased success ratio with reduced delay and packet drop.
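The trust-index idea above can be sketched as a weighted combination of delivery success rate and normalised searching time, with a threshold gate on traffic. The 0.7/0.3 weights, the normalisation, and the 0.5 threshold are all assumptions for illustration; the paper's aggregation formula is not given in the abstract.

```python
def trust_index(deliveries_ok, deliveries_total, search_time, max_search_time):
    """Illustrative trust index: weighted mix of delivery success rate
    and how fast searches through this peer complete (weights assumed)."""
    if deliveries_total == 0:
        return 0.0
    success = deliveries_ok / deliveries_total
    speed = max(0.0, 1.0 - search_time / max_search_time)
    return 0.7 * success + 0.3 * speed

def admit_traffic(peer_stats, threshold=0.5):
    """Block peers whose trust index falls below the threshold."""
    return {p: trust_index(*s) >= threshold for p, s in peer_stats.items()}

decisions = admit_traffic({"good": (10, 10, 0, 100), "bad": (1, 10, 90, 100)})
```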
A MALICIOUS USERS DETECTING MODEL BASED ON FEEDBACK CORRELATIONS (IJCNC)
Trust and reputation models were introduced to restrain the impact of rational but selfish peers in P2P streaming systems. However, these models face two major challenges: dishonest feedback and strategic altering of behaviour. To answer these challenges, we present a global trust model based on network community, evaluation correlations, and a punishment mechanism. We also propose a two-layered overlay to provide the functions of collecting peers' behaviours and detecting malicious ones. Furthermore, we analyse several security threats in P2P streaming systems and discuss how to defend against them with our trust mechanism. The simulation results show that our trust framework can successfully filter out dishonest feedback using correlation coefficients, and that it can effectively defend against the security threats while maintaining good load balance.
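The correlation-based filtering can be sketched with a plain Pearson coefficient: raters whose score vectors track the community consensus are kept, while dishonest feedback shows low or negative correlation. The consensus vector and the 0.5 cutoff are illustrative assumptions.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length score vectors."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def honest_raters(ratings, consensus, min_corr=0.5):
    """Keep raters whose feedback correlates with the community consensus;
    slandering or colluding raters fall below the cutoff."""
    return [r for r, scores in ratings.items() if pearson(scores, consensus) >= min_corr]

consensus = [1, 2, 3, 4]                               # aggregate community scores
ratings = {"honest": [1, 2, 3, 4], "dishonest": [4, 3, 2, 1]}
```

A rater who inverts the consensus (correlation -1.0) is filtered out even though every individual score it gave is in the valid range, which is exactly what per-score sanity checks cannot catch.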
ON THE OPTIMIZATION OF BITTORRENT-LIKE PROTOCOLS FOR INTERACTIVE ON-DEMAND ST...IJCNCJournal
This paper proposes two novel optimized BitTorrent-like protocols for interactive multimedia streaming: the Simple Interactive Streaming Protocol (SISP) and the Exclusive Interactive Streaming Protocol (EISP). The former chiefly seeks a trade-off between playback continuity and data diversity, while the latter is mostly focused on playback continuity. To assure a thorough and up-to-date approach, related work is carefully examined and important open issues concerning the design of BitTorrent-like algorithms are analyzed as well. Through simulations, in a variety of near-real file replication scenarios, the novel protocols are evaluated using distinct performance metrics. Among the major findings, the final results show that the two novel proposals are efficient and, moreover, that focusing on playback continuity is the best design concept for achieving high quality of service. Lastly, avenues for further research are included at the end of this paper.
A study of index poisoning in peer to peerIJCI JOURNAL
P2P file sharing systems are the most popular form of file sharing to date. Their decentralized architecture attains fast file transfers; however, peer anonymity and the lack of authentication have made them a gold mine for malicious attacks. One of the leading sources of disruption in P2P file sharing systems is the index poisoning attack, which seeks to corrupt the indexes used to reference files available for download with false data. To protect users from these attacks, it is important to find solutions that eliminate or mitigate their effects. This paper analyzes index poisoning attacks, their uses, and the solutions proposed to defend against them.
International Journal of Engineering Inventions (IJEI) provides a multidisciplinary passage for researchers, managers, professionals, practitioners and students around the globe to publish high quality, peer-reviewed articles on all theoretical and empirical aspects of Engineering and Science.
SECURITY CONSIDERATION IN PEER-TO-PEER NETWORKS WITH A CASE STUDY APPLICATIONIJNSA Journal
The wide adoption of Peer-to-Peer (P2P) overlay networks has also created vast dangers, since millions of users are not conversant with the potential security risks. The lack of centralized control creates great risks for P2P systems, mainly due to the inability to implement proper authentication approaches for threat management. The best available mitigations include encryption, administrative oversight, cryptographic protocols, and avoiding personal file sharing and unauthorized downloads. Recently, a new non-DHT-based structured P2P system, based on the Linear Diophantine Equation (LDE) [1], has proven very suitable for designing secure communication protocols. P2P architectures based on this approach offer simplified methods for integrating symmetric and asymmetric cryptographic solutions into the architecture without relying on Transport Layer Security (TLS) or its predecessor, Secure Sockets Layer (SSL).
A Proximity-Aware Interest-Clustered P2P File Sharing System 1crore projects
This document summarizes a thesis on developing a distributed architecture for web browsing troubleshooting. The architecture integrates network measurement and analysis techniques through real-time data sharing across peers. It applies a dynamic distributed K-means clustering algorithm to partition browsing data from different locations into clusters representing performance issues. Experiments using the architecture identified local and non-local network problems affecting different peers. Future work could include testing with more peers, optimizing resource usage, and developing a user-friendly interface.
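The clustering step described above can be illustrated with a minimal sketch. Assuming hypothetical one-dimensional page-load-time samples pooled from two peers (the samples, the two-cluster choice, and the plain non-distributed k-means are all illustrative, not the thesis's actual distributed algorithm), the pass separates healthy from degraded measurements:

```python
import random

def kmeans(points, k, iters=20, seed=42):
    """Plain (non-distributed) k-means over scalar samples."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each sample to its nearest centroid.
            nearest = min(range(k), key=lambda c: (p - centroids[c]) ** 2)
            clusters[nearest].append(p)
        # Recompute centroids; keep the old one if a cluster emptied.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical page-load times (ms): four healthy samples, four degraded ones.
samples = [120, 130, 125, 118, 900, 950, 880, 910]
centroids, clusters = kmeans(samples, k=2)
```

The low centroid then characterizes normal browsing performance and the high one flags a performance issue worth troubleshooting.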
ANALYSIS STUDY ON CACHING AND REPLICA PLACEMENT ALGORITHM FOR CONTENT DISTRIB...ijp2p
Recently there has been significant research focus on caching and replica placement problems for content distribution in globally distributed computing networks. Peer-to-Peer (P2P) networks provide dynamically decentralized, self-organized, and scalable object distribution, but they suffer from high latency, heavy network traffic, and cache update problems. The existing caching and replica placement techniques for placing objects across a P2P network offer no complete solution to these problems. This paper presents an overview of the current challenges in P2P overlay networks, followed by a brief analysis of the existing algorithms and their merits and demerits. Based on the outcome of this analysis, it also suggests a new popularity-based, QoS-aware (Quality of Service) smart replica placement algorithm for content distribution in peer-to-peer overlay networks, which overcomes the access latency, fault tolerance, network traffic, and redundancy problems at low cost.
Textual based retrieval system with bloom in unstructured Peer-to-Peer networksUvaraj Shan
This document summarizes a research article about a textual retrieval system using Bloom filters in unstructured peer-to-peer networks. It discusses how Bloom Cast replicates document content across the network using Bloom filters to encode documents. This allows for efficient full-text searches with guaranteed recall rates while reducing communication costs compared to replicating raw documents. The system samples nodes randomly using a lightweight distributed hash table to support searches in an unstructured P2P network where the network size is unknown.
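The encoding idea can be sketched with a toy Bloom filter: each term sets k bit positions in an m-bit array, and membership tests check those same positions. The parameters here (m = 1024 bits, k = 3 hashes) are illustrative choices, not BloomCast's actual configuration:

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: k salted SHA-256 hashes over an m-bit array."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, term):
        # Derive k independent bit positions by salting the hash input.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{term}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, term):
        for pos in self._positions(term):
            self.bits |= 1 << pos

    def __contains__(self, term):
        # No false negatives; false positives possible with low probability.
        return all(self.bits >> pos & 1 for pos in self._positions(term))

bf = BloomFilter()
for term in "peer to peer full text search".split():
    bf.add(term)
```

A peer can then ship this compact bit array instead of the raw document, and remote nodes can answer keyword queries against it, which is the communication saving the article exploits.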
Maximizing p2p file access availability in mobile ad hoc networks through rep...LeMeniz Infotech
Maximizing P2P file access availability in mobile ad hoc networks through replication for efficient file sharing
Heterogeneous device-to-device (D2D) mobile networks are characterised by frequent network disruption and unreliable delivery of messages by peers to their destinations. Trust-based protocols have been widely used to mitigate the security and performance problems in D2D networks. Despite several efforts by previous researchers in the design of trust-based routing for efficient collaborative networks, few related studies focus on the peers' neighbourhood as a routing-metric element for a secure and efficient trust-based protocol. In this paper, we propose and validate a trust-based protocol that takes into account the similarity of peers' neighbourhood coefficients to improve routing performance in mobile HetNet environments. The results of this study demonstrate that peers' neighbourhood connectivity in the network is a characteristic that can influence peers' routing performance. Furthermore, our analysis shows that our proposed protocol forwards messages only to companions with a higher probability of delivering the packets, thus improving the delivery ratio, minimising latency, and mitigating the problem of malicious peers (using a packet-dropping strategy).
Maximizing P2P File Access Availability in Mobile Ad Hoc Networks through Repl...1crore projects
Trust Based Content Distribution for Peer-To-Peer Overlay NetworksIJNSA Journal
In peer-to-peer content distribution, the lack of a central authority makes authentication difficult. Without authentication, adversary nodes can spoof identities and falsify messages in the overlay, enabling malicious nodes to launch man-in-the-middle or denial-of-service attacks. In this paper, we present a trust-based content distribution scheme for peer-to-peer overlay networks, built on a trust management scheme. The main concept is that before traffic is sent or accepted, the trust of the peer must be validated. Based on the success of data delivery and the searching time, we calculate the trust index of a node. Peers whose aggregated trust index falls below a threshold value are considered distrusted, and the corresponding traffic is blocked. Simulation results show that our proposed scheme achieves an increased success ratio with reduced delay and drop.
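As a rough sketch of the idea, a per-peer trust index can blend the delivery success ratio with normalized search speed and compare the result against a threshold. The 0.7/0.3 weighting, the 0.5 cut-off, and the sample peers below are illustrative assumptions, not the paper's calibrated values:

```python
def trust_index(successes, attempts, search_time, max_time, w=0.7):
    """Blend delivery success ratio with normalized search speed.
    w is an assumed weighting between the two components."""
    if attempts == 0:
        return 0.0
    delivery = successes / attempts                  # fraction of deliveries that succeeded
    speed = max(0.0, 1.0 - search_time / max_time)   # 1.0 = instant, 0.0 = slowest allowed
    return w * delivery + (1 - w) * speed

THRESHOLD = 0.5  # peers whose aggregated index falls below this are distrusted

# Hypothetical peers: "a" delivers reliably and searches fast, "b" does neither.
peers = {"a": trust_index(9, 10, 0.2, 2.0),
         "b": trust_index(2, 10, 1.8, 2.0)}
blocked = {p for p, t in peers.items() if t < THRESHOLD}
```

Traffic to or from the peers in `blocked` would then be dropped, matching the validate-before-exchange flow the abstract describes.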
A MALICIOUS USERS DETECTING MODEL BASED ON FEEDBACK CORRELATIONSIJCNC
Trust and reputation models were introduced to restrain the impact of rational but selfish peers in P2P streaming systems. However, these models face two major challenges: dishonest feedback and strategically altering behaviours. To answer these challenges, we present a global trust model based on network community, evaluation correlations, and a punishment mechanism. We also propose a two-layered overlay providing the functions of peer behaviour collection and malicious-peer detection. Furthermore, we analyse several security threats in P2P streaming systems and discuss how to defend against them with our trust mechanism. The simulation results show that our trust framework can successfully filter out dishonest feedback by using correlation coefficients, and that it can effectively defend against the security threats with good load balance as well.
The adaptation of the BitTorrent protocol to multimedia on-demand streaming systems essentially lies in the modification of its two core algorithms, namely the piece and peer selection policies. Much more attention, though, has been given to the piece selection policy. Within this context, this article proposes three novel peer selection policies for the design of BitTorrent-like protocols targeted at that type of system: Select Balanced Neighbour Policy (SBNP), Select Regular Neighbour Policy (SRNP), and Select Optimistic Neighbour Policy (SONP). These proposals are validated through a competitive analysis based on simulations encompassing a variety of multimedia scenarios, defined as a function of important characterization parameters such as content type, content size, and client's interactivity profile. Service time, number of clients served, and efficiency retrieving coefficient are the performance metrics assessed in the analysis. The final results mainly show that the novel proposals constitute scalable solutions that may be considered for real project designs. Lastly, future work is included in the conclusion of this paper.
EFFECTIVE TOPOLOGY-AWARE PEER SELECTION IN UNSTRUCTURED PEER-TO-PEER SYSTEMSijp2p
Peer-to-Peer systems form logical overlay networks on top of the Internet. Essentially, peers randomly
choose logical neighbours without any knowledge about underlying physical topology. This may cause
inefficient communications among peers. This topology mismatch problem may result in poor
performance and scalability for Peer-to-Peer systems. A possible way to improve the performance of
Peer-to-Peer systems is the overlay network construction based on the knowledge of the physical network
topology. In this paper, we will propose the use of the “Record Route” and “Timestamp” options
supported in the IP protocol to explore the paths between peers. By the topology-aware peer selection,
our approach outperforms traditional P2P systems using random peer selection. Our approach only
incurs a low overhead and can be deployed easily in various P2P systems.
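The selection criterion can be approximated in a few lines: given recorded routes (e.g. hop lists such as those gathered via the IP "Record Route" option), rank candidate peers by how many leading hops their route shares with ours. The hop names and the prefix-overlap heuristic below are illustrative, not the paper's exact metric:

```python
def shared_prefix(path_a, path_b):
    """Count the common leading hops between two recorded routes."""
    n = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        n += 1
    return n

def rank_peers(my_path, candidate_paths):
    """Order candidate peers by route overlap with us, topologically closest first."""
    return sorted(candidate_paths,
                  key=lambda peer: shared_prefix(my_path, candidate_paths[peer]),
                  reverse=True)

# Hypothetical router-level paths recorded from a common vantage point.
my_path = ["r1", "r2", "r3", "r4"]
candidate_paths = {"near": ["r1", "r2", "r3", "r9"],
                   "far":  ["r1", "r8", "r7"]}
order = rank_peers(my_path, candidate_paths)
```

A peer preferring neighbours from the front of `order` would tend to pick physically nearby peers, which is the topology-aware advantage over random selection claimed above.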
SECURITY PROPERTIES IN AN OPEN PEER-TO-PEER NETWORKIJNSA Journal
This paper addresses new requirements for the confidentiality, integrity, and availability properties of peer-to-peer domains of resources. The enforcement of security properties in an open peer-to-peer network remains an open problem, as the literature has mainly contributed to the availability of resources and the anonymity of users. This paper proposes a novel architecture that eases the administration of a peer-to-peer network. It considers a network of safe peer-to-peer clients, in the sense that a common client software is shared by all participants to cope with the sharing of various resources associated with different security requirements. However, our proposal deals with possible malicious peers that attempt to compromise the requested security properties. Although the safety of an open peer-to-peer network cannot be formally guaranteed, since an end user has privileges on the target host, our solution provides several advanced security enforcements. First, it enables the requested security properties of the various shared resources to be formally defined. Second, it evaluates the trust and reputation of the requesting peer by sending challenges that test the fairness of its peer-to-peer security policy. Moreover, it proposes an advanced Mandatory Access Control that enforces the required peer-to-peer security properties through an automatic projection of the requested properties onto SELinux policies. Thus, the SELinux system of the requesting peer is automatically configured with respect to the required peer-to-peer security properties. This solution prevents a malicious peer from using ordinary applications, such as a video player, to access confidential files, such as a video requiring fee payment. Since the malicious peer could try to abuse the system, SELinux challenges and traces are also used to evaluate the fairness of the requester.
The paper ends with different research perspectives, such as a dedicated MAC system for the peer-to-peer client and honeypots for testing the security of the proposed peer-to-peer infrastructure.
Adaptive Sliding Piece Selection Window for BitTorrent SystemsWaqas Tariq
Peer-to-peer BitTorrent (P2P BT) systems are used for Video-on-Demand (VoD) services. Such systems can face scalability problems that prevent media servers from responding to users' requests on time. Current sliding window methods face problems such as waiting for the window's pieces to be completely downloaded before sliding to the next pieces, and determining the window size, which affects video streaming performance. In this paper, a modification of BT systems is developed to select video pieces based on a sliding window method. The developed system proposes using two sliding windows, High and Low, running simultaneously. Each window collects video pieces based on the user's available bandwidth, the video bit rate, and a parameter that determines the media player's buffered seconds. System performance is measured and evaluated against other piece-selection sliding window methods. Results show that our method outperforms the benchmarked sliding window methods.
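A minimal sketch of the dual-window idea: the High window covers pieces right at the playhead (urgent for continuity), while the Low window prefetches farther ahead. In the system described above the window sizes would be derived from available bandwidth, video bit rate, and buffered seconds; fixed sizes are assumed here purely for illustration:

```python
def next_pieces(have, playback_pos, total, high_size, low_size):
    """Return missing piece indices in the High window (at the playhead)
    and the Low window (prefetch region just beyond it)."""
    high = [i for i in range(playback_pos, min(playback_pos + high_size, total))
            if i not in have]
    low_start = playback_pos + high_size
    low = [i for i in range(low_start, min(low_start + low_size, total))
           if i not in have]
    return high, low

# Hypothetical state: pieces already downloaded, playhead at piece 3.
have = {0, 1, 2, 4, 9}
high, low = next_pieces(have, playback_pos=3, total=20, high_size=4, low_size=6)
```

Running both windows simultaneously means the peer never stalls waiting for one window to finish before requesting pieces from the next region.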
FUTURE OF PEER-TO-PEER TECHNOLOGY WITH THE RISE OF CLOUD COMPUTINGijp2p
Peer-to-Peer (P2P) networking emerged as a disruptive business model, displacing server-based networks within a short period of time. P2P technologies are on the edge of becoming all-purpose in the development of several applications for social networking. In the past seventeen years, research on P2P computing and systems has received an enormous amount of attention in academia and industry. P2P rose to triumphant profit-making systems on the Internet. It represents the best incarnation of the end-to-end argument, the frequently disputed design philosophy that guided the design of the Internet. The puzzling question, then, is why research on P2P computing is now fading from the spotlight and suffering a nosedive as dramatic as its rise to popularity. This paper takes a quick look at past results in peer-to-peer computing, with a focus on understanding what led to its rise, what contributed to its commercial success, and what has led to the loss of interest in it. As its main insight, this paper introduces cloud computing as a succeeding paradigm to peer-to-peer computing.
During the last years, file sharing of copyright-protected material, particularly in peer- to-peer (P2P) networks, has been a serious threat to the established business models of the content industry. There have been numerous discussions about possible counter- measures, some of which have already been implemented. This white paper aims to provide an as objective as possible assessment of the countermeasures for P2P from the perspective of a network device vendor with particular experience with Internet traffic management solutions.
Several solutions exist for file storage, sharing, and synchronization. Many of them involve a
central server, or a collection of servers, that either store the files, or act as a gateway for them
to be shared. Some systems take a decentralized approach, wherein interconnected users form a
peer-to-peer (P2P) network, and partake in the sharing process: they share the files they
possess with others, and can obtain the files owned by other peers.
In this paper, we survey various technologies, both cloud-based and P2P-based, that users use
to synchronize their files across the network, and discuss their strengths and weaknesses.
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
A Brief Note on Peer-to-Peer (P2P) Applications Have No...Brenda Thomas
The document discusses peer-to-peer (P2P) networks and server-based client/server networks. In a P2P network, all computers have equal privileges to share and access information directly without restrictions. P2P networks are easier to set up but provide less security. In a client/server network, file storage and management is centralized on a server. This provides better security but is more complex to set up and manage. The document explores the advantages and disadvantages of each type of network for different usage contexts.
Implementing a Caching Scheme for Media Streaming in a Proxy ServerAbdelrahman Hosny
In the past few years, websites have moved from being static web pages to rich media applications that use audio, images, and video heavily in their interaction with users. This change has dramatically altered network traffic patterns. Organizations spend a lot of effort, time, and money to improve response times and to design intermediary systems that enhance the overall user experience. Media traffic represents about 69.9-88.8% of all traffic; therefore, enhancing networks to accommodate this large traffic volume is a major trend. Content Distribution Networks (CDNs) are now largely deployed for faster delivery of media, and redundancy and caching are also implemented to decrease response times.
In this project, we implement a caching scheme for media streaming in a proxy server. Unlike CDNs, which require huge infrastructure, our caching proxy server is as simple as a piece of software that is portable and can be installed at small as well as large scales. It may be deployed in a university network, a company's private network, or on ISP servers. This caching scheme, specially tailored for media streaming, will reduce traffic and enhance network efficiency in general.
Index Terms – Proxy servers, Caching, Media streaming
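One plausible core for such a caching proxy (a sketch under assumed keys and capacity, not the project's actual design) is an LRU cache keyed by media segment:

```python
from collections import OrderedDict

class SegmentCache:
    """LRU cache for media segments, keyed by (url, segment index)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None                    # miss: the proxy must fetch from origin
        self._store.move_to_end(key)       # refresh recency on a hit
        return self._store[key]

    def put(self, key, data):
        self._store[key] = data
        self._store.move_to_end(key)
        while len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict the least recently used segment

cache = SegmentCache(capacity=2)
cache.put(("video.mp4", 0), b"seg0")
cache.put(("video.mp4", 1), b"seg1")
cache.get(("video.mp4", 0))               # touch seg0 so seg1 becomes the LRU entry
cache.put(("video.mp4", 2), b"seg2")      # evicts seg1
```

Because streaming clients fetch segments sequentially, recency-based eviction keeps the segments near active playheads in the cache, which is where a media-tailored scheme saves the most origin traffic.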
International Journal of Peer to Peer Networks (IJP2P) Vol.6, No.2, August 20...ijp2p
In recent years, research efforts have tried to exploit peer-to-peer (P2P) systems in order to provide Live Streaming (LS) and Video-on-Demand (VoD) services. Most of these efforts focus on the development of distributed P2P block schedulers for content exchange among the participating peers, and on the characteristics of the overlay graph (P2P overlay) that interconnects them. Currently, researchers are trying to combine peer-to-peer systems with cloud infrastructures, developing monitoring and control architectures that use resources from the cloud to enhance QoS and achieve an attractive trade-off between stability and low-cost operation. However, there is a lack of research on the congestion control of these systems, and the existing congestion control architectures are not suitable for P2P live streaming traffic (small, sequential, non-persistent traffic towards multiple network locations). This paper proposes a P2P live-streaming-traffic-aware congestion control protocol that: i) is capable of managing sequential traffic heading to multiple network destinations, ii) efficiently exploits the available bandwidth, iii) accurately measures idle peer resources, iv) avoids network congestion, and v) is friendly to traditional TCP-generated traffic. The proposed P2P congestion control has been implemented, tested, and evaluated through a series of real experiments run on the BonFIRE infrastructure.
This document summarizes a research paper that proposes a congestion control protocol for peer-to-peer live streaming systems. The protocol is designed to handle sequential traffic heading to multiple destinations efficiently, measure available bandwidth accurately, avoid network congestion, and be friendly to traditional TCP traffic. It was implemented and evaluated on a testbed. The key aspects of the proposed protocol are that it controls transmission rates based on acknowledgments from receivers to keep bottleneck queue sizes stable, and aims to fully utilize bandwidth while avoiding bufferbloat.
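The rate-control behaviour described (grow while the bottleneck queue stays below a target, back off once measured queuing delay exceeds it) can be sketched as a simple delay-based update rule. The gain, target delay, and sample values below are illustrative assumptions, not the protocol's tuned parameters:

```python
def adjust_rate(rate, queue_delay, target_delay, gain=0.1, min_rate=1.0):
    """Delay-based adjustment: grow additively while queuing delay is under
    target, back off multiplicatively once it exceeds the target."""
    if queue_delay <= target_delay:
        # Grow faster the further the queue is below target; zero growth at target.
        return rate + gain * rate * (1 - queue_delay / target_delay)
    # Over target: scale the rate down in proportion to the delay overshoot.
    return max(min_rate, rate * target_delay / queue_delay)

rate = 100.0
r_low = adjust_rate(rate, queue_delay=0.01, target_delay=0.05)   # queue short: grow
r_high = adjust_rate(rate, queue_delay=0.10, target_delay=0.05)  # queue long: back off
```

Keeping the queuing delay pinned near a small target is what lets such a controller use the available bandwidth without inducing bufferbloat, the stated goal of the proposed protocol.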
A Survey of Free-Riding Behaviour in Peer-to-Peer Networks ijiert bestjournal
This paper presents an extensive survey of P2P file sharing networks and their different protocols, highlighting one of the most prominent issues in such networks: free riding. Free riding is an important issue for any peer-to-peer system. An extensive analysis of user traffic on Gnutella shows a significant amount of free riding in the system. Researchers have analysed free riding extensively and reported various results: nearly 70% of Gnutella users share no files, and nearly 50% of all responses are returned by the top 1% of sharing hosts. This paper also presents the BitTorrent protocol, which can overcome free riding to a greater extent, along with several other methodologies, although it is well known that BitTorrent remains vulnerable to selfish peer behaviour.
IEEE 2014 DOTNET NETWORKING PROJECTS A proximity aware interest-clustered p2p...IEEEMEMTECHSTUDENTPROJECTS
This document describes a proposed proximity-aware and interest-clustered peer-to-peer (P2P) file sharing system (PAIS) that forms physically close nodes into clusters and further groups nodes with common interests into subclusters. It aims to improve file searching efficiency by creating replicas of frequently requested files within subclusters. The system analyzes user interests and file sharing behaviors to construct the network topology and uses an intelligent file replication algorithm. The experimental results show this approach improves file searching performance compared to existing P2P systems.
Similar to On client’s interactive behaviour to design peer selection policies for bittorrent like protocols (20)
Rendezvous Sequence Generation Algorithm for Cognitive Radio Networks in Post...IJCNCJournal
Recent natural disasters have inflicted tremendous damage on humanity, with their scale progressively increasing and leading to numerous casualties. Events such as earthquakes can trigger secondary disasters, such as tsunamis, further complicating the situation by destroying communication infrastructures. This destruction impedes the dissemination of information about secondary disasters and complicates post-disaster rescue efforts. Consequently, there is an urgent demand for technologies capable of substituting for these destroyed communication infrastructures. This paper proposes a technique for generating rendezvous sequences to swiftly reconnect communication infrastructures in post-disaster scenarios. We compare the time required for rendezvous using the proposed technique against existing methods and analyze the average time taken to establish links with the rendezvous technique, discussing its significance. This research presents a novel approach enabling rapid recovery of destroyed communication infrastructures in disaster environments through Cognitive Radio Network (CRN) technology, showcasing the potential to significantly improve disaster response and recovery efforts. The proposed method reduces the time for the rendezvous compared to existing methods, suggesting that it can enhance the efficiency of rescue operations in post-disaster scenarios and contribute to life-saving efforts.
Blockchain Enforced Attribute based Access Control with ZKP for Healthcare Se...IJCNCJournal
The relationship between doctors and patients is reinforced through the expanded communication channels provided by remote healthcare services, resulting in heightened patient satisfaction and loyalty. Nonetheless, the growth of these services is hampered by the security and privacy challenges they confront. Additionally, patient electronic health record (EHR) information is dispersed across multiple hospitals in different formats, undermining data sovereignty, since any service can assert authority over the EHR and effectively control its usage. This paper proposes blockchain-enforced attribute-based access control for healthcare services. To enhance privacy and data sovereignty, the proposed system employs attribute-based access control, zero-knowledge proofs (ZKP) and blockchain. The role of data within the system is pivotal in defining attributes, which in turn form the fundamental basis for access control criteria. Blockchain is used to keep hospital information in a public chain but EHR-related data in a private chain. Furthermore, EHRs are protected with the attribute-based cryptosystem before they are stored in the blockchain. Analysis shows that the proposed system provides data sovereignty with privacy provision based on attribute-based access control.
EECRPSID: Energy-Efficient Cluster-Based Routing Protocol with a Secure Intru...IJCNCJournal
A revolutionary idea that has gained significance for Internet of Things (IoT) networks backed by WSNs is the "Energy-Efficient Cluster-Based Routing Protocol with a Secure Intrusion Detection" (EECRPSID). The hardware foundation of a WSN-powered IoT infrastructure consists of devices with autonomous sensing capabilities. The significant features of the proposed technology are intelligent environment sensing, independent data collection, and information transfer to connected devices. However, hardware flaws and energy consumption issues may cause device failures in WSN-assisted IoT networks, which can obstruct the transfer of data. A reliable route significantly reduces data retransmissions, which reduces traffic and conserves energy. The sensor hardware is often widely dispersed in IoT networks that rely on WSNs, so data duplication can occur when numerous sensor devices monitor the same location. Clustering addresses this issue: compared to the multipath technique, it lessens network traffic while retaining path dependability. To relieve duplicate data in EECRPSID, we applied the clustering technique, while the multipath strategy makes the proposed protocol more dependable. The EECRPSID algorithm reduces overall energy consumption, minimizes the end-to-end delay to 0.14 s, achieves a 99.8% Packet Delivery Ratio, and increases the network's lifespan. The whole set of simulations is run in the NS2 simulator, and the simulated results indicate that EECRPSID improves the performance measures compared with the other three techniques.
Analysis and Evolution of SHA-1 Algorithm - Analytical TechniqueIJCNCJournal
A 160-bit (20-byte) hash value, sometimes called a message digest, is generated using the SHA-1 (Secure Hash Algorithm 1) hash function in cryptography. This value is commonly represented as 40 hexadecimal digits. It is a Federal Information Processing Standard in the United States and was developed by the National Security Agency. Although it has been cryptographically cracked, the technique is still in widespread usage. In this work, we conduct a detailed and practical analysis of the SHA-1 algorithm's theoretical elements and show how they have been implemented through the use of several different hash configurations.
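The 160-bit digest described above can be computed directly with Python's standard hashlib implementation of SHA-1; a minimal example, checked against the well-known FIPS 180 test vector for the input "abc":

```python
import hashlib

def sha1_hex(data: bytes) -> str:
    """Return the SHA-1 digest of `data` as 40 hexadecimal digits."""
    return hashlib.sha1(data).hexdigest()

# Known test vector: SHA-1("abc") = a9993e364706816aba3e25717850c26c9cd0d89d
digest = sha1_hex(b"abc")
```

Note that, as the abstract says, SHA-1 is cryptographically broken (practical collisions exist), so it should not be used where collision resistance matters.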
Optimizing CNN-BiGRU Performance: Mish Activation and Comparative AnalysisIJCNCJournal
Deep learning is currently extensively employed across a range of research domains. The continuous advancements in deep learning techniques contribute to solving intricate challenges. Activation functions (AF) are fundamental components within neural networks, enabling them to capture complex patterns and relationships in the data. By introducing non-linearities, AF empowers neural networks to model and adapt to the diverse and nuanced nature of real-world data, enhancing their ability to make accurate predictions across various tasks. In the context of intrusion detection, the Mish, a recent AF, was implemented in the CNN-BiGRU model, using three datasets: ASNM-TUN, ASNM-CDX, and HOGZILLA. The comparison with Rectified Linear Unit (ReLU), a widely used AF, revealed that Mish outperforms ReLU, showcasing superior performance across the evaluated datasets. This study illuminates the effectiveness of AF in elevating the performance of intrusion detection systems.
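Mish is defined as x * tanh(softplus(x)), where softplus(x) = ln(1 + e^x). A minimal scalar reference implementation next to ReLU makes the contrast concrete: Mish is smooth and lets small negative values through, while ReLU zeroes them.

```python
import math

def softplus(x: float) -> float:
    # Numerically stable ln(1 + e^x).
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def mish(x: float) -> float:
    """Mish activation: x * tanh(softplus(x)).
    Smooth and non-monotonic, unlike ReLU."""
    return x * math.tanh(softplus(x))

def relu(x: float) -> float:
    return max(0.0, x)
```

For large positive x, tanh(softplus(x)) approaches 1, so Mish behaves like the identity, much like ReLU; the difference is concentrated around zero and on the negative side.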
An Hybrid Framework OTFS-OFDM Based on Mobile Speed EstimationIJCNCJournal
Future wireless communication systems face the challenging task of simultaneously providing high quality of service (QoS) and broadband data transmission, while also minimizing power consumption, latency, and system complexity. Although Orthogonal Frequency Division Multiplexing (OFDM) has been widely adopted in 4G and 5G systems, it struggles to cope with significant delay and Doppler spread in high-mobility scenarios. To address these challenges, a novel waveform named Orthogonal Time Frequency Space (OTFS) has been designed, aiming to outperform OFDM by closely aligning signals with the channel behaviour. In this paper, we propose a switching strategy that empowers operators to select the most appropriate waveform based on an estimated speed of the mobile user, enabling the base station to dynamically choose the waveform that best suits that speed. Additionally, we suggest an Integrated Sensing and Communication (ISAC) radar approach for accurate Doppler estimation, which provides precise information to facilitate the waveform selection procedure. By leveraging the switching strategy and harnessing the Doppler estimation capabilities of an ISAC radar, our proposed approach aims to enhance the performance of wireless communication systems in high-mobility cases. Considering the complexity of waveform processing, we introduce an optimized hybrid system that combines OTFS and OFDM, resulting in reduced complexity while still retaining performance benefits. This hybrid system presents a promising solution for improving the performance of wireless communication systems under higher mobility, and the simulation results validate the effectiveness of our approach, demonstrating its potential advantages for future wireless communication systems.
Enhanced Traffic Congestion Management with Fog Computing - A Simulation-Base...IJCNCJournal
Accurate latency computation is essential for the Internet of Things (IoT), since the connected devices generate a vast amount of data that is processed on cloud infrastructure. However, the cloud is not an optimal solution. To overcome this issue, fog computing is used to enable processing at the edge while still allowing communication with the cloud. Many applications rely on fog computing, including traffic management. In this paper, an Intelligent Traffic Congestion Mitigation System (ITCMS) is proposed to address traffic congestion in heavily populated smart cities. The proposed system is implemented using fog computing and tested in the crowded city of Cairo. The results obtained indicate that the execution time of the simulation is 4,538 seconds, and the delay in the application loop is 49.67 seconds. The paper addresses various issues, including CPU usage, heap memory usage, throughput, and the total average delay, which are essential for evaluating the performance of the ITCMS. Our system model is also compared with other models to assess its performance. A comparison is made using two parameters, namely throughput and the total average delay, between the ITCMS, IOV (Internet of Vehicles), and STL (Seasonal-Trend decomposition procedure based on LOESS). Consequently, the results confirm that the proposed system outperforms the others in terms of higher accuracy, lower latency, and improved traffic efficiency.
Vehicular Ad Hoc Networks (VANETs) have become a viable technology to improve traffic flow and safety on the roads. Due to its effectiveness and scalability, the Wingsuit Search-based Optimised Link State Routing Protocol (WS-OLSR) is frequently used for data distribution in VANETs. However, the selection of MultiPoint Relays (MPRs) plays a pivotal role in WS-OLSR's performance. This paper presents an improved MPR selection algorithm tailored to WS-OLSR, designed to enhance the overall routing efficiency and reduce overhead. Our analysis found that the current OLSR protocol suffers from problems such as redundancy of HELLO and TC message packets and failure to update routing information in time, so a WS-OLSR routing protocol based on an improved MPR selection algorithm is proposed. Firstly, factors such as node mobility and link changes are comprehensively considered to reflect network topology changes, and the broadcast cycle of node HELLO messages is controlled through these topology changes. Secondly, a new MPR selection algorithm is proposed that takes link stability and node characteristics into account. Finally, we evaluate its effectiveness in terms of packet delivery ratio, end-to-end delay, and control message overhead. Simulation results demonstrate the superior performance of our improved MPR selection algorithm when compared to traditional approaches.
May 2024, Volume 16, Number 3 - The International Journal of Computer Network...IJCNCJournal
The International Journal of Computer Networks & Communications (IJCNC) is a bimonthly open access peer-reviewed journal that publishes articles contributing new results in all areas of Computer Networks & Communications. The journal focuses on all technical and practical aspects of Computer Networks & Data Communications. Its goal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and to establish new collaborations in these areas.
A Novel Medium Access Control Strategy for Heterogeneous Traffic in Wireless ...IJCNCJournal
So far, Wireless Body Area Networks (WBANs) have played a pivotal role in driving the development of intelligent healthcare systems with broad applicability across various domains. Each WBAN consists of one or more types of sensors that can be embedded in clothing, attached directly to the body, or even implanted beneath an individual's skin. These sensors typically serve a single application. However, the traffic generated by each sensor may have distinct requirements. This diversity necessitates a dual approach: tailored treatment based on the specific needs of each traffic type, and the fulfillment of application requirements such as reliability and timeliness. Nevertheless, the presence of energy constraints and the unreliable nature of wireless communications make QoS provisioning in such networks a non-trivial task. In this context, the current paper introduces a novel Medium Access Control (MAC) strategy for the regular traffic applications of WBANs, designed to significantly enhance efficiency when compared to the established MAC protocols IEEE 802.15.4 and IEEE 802.15.6, with a particular focus on improving reliability, timeliness, and energy efficiency.
May_2024 Top 10 Read Articles in Computer Networks & Communications.pdfIJCNCJournal
A Topology Control Algorithm Taking into Account Energy and Quality of Transm...IJCNCJournal
The efficient use of energy in wireless sensor networks is critical for extending node lifetime. The network topology is one of the factors that has a significant impact on the energy usage at the nodes and the quality of transmission (QoT) in the network. In this paper, we propose a topology control algorithm for software-defined wireless sensor networks (SDWSNs). Our method formulates topology control as a nonlinear programming (NP) problem with the objective of optimizing two metrics: maximum communication range and desired degree. This NP problem is solved at the SDWSN controller by employing a genetic algorithm (GA) to determine the best topology. The simulation results show that the proposed algorithm outperforms the MaxPower algorithm in terms of average node degree and energy expansion ratio.
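The GA-based search described above can be illustrated with a toy version: candidate solutions are maximum communication ranges, and the fitness trades transmit energy (which grows with range) against how far the resulting node degree is from a desired value. The fitness model (energy proportional to range squared, expected degree from a uniform node density) and all constants are illustrative assumptions, not the paper's formulation.

```python
import math
import random

def fitness(comm_range, desired_degree=4.0, node_density=0.01):
    """Lower is better: energy cost plus squared deviation from the
    desired node degree (expected degree = density * pi * range^2)."""
    expected_degree = node_density * math.pi * comm_range ** 2
    energy_cost = (comm_range / 100.0) ** 2          # energy ~ range^2
    degree_penalty = (expected_degree - desired_degree) ** 2
    return energy_cost + degree_penalty

def genetic_search(generations=60, pop_size=20, seed=1):
    """Tiny GA: truncation selection, arithmetic crossover, Gaussian
    mutation, over communication ranges in [1, 100] metres."""
    rng = random.Random(seed)
    pop = [rng.uniform(1.0, 100.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]               # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2.0                    # arithmetic crossover
            child += rng.gauss(0.0, 2.0)             # Gaussian mutation
            children.append(min(100.0, max(1.0, child)))
        pop = parents + children
    return min(pop, key=fitness)
```

With these assumed constants, the degree term is minimized near range = sqrt(desired_degree / (density * pi)) (about 11.3 m), and the search converges close to that value.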
Multi-Server user Authentication Scheme for Privacy Preservation with Fuzzy C...IJCNCJournal
The integration of artificial intelligence technology with a scalable Internet of Things (IoT) platform facilitates diverse smart communication services, allowing remote users to access services from anywhere at any time. The multi-server environment within IoT introduces a flexible security service model, enabling users to interact with any server through a single registration. To ensure secure and privacy-preserving services for resources, an authentication scheme is essential. Zhao et al. recently introduced a user authentication scheme for the multi-server environment, utilizing passwords and smart cards, and claimed resilience against well-known attacks. This paper conducts cryptanalysis on Zhao et al.'s scheme, focusing on denial-of-service and privacy attacks, and reveals a lack of user-friendliness. Subsequently, we propose a new multi-server user authentication scheme for privacy preservation with fuzzy commitment over the IoT environment, addressing the shortcomings of Zhao et al.'s scheme. Formal security verification of the proposed scheme is conducted using the ProVerif simulation tool. Through both formal and informal security analyses, we demonstrate that the proposed scheme is resilient against various known attacks, as well as those identified in Zhao et al.'s scheme.
Advanced Privacy Scheme to Improve Road Safety in Smart Transportation SystemsIJCNCJournal
In a Vehicular Ad-Hoc Network (VANET), vehicles continuously transmit and receive spatiotemporal data with neighboring vehicles, thereby establishing a comprehensive 360-degree traffic awareness system. Vehicular network safety applications facilitate the transmission of messages between vehicles that are near each other, at regular intervals, enhancing drivers' contextual understanding of the driving environment and significantly improving traffic safety. Privacy schemes in VANETs are vital to safeguard vehicles' identities and those of their associated owners or drivers. Privacy schemes prevent unauthorized parties from linking a vehicle's communications to a specific real-world identity by employing techniques such as pseudonyms, randomization, or cryptographic protocols. Nevertheless, these communications frequently contain important vehicle information that malevolent parties could use to monitor a vehicle over a long period. The acquisition of this shared data has the potential to facilitate the reconstruction of vehicle trajectories, thereby posing a risk to the privacy of the driver. Addressing the critical challenge of developing effective and scalable privacy-preserving protocols for communication in vehicle networks is therefore of the highest priority. These protocols aim to reduce the transmission of confidential data while ensuring the required level of communication. This paper proposes an Advanced Privacy Vehicle Scheme (APV) that periodically changes pseudonyms to protect vehicle identities and improve privacy. The APV scheme utilizes a concept called the silent period, which involves changing the pseudonym of a vehicle periodically based on the tracking of neighboring vehicles. The pseudonym is a temporary identifier that vehicles use to communicate with each other in a VANET. By changing the pseudonym regularly, the APV scheme makes it difficult for unauthorized entities to link a vehicle's communications to its real-world identity.
The proposed APV is compared to the SLOW, RSP, CAPS, and CPN techniques. The data indicate that APV achieves a better improvement in privacy metrics, and it is evident that APV offers enhanced safety for vehicles during transportation in the smart city.
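The silent-period mechanism described above can be sketched as follows: a vehicle rotates its pseudonym only when enough neighbors are nearby (so the anonymity set is large), and stays quiet for a short window around the change so an eavesdropper cannot trivially link the old and new identifiers. This is an illustrative sketch of the general idea, not the APV protocol itself; the class, thresholds, and timings are assumptions.

```python
import random

class Vehicle:
    """Toy model of pseudonym rotation with a silent period."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.pseudonym = self.new_pseudonym()
        self.silent_until = 0.0          # time until which we stay quiet

    def new_pseudonym(self):
        # Temporary identifier used in beacon messages.
        return f"PSN-{self.rng.getrandbits(32):08x}"

    def maybe_rotate(self, now, neighbours, min_neighbours=3, silence=2.0):
        """Rotate only when enough neighbours are nearby, so several
        vehicles can change at once and linking attacks get harder."""
        if neighbours >= min_neighbours and now >= self.silent_until:
            self.silent_until = now + silence   # silent period starts
            self.pseudonym = self.new_pseudonym()
            return True
        return False

    def can_broadcast(self, now):
        return now >= self.silent_until
```

The neighbor threshold captures the key design point: changing a pseudonym while alone is useless, because the only vehicle that disappears and reappears is trivially re-identified.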
April 2024 - Top 10 Read Articles in Computer Networks & CommunicationsIJCNCJournal
DEF: Deep Ensemble Neural Network Classifier for Android Malware DetectionIJCNCJournal
Malware is one of the main threats to the security of computer networks and information systems. Since malware instances are sufficiently available, there is increased interest among researchers in the usage of Artificial Intelligence (AI). Of late, AI-enabled methods such as machine learning (ML) and deep learning have paved the way for solving many real-world problems. As these are learning-based approaches, accumulated training samples help improve the quality of training and thus leverage malware detection accuracy. Existing deep learning methods focus on learning-based malware detection systems; however, there is a need to improve the state of the art through an ensemble approach. Towards this end, in this paper we propose a framework known as Deep Ensemble Framework (DEF) for automatic malware detection. The framework obtains features from training samples: from a given malware instance a grayscale image is generated, and a separate process extracts the opcode sequences. Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) techniques are used to process the grayscale image and the opcode sequence, respectively. Afterwards, a stacking ensemble is employed in order to achieve efficient malware detection and classification. Malware samples collected from Internet sources and Microsoft are used for the empirical study. An algorithm known as Ensemble Learning for Automatic Malware Detection (EL-AML) is proposed to realize our framework, and another algorithm named Pre-Process assists the EL-AML algorithm by obtaining the intermediate features required by CNN and LSTM. The empirical study reveals that our framework outperforms many existing methods in terms of speed-up and accuracy.
High Performance NMF Based Intrusion Detection System for Big Data IOT TrafficIJCNCJournal
With the emergence of smart devices and the Internet of Things (IoT), millions of users connected to the network produce massive network traffic datasets. These vast network traffic datasets, Big Data, are challenging to store, handle and analyse using a single computer. In this paper we developed a parallel implementation, using a High Performance Computer (HPC), of the Non-Negative Matrix Factorization technique as the engine of an Intrusion Detection System (HPC-NMF-IDS). The large IoT traffic datasets, on the order of millions of samples, are distributed evenly over all the computing cores for both storage and speed-up purposes. The distribution of the computing tasks involved in the Matrix Factorization takes into account the reduction of the communication cost between the computing cores. The experiments we conducted on the proposed HPC-NMF-IDS give better results than traditional ML-based intrusion detection systems: we could train the model with datasets of one million samples in only 31 seconds instead of 40 minutes on one processor, a speed-up of 87 times. Moreover, we obtained an excellent detection accuracy rate of 98% on the KDD dataset.
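The NMF engine at the core of such a system factors a nonnegative data matrix V into low-rank nonnegative factors W and H, with V approximately equal to W times H. A minimal single-core sketch using the classic Lee-Seung multiplicative updates is shown below; the paper's contribution is the parallel HPC distribution of this computation, which is not reproduced here, and the pure-Python matrix helpers are only for self-containment.

```python
import random

def matmul(A, B):
    """Multiply matrices given as lists of rows."""
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def nmf(V, rank, iters=300, seed=0, eps=1e-9):
    """Factor nonnegative V (m x n) as W (m x rank) @ H (rank x n)
    using Lee-Seung multiplicative updates, which keep W and H
    nonnegative by construction."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(rank)]
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(rank)]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(m)]
    return W, H
```

In the IDS setting, each row of V would be a traffic sample's feature vector; the learned factors give a compact representation in which anomalous traffic stands out by its reconstruction error.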
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, as a test automation solution, with Open AI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Driving Business Innovation: Latest Generative AI Advancements & Success Story
On client’s interactive behaviour to design peer selection policies for bittorrent like protocols
International Journal of Computer Networks & Communications (IJCNC) Vol.5, No.5, September 2013
DOI: 10.5121/ijcnc.2013.5511
ON CLIENT’S INTERACTIVE BEHAVIOUR TO
DESIGN PEER SELECTION POLICIES FOR
BITTORRENT-LIKE PROTOCOLS
Marcus V. M. Rocha (1) and Carlo Kleber da S. Rodrigues (2, 3)
(1) Minas Gerais State Assembly, MG, Brazil (marcus@melorocha.org)
(2) Electrical and Electronics Department, Polytechnic School of Army, Sangolquí, Ecuador
(3) University Center of Brasilia, Brasília, DF, Brazil (carlokleber@gmail.com)
ABSTRACT
Peer-to-peer swarming protocols have proven to be very efficient for content replication over the
Internet. This fact has motivated proposals to adapt these protocols to meet the requirements of
on-demand streaming systems. The vast majority of these proposals focus on modifying the piece and
peer selection policies of the original protocols. Nonetheless, more attention has often been given
to the piece selection policy than to the peer selection policy. Within this context, this article
proposes a simple algorithm to be used as a basis for peer selection policies of BitTorrent-like
protocols, considering interactive scenarios. To this end, we analyze the client's interactive
behaviour when accessing real multimedia systems. This analysis consists of looking into workloads
of real content providers and assessing three important metrics, namely temporal dispersion,
spatial dispersion and object position popularity. These metrics are then used as the main
guidelines for writing the algorithm. To the best of our knowledge, this is the first time that the
client's interactive behaviour is specifically considered to derive an algorithm for peer selection
policies. Finally, the article concludes with key challenges and possible future work in this
research field.
KEYWORDS
BitTorrent, multimedia, streaming, protocols, interactivity, reciprocity.
1. INTRODUCTION
On-demand streaming systems ideally let their clients select multimedia objects, remotely stored
in a server or group of servers, to be played at any instant of time with a satisfactory quality of
service (QoS), mainly observed in terms of high playout continuity and low latency. Interactive
access, in turn, allows clients to execute DVD-like actions during object playback. These types of
systems have become extraordinarily popular due to the recent and permanent advances in the field
of network technologies, especially those resulting in more bandwidth capacity, reliability,
confidentiality and scalability [1, 2].
For implementing on-demand streaming systems, the peer-to-peer (P2P) swarming architecture
has already been proven to be a more effective solution than the classical client-server
architecture, even when the latter deploys content distribution networks (CDN) and IP multicast
[1, 3]. The swarming architecture dictates that each peer is directly involved in the object
delivery and contributes its own resources (i.e. upload bandwidth) to the streaming session, thus
promoting load balancing over the entire network and making the system scale well. Hence,
increasing the peer population not only increases the workload, but also produces a concomitant
increase in the service capacity available to process that workload.
The client-server architecture, in turn, suffers from serious limitations due to the bandwidth
bottleneck at the server or group of servers from which all clients request objects. That is, an
increase in the client population also increases the workload, inducing the scalability problem and
drastically degrading system performance [3, 4, 5, 6].
P2P swarming protocols are essentially based on the definition of two main policies: piece
selection and peer selection. The former refers to how the pieces of a multimedia object are going
to be chosen by a peer (client, user, node, etc.) for download, while the latter dictates to which
peers the pieces of the object should be uploaded [3]. These two policies must stimulate
cooperation and reciprocity between peers to make the system scale well. With this in mind,
there have been studies focusing on incentive policies to encourage peer contribution and to
guarantee service fairness as well [3, 4, 5, 6]. Besides, the growing adoption of reputation
systems [7] has generated significant research in this field too. Reputation-based schemes
determine service priorities based on the histories of peer contributions to the system: reputable
peers receive high priority and are granted more resources as a reward for their contributions
[8, 5].
The great efficiency of P2P swarming protocols for object (content) replication has attracted a
lot of interest from academia and industry, and has notably motivated a number of proposals to
adapt these protocols to meet the requirements of on-demand streaming systems. In general, these
requirements refer to satisfying time constraints: the client wants to play the object as soon as
they arrive at the system and does not tolerate any discontinuities during the object playout
[3, 9, 10].
The vast majority of these adaptation proposals consist of modifying the piece selection and
peer selection policies of the original P2P swarming protocols. Nevertheless, the research
community has often paid more attention to the piece selection policy than to the peer selection
policy [11, 9, 12, 13]. Additionally, most of the research done so far has a limited point of view,
since it seldom takes into account the client's interactive behaviour or differentiates whether
the system being accessed is mostly academic or non-academic. Academic systems are those whose
stored objects mostly refer to classes, lectures and tutorials, while non-academic systems are
those in which most of the objects are movies, clips, songs and radio podcasts [10, 14, 15, 8, 5].
Within this context, this article proposes a simple algorithm to be used as a basis for
implementing peer selection policies of BitTorrent-like protocols, considering interactive
scenarios. To this end, we analyze the client's interactive behaviour when accessing real
multimedia systems. This analysis consists of looking into workloads of real content providers and
assessing three important metrics, namely temporal dispersion, spatial dispersion and object
position popularity. These metrics are then used as the main guidelines for writing the algorithm
itself. To the best of our knowledge, our approach is unique since it is the very first to
specifically consider the client's interactive behaviour to derive an algorithm for peer selection
policies targeted at interactive scenarios. Finally, the article concludes with key challenges and
future work in this subject matter.
The remainder of this text is organized as follows. In Section 2, we explain the P2P swarming
paradigm and briefly review the operation of the BitTorrent protocol. We also include a general
discussion of the reciprocity principle, which plays a very important role when adapting
BitTorrent's peer selection policy to the on-demand streaming scenario. Section 3 presents the
state-of-the-art proposals of this research field. Analysis and results lie in Section 4, where we
include a thorough discussion of the metrics used in the analysis and present the algorithm itself.
At last, conclusions and future work are included in Section 5.
2. BASIS
2.1. Swarming systems and the BitTorrent protocol
A swarm is a set of peers concurrently sharing content of common interest. Content might be part
of an object, an object or even a bundle of objects that are distributed together. The content is
divided into pieces that peers upload to and download from each other. By leveraging resources
provided by peers (i.e. users, clients, etc.), such as bandwidth and disk space, P2P swarming
reduces the load on and costs to content providers and is certainly a scalable, robust and efficient
solution for content replication [11, 8, 4].
BitTorrent is one of the most popular P2P swarming protocols. Objects are split into pieces
(typically 32 to 256 kB in size) and each piece is split into blocks (typically 16 kB in size).
Breaking pieces into blocks allows the protocol to always keep several requests (typically five)
pipelined at once. Every time a block arrives, a new request is sent. This avoids a delay
between pieces being sent (i.e. request-response latency), which is disastrous for transfer rates.
Blocks are therefore the transmission unit on the network, but the protocol only accounts for
transferred pieces. In particular, partially received pieces cannot be served by a peer; only
complete pieces can [16, 17].
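The block-level pipelining just described can be sketched as follows. This is an illustrative sketch rather than BitTorrent implementation code: the `Connection` class and the constant names are hypothetical; only the typical values (five outstanding requests, 16 kB blocks) come from the text.

```python
from collections import deque

PIPELINE_DEPTH = 5      # typical number of outstanding block requests
BLOCK_SIZE = 16 * 1024  # typical block size: 16 kB

class Connection:
    """Illustrative sketch of per-connection block-request pipelining."""

    def __init__(self, wanted_blocks):
        self.pending = deque(wanted_blocks)  # blocks still to request
        self.in_flight = set()               # requests awaiting a response

    def fill_pipeline(self):
        # Keep PIPELINE_DEPTH requests outstanding so the uploader's
        # send queue never drains between blocks.
        while self.pending and len(self.in_flight) < PIPELINE_DEPTH:
            block = self.pending.popleft()
            self.in_flight.add(block)
            self.send_request(block)

    def on_block_received(self, block):
        # Every arriving block immediately triggers a new request,
        # hiding the request-response latency.
        self.in_flight.discard(block)
        self.fill_pipeline()

    def send_request(self, block):
        pass  # network I/O omitted in this sketch
```

Note how the pipeline is refilled on every arrival, so the request-response latency mentioned above never stalls the transfer.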
Peers that are still downloading content may serve the pieces they already have to others.
Leechers are peers that have only some or none of the data, while seeds are peers that have all the
data but stay in the system to let other peers download from them. Thus, seeds only upload, while
leechers download pieces that they do not have and upload pieces that they do have [16, 17].
To participate in the system (i.e. swarm), a peer first has to download a static file with the
.torrent extension from an ordinary web server. The .torrent file has metadata (name, length,
hashing information, etc.) of the wanted object. The peer is then able to contact a centralized
entity called the tracker. The main responsibility of this entity is to keep track of the
participants of the distributed system and to provide the peer with a random list of other peers
(both seeds and leechers) that already belong to the swarm. The tracker receives updates from peers
periodically (typically every 30 minutes) and also when peers join or leave the swarm.
Each peer then attempts to establish bidirectional persistent TCP connections with a random set of
peers (typically between 40 and 80) from this list, called its neighbourhood, but uploads data to
only a subset of this neighbourhood. If the number of neighbours of a peer ever dips below 20, the
node contacts the tracker again to obtain a list of additional peers it could connect to. The
sequence of events just described is succinctly depicted in Figure 1 [2, 9].
Figure 1. Sequence of events when a peer wants to join a swarm.
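The join sequence of Figure 1 can be sketched roughly as below. The `Swarm` class and the tracker interface are hypothetical names introduced for illustration; only the typical constants (an initial neighbourhood of around 40 to 80 peers, a re-contact threshold of 20) come from the text.

```python
MIN_NEIGHBOURS = 20      # below this, the node contacts the tracker again
TARGET_NEIGHBOURS = 40   # typical neighbourhood size (40 to 80 peers)

class Swarm:
    """Illustrative sketch of joining a swarm and maintaining the
    neighbourhood; .torrent download and TCP handshakes are elided."""

    def __init__(self, tracker):
        self.tracker = tracker     # assumed to return random peer lists
        self.neighbours = set()

    def join(self):
        # Ask the tracker for a random list of peers (seeds and
        # leechers) and connect to a subset of them.
        for peer in self.tracker.random_peers():
            if len(self.neighbours) >= TARGET_NEIGHBOURS:
                break
            self.neighbours.add(peer)  # stands in for a TCP connection

    def on_neighbour_lost(self, peer):
        # If the neighbourhood dips below 20, re-contact the tracker
        # to obtain additional candidate peers.
        self.neighbours.discard(peer)
        if len(self.neighbours) < MIN_NEIGHBOURS:
            self.join()
```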
In a more detailed view, each peer divides its upload capacity into a number of upload slots.
The decision to allocate an upload slot to a peer is termed an unchoke, and there are two types of
upload slots: regular unchoke slots and optimistic unchoke slots. Regular unchoke slots are
assigned according to the tit-for-tat policy, i.e. peers prefer other peers that have recently
provided data at the highest speeds. Each peer re-evaluates the allocation of its regular unchoke
slots every unchoke interval δ (generally 10 seconds).
Unlike the regular unchoke slots, the optimistic unchoke slots are assigned to randomly selected
peers. Their allocation is re-evaluated every optimistic unchoke interval, which is generally set
to 3δ (i.e. 30 seconds). Optimistic unchoke slots serve the purposes of (i) having peers discover
other new, potentially faster, peers (nodes, users, clients, etc.) to unchoke so as to be
reciprocated, and (ii) bootstrapping newcomers (i.e. peers with no pieces yet) into the system.
Peers that are currently assigned an upload slot (whether regular or optimistic) by a node p are
said to be unchoked by node p; all the others are said to be choked by node p [2, 9]. In general,
each peer limits the number of concurrent uploads (regular unchokes) to a small number, typically
five; even though seeds have nothing to download, they also limit the number of concurrent uploads,
typically to five. As for the optimistic unchoke, there is usually a single current upload at a
time [16, 17].
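One unchoke round can be sketched in a few lines, under the assumption that recent per-peer download rates are measured elsewhere. The function and variable names are illustrative, and the re-evaluation timing (δ and 3δ) is omitted.

```python
import random

REGULAR_SLOTS = 5     # typical number of concurrent regular unchokes
OPTIMISTIC_SLOTS = 1  # usually a single optimistic unchoke at a time

def reallocate_unchokes(download_rates, interested_peers):
    """Illustrative tit-for-tat unchoke round.

    download_rates: dict peer -> bytes/s recently received from that peer.
    Returns (regular, optimistic): the sets of peers to unchoke.
    """
    # Regular slots: prefer peers that recently uploaded to us fastest.
    ranked = sorted(interested_peers,
                    key=lambda p: download_rates.get(p, 0.0),
                    reverse=True)
    regular = set(ranked[:REGULAR_SLOTS])

    # Optimistic slots: a random choice among the remaining interested
    # peers, used to discover faster partners and bootstrap newcomers.
    candidates = [p for p in interested_peers if p not in regular]
    optimistic = set(random.sample(candidates,
                                   min(OPTIMISTIC_SLOTS, len(candidates))))
    return regular, optimistic
```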
Lastly, each peer keeps its neighbourhood informed about the pieces it owns. The information
received from its neighbourhood is used to request object pieces according to the local rarest
first policy. This policy determines that each peer requests the pieces that are the rarest among
its neighbours. The emergent effect of this policy is that less-replicated pieces get replicated
quickly among peers, and each peer tends to obtain first the pieces that are most likely to
interest its neighbours [17, 2, 9].
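The local rarest first policy amounts to a simple count over the neighbours' piece announcements. The function below is an illustrative sketch with hypothetical names, not a reference implementation; tie-breaking among equally rare pieces is left arbitrary here, whereas real clients break ties randomly.

```python
from collections import Counter

def rarest_first(my_pieces, neighbour_bitfields):
    """Illustrative local rarest first selection.

    my_pieces: set of piece indices we already hold.
    neighbour_bitfields: dict neighbour -> set of piece indices it holds.
    Returns the index of a needed piece with the fewest copies in the
    neighbourhood, or None if no neighbour has a piece we need.
    """
    counts = Counter()
    for bitfield in neighbour_bitfields.values():
        for piece in bitfield:
            if piece not in my_pieces:
                counts[piece] += 1
    if not counts:
        return None
    # The least-replicated needed piece is requested first.
    return min(counts, key=counts.get)
```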
2.2. Reciprocity
2.2.1. Concepts
Reciprocity is one of the fundamental pillars supporting P2P systems. In essence, the principle of
reciprocity states that participants must contribute to the system with resources, such as
bandwidth or memory, in order to accomplish their tasks [18, 19].
Reciprocity mechanisms are broadly classified into two types: direct and indirect. In the case of
direct reciprocity, peers (users, clients, nodes, etc.) follow the principle of I scratch your back and you
scratch mine. In the case of indirect reciprocity, the return may come from a peer other than the
recipient, i.e. peer A provides content to B at a rate equal to that at which content is being
provided to it by peer C, regardless of whether or not B and C are the same individual. In this
case, peers follow the principle of give and you shall be given. Both direct and indirect reciprocity
are used in P2P systems [18, 20]. This idea is depicted in Figure 2.
Figure 2. Direct and indirect reciprocities.
Thus, when a peer uses peer selection based on direct reciprocity, it uploads to the peers that
have recently uploaded to it at the highest rates. When a peer uses peer selection based on
indirect reciprocity, it uploads to the peers that have recently forwarded data to others at the
highest rates [1].
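The two selection rules can be contrasted in a few lines. Both functions below are illustrative sketches that assume the relevant rates are measured elsewhere; the names are hypothetical.

```python
def pick_by_direct_reciprocity(rates_to_me, k):
    """Direct reciprocity (tit-for-tat): upload to the k peers that
    recently uploaded to us at the highest rates."""
    return sorted(rates_to_me, key=rates_to_me.get, reverse=True)[:k]

def pick_by_indirect_reciprocity(forwarding_rates, k):
    """Indirect reciprocity: upload to the k peers that recently
    forwarded data to others at the highest rates, regardless of
    whether any of it reached us."""
    return sorted(forwarding_rates, key=forwarding_rates.get,
                  reverse=True)[:k]
```

The only difference is which rate is observed: what a neighbour gave us, versus what it gave the swarm.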
2.2.2. Reciprocity in BitTorrent
As already mentioned, BitTorrent's peer selection policy implements a direct reciprocity strategy:
tit-for-tat. Under this strategy, peers prefer uploading to the peers that have recently
contributed to them at the highest speeds. The emergent effect of this policy is that peers having
higher upload capacities also typically have higher download speeds [16], and hence more incentive
to cooperate. Although these are desirable properties for the scenario of traditional file
transfer, this policy has the two following drawbacks for on-demand streaming [2, 32, 33]:
i) direct reciprocity is less feasible in on-demand streaming, since it is difficult for younger peers
(peers that arrived later) to reciprocate older peers (peers that arrived earlier). This is because
peers that arrived later have made less progress on their downloads and have a playback position
behind that of peers who have been in the system for longer; as a result, using tit-for-tat in
on-demand streaming does not produce the same incentives as in traditional transfer;
ii) tit-for-tat was designed to maximize the download speed of peers. However, in on-demand
streaming systems, peers gain little utility from having a download speed higher than that
necessary to preserve stream continuity. The use of tit-for-tat in an on-demand streaming system
with heterogeneous peers may then lead to situations where, even if there is enough aggregate
upload capacity to serve all system peers, not all peers experience a satisfactory QoS. This is
because high-capacity peers may receive download rates substantially higher than the object
playback rate, while low-capacity peers may experience download rates that are too low to meet the
on-demand streaming requirements.
From the above, it becomes clear that BitTorrent's peer selection policy is inadequate for
on-demand streaming, and even more so in interactive scenarios [18, 1, 32, 33].
2.3. Aspects and Metrics
In this section, we succinctly discuss general aspects and metrics affecting the performance of
P2P on-demand systems and their protocols. This discussion helps to clarify the workload analysis
we carry out later in this text to better understand the client's behaviour when accessing real
multimedia servers.
In general, we have the influence of the following six aspects affecting P2P VoD (video on-
demand) systems [1, 6, 9]: (i) freeriding (i.e. the act of not sharing the upload capacity); (ii)
malicious attacks; (iii) heterogeneous peer bandwidths (i.e. heterogeneity); (iv) presence of
unconnectable peers (i.e. peers that do not possess a globally reachable address); (v) coexistence
of peers doing VoD with peers doing traditional file transfer; and (vi) watching behaviour.
For the studies and analysis carried out herein, we conjecture that these same aspects may be
taken into account when considering P2P on-demand systems in general. This is quite reasonable
because a VoD system may be seen as a particular class of the more general on-demand systems, in
which the objects are specifically videos (i.e. recordings of moving pictures and sound).
With this in mind, the question that immediately follows is which of these aspects should be
evaluated when designing peer selection policies for on-demand multimedia systems. The answer is
that aspects (i), (iii) and (vi) are to be considered, since they are the ones intrinsically
related to the peer itself; the other aspects are mostly related to system and network
infrastructure, security or service. This understanding is depicted in Figure 3.
As for performance metrics, there are plenty of them [1, 6, 9]. Just to name a few, we have: (i)
continuity index (i.e. the ratio of pieces received before their deadline over the total number of
pieces); (ii) startup delay (i.e. the time a client has to wait before playback starts); (iii) mean time
to return (after an interruption); (iv) average number of interruptions (during the object playout);
(v) total download time (of the object); (vi) bootstrap time (i.e. the time a client has to wait to
obtain its first piece); (vii) link utilization; (viii) and fairness.
Similarly, the question that follows is which of the above metrics should be considered when
designing peer selection policies for P2P protocols. The answer is simple: all of the metrics
listed above may be considered. This understanding is depicted in Figure 3. Nevertheless, we
recognize the high complexity involved in this view, since the resulting values mostly come from
the simultaneous influence of the two policies that constitute the basis of P2P protocols: peer
selection and piece selection. Moreover, it is intricate, or maybe impossible, to isolate each of
these influences.
Figure 3. Aspects and metrics that may be considered for the design of peer selection policies.
3. STATE-OF-THE-ART PEER SELECTION POLICIES
We recall that the peer selection policy determines how a peer (node, client, user) selects another
peer to upload data to. In BitTorrent-like systems, this policy implicitly has the role of
incentivizing peer cooperation. Hence, it is usually designed to favour good uploaders.
In what follows, we briefly discuss several important proposals of peer selection policies used by
BitTorrent-like protocols targeted at VoD systems. Because they are recent and have been proven
efficient [21, 22, 23, 2], these proposals represent the state-of-the-art schemes of the
literature. We mainly focus on how differently the regular and optimistic unchoke slots may be
used. Knowing these proposals helps to better understand the workload analysis and the algorithm
that appear in Section 4.
Shah et al. [21] propose to use the optimistic unchoke slots more frequently: every time a piece
is played, peers perform a new optimistic unchoke. The idea is therefore to behave more
altruistically. The problem is that frequent unchoke rounds may weaken incentives for cooperation;
besides, a peer may compromise its own QoS by being too altruistic.
Mol et al. [22] dictate that, for the regular unchoke slots, each peer simply selects the nodes
that have forwarded pieces received from it to other peers at the highest speed. Here, the
principle of indirect reciprocity is clearly being applied. The optimistic unchoke slots are
assigned in a similar way as in BitTorrent. However, as the reward given to peers is still similar
to that of the tit-for-tat policy, heterogeneity can still lead to poor QoS for some peers, even
in the presence of enough capacity to serve all.
Yang et al. [23] study six schemes to answer the question referred to as the peer request problem:
to which peer should a node send a request for a data piece, among all neighbours that have that
piece? For instance, simply picking such a neighbour at random has the disadvantage that older
peers receive more requests from many younger peers. Solving the peer request problem is therefore
important to balance the load among the peers themselves and to increase the likelihood of
receiving the needed pieces before their deadlines. Both regular and optimistic unchoke slots are
assigned as a function of these schemes.
The first scheme of Yang et al. [23] is Least Loaded Peer (LLP): each node sends a request to the
neighbour with the shortest queue size, among all those that have the needed data piece. The
second scheme is Least Requested Peer (LRP): for each neighbour, each node counts how many
requests it has sent to that peer and picks the one with the smallest count. The third scheme is
Tracker Assistant (Tracker): the tracker sorts peers according to their arrival times. Whenever a
node requests the list of available nodes from the tracker (e.g., upon arrival or when lacking peers
due to peer departures), it will receive a list of nodes which have the closest arrival times to its
own.
The fourth scheme of Yang et al. [23] is Youngest-N Peers (YNP): each node sorts its neighbours
according to their age, where a peer’s age can be determined from its join time (available at the
tracker). YNP then randomly picks a peer among the N > 1 youngest peers which have the piece
of interest and requests that piece from that neighbour. This approach tries to send requests to
younger peers as they are less likely to be overloaded. The fifth scheme is Closest-N Peers
(CNP): this is similar to YNP but instead sorts the neighbours based on how close they are to a
node’s own age, and then randomly picks from the N closest age peers that have the needed piece.
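The YNP and CNP schemes, as described above, can be sketched as follows. The function names and data structures are illustrative assumptions; the join times are assumed to be available from the tracker, as the text states.

```python
import random

def youngest_n(neighbours_with_piece, join_time, n):
    """YNP sketch: pick at random among the N youngest neighbours
    (i.e. the largest join times) that hold the wanted piece."""
    youngest = sorted(neighbours_with_piece,
                      key=lambda p: join_time[p], reverse=True)[:n]
    return random.choice(youngest)

def closest_n(neighbours_with_piece, join_time, my_join_time, n):
    """CNP sketch: pick at random among the N neighbours whose ages
    are closest to our own, measured here by join-time difference."""
    closest = sorted(neighbours_with_piece,
                     key=lambda p: abs(join_time[p] - my_join_time))[:n]
    return random.choice(closest)
```

With N = 1 both schemes degenerate to a deterministic choice; larger N reintroduces some load-spreading randomness.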
D’Acunto et al. [2] propose three schemes. They all make use of the indirect reciprocity principle:
for the regular unchoke slots, each peer simply selects the nodes that have forwarded pieces
received from it to other peers at the highest speed. In addition, these three schemes are based
on the idea of adjusting the number of upload slots a peer opens.
More precisely, the first scheme provides mathematical formulas to calculate the number of
upload slots, as well as the number of optimistic unchoke slots. These formulas consider, for
example, the peer's upload capacity as well as the video playback rate. The core of the second
scheme of D'Acunto et al. [2] is to have peers dynamically adjust the number of their optimistic
slots according to their current QoS. To measure the QoS, the sequential progress is used, defined
as the speed at which the index of the first missing piece grows. Finally, for the third scheme,
they propose a mechanism where nodes give priority to newcomers when performing optimistic
unchokes. In this way, newcomers do not need to wait too long before being unchoked for the first
time.
From above, it seems that the recent proposals for peer selection policies are progressively
evolving towards the explicit use of indirect reciprocity as well as the idea of continuously
monitoring the QoS that high-capacity peers individually receive. In case the QoS is higher than
necessary to preserve stream continuity, a number of altruistic decisions may be taken to favour
low-capacity peers and thus yield a more satisfactory system-wide QoS.
4. ANALYSIS
4.1. Workload characterization
In a sequential media workload, objects are completely and sequentially retrieved without any
interruptions. A user (client) session refers to the period the client remains active in the system,
i.e. making requests and receiving data. Note that, for this type of workload, the user session has
just a single request: to start the content delivery. Also, if all peers (nodes) participating in the
P2P system buffer all of the content received, then a newcomer may potentially select as
neighbour any peer that has already received some content. This is because a peer that has already
received some content is certain to receive the entire object. This scenario significantly facilitates
the neighbour selection procedure and, consequently, the overall content sharing process.
On the other hand, in an interactive media workload, a user session consists of a number of
independent requests to object segments. The object may be neither sequentially nor completely
retrieved. Also, the time between consecutive requests may vary greatly. A number of
characteristics of these workloads (see Table 1) may therefore have direct impact on content
sharing among peers. For example, as a peer may retrieve only part of the object, there is less
content to share with other peers. Besides, since a peer may request any object segment at any
time, it is hard to predict if and when this peer will have the wanted content available to share
with other peers. Peer selection strategies thus become very relevant since a poor selection may
lead to a very poor content sharing.
Table 1. Content-sharing impacting characteristics.
Request rate (N): Rate at which requests from all users arrive during a period equal to the object duration; measured in requests per object, i.e. the request rate normalized by the object duration.
Request number (R): Number of interactive requests executed by a user in a session.
Start position (PS): Position in the object where the request starts, measured in time units.
End position (PE): Position in the object where the request ends, measured in time units.
Request duration (DR): Amount of content transferred in a request, measured in time units.
Interaction type (IT): Type of the request: pause, jump forwards, jump backwards, etc.
Duration of client inactivity (1/R): Period in a session between the client's requests, measured in time units.
Jump distance (JD): Amount of content skipped in a jump, either forwards or backwards, measured in time units.
Session duration (DS): Period between the arrival of the first client request and the end of the last client request in a session, measured in time units.
Our analysis considers interactive workloads from three different application domains. The
domains are non-academic audio, non-academic video and academic video. For the academic
domain, we look into workloads from eTeach [24], a video system from the University of
Wisconsin-Madison, with 46,958 requests to announcement videos of up to five minutes as well
as educational videos of 50-60 minutes. Additionally, we consider a three-year log from Manic
[25], an educational video system from the University of Massachusetts with 25,833 requests.
For the non-academic domain, we analyze audio and video, typically under 10 minutes, from two
major Latin American content providers. The first is UOL [26], with 5,385,822 requests to online
radio objects and 1,453,117 requests to video objects. The second, for confidentiality denoted as
ISP (Internet Service Provider), has 4,160,889 requests to online radio objects. All workloads just
mentioned were also considered in the works of Costa et al. [27] and Rocha et al. [28], but with
focus on interactive user access to exclusively client-server architectures.
We now use the characteristics mentioned in Table 1 to group our workloads into three
interactivity profiles:
1) High interactivity (HI): short request duration (under 20% of the object length) and at least
three requests in a session (i.e. R > 2); typical for academic videos;
2) Low interactivity (LI): longer requests (at least 20% of the object length) and fewer than two
requests in a session (i.e. R < 2); typical for non-academic audio and very short videos;
3) Medium interactivity (MI): shorter request duration (under 20% of the object length) and
fewer than three requests in a session (i.e. R < 3); typical for non-academic videos.
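A minimal sketch of how a workload could be assigned to one of these profiles follows. The (start, end) request representation and the handling of cases not covered by the three definitions are assumptions of this sketch, not part of the paper.

```python
# Illustrative classifier for the three interactivity profiles above.
# `requests` is a list of (start, end) positions in time units.

def interactivity_profile(requests, object_length):
    r = len(requests)                                   # request number R
    # mean request duration as a fraction of the object length
    mean_frac = sum(e - s for s, e in requests) / (r * object_length)
    if mean_frac >= 0.20 and r < 2:
        return "LI"   # long requests, fewer than two per session
    if mean_frac < 0.20 and r > 2:
        return "HI"   # short requests, at least three per session
    return "MI"       # remaining cases treated as medium interactivity
```

For instance, a session with a single full-length request classifies as LI, while one with three short jumps classifies as HI.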
Figure 4 depicts the interactivity profile for three distinct real workloads. The graphs show the
start position and the end position of each request. Requests are ordered by their start positions.
More precisely, Figure 4(a) shows the interactivity profile for a typical audio workload from
UOL. This is a low interactivity profile, where most of the clients (users) retrieve all of the
object. Figure 4(b) shows the interactivity profile for a typical video from Manic. This is a
medium interactivity profile. Many clients also retrieve the entire object, as there are many
requests for object segments. Finally, Figure 4(c) shows a workload of a typical long video from
eTeach. In such a high interactivity profile, the start positions and corresponding durations vary
significantly.
(a) LI, object duration = 0.23 min. (b) MI, object duration = 1.3 min. (c) HI, object duration = 12 min.
Figure 4. Interactivity profiles.
In total, we have 40 distinct workloads. Considering the interactivity profile, they are distributed
as follows: 19 HI workloads; 8 MI workloads; and 13 LI workloads. On the other hand,
considering the application domain, they are distributed as follows: 8 commercial audio
workloads, 10 commercial video workloads; and 22 educational video workloads.
4.2. Dispersion and popularity
Section 2.3 shows potential aspects and metrics to be taken into account for the evaluation of peer
selection strategies. Additionally, Table 1 brings several workload characteristics that may also
have a direct impact on content sharing between peers. Considering all of these evaluation
parameters separately in our analysis is prohibitive, as it would become very intricate.
Hence, to simplify, we proceed as follows.
First, we exclusively focus on the workload generated by a set of interactive clients (users) that
are interested in sharing contents. Second, we conjecture that we may decide efficiently on
neighbour selection if we can evaluate the following values: (1) how much the requests diverge
from each other in their arrival times; (2) how much of the retrieved object segments may be
effectively shared; (3) how often each object position is requested. To evaluate these three
values, we define three distinct metrics: temporal dispersion, spatial dispersion and object
position popularity.
Temporal dispersion is just another way to denote the request rate (N) and may be used to
calculate the value (1). Low temporal dispersion implies high request rates and, consequently,
more content available for sharing. On the other hand, high temporal dispersion implies low
request rates and, consequently, less content available for sharing. By its turn, spatial dispersion
refers to the amount of content two consecutive distinct users’ requests have in common. It is
used to calculate the value (2). As long as the requests have content in common, this content can
be potentially shared. If this overlapping of consecutive requests increases, then the spatial
dispersion decreases. On the other hand, if this overlapping of consecutive requests decreases,
then spatial dispersion increases.
Figure 5 depicts the concept of both temporal and spatial dispersions, respectively. It shows four
workloads especially devised for this example, each with ten requests for the same 25-minute
object during the same time interval. More precisely, in Figure 5(a), we have a sequential
workload. Each request retrieves the entire object. There is no spatial dispersion since all of the
requests overlap. There are contents available for sharing. In Figure 5(b), spatial dispersion
increases as the overlap between successive requests is reduced. As the overlap reduces and
spatial dispersion increases, contents available for sharing are reduced. In Figure 5(c), spatial
dispersion decreases as the overlap increases; thus, content remains available for sharing. Finally,
in Figure 5(d), the interactive request rate N increases and the temporal dispersion therefore
decreases; content is again available for sharing.
(a) Low spatial dispersion (b) Spatial dispersion increases
(c) Reduced spatial dispersion (d) Reduced temporal dispersion
Figure 5. Dispersion in workloads.
Measuring temporal dispersion is straightforward, as it is inversely proportional to the request
rate N. To measure spatial dispersion, we proceed as follows.
Let Qp be the number of times an object position p is requested by the system clients during the
whole observation period; Qp is thus the popularity of the object position p (see Section
4.3.3). Any object position p received by a peer is potentially available to be shared with any
other peers. The potential for content sharing P is then defined as follows.
P = Σp (Qp - 1) (1)
Now, to measure the spatial dispersion D, we compare the total amount of content retrieved,
M (i.e. M = Σp Qp), with the potential for content sharing P, as follows.
D = 1 - (P / M) (2)
Note that P < M, so D lies in the interval (0, 1]. For example, if we have an object with a single
position p being requested 100 times, then Qp = 100 and P = 99. Since M = 100, it follows that
D = 1 - (99/100) = 0.01. Still, note that the variable Qp is used to quantify the value (3) and is
simple to calculate: all we have to do is count the number of times each object position is
requested.
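The dispersion computation just described can be sketched as follows. The sketch assumes each request is given as an inclusive (start, end) range of discrete object positions; this representation is an assumption of the example, not part of the paper.

```python
from collections import Counter

# Minimal sketch of Equations (1) and (2) for spatial dispersion.

def spatial_dispersion(requests):
    qp = Counter()                             # Qp: times each position p is requested
    for start, end in requests:
        for p in range(start, end + 1):
            qp[p] += 1
    m = sum(qp.values())                       # M: total amount of content retrieved
    p_share = sum(q - 1 for q in qp.values())  # P: Equation (1)
    return 1 - p_share / m                     # D: Equation (2)
```

For the worked example in the text (a single position requested 100 times), `spatial_dispersion([(0, 0)] * 100)` yields 1 - 99/100 = 0.01; requests with no overlap at all give D = 1.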
Additionally, we outline that, for ease of discussion, we consider the following intervals for a
numerical dispersion categorization: below 0.1 (low spatial dispersion); from 0.1 to 0.5
(intermediate spatial dispersion); above 0.5 (high spatial dispersion).
To conclude this section, we outline that spatial dispersion, temporal dispersion and object
position popularity are able to capture the influences coming from the aspects, metrics and
workload characteristics presented in Figure 3 and Table 1, which makes our analysis much
simpler and thus viable. With this in mind, we have Figure 6. The aspects, metrics and
characteristics shown in Figure 3 and Table 1 have their corresponding influences grouped
under the concepts of spatial dispersion, temporal dispersion, popularity and/or system (i.e.
related to the system itself and the network infrastructure, security or service). Note that a given
aspect, metric or workload characteristic may simultaneously belong to more than one of these
concepts.
4.3. Results
4.3.1. Temporal dispersion
Most of our media workloads present high temporal dispersion. This is explained by the fact that
in real multimedia servers the clients do not come in crowds since they individually have different
needs and time availabilities for accessing the available contents.
For example, in academic systems, the contents being accessed mainly refer to video classes
which may be seen at various distinct times during the day and night, since students are not
confined to specific time schedules. Moreover, considering P2P swarming systems, the
large measurement study reported in [32] indicates that, for the vast majority of the datasets
studied therein, around 40% to 70% of the swarms have only three or fewer peers and more than
70% of the swarms have fewer than ten peers [33]. These numbers reinforce the general belief
that most scenarios present workloads with high temporal dispersion.
Figure 6. Aspects, workload characteristics and metrics affecting peer selection policies.
Nonetheless, a peer using the BitTorrent protocol stores all of the retrieved content (media) in
local buffers. This buffering makes the time difference between clients’ requests, i.e. the temporal
dispersion, have little impact on content sharing. That is, it does not matter when the content is
retrieved but that it is available for sharing when it is needed. This condition notably minimizes
the impact of high temporal dispersion on content sharing.
On the other hand, given two workloads with different request rates soliciting data over the
same amount of time, the one with the higher request rate, i.e. with lower temporal dispersion,
may potentially request more data and hence incur lower spatial dispersion. That is, temporal
dispersion may have some indirect impact on the spatial dispersion D of a workload.
Figure 7 provides examples of the above situation. It refers to three of our workloads (for a 25-
minute object). More precisely, in Figure 7(a), we have 10 requests at a rate of 4.55 and a spatial
dispersion of 0.87. In Figure 7(b), we have 10 requests at a rate of 6.77 and a spatial dispersion of
0.87. Note that the overlapping between consecutive requests does not change as temporal
dispersion varies. Finally, in Figure 7(c), we have a workload with 20 requests and a request rate
of 9.09, twice the one observed in Figure 7(a). In this example, we have more requests and more
content retrieved. The spatial dispersion reduces to 0.77. Note that the reduction is not due to
temporal dispersion but to the increased number of requests instead.
[Plots for Figure 7 omitted: three panels, y-axis Position in Media (min), x-axis Time (min).]
(a) N = 4.55; D = 0.87 (b) N = 6.77; D = 0.87 (c) N = 9.09; D = 0.77
Figure 7. Impact of temporal dispersion.
Thus, instead of quantitatively assessing the temporal dispersion metric, we simply adopt the
following general guideline: as low temporal dispersion may to some extent favour content
sharing, due to its possible impact on spatial dispersion, one should select as neighbours those
peers with the lowest possible temporal dispersion.
4.3.2. Spatial Dispersion
Figure 8(a) groups the workloads by their interactivity profile and the goal is to see how spatial
dispersion varies for each of these profiles. The y-axis has the dispersion of each workload. The
x-axis shows an arbitrary index of the workload within its interactivity profile. Note that LI
profiles provide very low values of dispersion, MI profiles present intermediate values of
dispersion, and HI profiles have high values of dispersion.
Figure 8(b) groups the workloads by application domain; the goal is to see how the spatial
dispersion varies within each domain. The y-axis shows the dispersion of each workload. The
x-axis shows an arbitrary index of the workload within its application domain: academic video,
commercial video or commercial audio. Even though the dispersion may vary within a given
domain, we notice a tendency that allows us to conjecture that the dispersion is high for
educational videos, medium for commercial videos, and low for commercial audio.
[Plots for Figure 8 omitted: y-axis Dispersion, x-axis Workload index; panel (a) series HI, MI, LI; panel (b) series Comm Video, Educ Video, Comm Audio.]
(a) Grouped by workload interactive profile (b) Grouped by domain of application
Figure 8. Spatial dispersion in workloads.
Figure 9 also groups the workloads by the application domain as done in Figure 8(b), but this
time the goal is to see how the spatial dispersion varies within a same domain and considering
distinct interactivity profiles. The y-axis has the dispersion of each workload and the x-axis an
arbitrary index of the workload. More precisely, Figure 9(a) shows that most of the commercial
audio workloads are of low-interactivity profile and have low dispersion. In Figure 9(b), we have
that most of the workloads are of high-interactivity profile and have medium-to-high dispersion.
Finally, from Figure 9(c), we see that the workloads are mainly from the low interactivity profile
and medium interactivity profile. Note that, for the low interactivity profile, we have low
dispersion and, for the medium interactivity profile, we have medium dispersion.
[Plots for Figure 9 omitted: y-axis Dispersion, x-axis Workload index; series HI, MI, LI per panel.]
(a) Commercial audio (b) Educational Video (c) Commercial Video
Figure 9. Spatial dispersion in workloads of an application domain.
Figure 10 illustrates the spatial dispersion for the workloads grouped by object length. The
goal now is to see how this dispersion varies as a function of object length within the application
domains. The y-axis shows the dispersion of each workload and the x-axis an arbitrary index
of the workload. More precisely, Figure 10(a) clearly shows that most content (media) of
less than five minutes has low dispersion. In other words, regardless of the application domain,
clients tend to retrieve shorter objects entirely. Figure 10(b) shows the dispersion for 5-to-20-
minute objects. Most of the objects in this category are from the educational video domain and
show high dispersion. Finally, Figure 10(c) presents the dispersion for objects longer than 20
minutes. In this case, dispersion varies from medium to high.
[Plots for Figure 10 omitted: y-axis Dispersion, x-axis Workload index; series Comm Video, Educ Video, Comm Audio.]
(a) Less than 5 minutes (b) Between 5 and 20 minutes (c) Above 20 minutes
Figure 10. Spatial dispersion in workloads considering objects of different lengths.
From the results just observed, we may conclude that: (i) the greater the interactivity is, the
greater the corresponding dispersion is; (ii) the interactivity profile has more influence on the
dispersion value than the application domain; (iii) the longer the object is, the greater the
dispersion value is.
Thus, considering the spatial dispersion metric, we have the following general guideline: as low
spatial dispersion makes content sharing more likely, one should always select as neighbours
those peers with the lowest possible spatial dispersion.
4.3.3. Object Position Popularity
The popularity of an object position p, denoted by Qp, as previously defined in Section 4.2, is the
number of times a position p is requested by the clients, considering the whole time of
observation. Figure 11 then shows the popularity of object positions for three workloads, one
from each interactivity profile. The goal is to evaluate how the interactivity profile influences
object position popularity. The x-axis shows each object (media) position in seconds; the y-axis
shows the number of times each position is requested.
[Plots for Figure 11 omitted: y-axis Number of times requested, x-axis Media position (s).]
(a) LI (b) MI (c) HI
Figure 11. Popularity of object (media) positions.
From the results, for all interactivity profiles, the most popular object positions are often located
at the beginning of the object. Also, the more interactive the workload, the less uniform the
popularity distribution.
Thus, considering the object position popularity metric, we have the following general guideline:
as the first positions of the object are requested more often, one should select as neighbours
those peers that have already retrieved most of the first object positions, since this results in
more opportunities for content sharing.
Lastly, it is important to clarify that the popularity definition is not related to the rarest-first
definition used for piece selection in the traditional BitTorrent protocol. While the rarest-first
definition refers to the number of times an object data unit is replicated among neighbours,
position popularity specifically refers to the number of times an object position has been
requested.
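The guideline above can be sketched as a ranking of candidate peers by how much of the object's early, most popular region they already hold. The candidate representation and the 10% prefix cut-off are assumptions of this sketch, not part of the paper.

```python
# Illustrative ranking: prefer peers that already hold the early object
# positions, which Figure 11 shows to be the most popular ones.

def rank_by_prefix_coverage(candidates, object_length, prefix_frac=0.10):
    # The object's "popular prefix": its first prefix_frac of positions.
    prefix = set(range(int(object_length * prefix_frac)))

    def coverage(positions):
        return len(prefix & positions) / len(prefix)

    return sorted(candidates,
                  key=lambda c: coverage(c["positions"]),
                  reverse=True)
```

A peer holding positions 0-4 of a 100-unit object would thus rank above one holding only positions 50-59, even though the latter holds more content overall.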
4.4. Algorithm
From the results and conclusions of the last section, efficient peer selection policies should
dictate that the nodes selected as neighbours help to reduce dispersion (temporal and spatial)
and take object position popularity into account, thus providing more opportunities for content
sharing and, hence, optimizing the P2P system itself. To this end, we propose observing the
following points:
1) As in the traditional BitTorrent, a newcomer receives a list of possible neighbours from the
tracker;
2) All retrieved contents are stored in local buffers. So there is no need for a peer to request the
same contents twice;
3) Each node keeps a record of the number of times each object position is requested, i.e. the
object position popularity. This record of popularity can be used as a basis to evaluate the
potential for content sharing P in Equation 1 and, hence, to evaluate dispersion in Equation 2.
Besides this record, each node also keeps a record of the buffer contents, as in the traditional
BitTorrent protocol;
4) After receiving the list of possible neighbours, the peer exchanges its record of popularity
with these possible neighbours. The record of buffer contents is also exchanged;
5) The peer can then assess the influence of each possible neighbour with respect to dispersion.
This way, a peer can select its neighbours in such a way to have the neighbour set with
minimum dispersion considering these steps:
Step 1: Let C be the set of peers received from the tracker and S the set of selected
neighbours. Note that S is initially set empty;
Step 2: For each node c of C, evaluate the workload dispersion considering the peer set which
has c and all of the nodes already in S, as follows:
Step 2.1: The node c that results in the lowest workload dispersion is selected as a new
neighbour;
Step 2.2: This recently selected node is then removed from C and included in S;
Step 2.3: Stop when either: C is exhausted or S has the number of neighbours
prescribed by the original BitTorrent protocol.
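The steps above can be sketched as a greedy selection loop. The `dispersion` argument stands for evaluating Equation (2) over the popularity records exchanged with the given peers; treating it as a pluggable function is an assumption of this sketch.

```python
# Minimal sketch of Steps 1-2: greedily grow the selected set S from the
# tracker list C, at each step adding the candidate that minimizes the
# resulting workload dispersion.

def select_neighbours(candidates, dispersion, max_neighbours):
    c, s = list(candidates), []               # Step 1: C from tracker, S empty
    while c and len(s) < max_neighbours:      # Step 2.3: stop conditions
        best = min(c, key=lambda node: dispersion(s + [node]))  # Step 2.1
        c.remove(best)                        # Step 2.2: move best from C to S
        s.append(best)
    return s
```

Each iteration re-evaluates the dispersion of the tentative neighbour set, so the loop performs at most |C| x max_neighbours dispersion evaluations.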
Finally, as complementary guidelines for the algorithm above, we still consider the points that
follow:
a) For LI workloads, where most clients retrieve the object completely, it is preferable to
select as neighbours those peers which have already started receiving data since they will
more likely have the wanted contents available;
b) It is preferable to choose as neighbours those peers with highest request rates N, i.e. with
the lowest temporal dispersion. Peers with higher request rates are more likely to fill their
buffers fast and, thus, are more likely to have contents to share;
c) After forming the neighbour set as described above, the overall system upload and
download capacities should be assessed. In case these capacities are satisfactory, nothing
else needs to be done. In case these values are not satisfactory, the neighbourhood set
needs to be re-evaluated: besides considering the dispersion (temporal and spatial) and
position popularity metrics, the concept of indirect reciprocity should also be deployed
when selecting the new neighbour in Step 2.1 above.
To conclude this section, we outline that the proposed algorithm is complete and rather robust,
since it considers the three metrics herein defined in addition to the concept of indirect
reciprocity. That is, the proposed algorithm indirectly covers the aspects, metrics and
characteristics shown in Figure 3 and Table 1, besides the reciprocity concept, which makes it
applicable to the design of peer selection policies targeted at any kind of interactive scenario.
5. CONCLUSION AND FUTURE WORK
This article introduced an algorithm to be used as basis for implementing peer selection policies
of BitTorrent-like protocols in interactive scenarios. To this end, we analyzed the client’s
interactive behaviour when accessing real multimedia systems. Our analysis considered
workloads of real content providers and assessed three metrics herein defined: temporal
dispersion, spatial dispersion and object position popularity. These metrics were then used as
the main guidelines for writing the proposed algorithm.
Among the most important results herein obtained, we may outline the following ones: (i) most of
the media workloads analyzed presented high temporal dispersion. This is because the clients do
not come in crowds since they individually have different needs and time availabilities for
accessing the available contents. Nonetheless, since a peer using the BitTorrent protocol stores all
of the retrieved content in local buffers, high temporal dispersion has little impact on content
sharing; (ii) the spatial dispersion is a direct function of the client’s interactivity level as well as
of the object length; besides, it suffers more influences from the interactivity profile than the
application domain; (iii) the most popular object positions are often located in the beginning of
the object, and the more interactive the workload is, the less uniform the popularity distribution
is; (iv) the proposed algorithm is rather complete in the sense that it is able to consider the three
metrics herein defined besides the concept of indirect reciprocity.
Lastly, we believe future work in this field may especially include: (i) quantifying the
optimization provided by the algorithm design insights revealed in this work. To this end, for
example, it would be necessary to combine what has been revealed in this work with what
has already been proven efficient for piece selection policies [13, 8], resulting in novel proposals
for BitTorrent-like protocols. These proposals should then guide the elaboration of models for
extensive trace-driven simulations so that numerical observations could be obtained; (ii)
studying live streaming on the Internet to evaluate the changes or adaptations that should
occur in BitTorrent's peer selection and piece selection policies, respectively. This would
certainly help the design of novel BitTorrent-like protocols targeted at the requirements of live
streaming services [1, 29]; and (iii) analysing the influences of personalization and adaptation
to the client's capability profile, as well as of the Network Address Translation (NAT) service,
on the design of P2P streaming protocols [34, 35].
REFERENCES
[1] D'Acunto, L., Chiluka, N., Vinkó, T. & Sips, H. (2013) “BitTorrent-like P2P approaches for VoD:
A comparative study”, Computer Networks, Vol. 57, No. 5, pp 1253 – 1276.
[2] D’Acunto, L., Andrade, J. & Sips, H. (2010) “Peer selection strategies for improved QoS in
heterogeneous BitTorrent-like VoD systems”, IEEE International Symposium on Multimedia,
Taichung, Taiwan.
[3] Ramzan, N., Park, H. & Izquierdo, E. (2012) “Video streaming over P2P networks: Challenges
and opportunities”, Signal processing: Image Communication, Vol. 27, pp 401 – 411.
[4] Hoßfeld, T., Lehrieder, F., Hock, D., Oechsner, S., Despotovic, Z., Kellerer, W. & Michel, M.
(2011) “Characterization of BitTorrent swarms and their distribution in the Internet”, Computer
Networks, Vol. 55, No. 5, pp 1197 – 1215.
[5] Hu, Chih-Lin, Chen, Da-You, Chang, & Chen, Yu-Wen. (2010) “Fair Peer Assignment Scheme
for Peer-to-Peer File Sharing”, KSII Transactions on Internet and Information Systems, Vol. 4,
No. 5, pp 709 – 736.
[6] Bharambe, A., Herley, C. & Padmanabhan, V. (2006) “Analyzing and improving a BitTorrent
network’s performance mechanisms”, 25th IEEE International Conference on Computer
Communications, Barcelona, Catalunya, Spain.
[7] Fun, X., Li, M., Ma, J., Ren, Y., Zhao, H. & Su, Z. (2012) “Behavior-based reputation
management in P2P file-sharing networks”, Journal of Computer and System Sciences, Vol. 78,
No. 6, pp 1737 – 1750.
[8] Menasché, D., Rocha, A., Souza e Silva, E., Towsley, D. & Leão, R. (2011) “Implications of peer
selection strategies by publishers on the performance of P2P swarming systems”, ACM
SIGMETRICS Performance Evaluation Review, Vol. 39, No. 3, pp 55 – 57.
[9] Hoffmann, L. J., Rodrigues, C.K.S. & Leão, R. M. M. (2011) “BitTorrent-like protocols for
interactive access to VoD systems”, European Journal of Scientific Research, Vol. 58, No. 4, pp
550-569.
[10] Varvello, M., Steiner, M. & Laevens, K. (2012) “Understanding BitTorrent: a reality check from
the ISP’s perspective”, Computer Networks, Vol.56, No. 40, pp 1054 – 1065.
[11] Menasché, D., Rocha, A., Souza e Silva, E., Leão, R., Towsley, D. & Venkataramani, A. (2010)
“Estimating self-sustainability in peer-to-peer swarming systems”, Performance Evaluation, Vol.
67, pp 1243 – 1258.
[12] Chuntao, L., Huyin, Z. & Lijun, S. (2009) “Research and Design on Peer Selection Strategy of
P2P Streaming”, 5th International Conference on Wireless communications, networking and
mobile computing, Beijing, China.
[13] Romero, P., Aomoza, F. & Rodrígues-Bocca, P. (2013) “Optimum piece selection strategies for a
peer-to-peer video streaming platform”, Computers & Operations Research, Vol. 40, pp 1289 – 1299.
[14] Ye, L., Zhang, H., Li, F. & Su, M. (2010) “A measurement study on BitTorrent system”,
International Journal of Communications, Network and System Sciences, Vol. 3, pp 916 – 924.
[15] Wu, T., Li, M. & Qi, M. (2010) “Optimizing peer selection in BitTorrent networks with genetic
algorithms”, Future Generation Computer Systems, Vol. 26, No. 8, pp 1151 – 1156.
[16] Cohen, B. (2003) “Incentives build robustness in BitTorrent”, First Workshop on Economics of
Peer-to-Peer Systems, Berkeley, EUA.
[17] Legout, A., Urvoy-Keller, G. & Michiardi, P. (2006) “Rarest first and choke algorithms are
enough”, 6th ACM SIGCOM Conference on Internet Measurement, Rio de Janeiro, Brazil.
[18] Menasché, D., Massoulié, L. & Towsley, D. (2010) “Reciprocity and barter in peer-to-peer
systems”, 29th conference on Information communications, San Diego, CA, USA.
[19] Nowak, M. (2006) “Five rules for the evolution of cooperation”, Science, Vol. 314, pp 1560 –
1563.
[20] Nowak, M & Sigmund, K. (2005) “Evolution of indirect reciprocity”, Nature, Vol. 437, pp 1291–
1298.
[21] Shah, P. & Pâris, J.-F. (2007) “Peer-to-Peer multimedia streaming using BitTorrent”, IEEE
International Performance, Computing, and Communications Conference – IPCCC, New Orleans,
USA.
[22] Mol, J., Pouwelse, J., Meulpolder, M., Epema, D. & Sips, H. (2008) “Give-to-Get: Free-riding-
resilient video-on-demand in P2P systems”, SPIE MMCN, San Jose, California, USA.
[23] Yang, Y., Chow, A., Golubchik, L. & Bragg, D. (2010) “Improving QoS in Bittorrent-like VoD
systems”, Proceedings of the IEEE INFOCOM, San Diego, CA, USA.
[24] eTEACH. http://eteach.cs.wisc.edu/index.html
[25] RIPPLES/MANIC. http://manic.cs.umass.edu.
[26] Universo Online. http://www.uol.com.br.
[27] Costa, C., Cunha, I., Borges, A., Ramos, C., Rocha, M., Almeida, J., & Ribeiro-Neto, B. (2004)
“Analyzing Client Interactive Behavior on Streaming Media Servers”, 13th WWW Conf., New
York, USA.
[28] Rocha, M., Maia, M., Cunha, I., Almeida, J. & Campos, S. (2005) “Scalable Media Streaming to
Interactive Users”, ACM MULTIMEDIA, Singapore, Singapore.
[29] Mol, J., Bakker, A., Pouwelse, J., Epema, D. & Sips, H. (2009) “The Design and Deployment of a
BitTorrent Live Video Streaming Solution”, 11th IEEE International Symposium on Multimedia,
San Diego, CA, USA.
[30] Rocha, M. (2007) “Impactos da interatividade na escalabilidade de protocolos para mídia
contínua” (Impacts of interactivity on the scalability of continuous-media protocols), doctoral
dissertation, Universidade Federal de Minas Gerais, Dept. Ciências da Computação, Belo
Horizonte, MG, Brazil.
[31] Liu, Y., Guo, Y. & Liang, C. (2008) “A survey on peer-to-peer video streaming systems”,
Peer-to-Peer Networking and Applications, Springer.
[32] Hossfeld, T., Lehrieder, F., Hock, D., Oechsner, S., Despotovic, Z., Kellerer, W. & Michel, M.
(2011) “Characterization of BitTorrent swarms and their distribution in the Internet”, Computer
Networks, Vol. 55, No. 5, pp 1197 – 1215.
[33] de Souza e Silva, E., Leão, R., Menasché, D. & Rocha, A. (2013) “On the interplay between
content popularity and performance in P2P systems”, 10th International Conference, QEST 2013,
Buenos Aires, Argentina.
[34] Zerkouk, M., Mhamed, A. & Messabih, B. (2013) “A user profile based access control model and
architecture”, International Journal of Computer Networks & Communications (IJCNC), Vol. 5,
No. 1, pp 171 – 181.
[35] Masoud, M. Z. M (2013) “Analytical modelling of localized P2P streaming systems under NAT
consideration”, International Journal of Computer Networks & Communications (IJCNC), Vol. 5,
No. 3, pp 73 – 89.
Authors
Carlo Kleber da S. Rodrigues received the B.Sc. degree in Electrical Engineering from the
Federal University of Paraíba in 1993, the M.Sc. degree in Systems and Computation from
IME in 2000, and the D.Sc. degree in Systems Engineering and Computation from the Federal
University of Rio de Janeiro in 2006. He is currently a military advisor of the Brazilian
Army in Ecuador, a professor at the Polytechnic School of the Army in Ecuador, and a
professor at the University Center UniCEUB in Brazil. His research interests include
computer networks and multimedia systems.
Marcus Vinícius de Melo Rocha received the B.Sc. degree in Computer Science from the
Federal University of Minas Gerais in 1984, the M.Sc. degree in Sciences from the Federal
University of Minas Gerais in 2002, and the D.Sc. degree in Sciences from the Federal
University of Minas Gerais in 2007. He currently works on Internet and streaming
infrastructure at the Minas Gerais State Assembly (MG, Brazil). His research interests
include computer networks and streaming systems.