First New York City Peer-to-Peer Workshop
May 25, 2007 @ Polytechnic University in Brooklyn
LC 400 in Dibner Building (white building with triangle above entrance)

9:45 – 10:00: Welcome and Introductions

10:00 – 11:30: Systems and Measurement

Decentralizing Trust for Cooperative Backups
Jinyang Li, NYU
Nguyen Tran, NYU

Abstract: Users today are generating and storing increasing amounts of irreplaceable data such as digital photos and videos. Online backup is an emerging solution to preserve such valuable data. However, existing systems are unsatisfactory: commercial companies provide reliable but expensive backup service, while peer-to-peer based systems are cheap but have limited assurance for data reliability. This paper proposes Friendstore, a system that provides inexpensive and reliable backup by storing data only on nodes that each user trusts to be reliable based on existing social relationships. Because of Friendstore's reliance on social relationships among nodes, it can use much simpler solutions to deter nodes from free-riding and denying restore service. But the fact that each node can only store data on a small subset of nodes limits the system's ability to use idle storage space efficiently. We propose a simple XOR-based coding scheme to double the system's capacity to store information. A Friendstore prototype has backup and restore throughput of up to 3.3 Mbytes/sec and 10.6 Mbytes/sec with and without XOR turned on. Simulations using long-term node activity traces show that each node can reliably maintain up to 80 Gbytes of data in a hypothetical deployment involving low-bandwidth cable modem links.
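The abstract mentions the XOR-based coding scheme only in passing. The sketch below is a minimal illustration of the general idea such a scheme relies on: a helper node stores a single XOR parity of two friends' blocks instead of both blocks, and either block can be rebuilt as long as the other is still retrievable. The function names and block layout are assumptions for illustration, not Friendstore's actual format.

# Illustrative sketch (not Friendstore's on-disk format): storing XOR(a, b)
# instead of both blocks halves the space a helper donates, while either
# original block remains recoverable from the parity plus the other block.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks together."""
    assert len(a) == len(b), "blocks must be padded to the same length"
    return bytes(x ^ y for x, y in zip(a, b))

def store_parity(block_a: bytes, block_b: bytes) -> bytes:
    # The helper keeps only the parity: one block of space now protects
    # two friends' blocks.
    return xor_blocks(block_a, block_b)

def recover(parity: bytes, surviving_block: bytes) -> bytes:
    # XOR is its own inverse: parity ^ b == a and parity ^ a == b.
    return xor_blocks(parity, surviving_block)

if __name__ == "__main__":
    a = b"photo chunk from friend A".ljust(32, b"\0")
    b = b"video chunk from friend B".ljust(32, b"\0")
    parity = store_parity(a, b)
    assert recover(parity, b) == a  # friend A restores after losing its disk
    assert recover(parity, a) == b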
Inferring Network-Wide Quality in P2P Live Streaming Systems
Xiaojun Hei, HKUST
Yong Liu, Polytechnic University
Keith W. Ross, Polytechnic University

Abstract: This paper explores how to remotely monitor network-wide quality in mesh-pull P2P live streaming systems. Peers in such systems advertise to each other buffer maps which summarize the chunks of data that they currently have cached and make available for sharing. We show how buffer maps can be exploited to monitor network-wide quality. We show that the information provided in a peer's advertised buffer map correlates with that peer's viewing continuity and startup latency. Given this correlation, we can remotely harvest buffer maps from many peers and then process the harvested buffer maps to estimate ongoing quality. After having developed this methodology, we apply it to a popular P2P live streaming system, namely PPLive. To harvest buffer maps, we build a buffer-map crawler and also deploy passive sniffing nodes. We process the harvested buffer maps and present results for network-wide playback continuity, startup latency, playback lags among peers, and chunk propagation. The results show that this methodology can provide reasonable estimates of ongoing quality throughout the network.
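The abstract does not spell out the metrics derived from harvested buffer maps; the sketch below illustrates the general kind of processing it describes, using a simple continuity proxy (the fraction of crawled peers that already hold the chunk they are about to play). The data layout and function names are assumptions for illustration, not the paper's actual methodology or buffer-map wire format.

# Illustrative sketch: estimate network-wide playback continuity from
# harvested buffer maps, treating each map as the set of chunk IDs a
# crawled peer reports having cached.

from typing import Dict, Set

def continuity_estimate(buffer_maps: Dict[str, Set[int]],
                        playback_point: Dict[str, int]) -> float:
    """Fraction of crawled peers that already hold the chunk they are
    about to play -- a crude proxy for viewing continuity."""
    if not buffer_maps:
        return 0.0
    ok = sum(1 for peer, chunks in buffer_maps.items()
             if playback_point.get(peer) in chunks)
    return ok / len(buffer_maps)

# Example: two of three peers hold their next chunk -> estimate 0.67.
maps = {"p1": {10, 11, 12}, "p2": {9, 10}, "p3": {12, 13}}
points = {"p1": 11, "p2": 11, "p3": 13}
print(round(continuity_estimate(maps, points), 2))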
z2z: Zeroconf Service Discovery Beyond Local Link
Jae Woo Lee, Columbia University
Henning Schulzrinne, Columbia University
Wolfgang Kellerer, DoCoMo Communications Labs Europe
Zoran Despotovic, DoCoMo Communications Labs Europe

Abstract: In recent years the Zeroconf technology, aka Apple Bonjour, has become the most prominent solution for service discovery in local area networks. It owes its success to a number of factors, including its ease of use, its roots in the proven DNS technology, and, not least, its adoption by a growing number of popular applications. Its multicast-based design, however, effectively confines its usage to a local link, making it unsuitable for many deployment scenarios which would otherwise benefit from its ease of use. In this talk, we present the Zeroconf-to-Zeroconf proxy (z2z), a prototype tool we built as part of our ongoing investigation of transient service discovery mechanisms at various scales. Z2z bridges multiple Zeroconf networks using OpenDHT. We will show a demo of iTunes sharing music across the Internet, and will discuss the challenges in using such mechanisms for Internet-scale service discovery.

Coffee Break: 11:30 – 11:45

11:45 – 12:45: P2P Media Streaming I

Using Layered Video to Provide Incentives in P2P Live Streaming
Zhengye Liu, Polytechnic
Yanming Shen, Polytechnic
Shivendra S. Panwar, Polytechnic
Keith W. Ross, Polytechnic
Yao Wang, Polytechnic

Abstract: We design a distributed incentive mechanism for mesh-pull P2P live streaming networks. In our system, a video is encoded into layers, with lower layers having more importance. The system is heterogeneous, with peers having different uplink bandwidths. We design a distributed protocol in which a peer contributing more uplink bandwidth receives more layers and consequently better video quality. Previous approaches consider single-layer video, where each peer receives the same video quality no matter how much bandwidth it contributes to the system. The simulation results show that our approach can provide differentiated video quality commensurate with a peer's contribution to other peers, and can also discourage free-riders. Furthermore, we also compare our layered approach with a multiple description coding (MDC) approach, and conclude that the layered approach is more promising, primarily due to its higher coding efficiency.

A Hierarchical Characterization of a Live Streaming Media Workload
Virgilio Almeida, Federal University of Minas Gerais, Brazil

Abstract: In this talk we present a characterization of live streaming media content delivered over the Internet. Our characterization of over 3.5 million requests spanning a 28-day period is done at three increasingly granular levels, corresponding to clients, sessions, and transfers. Our findings support two important conclusions. First, we show that the nature of interactions between users and objects is fundamentally different for live versus stored objects. Access to stored objects is user driven, whereas access to live objects is object driven. This reversal of active/passive roles of users and objects leads to interesting dualities. For instance, our analysis underscores a Zipf-like profile for user interest in a given object, which is in contrast to the classic Zipf-like popularity of objects for a given user. Also, our analysis reveals that transfer lengths are highly variable and that this variability is due to the stickiness of clients to a particular live object, as opposed to structural (size) properties of objects. Second, by contrasting two live streaming workloads from two radically different applications, we conjecture that some characteristics of live media access workloads are likely to be highly dependent on the nature of the live content being accessed. In our study, this dependence is clear from the strong temporal correlations observed in the traces, which we attribute to the synchronizing impact of live content on access characteristics.

Lunch: 12:45 – 2:00

2:00 – 3:30: P2P Media Streaming II

Stored Media Streaming in BitTorrent-like P2P Networks
Kyung-Wook Hwang, Columbia
Vishal Misra, Columbia
Dan Rubenstein, Columbia

Abstract: Peer-to-peer (P2P) networks exist on the Internet today as a popular means of data distribution. However, conventional uses of P2P networking involve distributing stored files for use after the entire file has been downloaded. In this work, we investigate whether P2P networking can be used to provide real-time playback capabilities for stored media. For real-time playback, users should be able to start playback immediately or almost immediately after requesting the media and to have uninterrupted playback during the download. Achieving this goal requires download scheduling strategies that effectively maintain the balance between downloading pieces in playing order (earliest-first), which enables uninterrupted playback, and in rarest-first order, which enables high piece diversity. We consider an approach to peer-assisted streaming that is based on the currently popular and efficient BitTorrent-like system. In particular, we show that dynamic adaptation of the probabilities of earliest-first and rarest-first selection, together with coding techniques, can offer significant improvements for real-time playback.
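The abstract describes dynamically adapting the probability of earliest-first versus rarest-first piece selection; the sketch below illustrates that general idea. The adaptation rule (tying the earliest-first probability to how much contiguous buffer remains ahead of the playhead) and all names are assumptions for illustration, not the authors' actual policy.

# Illustrative piece picker (assumed policy, not the paper's): choose the
# earliest missing piece with probability p, otherwise the rarest missing
# piece, and adapt p to how close playback is to running out of data.

import random

def pick_piece(missing, availability, playback_pos, buffered_until,
               target_buffer=30):
    """missing: set of piece indices not yet downloaded
    availability: piece index -> number of neighbors holding it
    playback_pos / buffered_until: current play position and the last
    contiguously available piece."""
    if not missing:
        return None
    # Less contiguous buffer ahead of the playhead -> lean harder on
    # earliest-first to avoid a playback stall.
    slack = max(0, buffered_until - playback_pos)
    p_earliest = max(0.0, 1.0 - slack / target_buffer)
    if random.random() < p_earliest:
        return min(missing)                                   # earliest-first
    return min(missing, key=lambda i: availability.get(i, 0))  # rarest-first

# Example: the buffer is nearly empty, so the earliest missing piece (5) is chosen.
print(pick_piece({5, 6, 9}, {5: 8, 6: 3, 9: 1}, playback_pos=4, buffered_until=4))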
Stochastic Fluid Theory for P2P Streaming Systems
Rakesh Kumar, Bloomberg
Yong Liu, Polytechnic
Keith W. Ross, Polytechnic

Abstract: We develop a simple stochastic fluid model that seeks to expose the fundamental characteristics and limitations of P2P streaming systems. This model accounts for many of the essential features of a P2P streaming system, including the peers' real-time demand for content, peer churn (peers joining and leaving), peers with heterogeneous upload capacity, limited infrastructure capacity, and peer buffering and playback delay. The model is tractable, providing closed-form expressions which can be used to shed insight on the fundamental behavior of P2P streaming systems. The model shows that performance is largely determined by a critical value. When the system is of moderate to large size, if a certain ratio of traffic loads exceeds the critical value, the system performs well; otherwise, the system performs poorly. Furthermore, large systems have better performance than small systems, since they are more resilient to bandwidth fluctuations caused by peer churn. Finally, buffering can dramatically improve performance in the critical region, for both small and large systems. In particular, buffering can bring more improvement than additional infrastructure bandwidth can.

The Pollution Attack in P2P Live Video Streaming: Measurement Results and Defenses
Prithula Dhungel, Polytechnic
Xiaojun Hei, Polytechnic
Keith W. Ross, Polytechnic
Nitesh Saxena, Polytechnic

Abstract: P2P mesh-pull live video streaming applications – such as CoolStreaming, PPLive, and PPStream – have become popular in recent years. In this paper, we examine the stream pollution attack, in which the attacker mixes polluted chunks into the P2P distribution, degrading the quality of the rendered media at the receivers. Polluted chunks received by an unsuspecting peer not only affect that single peer: since the peer also forwards chunks to other peers, and those peers in turn forward chunks to more peers, the polluted content can potentially spread through much of the P2P network. The contribution of this paper is twofold. First, by experimenting with and measuring a popular P2P live video streaming system, we show that the pollution attack can be devastating. Second, we evaluate the applicability of three possible defenses to the pollution attack: blacklisting, traffic encryption, and chunk signing. Among these, we conclude that the chunk signing solutions are most suitable.
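The abstract names chunk signing as the most suitable defense but does not detail a scheme; the sketch below shows one generic way a peer could verify that a chunk was signed by the stream source before rendering or forwarding it, so polluted chunks are dropped rather than propagated. The key handling, signature scheme, and function names are assumptions for illustration, not the specific defenses evaluated in the paper.

# Generic chunk-signing sketch (assumed construction, not the paper's
# evaluated schemes). Requires the third-party 'cryptography' package.
# The source signs each chunk; peers verify before playing or forwarding.

from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def sign_chunk(private_key: ed25519.Ed25519PrivateKey,
               chunk_id: int, payload: bytes) -> bytes:
    # Bind the signature to the chunk's position in the stream as well as
    # its contents, so a valid chunk cannot be replayed elsewhere.
    return private_key.sign(chunk_id.to_bytes(8, "big") + payload)

def verify_chunk(public_key: ed25519.Ed25519PublicKey,
                 chunk_id: int, payload: bytes, signature: bytes) -> bool:
    try:
        public_key.verify(signature, chunk_id.to_bytes(8, "big") + payload)
        return True
    except InvalidSignature:
        return False

# Example: a tampered payload fails verification and would not be forwarded.
source_key = ed25519.Ed25519PrivateKey.generate()
pub = source_key.public_key()
sig = sign_chunk(source_key, 42, b"video chunk bytes")
assert verify_chunk(pub, 42, b"video chunk bytes", sig)
assert not verify_chunk(pub, 42, b"polluted chunk!!!", sig)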
Coffee Break: 3:30 – 3:45

3:45 – 4:45: Architecting P2P

P4P: Proactive Provider Participation for P2P
Haiyong Xie, Yale
Arvind Krishnamurthy, Yale
Richard Yang, Yale

Abstract: There have been increasing tensions between peer-to-peer (P2P) applications and network service providers. Without direct access to network information, P2P applications depend mainly on inefficient network inference and network-oblivious peering, leading to sub-optimal application performance. Network providers, on the other hand, only passively react to P2P applications by filtering them or placing rate limits on them, leading to either unhappy customers or excessive network bandwidth usage. These tensions lead to significant inefficiencies for both P2P applications and network service providers. We propose a lightweight architecture called Proactive Provider Participation (P4P) in which network providers proactively participate in the peering decisions of P2P applications. The design takes into account the objectives of both network providers and P2P applications, as well as privacy and system scalability issues. We take BitTorrent as an example to demonstrate how the design works. Through simulation and experiments, we demonstrate the performance improvement for both network providers and P2P applications. We will also discuss how to apply the design to other P2P applications, and the incentive issues network providers and P2P applications face in adopting the design.

Peer-to-Peer Protocol (P2PP)
Salman Abdul Baset, Columbia University
Henning Schulzrinne, Columbia University

Abstract: In this talk, I will present the Peer-to-Peer Protocol (P2PP), which provides the flexibility to implement several structured and unstructured P2P protocols such as Chord, Kademlia, Pastry, and Gia. It achieves this flexibility by separating protocol-dependent mechanisms from protocol-independent ones. It is an ongoing effort within the IETF P2PSIP working group. I will also discuss the differences between peer-to-peer VoIP and file-sharing systems and the issues involved in building a P2P VoIP system.
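The key design point in the P2PP abstract is the separation of protocol-dependent from protocol-independent mechanisms. The sketch below illustrates one generic way to express such a split, with a common node layer delegating only routing decisions to a pluggable overlay algorithm. The interface and class names are assumptions for illustration and are not taken from the P2PP specification.

# Illustrative split (assumed structure, not the P2PP spec): the Node's
# lookup/maintenance code is protocol-independent, while the pluggable
# RoutingAlgorithm supplies the protocol-dependent behavior (e.g., a
# Chord-style ring step versus a Kademlia-style XOR-distance step).

from abc import ABC, abstractmethod
from typing import Dict, Optional

class RoutingAlgorithm(ABC):
    """Protocol-dependent part: how the overlay structures its routing state."""

    @abstractmethod
    def on_peer_learned(self, peer_id: int, addr: str) -> None: ...

    @abstractmethod
    def next_hop(self, key: int) -> Optional[str]: ...

class ChordLikeRouting(RoutingAlgorithm):
    """Toy structured overlay on a ring of 2**16 identifiers."""

    RING = 2 ** 16

    def __init__(self) -> None:
        self.peers: Dict[int, str] = {}

    def on_peer_learned(self, peer_id: int, addr: str) -> None:
        self.peers[peer_id] = addr

    def next_hop(self, key: int) -> Optional[str]:
        if not self.peers:
            return None
        # Greedy step toward the known peer that most closely precedes the key.
        best = min(self.peers, key=lambda p: (key - p) % self.RING)
        return self.peers[best]

class Node:
    """Protocol-independent part: the same code runs for any plugged-in algorithm."""

    def __init__(self, routing: RoutingAlgorithm) -> None:
        self.routing = routing

    def lookup(self, key: int) -> Optional[str]:
        return self.routing.next_hop(key)

node = Node(ChordLikeRouting())
node.routing.on_peer_learned(100, "198.51.100.7:6000")
node.routing.on_peer_learned(9000, "203.0.113.9:6000")
print(node.lookup(200))  # routes via the peer with ID 100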