The impact of the peer-to-peer paradigm is increasing both in research and in industry. Still, serious applications for P2P-based systems are rare. On the other hand, Emergency Call Handling (ECH) is (or will be) a mandatory function for VoIP services. In this paper we investigate international legal and technical requirements of ECH and present ECHoP2P, a solution that fulfills these requirements. Based on Globase.KOM and HiPNOS.KOM, ECHoP2P provides the functionality to determine the closest and (geographically) responsible Emergency Station for a calling peer. Further, Emergency Calls are processed with the highest priority in the overlay, so that quality of service guarantees are given. We evaluated ECHoP2P thoroughly and present a quality and cost analysis, the identified tradeoffs, and the effects of optimization parameters. ECHoP2P provides a fully evaluated solution for Emergency Call Handling and for further location-aware applications.
On completion of this module, one should understand the parameters required during a drive test, what each parameter means, and why it is important.
These parameters appear in windows such as:
a) Current Channel
b) Radio parameters
c) Serving + Neighbors
Time: The system time of the computer.
Cell name: Displays the name of the serving sector according to the cell file loaded in TEMS.
CGI: Cell Global Identity, which is unique for every sector of a site. It consists of MCC, MNC, LAC, and CI.
Cell GPRS Support: Indicates whether the sector supports GPRS. Values are Yes or No.
Band: The frequency band in which the mobile is operating, e.g. GSM 900/1800.
BCCH ARFCN: The BCCH carrier on which the mobile station is being served.
TCH ARFCN: The traffic frequency on which the call is in progress.
BSIC (Base Station Identity Code): A combination of the Network Color Code (NCC, 0–7) and the Base Station Color Code (BCC, 0–7), e.g. 62. It is decoded by the mobile from every Synchronization Channel message.
Mode: Shows the state in which the mobile is operating: Idle, Dedicated, or Packet.
Time slot: The time slot of the TRX on which the current TCH call is in progress.
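The identity parameters above combine in simple ways: the CGI is just the concatenation MCC-MNC-LAC-CI, and the BSIC is a 6-bit value whose upper 3 bits are the NCC and lower 3 bits are the BCC. A minimal sketch (function names are illustrative, not a TEMS API):

```python
# Hypothetical sketch: decoding the 6-bit BSIC broadcast on the
# Synchronization Channel into NCC and BCC, and composing a CGI string.

def decode_bsic(bsic6: int) -> tuple[int, int]:
    """Split the 6-bit BSIC into NCC (upper 3 bits) and BCC (lower 3 bits)."""
    ncc = (bsic6 >> 3) & 0b111
    bcc = bsic6 & 0b111
    return ncc, bcc

def format_cgi(mcc: int, mnc: int, lac: int, ci: int) -> str:
    """CGI = MCC-MNC-LAC-CI, unique per sector."""
    return f"{mcc:03d}-{mnc:02d}-{lac}-{ci}"

ncc, bcc = decode_bsic(0b110010)        # 6-bit value 50 -> NCC=6, BCC=2, shown as "62"
print(ncc, bcc)                         # 6 2
print(format_cgi(404, 10, 1234, 5678))  # 404-10-1234-5678
```

The displayed "62" in the example above is thus two decimal digits (NCC then BCC), not the decimal value of the 6-bit field.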
ETFRC: Enhanced TFRC for Media Traffic over Internet
The evident increase in media traffic over the Internet is expected to worsen its congestion state. The TCP-friendly rate control protocol (TFRC) is one of the most promising congestion control techniques developed so far. TFRC has been thoroughly tested in terms of TCP-friendliness, responsiveness, and fairness; yet its impact on the visual quality and peak signal-to-noise ratio (PSNR) of media traffic traversing the Internet is still questionable. In this paper we point out the enhancements required for TFRC to produce the maximum PSNR for Internet media traffic. First, we examined the default value of n, the number of loss intervals used in calculating the loss event rate in the TFRC equation; the latest TFRC RFC recommends setting it to 8. We investigated the effect on the PSNR of video transmitted over TFRC of switching n across the values from 2 to 16, in a simulated network environment and for a number of arbitrary video sequences. Our simulation results showed that running TFRC with n=11 reached the maximum PSNR among all examined values of n, including the default. Second, we tested the impact on PSNR of varying both n and Nfb, the frequency of feedback messages sent by the TFRC receiver to its sender every round-trip time (RTT); the default value of Nfb is 1. We scanned every combination of n from 2 to 16 and Nfb from 1 to 4 and recorded the resulting PSNR. Several combinations of n and Nfb produced higher PSNR values than the defaults in the TFRC request for comments (RFC).
We therefore suggest an enhanced TFRC, abbreviated ETFRC, with n and Nfb set to 11 and 4 respectively, as a replacement for the traditional TFRC, to reach higher PSNR for media traffic over the Internet.
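The parameter n tuned above enters TFRC through the loss event rate: the rate is the inverse of a weighted mean over the most recent n loss intervals. The weight pattern below follows the RFC's scheme for n=8; generalizing the same formula to arbitrary n is our assumption for illustration:

```python
# Sketch of the loss event rate computation that the paper tunes via n
# (the number of loss intervals). For n=8 the weights come out as
# 1, 1, 1, 1, 0.8, 0.6, 0.4, 0.2, matching the RFC's recommendation.

def loss_event_rate(intervals, n):
    """p = 1 / weighted mean of the most recent n loss intervals."""
    recent = intervals[:n]  # intervals[0] is the newest
    weights = [min(1.0, (n - i) / (n / 2 + 1)) for i in range(len(recent))]
    i_mean = sum(w * x for w, x in zip(weights, recent)) / sum(weights)
    return 1.0 / i_mean

# Packet counts between loss events, newest first.
history = [90, 110, 100, 95, 105, 100, 98, 102, 100, 100]
print(loss_event_rate(history, 8))   # with the default n = 8
print(loss_event_rate(history, 11))  # with the paper's suggested n = 11
```

A larger n averages over more history, so the rate reacts more smoothly (and more slowly) to new loss events, which is plausibly why it interacts with video PSNR.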
Microsoft RemoteFX promises to enhance user QoE for rich media applications running on remote desktops, and IPQ can be a key technology to help deliver on that promise.
Optimal Network and Frequency Planning for WLAN - Abhishek Verma
The design of a wireless communication system plays an important role in the performance of the network. There are two major parts in planning a WLAN. First, finding the optimum number and locations of wireless access points (APs) to achieve the best coverage area (with few or no gaps) by maximizing signal strength. Second, allocating frequency channels to these APs so as to minimize channel interference and provide the best throughput. In addition, the number of APs can be reduced to optimize the installation cost of the network. In this paper we present a multi-objective ILP optimization model that maximizes signal strength, minimizes co-channel and adjacent-channel interference by allocating channels to APs at appropriate distances, and minimizes total cost by optimizing the number of APs installed. Each objective is weighted with trade-off parameters that can be tuned to generate designs suitable for different wireless applications.
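The channel-allocation objective can be illustrated with a toy brute-force search: assign the non-overlapping 2.4 GHz channels so that APs within interference range avoid sharing a channel. The paper formulates this as a weighted ILP; the coordinates and interference range below are assumed illustration values, not the paper's data:

```python
# Illustrative sketch of the channel-allocation sub-problem: exhaustively
# try channel assignments for a handful of APs and keep the one with the
# fewest co-channel conflicts between nearby APs.
import itertools, math

aps = {"AP1": (0, 0), "AP2": (10, 0), "AP3": (5, 8), "AP4": (40, 40)}
CHANNELS = (1, 6, 11)        # non-overlapping 2.4 GHz channels
INTERFERENCE_RANGE = 15.0    # metres within which same-channel APs interfere (assumed)

def conflicts(assignment):
    """Count pairs of APs on the same channel within interference range."""
    count = 0
    for a, b in itertools.combinations(assignment, 2):
        close = math.dist(aps[a], aps[b]) < INTERFERENCE_RANGE
        if close and assignment[a] == assignment[b]:
            count += 1
    return count

best = min(
    (dict(zip(aps, combo)) for combo in itertools.product(CHANNELS, repeat=len(aps))),
    key=conflicts,
)
print(best, "conflicts:", conflicts(best))
```

A real planner replaces this exponential enumeration with the ILP's binary assignment variables and adds the coverage and AP-count terms with their trade-off weights.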
Broadcasting protocols can improve the efficiency of a VOD service by minimizing the bandwidth required to transfer videos to clients on request. The harmonic broadcasting protocol is the most prominent among these protocols. Here I present the characteristics and functionality of the poly-harmonic broadcasting protocol. At the end of this paper, I also establish a hypothesis for modifying the poly-harmonic protocol so that clients can receive the requested data with less waiting time and less buffer storage.
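The bandwidth/wait trade-off behind harmonic broadcasting can be made concrete: segment i of the video is broadcast at rate b/i, so total server bandwidth grows only with the harmonic number H_N while the client's worst-case startup wait shrinks as D/N. A small numeric sketch (the 120-minute video length is an assumed example):

```python
# Harmonic broadcasting arithmetic: server bandwidth = b * sum(1/i) over
# N segments; maximum client wait = one segment duration = D / N.
from fractions import Fraction

def harmonic_cost(n_segments, video_len_min, playback_rate=1):
    bandwidth = playback_rate * sum(Fraction(1, i) for i in range(1, n_segments + 1))
    max_wait_min = Fraction(video_len_min, n_segments)
    return float(bandwidth), float(max_wait_min)

for n in (4, 16, 64):
    bw, wait = harmonic_cost(n, video_len_min=120)
    print(f"N={n:3d}: server bandwidth = {bw:.2f}x playback rate, "
          f"max client wait = {wait:.2f} min")
```

Doubling the number of segments halves the wait but adds only a small constant to the bandwidth, which is exactly the property the poly-harmonic variants refine.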
Minimizing network delay, or latency, is a critical factor in delivering mobile broadband services; businesses and users expect network response to be close to instantaneous. Excess latency can have a profound effect on user experience: from excess delay during a simple phone conversation, to reduced throughput at the edge of cell coverage areas as RAN optimization techniques lose effectiveness, to slow-loading webpages and delays in streaming video. Response delays negatively impact revenue. In financial institutions, low-latency networks have become a competitive advantage, where even a few extra microseconds can enable trades to execute ahead of the competition.
The direct correlation between delay and revenue in the web browsing experience is well documented. Amazon famously claimed that every 100 millisecond reduction in delay led to a one percent increase in sales. Google also stated that for every half second delay, it saw a 20 percent reduction in traffic.
For LTE network operators, control of latency is growing in importance as both an operational and a business issue. Low latency is not only critical to maintaining the quality of the user experience (and therefore the operator's competitive advantage) for growing social, M2M, and real-time services; latency reduction is also fundamental to meeting the capacity expectations of LTE-A, where latency budgets will be cut in half and X2 will need to perform at microsecond speed.
Total network latency is the sum of delay from all the network components, including the air interface, the processing, switching, and queuing of all network elements (core and RAN) along the path, and the propagation delay in the links. With ever-tightening latency expectations, the relative contribution of any individual network element, such as a security gateway, must be minimized. For example, when latency budgets were targeting 150 ms, a network node providing packet processing at 250 μs was only adding 0.17% to the budget. However, in LTE-A, with latency targets slashed to 10 ms, that same network node will consume almost 15x more of the budget. More importantly, when placed on the S1 with a target of only 1 ms, 250 μs is 25% of the entire S1 latency allocation and endangers meeting the microsecond latency needed at the X2. Clearly, operators need to apply stringent latency requirements to all network nodes when designing LTE and LTE-A networks.
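The budget arithmetic in the paragraph above, made explicit: the share a fixed 250-microsecond processing node consumes out of shrinking end-to-end latency budgets.

```python
# Percentage of the latency budget consumed by one fixed-delay node
# as the end-to-end budget shrinks from legacy targets to LTE-A and S1.
NODE_DELAY_US = 250

for label, budget_us in [("legacy 150 ms", 150_000),
                         ("LTE-A 10 ms", 10_000),
                         ("S1 1 ms", 1_000)]:
    share = NODE_DELAY_US / budget_us * 100
    print(f"{label}: {share:.2f}% of budget")
# legacy 150 ms: 0.17% of budget
# LTE-A 10 ms: 2.50% of budget
# S1 1 ms: 25.00% of budget
```

The jump from 0.17% to 2.5% is the "almost 15x" the text cites; the same unchanged node then swallows a quarter of a 1 ms S1 budget.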
This presentation introduces the web design trends for 2014 that are already making waves in the web development industry, with the inception of new concepts and approaches.
Peer-to-peer and mobile networks have gained significant attention from both the research community and industry. Applying the peer-to-peer paradigm in mobile networks leads to several problems regarding the bandwidth demand of peer-to-peer networks: time-critical messages are delayed and delivered unacceptably slowly, and scarce bandwidth is wasted on messages of lower priority. Therefore, the focus of this paper is on bandwidth management issues at the overlay layer and how they can be solved. We present HiPNOS.KOM, a priority-based scheduling and active queue management system. It guarantees better QoS for higher-prioritized messages in the upper network layers of peer-to-peer systems. Evaluation using the peer-to-peer simulator PeerfactSim.KOM shows that HiPNOS.KOM brings significant improvement to Kademlia in comparison to FIFO and Drop-Tail, the strategies used on each peer today. With HiPNOS.KOM, user-initiated lookups in Kademlia have a 24% shorter operation duration.
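The core idea can be sketched in a few lines (a minimal stand-in, not the paper's API): overlay messages are queued by priority rather than FIFO, and on overflow the least urgent message is dropped instead of the newest one, as Drop-Tail would do.

```python
# Minimal priority scheduling + active queue management sketch:
# lower priority number = more urgent; overflow drops the least urgent entry.
import heapq
from itertools import count

class PriorityMessageQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []       # (priority, seq, msg)
        self._seq = count()   # FIFO tie-break within a priority class

    def enqueue(self, priority, msg):
        heapq.heappush(self._heap, (priority, next(self._seq), msg))
        if len(self._heap) > self.capacity:
            # AQM step: drop the least urgent message, not the arriving one.
            self._heap.remove(max(self._heap))
            heapq.heapify(self._heap)

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = PriorityMessageQueue(capacity=3)
q.enqueue(2, "maintenance ping")
q.enqueue(0, "emergency call")     # highest priority
q.enqueue(1, "user lookup")
q.enqueue(2, "bulk replication")   # overflow: a priority-2 message is dropped
print(q.dequeue())                 # emergency call
```

Under Drop-Tail, the arriving "bulk replication" would be rejected regardless of what is already queued; here the queue instead protects whatever is most urgent at the moment of overflow.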
Drive tests and propagation prediction software are the two methods used to check the coverage area of a particular wireless system. Generally, prediction software is used in conjunction with radio signal measurements in order to determine an accurate picture of signal propagation. In some cases, field measurements may need to be taken in order to calibrate the prediction software.
The project was a study-based report on the RAN evolution path from 2.5G EDGE networks to HSDPA. HSDPA is a 3.5G wireless cellular system, a cost-efficient upgrade to UMTS systems, and promises to deliver performance comparable to today's wireless LAN services, but with the added benefits of mobility and ubiquitous coverage. It can offer data rates of up to 14.4 Mbps, far beyond what 2.5G and 3G cellular systems could offer. The project focuses on a two-step upgrade: first from GSM towards the deployment of UMTS/WCDMA, and then towards HSDPA. HSDPA ushers in a new era of "mobile broadband" services and faces competition from WiMAX, but with GSM services having an obvious upgrade path to WCDMA, HSDPA seems to be leading the market in several parts of the world. HSDPA is an extremely cost-effective path to higher data rates and provides more efficient use of valuable spectrum. It enables operators to compete effectively in increasingly converged markets and to satisfy the need for enhanced QoS in an efficient and cost-effective manner.
IEEE CRS 2014 - Secure Distributed Data Structures for Peer-to-Peer-based Social Networks - Kalman Graffi
Jens Janiuk, Alexander Mäcker, Kalman Graffi -
Secure Distributed Data Structures for Peer-to-Peer-based Social Networks
In IEEE CTS ’14: Proceedings of the IEEE International Conference on Collaboration Technologies and Systems, 2014.
Abstract—Online social networks are attracting billions of users nowadays, both on a global scale and in social enterprise networks. Using distributed hash tables and peer-to-peer technology allows online social networks to be operated securely and efficiently using only the resources of the user devices, thus alleviating censorship or data misuse by a single network operator. In this paper, we address the challenges that arise in implementing reliable and convenient-to-use distributed data structures, such as lists or sets, in such a distributed-hash-table-based online social network. We present a secure, distributed list data structure that manages the list entries in several buckets in the distributed hash table. The list entries are authenticated, integrity is maintained, and access control for single users and also groups is integrated. The approach for secure distributed lists is also applied to prefix trees and sets, and implemented and evaluated in a peer-to-peer framework for social networks. Evaluation shows that the distributed data structure is convenient and efficient to use and that the security requirements hold.
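The bucketing idea in the abstract can be sketched as follows: one logical list is stored under several DHT keys derived from the list name, so no single node holds the whole list. Here a plain dict stands in for the DHT, and the bucket size is an assumed illustration value; authentication and access control are omitted:

```python
# Sketch of a distributed list stored as several fixed-size buckets in a DHT.
import hashlib

BUCKET_SIZE = 3
dht = {}  # key -> bucket (list of entries); stands in for the real overlay

def bucket_key(list_name, index):
    """Each bucket gets its own DHT key derived from the list name + index."""
    return hashlib.sha1(f"{list_name}#{index}".encode()).hexdigest()

def append(list_name, entry):
    index = 0
    while True:
        bucket = dht.setdefault(bucket_key(list_name, index), [])
        if len(bucket) < BUCKET_SIZE:
            bucket.append(entry)
            return
        index += 1  # current bucket full: spill into the next one

def read_all(list_name):
    entries, index = [], 0
    while (key := bucket_key(list_name, index)) in dht:
        entries.extend(dht[key])
        index += 1
    return entries

for friend in ["alice", "bob", "carol", "dave"]:
    append("friends-of-eve", friend)
print(read_all("friends-of-eve"))  # ['alice', 'bob', 'carol', 'dave']
print(len(dht))                    # 2 buckets
```

In the real system each bucket would additionally carry signatures and access-control metadata so entries can be verified by readers.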
LibreSocial - P2P Framework for Social Networks - Overview - Kalman Graffi
Digital social networks promise to activate social participants and to support them in their interaction patterns. Private relationships evolve into friendships, professional contacts define competence networks, and political opinions grow into revolutionary trends. Social networks often act as a driving force intensifying social and global relationships.
In the future, using the "Peer-to-Peer Framework for Social Networks", everybody will be able to host their personal online social network easily and out of the box, without operating costs and without security risks. The framework offers a large set of interactive apps, which are freely combinable and technically unlimited in their applicability.
The operating costs for such a social network are revolutionary: whether for a network of 10 users or a global network of millions of users, thanks to the peer-to-peer technology used, no expenses arise. Researchers led by Dr.-Ing. Kalman Graffi at the University of Paderborn combined in the framework the advantages of decentralized peer-to-peer applications, of an app market, and of the cloud principle.
The social network is maintained in a peer-to-peer fashion through the computational power of the users' devices; expensive servers are not needed. Still, the availability, retrievability, and security of the users' data are guaranteed. Each user keeps total control over the access rights to his data. Similar to the main property of the cloud, the network's capabilities grow elastically with the number of users. Further plugins can be developed easily, and an included app market allows these plugins to be provided, extending the capabilities and applications of the social network on the fly.
Enormous application opportunities without operating costs are the main reason to use the "P2P Framework for Social Networks", emphasize the researchers of the corresponding project group at the University of Paderborn. The software is already in use as a prototype. Contact us for more information.
Timo Klerx and Kalman Graffi. Bootstrapping Skynet: Calibration and Autonomic Self-Control of Structured Peer-to-Peer Networks. In IEEE P2P ’13: Proceedings of the International Conference on Peer-to-Peer Computing, 2013.
Abstract—Peer-to-peer systems scale to millions of nodes and provide routing and storage functions with best effort quality. In order to provide a guaranteed quality of the overlay functions, even under strong dynamics in the network with regard to peer capacities, online participation and usage patterns, we propose to calibrate the peer-to-peer overlay and to autonomously learn which qualities can be reached. For that, we simulate the peer-to-peer overlay systematically under a wide range of parameter configurations and use neural networks to learn the effects of the configurations on the quality metrics. Thus, by choosing a specific quality setting by the overlay operator, the network can tune itself to the learned parameter configurations that lead to the desired quality. Evaluation shows that the presented self-calibration succeeds in learning the configuration-quality interdependencies and that peer-to-peer systems can learn and adapt their behavior according to desired quality goals.
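The calibration loop can be shown in miniature. The paper learns the configuration-quality mapping with neural networks; here a toy closed-form "simulation" and a nearest-target lookup stand in for both, with parameter names and the quality model entirely assumed for illustration:

```python
# Calibration sketch: (1) simulate the overlay across a parameter grid and
# record the resulting quality, (2) given an operator's quality goal, pick
# the configuration whose recorded quality is closest to that goal.

def simulated_lookup_delay(refresh_interval_s, bucket_size):
    # Toy quality model standing in for a real simulation run.
    return 50 + 2.0 * refresh_interval_s - 1.5 * bucket_size

# Phase 1: systematic simulation over the configuration space.
calibration = {
    (refresh, k): simulated_lookup_delay(refresh, k)
    for refresh in (10, 30, 60)
    for k in (8, 16, 20)
}

# Phase 2: the operator names a quality goal; the network tunes itself.
target_delay_ms = 80
best_config = min(calibration, key=lambda c: abs(calibration[c] - target_delay_ms))
print(best_config, calibration[best_config])  # (30, 20) 80.0
```

Replacing the lookup table with a trained regressor is what lets the real system interpolate to configurations it never simulated.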
Vitaliy Rapp and Kalman Graffi. Continuous Gossip-based Aggregation through Dynamic Information Aging. In IEEE ICCCN ’13: Proceedings of the International Conference on Computer Communications and Networks, 2013.
Abstract—Existing solutions for gossip-based aggregation in peer-to-peer networks use epochs to calculate a global estimation from an initial static set of local values. Once the estimation converges system-wide, a new epoch is started with fresh initial values. Long epochs result in precise estimations based on old measurements, and short epochs result in imprecise aggregated estimations. In contrast to this approach, we present in this paper a continuous, epoch-less approach which considers fresh local values in every round of the gossip-based aggregation. By using an approach for dynamic information aging, inaccurate values and values from peers that have left fade from the aggregation memory. Evaluation shows that the presented approach for continuous information aggregation in peer-to-peer systems monitors the system performance precisely, adapts to changes, and is lightweight to operate.
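The epoch-less idea can be sketched as follows: in every round each peer first ages its running estimate toward its current local measurement, then performs one pairwise gossip exchange. The decay factor below is an assumed illustration parameter, not the paper's:

```python
# Epoch-less gossip averaging with information aging: fresh local values
# keep entering the estimate each round, so stale contributions fade out.
import random

AGING = 0.8  # weight kept from the previous round's estimate (assumed)

def gossip_round(estimates, fresh_values):
    # Age the running estimate toward the current local measurement...
    estimates = [AGING * est + (1 - AGING) * fresh
                 for est, fresh in zip(estimates, fresh_values)]
    # ...then do one pairwise exchange between randomly matched peers.
    order = list(range(len(estimates)))
    random.shuffle(order)
    for a, b in zip(order[::2], order[1::2]):
        avg = (estimates[a] + estimates[b]) / 2
        estimates[a] = estimates[b] = avg
    return estimates

random.seed(7)
local = [10.0, 20.0, 30.0, 40.0]   # e.g. per-peer load measurements
est = local[:]
for _ in range(30):
    est = gossip_round(est, local)
print([round(e, 1) for e in est])  # estimates cluster around the true mean 25.0
```

Because aging re-injects the local values every round, a peer whose measurement changes (or a peer that disappears) shifts the aggregate without waiting for an epoch boundary.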
IEEE ICC 2013 - Symbiotic Coupling of P2P and Cloud Systems: The Wikipedia Case - Kalman Graffi
Lars Bremer and Kalman Graffi. Symbiotic Coupling of P2P and Cloud Systems: The Wikipedia Case. In IEEE ICC ’13: Proceedings of the International Conference on Communications, 2013.
Abstract—Comparative evaluations of peer-to-peer protocols through simulations are a viable approach to judge the performance and costs of the individual protocols in large-scale networks. In order to support this work, we enhanced the peer-to-peer systems simulator PeerfactSim.KOM with a fine-grained analyzer concept, with exhaustive automated measurements and gnuplot generators as well as a coordination control to evaluate a set of experiment setups in parallel. Thus, by configuring all experiments and protocols only once and starting the simulator, all desired measurements are performed, analyzed, evaluated and combined, resulting in a holistic environment for the comparative evaluation of peer-to-peer systems.
Abstract—Cloud computing offers high availability, dynamic scalability, and elasticity while requiring only very little administration. However, this service comes with financial costs. Peer-to-peer systems, in contrast, operate at very low costs but cannot match the quality of service of the cloud. This paper focuses on the case study of Wikipedia and presents an approach to reduce the operational costs of hosting similar websites in the cloud by using a practical peer-to-peer approach. The visitors of the site join a Chord overlay, which acts as a first cache for article lookups. Simulation results show that up to 72% of the article lookups in Wikipedia could be answered by other visitors instead of using the cloud.
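The visitor-cache idea can be sketched in miniature: visitors form a ring keyed by hash, an article lookup is routed to the visitor responsible for the article's ID, and only a cache miss goes to the cloud. Identifier space, visitor count, and request mix below are assumed toy values:

```python
# Chord-style visitor cache in miniature: each article ID is owned by its
# successor on the ring; the cloud is contacted only on a cache miss.
import hashlib
from bisect import bisect_left

def h(s):  # identifier in a small toy ID space
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % 2**16

visitors = sorted(h(f"visitor-{i}") for i in range(50))
cache = {v: set() for v in visitors}   # per-visitor article cache
cloud_hits = 0

def responsible(article_id):
    """Chord-style successor: first visitor clockwise from the article ID."""
    i = bisect_left(visitors, article_id)
    return visitors[i % len(visitors)]

def lookup(article):
    global cloud_hits
    peer = responsible(h(article))
    if article not in cache[peer]:
        cloud_hits += 1          # miss: fetch from the cloud once
        cache[peer].add(article)
    return article

requests = [f"Article-{i % 20}" for i in range(200)]  # popular articles repeat
for r in requests:
    lookup(r)
print(f"served by peers: {1 - cloud_hits / len(requests):.0%}")  # 90% here
```

The hit ratio depends entirely on request popularity and churn, which is what the paper's simulation of real Wikipedia traces quantifies (arriving at the 72% figure).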
IEEE HPCS 2013 - Comparative Evaluation of Peer-to-Peer Systems Using PeerfactSim.KOM - Kalman Graffi
Matthias Feldotto and Kalman Graffi. Comparative Evaluation of Peer-to-Peer Systems Using PeerfactSim.KOM. In IEEE HPCS '13: Proceedings of the International Conference on High Performance Computing and Simulation, 2013.
Kalman Graffi - Monitoring and Management of P2P Systems - 2010
This is the long presentation of the contributions made in the dissertation of Dr.-Ing. Kalman Graffi, Monitoring and Management of P2P Systems. The talk was given on 29 September 2009 in Madrid, Spain.
IEEE CCNC 2011: Kalman Graffi - LifeSocial.KOM: A Secure and P2P-based Soluti...
The phenomenon of online social networks reaches millions of users on the Internet nowadays. In these networks, users present themselves, their interests, and their social links, which they use to interact with other users. In this paper we present LifeSocial.KOM, a P2P-based platform for secure online social networks, which provides the functionality of common online social networks in a totally distributed and secure manner. It is plugin-based, and thus extendible in its functionality, providing secure communication and access-controlled storage as well as monitored quality of service, addressing the needs of both users and system providers. The platform operates solely on the resources of the users, eliminating the concentration of crucial operational costs at one provider. In a testbed evaluation, we show the feasibility of the approach and point out the potential of the P2P paradigm in the field of online social networks.
1. ECHoP2P: Emergency Call Handling over Peer-to-Peer Overlays - Kalman Graffi, Aleksandra Kovacevic, Kyra Wulffert, Ralf Steinmetz
9. Globase.KOM: Geographical LOcation BAsed SEarch (figure: a tree of peers and superpeers A–K over geographical zones; a superpeer whose load exceeds the upper bound L2 splits its zone to reduce load, with L1 as the lower load bound)
12. Closest Emerg. Station: SOS Area Search (figure: the calling peer floods an SOS into an area of radius r around its position; if no Emergency Station answers, the search repeats with a larger radius; legend: peer, superpeer, Emergency Station)
13. Responsible ES: SOS Jurisdiction Search (figure: the SOS is routed through the superpeers to the Emergency Station whose coverage area contains the calling peer's position; legend: peer, superpeer, Emergency Station, Emergency Station coverage area)
15. List of ESs: SOS Jurisdiction Search Extended (figure: the extended search returns a list of Emergency Stations, e.g. G, H, and I, whose coverage areas are relevant to the calling peer; legend: peer, superpeer, Emergency Station, Emergency Station coverage area)
24. Evaluation: Summary of Results (rated not good / good / very good)
SOS Area Search: operation time Medium; contacted peers Medium; SOS traffic Medium; distance to ES Low; number of results Medium; successful operations 98%; suitable for everyday emergencies: no, responsibility areas needed; suitable for catastrophes: yes, but only few results.
SOS Jurisdiction Search: operation time High; contacted peers Low; SOS traffic Low; distance to ES High; number of results Low; successful operations 100%; suitable for everyday emergencies: yes; suitable for catastrophes: no, if the responsible station collapses.
SOS Jurisdiction Search Extended: operation time Low; contacted peers High; SOS traffic High; distance to ES Medium; number of results High; successful operations 100%; suitable for everyday emergencies: yes; suitable for catastrophes: yes.
27. Questions? Kalman Graffi Peer-to-Peer Research Group Dept. of Electrical Engineering and Information Technology Multimedia Communications Lab · KOM Merckstr. 25 · 64283 Darmstadt · Germany Phone (+49) 6151 – 16 49 59 Fax (+49) 6151 – 16 61 52 [email_address] Further information: http://www.KOM.tu-darmstadt.de/ Publications: http://www.KOM.tu-darmstadt.de/Research/Publications/publications.html
Editor's Notes
22 minutes for everything, i.e. 15 minutes of talk.
Bootstrapping: just one zone, the whole world assigned to the first superpeer (Peers with high CPU power, with good network connection, and with a history of long online times, are marked as potential superpeer) Network grows: highly loaded areas clustered into rectangular zones using a clustering algorithm assigned to one peer (preferably to one in that area) Load of the peers… Next slide
Let us now consider scheduling and active queue management methods; I will describe shortly what active queue management is. In general, the system receives requests for a resource and must decide which requester is granted the resource and when. This principle applies to various resources, e.g. computing power, bandwidth, or memory accesses. We assume that requests are buffered in a queue while the resource is in use. Scheduling is the method that determines which requester in the queue gets access to the resource next; it thus determines the order of the requesters in the queue. The reference solution is First-In-First-Out. Now, the system's queue may reach its limits, since it too has only a finite length. For this case there are methods grouped under the umbrella term active queue management: they decide which requests are removed from the queue under overload. A typical example of an active queue management method is Drop-Tail, which discards all newly arriving requests under overload. We see that scheduling and active queue management methods are generally applicable to resource usage. To go deeper into the matter and learn which methods exist and where their advantages and disadvantages lie, I examined various solutions.