A Fairer, Faster Internet Protocol
By Bob Briscoe
The Internet is founded on a very simple premise: shared communications
links are more efficient than dedicated channels that lie idle much of the time.
And so we share. We share local area networks at work and neighborhood
links from home. And then we share again—at any given time, a terabit
backbone cable is shared among thousands of folks surfing the Web,
downloading videos, and talking on Internet phones.
But there’s a profound flaw in the protocol that governs how people share the
Internet’s capacity. The protocol allows you to seem to be polite, even as you
elbow others aside, taking far more resources than they do.
Network providers like Verizon and BT either throw capacity at the problem or
improvise formulas that attempt to penalize so-called bandwidth hogs. Let me
speak up for this much-maligned beast right away: bandwidth hogs are not
the problem. There is no need to prevent customers from downloading huge
amounts of material, so long as they aren’t starving others.
Rather than patching over the problem, my colleagues and I at BT (formerly
British Telecom) have worked out how to fix the root cause: the Internet’s
sharing protocol itself. It turns out that this solution will make the Internet not
just simpler but much faster too.
You might be shocked to learn that the designers of the Internet intended
that your share of Internet capacity would be determined by what your own
software considered fair. They gave network operators no mediating role
between the conflicting demands of the Internet’s hosts—now over a billion
personal computers, mobile devices, and servers.
The Internet’s primary sharing algorithm is built into the Transmission Control
Protocol, a routine on your own computer that most programs run—although
they don’t have to. TCP is one of the twin pillars of the Internet, the other
being the Internet Protocol, which delivers packets of data to particular
addresses. The two together are often called TCP/IP.
Your TCP routine constantly increases your transmission rate until packets fail
to get through some pipe up ahead—a tell-tale sign of congestion. Then TCP
very politely halves your bit rate. The billions of other TCP routines around the
Internet behave in just the same way, in a cycle of taking, then giving, that fills
the pipes while sharing them equally. It’s an amazing global outpouring of self-
denial, like the “after you” protocol two people use when they approach a
door at the same time—but paradoxically, the Internet version happens
between complete strangers, even fierce commercial rivals, billions of times
every second.
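That take-then-give cycle is what engineers call additive increase, multiplicative decrease, or AIMD. The sketch below distills it to its core; the step size and the loss test are simplified placeholders rather than real TCP state, but it shows why two strangers' flows drift toward equal rates.

```python
# A minimal sketch of TCP's AIMD cycle: probe upward gently, halve on
# any sign of congestion. (Simplified: real TCP manages a congestion
# window in bytes, with slow start, timeouts, and much more.)

def aimd_step(rate_mbps: float, packet_lost: bool,
              increase_mbps: float = 0.1) -> float:
    """Return the next sending rate after one round trip."""
    if packet_lost:
        return rate_mbps / 2          # politely halve the bit rate
    return rate_mbps + increase_mbps  # keep probing for spare capacity

# Two senders sharing one pipe converge toward equal shares:
a, b, capacity_mbps = 5.0, 1.0, 8.0
for _ in range(200):
    congested = a + b > capacity_mbps   # packets fail to get through
    a = aimd_step(a, congested)
    b = aimd_step(b, congested)
print(f"{a:.2f} vs {b:.2f} Mb/s")       # nearly equal
```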
The commercial stakes could hardly be higher. Services like YouTube, eBay,
Skype, and iTunes are all judged by how much Internet capacity they can grab
for you, as are the Internet phone and TV services provided by the carriers
themselves. Some of these companies are opting out of TCP’s sharing regime,
but most still allow TCP to control how much they get—about 90 percent of the
200 000 terabytes that cross the Internet each second.
This extraordinary spirit of global cooperation stems from the Internet’s early
history. In October 1986, Internet traffic persistently overran available capacity
—the first of a series of what were called congestion collapses. The TCP
software of the day continued to try to retransmit, aggravating the problem
and causing everyone’s throughput to plummet for hours on end. By mid-1987
Van Jacobson, then a researcher at Lawrence Berkeley National Laboratory,
had coded a set of elegant algorithms in a patch to TCP. (For this he received
the IEEE’s prestigious Koji Kobayashi Computers and Communications Award
in 2002.)
Jacobson’s congestion control accorded well with the defining design principle
of the Internet: traffic control is consigned to the computers around the edges
of the Internet (using TCP), while network equipment only routes and forwards
packets of data (using IP).
The combination of near-universal usage and academic endorsement has
gradually elevated TCP’s way of sharing capacity to the moral high ground,
altering the very language engineers use. From the beginning, equal rates
were not just “equal,” they were “fair.” Even if you don’t use TCP, your
protocol is considered suspect if it’s not “TCP-friendly”—a cozy-sounding idea
meaning it consumes about the same bit rate as TCP would.
Sadly, an equal bit rate for each data flow is likely to be extremely unfair, by
any realistic definition. It’s like insisting that boxes of food rations must all be
the same size, no matter how often each person returns for more or how
many boxes are taken each time.
Consider a neighborhood network with 100 customers, each of whom has a 2-
megabit-per-second access line connected to a single shared 10 Mb/s regional
link. The network provider can get away with such a thin shared pipe because
most of the customers—let’s say 80 of the 100—don’t use it continuously, even
over the peak period. These people might think they are constantly clicking at
their browsers and getting new e-mail, but their data transfers might be active
perhaps only 5 percent of the time.
However, there are also 20 heavy users who download continuously, perhaps
using file-sharing programs that run unattended. So at any one moment, data
is flowing to about 24 users—all 20 heavy users, and 4 of the 80 light ones.
TCP gives 20 shares of the bottleneck capacity to the heavy users and only 4
to the light ones. In a few moments, the 4 light users will have stepped aside
and another 4 will take over their shares. However, the 20 heavy users will
still be there to claim their next 20 shares. They might as well have dedicated
circuits!
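The arithmetic is worth checking. This back-of-envelope script simply restates the scenario's numbers, nothing more:

```python
# Back-of-envelope check of the 100-customer scenario above.
shared_capacity_mbps = 10.0
heavy_flows = 20          # the 20 heavy users, always downloading
light_flows = 4           # ~5% of the 80 light users active at once

per_flow = shared_capacity_mbps / (heavy_flows + light_flows)
print(f"each flow: {per_flow:.2f} Mb/s")                            # ~0.42 Mb/s
print(f"heavy users hold {heavy_flows * per_flow:.1f} of 10 Mb/s")  # ~8.3
print(f"light users hold {light_flows * per_flow:.1f} of 10 Mb/s")  # ~1.7
```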
It gets even worse. Any programmer can just run the TCP routine multiple
times to get multiple shares. It’s much like getting around a food-rationing
system by duplicating ration coupons.
This trick has always been recognized as a way to sidestep TCP’s rules—the
first Web browsers opened four TCP connections. Therefore, it would have
been remarkable if this ploy had not become more common.
A number of such strategies evolved through innocent experimentation. Take
peer-to-peer file sharing—a common way to exchange movies over the
Internet, one that accounts for a large portion of all traffic. It involves
downloading a file from several peers at once. This parallel scheme, sometimes
known as swarming, had become routine by 2001, built into such protocols as
BitTorrent.
The networking community didn’t immediately view connecting with many
machines as a circumvention of the TCP-friendliness rule. After all, each
transfer used TCP, so each data flow “correctly” got one share of any
bottleneck it encountered. But using parallel connections to multiple machines
was a new degree of freedom that hadn’t been thought of when the rules
were first written. Fairness should be defined as a relation between people,
not data flows.
Peer-to-peer file sharing exposed both of TCP’s failings. First, a file-sharing
program might be active 20 times as often as your Web browser, and second,
it uses many more TCP connections, typically 5 or even 50 times as many.
Peer-to-peer thus takes 100 or 1000 times as many shares of Internet
bottlenecks as a browser does.
Returning to our 100 broadband customers: if they were just browsing the
Web and exchanging e‑mail, each would get nearly the full benefit of a 2 Mb/s
access pipe—if 5 customers were active at a time, they’d just squeeze into the
10 Mb/s shared pipe. But if even 20 users started continuous parallel
downloading, the TCP algorithm would send everyone else’s bit rate
plummeting to an anemic 20 kilobits per second—worse than dial‑up! The
problem isn’t the peer-to-peer protocols; it’s TCP’s sharing rules.
Why can’t the service provider simply upgrade that stingy 10 Mb/s shared
pipe? Of course, some upgrades are necessary from time to time. But as a
general approach to the problem of sharing, adding capacity is like throwing
water uphill.
Imagine two competing Internet service providers, both with this 80:20 mix of
light and heavy users. One provider quadruples its capacity; the other doesn’t.
But TCP still doles out the upgrader’s capacity in the same way. So the light
users, who used to have a measly 20 kb/s share, now get a measly 80 kb/s—
still barely better than dial-up. But now the 80 light users must pay
substantially more for four times the long-distance capacity, which they hardly
get to use. No rational network operator would upgrade under these
conditions—it would lose most of its customers.
But there is plenty of evidence that Internet service providers are continuing to
add capacity. This is partly explained by government subsidies, particularly in
the Far East. Equivalently, weak competition, typical in the United States,
allows providers to fund continued investment through higher fees without the
risk of losing customers. But in competitive markets, common in Europe, service
providers have had to attack the root cause: the way their capacity is shared.
Network providers often don’t allow TCP to give all the new capacity straight to
the heavy users. Instead they impose their own sharing regimes on their
customers, thus overriding the worst effects of TCP’s broken regime. Some
limit, or “throttle,” the peak-time bit rate of the peer-to-peer customers.
Others partition the pipe to prevent heavy users encroaching on lighter ones.
Increasingly, the share of Internet capacity you actually get is the result of this
tussle between TCP and the service providers’ allocation schemes.
THROTTLE THIS: Throttling tries to correct today’s TCP system [left] by clamping
down on heavy users [center], but the technique misses a trick. With weighted TCP
sharing [right], light users can go superfast, so they finish sooner, while heavy
users slow only fleetingly, then catch up. All this can be done without any
prioritization in the network.
There’s a far better solution than fighting. It would allow light browsing to go
blisteringly fast but hardly prolong heavy downloads at all. The solution comes
in two parts. Ironically, it begins by making it easier for programmers to run
TCP multiple times—a deliberate break from TCP-friendliness.
Programmers who use this new protocol to transfer data will be able to say
“behave like 12 TCP flows” or “behave like 0.25 of a TCP flow.” They set a new
parameter—a weight—so that whenever your data comes up against others all
trying to get through the same bottleneck, you’ll get, say, 12 shares, or a
quarter of a share. Remember, the network did not set these priorities. It’s the
new TCP routine in your own computer that uses these weights to control the
number of shares it takes from the network.
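One simple way to realize such a weight (a sketch, not our actual protocol) is to scale TCP’s window-growth rule. A TCP flow’s throughput grows roughly with the square root of its per-round-trip window increase, so growing the window by w² packets per round trip, while keeping the standard halving on loss, yields roughly w shares:

```python
# Sketch: one way to make a flow "behave like w TCP flows".
# Standard TCP throughput scales as sqrt(additive increase), so
# scaling the per-round-trip increase by w**2 while keeping the
# usual halving on loss gives roughly w times one flow's share.
# Illustrative only; all names are my own.
class WeightedAIMD:
    def __init__(self, weight: float, cwnd: float = 1.0):
        self.weight = weight   # w = 12 -> 12 shares; w = 0.25 -> a quarter
        self.cwnd = cwnd       # congestion window, in packets

    def on_round_trip_without_loss(self):
        self.cwnd += self.weight ** 2        # standard TCP adds 1

    def on_loss(self):
        self.cwnd = max(1.0, self.cwnd / 2)  # same halving as standard TCP

# "Behave like 12 TCP flows" versus "behave like 0.25 of a flow":
interactive = WeightedAIMD(weight=12.0)
bulk = WeightedAIMD(weight=0.25)
```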
At this point in my argument, people generally ask why everyone won’t just
declare that they each deserve a huge weight. The answer to the question
involves a trick that gives everyone good reason to use the weights sparingly
—a trick I’ll get to in a minute. But first, let’s check how this scheme ensures
the lightning-fast browsing rates I just promised.
The key is to set the weights high for light interactive usage, like surfing the
Web, and low for heavy usage, such as movie downloading. Whenever these
uses conflict, flows with the higher weighting—those from the light users—will
go much faster, which means they will also finish much sooner. Then the heavy
flows can expand back to a higher bit rate sooner than otherwise. This is why
the heavy flows will hardly take any longer to complete. The weighting scheme
uses the same strategy as a restaurant manager who says, “Get those
individual orders out right away, then come serve this party of 12.” But today’s
Internet has the balance of the weights exactly the other way around.
That brings us to the second part of the solution: how can we encourage
everyone to flip the weights? This task means grappling with something that is
often called “the tragedy of the commons.” A familiar example is global
warming, where everyone happily pursues what’s best for them—leaving lights
on, driving a big car—despite the effect this may have on everyone else
through the buildup of carbon dioxide and other greenhouse gases.
On the Internet, what matters isn’t how many gigabytes you download but
how many you download when everyone else is trying to do the same. Or,
more precisely, it’s the volume you download weighted by the prevailing level
of congestion. Let’s call this your congestion volume, measured in bytes. Think
of it as your carbon footprint for the Internet.
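In code, a congestion-volume meter is almost trivial: it weights each byte sent by the fraction of packets being dropped or marked at the time. A minimal sketch, with illustrative names:

```python
# Sketch: metering "congestion volume" -- bytes sent weighted by
# the prevailing congestion level. Interface names are illustrative.
class CongestionMeter:
    def __init__(self):
        self.congestion_volume_bytes = 0.0

    def record(self, bytes_sent: int, congestion_level: float):
        # congestion_level: fraction of packets dropped or ECN-marked
        # during this interval (0.0 to 1.0).
        self.congestion_volume_bytes += bytes_sent * congestion_level

meter = CongestionMeter()
meter.record(bytes_sent=1_000_000, congestion_level=0.0)   # idle network: free
meter.record(bytes_sent=1_000_000, congestion_level=0.01)  # 1% marks
print(meter.congestion_volume_bytes)  # 10000.0 bytes of congestion volume
```

A gigabyte sent over an idle network adds nothing to the meter; the same gigabyte forced through a congested pipe is what counts.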
As with CO2, the way to cut back is to set limits. Imagine a world where some
Internet service providers offer a deal for a flat price but with a monthly
congestion-volume allowance. Note that this allowance doesn’t limit
downloads as such; it limits only those that persist during congestion. If you
used a peer-to-peer program like BitTorrent to download 10 videos
continuously, you wouldn’t bust your allowance so long as your TCP weight
was set low enough. Your downloads would draw back during the brief
moments when flows came along with higher weights. But in the end, your
video downloads would finish hardly later than they do today.
On the other hand, your Web browser would set the weights high for all its
browsing because most browsing comes in intense flurries, and so it wouldn’t
use up much of your allowance. Of course, server farms or heavy users could
buy bigger congestion quotas, and light users might get Internet access with a
tiny congestion allowance—for a lower flat fee.
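Enforcing such an allowance is equally simple in outline: only congestion-weighted bytes count against the monthly quota. Another illustrative sketch:

```python
# Sketch: a flat-fee tariff with a monthly congestion-volume
# allowance. Only congestion-marked bytes count against the quota.
# Class, names, and quota sizes are illustrative.
class CongestionAllowance:
    def __init__(self, monthly_quota_bytes: float):
        self.quota = monthly_quota_bytes
        self.used = 0.0

    def charge(self, bytes_sent: int, congestion_level: float) -> bool:
        # Returns True while the customer remains within the allowance.
        self.used += bytes_sent * congestion_level
        return self.used <= self.quota

light_user = CongestionAllowance(monthly_quota_bytes=100e6)  # small, cheap
server_farm = CongestionAllowance(monthly_quota_bytes=10e9)  # bigger quota
```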
But there’s a snag. Today Internet service providers can’t set congestion limits,
because congestion can easily be hidden from them. As we’ve said, Internet
congestion was intended to be detected and managed solely by the
computers at the edge—not by Internet service providers in the middle.
Certainly, the receiver does send feedback messages about congestion back
to the sender, which the network could intercept. But that would just
encourage the receiver to lie or to hide the feedback—you don’t have to reveal
anything that may be used as evidence against you.
Of course a network provider does know about packets it has had to drop
itself. But once the evidence is destroyed, it becomes somewhat tricky to hold
anyone responsible. Worse, most Internet traffic passes through multiple
network providers, and one network cannot reliably detect when another
network drops a packet.
Because Internet service providers can’t see congestion volume, some limit the
straight volume, in gigabytes, that each customer can transfer in a month.
Limiting total volume indeed helps to balance things a little, but limiting
congestion volume does much better, providing extremely fast connections for
light users at no real cost to the heavy users.
My colleagues and I have figured out a way to reveal congestion so that limits
can be enforced. We call it “refeedback.” Here’s how it works. Recall that
today the computers at each end of an
exchange of packets see congestion, but the networks between them can’t.
So we built on a technique called Explicit Congestion Notification—the most
recent change to the TCP/IP standard, made in 2001. Equipment that
implements that change marks packets during impending congestion rather
than doing nothing until forced to drop them. The marks—just a change in a
single bit—let the network see congestion directly, rather than inferring it from
gaps in the packet stream. It’s also particularly neat to be able to limit
congestion before anyone suffers any real impairment.
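In router terms, the change amounts to flipping a bit as the queue builds instead of waiting until the buffer forces a drop. A minimal sketch of the marking decision, with illustrative thresholds:

```python
# Sketch: ECN-style marking at a router queue. Instead of dropping
# packets only when the buffer overflows, the router sets a one-bit
# congestion mark as the queue builds. Thresholds are illustrative.
import random

MARK_THRESHOLD = 50   # queued packets before marking begins
QUEUE_LIMIT = 200     # hard buffer limit

def handle_packet(queue: list, packet: dict) -> str:
    if len(queue) >= QUEUE_LIMIT:
        return "dropped"                 # last resort, as before ECN
    if len(queue) >= MARK_THRESHOLD and packet.get("ect"):
        # Marking probability rises as the queue lengthens.
        p = (len(queue) - MARK_THRESHOLD) / (QUEUE_LIMIT - MARK_THRESHOLD)
        if random.random() < p:
            packet["ce"] = True          # Congestion Experienced bit
    queue.append(packet)
    return "queued"
```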
Although the 2001 reform reveals congestion, the marks are visible only
downstream of a bottleneck, as packets leave the network. Our refeedback
scheme makes congestion visible upstream as well, where traffic enters the
Internet and where it can be limited.
Refeedback introduces a second type of packet marking—think of these as
credits and the original congestion markings as debits. The sender must add
sufficient credits to packets entering the network to cover the debit marks that
are introduced as packets squeeze through congested Internet pipes. If any
subsequent network node detects insufficient credits relative to debits, it can
discard packets from the offending stream.
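In outline, the checking node needs only a running balance of credits minus debits for each stream; a persistently negative balance betrays a sender that is understating congestion. A minimal sketch, with the names and tolerance my own:

```python
# Sketch: a refeedback policer at a network border. Credits are marks
# the sender adds on entry; debits are congestion marks added inside
# the network. Names and the tolerance value are illustrative.
class RefeedbackPolicer:
    def __init__(self, tolerance: float = 1000.0):
        self.balance = 0.0          # credits minus debits, in bytes
        self.tolerance = tolerance  # slack before packets are discarded

    def on_packet(self, size: int, credit: bool, debit: bool) -> bool:
        if credit:
            self.balance += size
        if debit:
            self.balance -= size
        # A persistently negative balance means the sender is
        # understating congestion: start discarding its packets.
        return self.balance >= -self.tolerance
```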
To keep out of such trouble, every time the receiver gets a congestion (debit)
mark, it returns feedback to the sender. Then the sender marks the next
packet with a credit. This reinserted feedback, or refeedback, can then be used
at the entrance to the Internet to limit congestion—you do have to reveal
everything that may be used as evidence against you.
Refeedback sticks to the Internet principle that the computers on the edge of
the network detect and manage congestion. But it enables the middle of the
network to punish them for providing misinformation.
The limits and checks on congestion at the borders of the Internet are trivial
for a network operator to add. Otherwise, the refeedback scheme does not
require that any new code be added to the network’s equipment; all it needs
is that standard congestion notification be turned on. But packets need
somewhere to carry the second mark in the “IP” part of the TCP/IP formula.
Fortuitously, this mark can be made, because there is one last unused bit in
the header of every IP packet.
In 2005, we prepared a proposal documenting all the technical details and
presented it to the Internet Engineering Task Force (IETF), the body that
oversees Internet standards.
At this point, the story gets personal. Because I had set myself the task of
challenging the entrenched principle of TCP‑friendliness—equality of flow rates
for all TCP connections—I decided to talk only about the change to IP, omitting
any mention of weighted TCP. Instead I played up some other motivations for
adding refeedback to IP. I even showed how refeedback could enforce equal
flow rates—pandering to my audience’s faith while denying my own. But I just
looked like yet another mad researcher pushing a solution without a problem.
After a year of banging my head against a wall, I wrote an angry but—I trust—
precise attack on the dogma that equal flow rates were “fair.” My colleagues
got me to tone it down before I posted it to the IETF; evidently I’d softened it
enough at least to be invited to present my ideas at a plenary session in San
Diego late in 2006. The next day, a nonbinding straw poll of the large audience
showed widespread doubt about using TCP-friendliness as a definition of
fairness. Elwyn Davies of the Internet Architecture Board e-mailed me, saying,
“You have identified a real piece of myopia in the IETF.”
I was hardly the first to challenge these myths. In 1997 Frank P. Kelly, a
professor at the University of Cambridge, put together some awe-inspiringly
elegant and concise mathematical arguments to prove that the same weighted
sharing would maximize the value that users get from their Internet
throughput. However, to create the right incentives, he proposed varying the
prices charged for the packets as they were received, and everyone balked.
People like to control, in advance, what they will pay.
Objections to Kelly’s pricing scheme blinded the Internet community to all the
other insights in his work—particularly the message that equalizing flow rates
was not a desirable goal. That’s why my team built the refeedback mechanism
around his earlier ideas—to limit congestion within flat fees, without dynamic
pricing.
Everyone’s subsequent obsession with bandwidth hogs, and thus with
volume, is also misdirected. What matters is congestion volume—the CO2 of
the Internet.
Meanwhile, our immediate task is to win support in the Internet community for
limiting congestion and for a standards working group at the IETF to reveal the
Internet’s hidden congestion. The chosen mechanism may be refeedback, but I
won’t be miffed if something better emerges, so long as it makes the Internet
as simple and as fast as refeedback would.
About the Author
BOB BRISCOE describes how to ease Internet congestion by remaking the way
we share bandwidth in “A Fairer, Faster Internet” [p. 42]. He says the problem
isn’t bandwidth hogs but the Internet’s sharing protocol itself. Briscoe is chief
researcher at BT’s Networks Research Centre, in England. He is working with
the Trilogy Project to fix the Internet’s architecture.
To Probe Further
Details on Internet fairness and the refeedback project can be found at
http://www.cs.ucl.ac.uk/staff/bbriscoe/projects/refb/.