This document discusses online communication and the internet. It defines online communication as reading, writing and communicating via networked computers, which originated in the 1960s. It then discusses key features of online communication like interactivity, virtuality and hypertextuality. It also covers strengths like facilitating networking and collaboration, and weaknesses like technical issues and information overload. The document also provides details on internet networks, components like clients, servers, nodes and transmission lines, as well as protocols that allow communication between devices.
A Fairer, Faster Internet Protocol
The Internet is founded on a very simple premise: shared communications
links are more efficient than dedicated channels that lie idle much of the time.
And so we share. We share local area networks at work and neighborhood
links from home. And then we share again—at any given time, a terabit
backbone cable is shared among thousands of folks surfing the Web,
downloading videos, and talking on Internet phones.
But there’s a profound flaw in the protocol that governs how people share the
Internet’s capacity. The protocol allows you to seem to be polite, even as you
elbow others aside, taking far more resources than they do.
Network providers like Verizon and BT either throw capacity at the problem or
improvise formulas that attempt to penalize so-called bandwidth hogs. Let me
speak up for this much-maligned beast right away: bandwidth hogs are not
the problem. There is no need to prevent customers from downloading huge
amounts of material, so long as they aren’t starving others.
Rather than patching over the problem, my colleagues and I at BT (formerly
British Telecom) have worked out how to fix the root cause: the Internet’s
sharing protocol itself. It turns out that this solution will make the Internet not
just simpler but much faster too.
E-COMMERCE BUSINESS MODELS IN THE CONTEXT OF THE WEB 3.0 PARADIGM
Web 3.0 promises to have a significant effect on users and businesses. It will change how people work and
play, how companies use information to market and sell their products, as well as operate their businesses.
The basic shift occurring in Web 3.0 is from information-centric to knowledge-centric patterns of
computing. Web 3.0 will enable people and machines to connect, evolve, share and use knowledge on an
unprecedented scale and in new ways that make our experience of the Internet better. Additionally,
semantic technologies have the potential to drive significant improvements in capabilities and life cycle
economics through cost reductions, improved efficiencies, enhanced effectiveness, and new functionalities
that were not possible or economically feasible before. In this paper we look to the semantic web and Web
3.0 technologies as enablers for the creation of value and appearance of new business models. For that, we
analyze the role and impact of Web 3.0 in business and identify nine potential business models, based on
direct and indirect revenue sources, which have emerged with the appearance of semantic web
technologies.
Presentation given by Maria Isabel Gandia, Head of Communications at CSUC, as part of the IBEI-ICANN-CSUC Autumn School on the Challenges of Internet Governance, held 16-19 October 2018.
AN INTRODUCTION TO WIRELESS MOBILE SOCIAL NETWORKING IN OPPORTUNISTIC COMMUNI...
Next-generation networks will certainly face access requests from different parts of the
network. The heterogeneity of communication and application software, changing situations in
the environment, and demands from users, operators, and business requirements, as well as the
technologies themselves, all add complexity. Users will be more and more mobile, the number of
protocols will increase, and the network will become harder to manage. Opportunistic
communication has emerged as a new communication paradigm to cope with these problems.
Opportunistic networks exploit the variation of channel conditions, provide an additional degree
of freedom in the time domain, and increase network performance. The limited spectrum and the
inefficiency in spectrum usage call for a new form of communication that exploits the existing
wireless spectrum opportunistically, allocating spectrum based on the best opportunity among all possibilities.
This presentation covers the internet basics you need to know before building a website or doing other internet-related work. It will help you get a clear idea of what the Internet is.
Thank you.
Feel free to ask any queries in the comment box.
The Network Effects Bible is a comprehensive collection of terms and insights related to network effects all in one place. Produced by James Currier & the NFX team (www.nfx.com), an early-stage venture capital firm started by entrepreneurs who've built 10 network effect companies with more than $10 billion in exits across multiple industries and geographies.
Read the full Network Effects Bible at: https://www.nfx.com/post/network-effects-bible/
Follow us on Twitter @NFX
This tutorial, produced in the framework of DC-NET project, gives basic information on Internet: How does it run? Which are the differences between Internet and the Web? What is an IP address? What is a router?
http://www.dc-net.org/index.php?en/196/tutorial
In "The Future of the Internet IV," Director Lee Rainie reports on the results of a new survey of experts predicting what the Internet will look like in 2020 at the American Association for the Advancement of Science's 2010 Annual Meeting in San Diego.
An internet with a lowercase “i” is two or more networks that can communicate with each other. The most notable internet, called the Internet with an uppercase “I”, is composed of thousands of interconnected networks. The Internet has several backbones, provider networks, and customer networks. At the top level, the backbones (international ISPs) are large networks owned by communication companies such as Sprint, Verizon (MCI), AT&T, and NTT. The backbone networks are connected through complex switching systems called peering points. At the second level are smaller networks, called provider networks, that use the services of the backbones and pay them for those services. Provider networks are connected to backbones or to other provider networks. At the edge of the Internet, customer networks are the networks that actually use the services provided by the Internet; they pay provider networks for those services. Backbones and provider networks are together called Internet Service Providers (ISPs): the backbones are known as international ISPs, and the provider networks as national or regional ISPs.
Chapter 5: Networking and
Communication
Learning Objectives
Upon successful completion of this chapter, you will be
able to:
• understand the history and development of
networking technologies;
• define the key terms associated with networking
technologies;
• understand the importance of broadband
technologies; and
• describe organizational networking.
Introduction
In the early days of computing, computers were seen as devices
for making calculations, storing data, and automating business
processes. However, as the devices evolved, it became apparent that
many of the functions of telecommunications could be integrated
into the computer. During the 1980s, many organizations began
combining their once-separate telecommunications and
information systems departments into an Information Technology
(IT) department. This ability for computers to communicate with
one another and to facilitate communication between individuals
and groups has had a major impact on the growth of computing over
the past several decades.
Computer networking began in the 1960s with the birth of the
Internet. However, while the Internet and web were evolving,
corporate networking was also taking shape in the form of local
area networks and client-server computing. The Internet went
commercial in 1994 as technologies began to pervade all areas of the
organization. Today it would be unthinkable to have a computer that
did not include communications capabilities. This chapter reviews
the different technologies that have been put in place to enable this
communications revolution.
A Brief History of the Internet
In the Beginning: ARPANET
The story of the Internet, and networking in general, can be traced
back to the late 1950s. The United States was in the depths of the
Cold War with the USSR as each nation closely watched the other
to determine which would gain a military or intelligence advantage.
In 1957, the Soviets surprised the U.S. with the launch of Sputnik,
propelling us into the space age. In response to Sputnik, the U.S.
Government created the Advanced Research Projects Agency
(ARPA), whose initial role was to ensure that the U.S. was not
surprised again. It was from ARPA, now called DARPA
(Defense Advanced Research Projects Agency), that the Internet
first sprang.
ARPA was the center of computing research in the 1960s, but
there was just one problem. Many of the computers could not
communicate with each other. In 1968 ARPA sent out a request
for proposals for a communication technology that would allow
different computers located around the country to be integrated
together into one network. Twelve companies responded to the
request, and a company named Bolt, Beranek, and Newman (BBN)
won the contract.
A computer network is a digital telecommunications network that allows network nodes to share resources. In computer networks, computing devices exchange data with each other using connections (data links) between nodes. These data links are established over network cables such as wire or fiber optics, or over wireless media such as Wi-Fi.
Network computing devices that originate, route, and terminate data are called network nodes. [1] Nodes are often identified by network addresses and can include network hosts such as personal computers, phones, and servers, as well as network hardware such as routers and switches. Two such devices are said to be interconnected when one can exchange information with the other, whether or not they are directly connected to each other. In most cases, application-specific communication protocols are layered (i.e., carry a payload) over other, more general communication protocols. This formidable collection of information technology requires skilled network managers to keep all network systems running well.
2. Definition of online communication
Features of online communication
Merits
Demerits
Strengths
3. Online communication refers to reading,
writing and communicating via networked
computers. Online communication dates back
to the late 1960s, when US researchers first
developed protocols that allowed the sending
and receiving of messages via computers.
Online communication first became possible
in educational settings.
5. Cyber crime/cyber security
Anonymity
Cyber trolling
Authenticity/credibility
Privacy of Information
Information Overload
Multimedia
Technical/accessibility issues
Data plagiarism
Continuous changes in media
13. Facilitates networking & Collaboration
Allows new ways of expression
Economical and faster compared to other
media
‘Always on’ or continuously updated
Best way to reach mass audience globally
Ubiquitous
14. Digital divide may exist
Requires infrastructure
A medium only for the literate
15. Characteristics of Internet
Internet Networks
Components of network
Various terms
16. 1. Ubiquity: The availability of omnipresent
computing, often passively in the background.
Some ethical issues arise here.
2. Digital: New media are capable of
infinite duplication without degradation, can be
altered in ways far less detectable than with
analog media, and allow easy retrieval,
calculation or computation, since their contents
are inherently "machine readable." Digital
media are frequently stored on magnetic or optical
surfaces which do not have a proven permanence.
17. 3. Space Binding and Distance
Insensitivity: New media span large
distances, "binding" them. We get web pages
almost as easily from France as from
Seattle. But how do these pages last
over time? Digital media present
severe problems of archiving and long-term
storage. Time binding is a serious problem:
rag-paper books last millennia, while floppy-
disk records are suspect after five years,
even with careful storage.
18. 4. Personalized New media commonly exist in
smart (computing) devices and networks. As
such, these systems can be instructed to
customize, individualize information for each
user. The idea of mass media is challenged in an
environment where different messages are
crafted for each member of an audience.
One major issue is that of "profiling" or
categorizing consumers in terms of dominant
characteristics of spending, lifestyle and
beliefs. One frequently used profiling scheme
is VALS (values and lifestyle) profiling.
19. 5. Prosthesis and Telepresence: Extension of
the self to a machine representation or
"prosthesis." The implication of this
point is also the reverse: an incorporation of
smart machines into our personal functioning
in increasing ways.
Present webcams extend one's sight in almost
real time to various exotic venues.
20. Virtuality, Virtual Community: Society without
propinquity. Howard Rheingold is the great
popularizer of "virtual community."
Hypertext: Providing linkage transparently within
documents, creating highly varied paths through a
body of information. Hypertext media are commonly
called non-linear media. The implications are that (a) one
need not read documents in a prescribed order; (b)
authors, styles and permissible rules of content may
vary as one reads linked documents; (c) responsibility
and control are diffused, as is ownership of the
resulting content; and (d) form and structure are easily
changed, composed on demand for individuals.
21. Interactivity: Seeking user input, then performing
functions based upon it. Transaction forms are a
good example: Amazon Books, a very successful
(Seattle) on-line bookseller, or the auctions at Onsale.
Push v. Pull: New media contrast with older forms in
that users/audiences request custom content and are
not programmed to in the usual sense of television
and the press. Instead, content is "pulled" by the
consumer, not "pushed" by the media
organization. There have been strong efforts over
the past few years to establish a TV-like "push" to
web content. Tried originally by MSNBC and PointCast,
the scheme did not prove as popular as originally
thought. People like older media when they want to
passively consume. New media, it seems, are
preferred when consumers want more control or
"pull."
22. But there's another important implication here: an ability to cross with ease between or among
these sensory forms as content dictates. Software now allows text to be "read" aloud by a PC, and spoken
words to be interpreted as text (the latter conversion being at present less reliable than the former).
"Smart" server-controlled functions and applications; "appliance" terminals:
Hot Java and its implications. The idea here is that one doesn't have software locally, but draws it in
continuously updated form from the Internet. One may rent software in the future, rather than
buy a version outright.
Web TV and its implications. The core idea here is an "appliance" computer for web access. Costs are
lowered by using home TVs as a display, commonly the most costly part of a computer set-up.
Secure modes and transactions. Central here is conducting financial and personal transactions in
privacy and free from possible fraud by electronic intercept.
Wired, Wireless, Terrestrial and Satellite-based: While not strictly characteristic of new media,
the digital and smart character of new media make them more easily configured for a variety of
transmission methods. For example, cellular telephones are far more efficiently run (and with
better quality) as digital TDMA and CDMA devices than as the antique analogue AMPS phones. The
rates charged for analogue phones are higher on a per-minute basis. On the other hand, the digital
instruments themselves are more costly. That should change.
Electromagnetic v. Optical: Digitally-based new media are more readily converted to optical
transmission (using pulses of light), which affords advantages over conventional electronic
transmission (using magnetic pulses). In brief, electromagnetic systems are more fragile, often
bulkier for a given capacity, more subject to interference, and often more easily tapped.
Optic fibres deliver gains in capacity, reliability and accuracy compared with traditional copper
wire and microwave radio technologies. However, given the large installed base of copper wire in
the world, it is often more economical to work with this existing technology rather than replace it
with fibre.
24. The computer, smartphone or other device you're
using to read this is called an end
point, or client.
Machines that store the information we seek on
the Internet are servers.
Other elements are nodes which serve as a
connecting point along a route of traffic.
And then there are the transmission lines which
can be physical, as in the case of cables and
fiber optics, or they can be wireless signals from
satellites, cell phone or 4G towers, or radios.
All of this hardware wouldn't create a network
without the second component of the Internet:
the protocols.
25. Protocols are sets of rules that machines
follow to complete tasks. Without a common
set of protocols that all machines connected
to the Internet must follow, communication
between devices couldn't happen. The
various machines would be unable to
understand one another or even send
information in a meaningful way. The
protocols provide both the method and a
common language for machines to use to
transmit data.
26. You've probably heard of several protocols on
the Internet. For example, hypertext
transfer protocol is what we use to view
Web sites through a browser -- that's what
the http at the front of any Web address
stands for. If you've ever used an FTP server,
you relied on the file transfer protocol.
Protocols like these and dozens more create
the framework within which all devices must
operate to be part of the Internet.
27. Two of the most important protocols are
the transmission control protocol (TCP) and
the Internet protocol (IP). We often group the
two together -- in most discussions about
Internet protocols you'll see them listed as
TCP/IP.
At their most basic level, these protocols
establish the rules for how information passes
through the Internet. Without these rules, you
would need direct connections to other
computers to access the information they hold.
You'd also need both your computer and the
target computer to understand a common
language.
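The byte-stream delivery that TCP provides over IP can be seen in miniature with a loopback connection. This is a minimal sketch in plain Python sockets, not a full client; the "echo" payload is made up for illustration.

```python
import socket
import threading

# A server and a client on the loopback interface. IP addressing selects
# the endpoint (here 127.0.0.1 plus a port); TCP delivers the bytes
# intact and in order over the connection.

def run_server(server_sock):
    conn, _addr = server_sock.accept()      # wait for one client
    data = conn.recv(1024)                  # read the request bytes
    conn.sendall(b"echo: " + data)          # reply over the same connection
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))               # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=run_server, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))         # TCP handshake happens here
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply.decode())                       # -> echo: hello
```

Real traffic crosses many routers between the two endpoints, but the protocol rules the two machines follow are the same as on this single machine.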
28. When you want to send a message or retrieve
information from another computer, the
TCP/IP protocols are what make the
transmission possible.
29. Your request goes out over the network,
hitting domain name servers (DNS) along
the way to find the target server. The DNS
points the request in the right direction.
Once the target server receives the request,
it can send a response back to your
computer. The data might travel a
completely different path to get back to you.
This flexible approach to data transfer is part
of what makes the Internet such a powerful
tool.
30. First, you open your Web browser and connect to
our Web site. When you do this, your computer
sends an electronic request over your Internet
connection to your Internet service provider
(ISP). The ISP routes the request to a server
further up the chain on the Internet. Eventually,
the request will hit a domain name server (DNS).
This server will look for a match for the domain
name you've typed in (such as
www.howstuffworks.com). If it finds a match, it
will direct your request to the proper server's IP
address. If it doesn't find a match, it will send
the request further up the chain to a server that
has more information.
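The request-and-referral chain above can be sketched as a toy resolver. The zone tables and server names below are hypothetical stand-ins, not real DNS records; the IP address is the one the slides quote for www.howstuffworks.com.

```python
# Each "server" maps a name suffix to either a final answer (an IP
# address) or a referral to another server that knows more -- just as
# a root server refers .com queries to the .com servers.
ZONES = {
    "root":   {"com.": ("refer", "com-ns")},
    "com-ns": {"howstuffworks.com.": ("refer", "hsw-ns")},
    "hsw-ns": {"www.howstuffworks.com.": ("answer", "209.116.69.66")},
}

def resolve(name, server="root"):
    zone = ZONES[server]
    for suffix, (kind, value) in zone.items():
        if name.endswith(suffix):
            if kind == "answer":
                return value                 # this server knows the address
            return resolve(name, value)      # follow the referral downward

print(resolve("www.howstuffworks.com."))     # -> 209.116.69.66
```

A real resolver also handles timeouts, multiple records, and caching, but the walk from general to specific servers is the same shape.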
31. The request will eventually come to our Web
server. Our server will respond by sending
the requested file in a series of packets.
Packets are parts of a file that range
between 1,000 and 1,500 bytes. Packets have
headers and footers that tell computers
what's in the packet and how the information
fits with other packets to create an entire
file. Each packet travels back up the network
and down to your computer. Packets don't
necessarily all take the same path -- they'll
generally travel the path of least resistance.
32. That's an important feature. Because packets
can travel multiple paths to get to their
destination, it's possible for information to route
around congested areas on the Internet. In fact,
as long as some connections remain, entire
sections of the Internet could go down and
information could still travel from one section to
another -- though it might take longer than
normal.
When the packets get to you, your device
arranges them according to the rules of the
protocols. It's kind of like putting together a
jigsaw puzzle. The end result is that you see this
article.
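The packetization and reassembly described in the last two slides can be sketched as follows. This is a simplified model: the header here carries only a sequence number, where real packet headers also carry addresses, lengths, and checksums.

```python
import random

# Split a "file" into chunks of up to 1,000 bytes, each tagged with a
# header carrying its sequence number; the receiver can rebuild the
# original even if the packets arrive in a different order.

def packetize(data, size=1000):
    return [{"seq": i, "payload": data[i * size:(i + 1) * size]}
            for i in range((len(data) + size - 1) // size)]

def reassemble(packets):
    ordered = sorted(packets, key=lambda p: p["seq"])   # restore order
    return b"".join(p["payload"] for p in ordered)

message = b"x" * 3500                 # a 3,500-byte "file"
packets = packetize(message)
random.shuffle(packets)               # simulate packets taking different paths
assert reassemble(packets) == message
print(len(packets))                   # -> 4
```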
35. Most large communications companies have
their own dedicated backbones connecting
various regions. In each region, the company
has a Point of Presence (POP). The POP is a
place for local users to access the company's
network, often through a local phone number
or dedicated line. The amazing thing here is
that there is no overall controlling network.
Instead, there are several high-level
networks connecting to each other
through Network Access Points or NAPs
37. The routers determine where to send
information from one computer to another.
Routers are specialized computers that send your
messages and those of every other Internet user
speeding to their destinations along thousands of
pathways. A router has two separate, but
related, jobs:
It ensures that information doesn't go where it's
not needed. This is crucial for keeping large
volumes of data from clogging the connections of
"innocent bystanders."
It makes sure that information does make it to
the intended destination.
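The router's forwarding decision can be sketched as a longest-prefix-match over a routing table. The networks and link names below are invented for illustration; they are not a real router's configuration.

```python
import ipaddress

# For each destination address, pick the most specific matching route
# (the longest prefix) and forward there; anything with no specific
# match falls through to the default route.
ROUTES = [
    (ipaddress.ip_network("10.1.0.0/16"), "link-A"),
    (ipaddress.ip_network("10.1.2.0/24"), "link-B"),   # more specific
    (ipaddress.ip_network("0.0.0.0/0"),   "default"),  # default route
]

def next_hop(dst):
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    # the longest (most specific) matching prefix wins
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.7"))    # -> link-B
print(next_hop("10.1.9.9"))    # -> link-A
print(next_hop("8.8.8.8"))     # -> default
```

The first job the slide names, keeping traffic away from "innocent bystanders," is exactly what the specific routes do; the second, delivery to the destination, is the default route's catch-all.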
38. To keep all of these machines straight, each machine on
the Internet is assigned a unique address called an IP
address. IP stands for Internet protocol, and these
addresses are 32-bit numbers, normally expressed as four
"octets" in a "dotted decimal number." A typical IP address
looks like this:
216.27.61.137
The four numbers in an IP address are
called octets because each can have a value between 0 and
255, which is 2^8 possibilities per octet.
Every machine on the Internet has a unique IP address. A
server has a static IP address that does not change very
often. A home machine that is dialing up through a modem
often has an IP address that is assigned by the ISP when
the machine dials in. That IP address is unique for that
session -- it may be different the next time the machine
dials in. This way, an ISP only needs one IP address for
each modem it supports, rather than for each customer.
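The relationship between the dotted-decimal form and the underlying 32-bit number can be shown directly, using the example address from this slide:

```python
# Dotted decimal is just a 32-bit number written as four 8-bit octets.

def to_int(dotted):
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d   # pack four octets into 32 bits

def to_dotted(n):
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

n = to_int("216.27.61.137")
print(n)                        # the address as one 32-bit integer
print(to_dotted(n))             # -> 216.27.61.137
```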
39. Because most people have trouble
remembering the strings of numbers that
make up IP addresses, and because IP
addresses sometimes need to change, all
servers on the Internet also have human-
readable names, called domain names. For
example, www.howstuffworks.com is a
permanent, human-readable name. It is
easier for most of us to remember
www.howstuffworks.com than it is to
remember 209.116.69.66.
40. DNS servers accept requests from programs and other
name servers to convert domain names into IP
addresses. When a request comes in, the DNS server
can do one of four things with it:
It can answer the request with an IP address because
it already knows the IP address for the requested
domain.
It can contact another DNS server and try to find the
IP address for the name requested. It may have to do
this multiple times.
It can say, "I don't know the IP address for the domain
you requested, but here's the IP address for a DNS
server that knows more than I do."
It can return an error message because the
requested domain name is invalid or does not exist.
41. There are multiple DNS servers at every
level, so that if one fails, there are others to
handle the requests. The other key is
caching. Once a DNS server resolves a
request, it caches the IP address it receives.
Once it has made a request to a root DNS
server for any .COM domain, it knows the IP
address for a DNS server handling the .COM
domain, so it doesn't have to bug the root
DNS servers again for that information. DNS
servers can do this for every request, and
this caching helps to keep things from
bogging down.
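The caching behaviour described above can be sketched as follows. The upstream table and the lookup function are stand-ins for a remote query, not a real DNS client, and the address is the one quoted earlier in the slides.

```python
# First lookup pays the cost of asking an upstream server; repeats are
# answered from the local cache without bothering the upstream again.
UPSTREAM = {"www.howstuffworks.com": "209.116.69.66"}  # pretend remote server
cache = {}

def cached_resolve(name):
    if name in cache:
        return cache[name], "cache"     # answered locally
    ip = UPSTREAM[name]                 # the slow, remote lookup
    cache[name] = ip                    # remember it for next time
    return ip, "upstream"

print(cached_resolve("www.howstuffworks.com"))  # -> ('209.116.69.66', 'upstream')
print(cached_resolve("www.howstuffworks.com"))  # -> ('209.116.69.66', 'cache')
```

Real DNS caches also expire entries after a time-to-live so that changed addresses eventually propagate; this sketch omits that.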
42. The National Science Foundation (NSF) created the
first high-speed backbone in 1987. Called NSFNET, it
was a T1 line that connected 170 smaller networks
together and operated at 1.544 Mbps (million bits per
second). IBM, MCI and Merit worked with NSF to
create the backbone and developed a T3 (45 Mbps)
backbone the following year.
Backbones are typically fiber optic trunk lines. The
trunk line has multiple fiber optic cables combined
together to increase the capacity. Fiber optic cables
are designated OC for optical carrier, such as OC-3,
OC-12 or OC-48. An OC-3 line is capable of
transmitting 155 Mbps while an OC-48 can transmit
2,488 Mbps (2.488 Gbps). Compare that to a typical
56K modem transmitting 56,000 bps and you see just
how fast a modern backbone is.
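The closing comparison can be worked out directly from the rates quoted above:

```python
# Line rates from the slide, in bits per second, and how many 56K modem
# links each optical carrier line is equivalent to.
RATES_BPS = {
    "56K modem": 56_000,
    "OC-3":      155_000_000,    # 155 Mbps
    "OC-48":     2_488_000_000,  # 2.488 Gbps
}

for name in ("OC-3", "OC-48"):
    modems = RATES_BPS[name] / RATES_BPS["56K modem"]
    print(f"{name}: roughly {modems:,.0f} times a 56K modem")
```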