A personal perspective on the evolving Internet
and Research and Education Networks
Latest Version February 15, 2010
For discussion purposes only
Over the past few years the Internet has evolved from an “end-to-end” telecommunications
service to a distributed computing information and content infrastructure. In the academic
community this evolving infrastructure is often referred to as cyber-infrastructure or
eInfrastructure. This evolution of the Internet was predicted by Van Jacobson several years ago
and now seems readily evident from recent studies such as the Arbor report indicating that the bulk
of Internet traffic is carried over this type of infrastructure as opposed to a general purpose
routed Internet/optical network. This evolving Internet is likely to have profound impacts on
Internet architectures and business models in both the academic and commercial worlds.
Increasingly traffic will be “local” with connectivity to the nearest cloud, content distribution
network and/or social network gateway at a local Internet Exchange (IX) point. Network topology
and architecture will increasingly be driven by the needs of the applications and content rather
than as general purpose infrastructure connecting users and devices. As a consequence the need
to deploy IPv6 addressing or an ID/locator split may be superseded by DNS-based approaches such as layer 7 XML
routing. This new Internet will more easily allow the deployment of low carbon infrastructure and
create new challenges and opportunities in terms of last mile ownership and network neutrality.
Customer owned networks and tools like User Controlled LightPath (UCLP) and Reverse Passive
Optical Networks (RPON) may become more relevant in order to allow users to connect directly to
the distributed content and application infrastructure at a nearby IX. Research and Education
(R&E) networks may play a critical leadership role in developing new Internet business strategies
for zero carbon mobile Internet solutions interconnecting next generation multi-channel RF mobile
devices to this infrastructure using “white space” and WiFi spectrum. Perhaps ultimately these
developments will lay the foundation for a National Public Internet (NPI) where R&E networks
deploy transit and/or internet exchanges in smaller communities and cities to support distribution
of content and information from not for profit organizations to the general public.
The Internet appears once again to be enabling another major wave of innovation, although
compared to past transformative changes in the Internet, such as the invention of the web or
packet switching, this new wave has to date received very little public notice or attention. This
lack of awareness is surprising given that investment by private sector companies and
government in this transformative technology now rivals what the carriers are investing in
traditional backbone and last mile Internet technology.
This evolution of the Internet was predicted several years ago by Internet pioneer Dr. Van
Jacobson [VAN JACOBSON], who argued that network engineers and researchers are in
danger of being stuck with a tunnel vision of the need for an “end-to-end” network, much as a
previous generation of engineers and researchers were trapped in thinking that all networks had
to be connection oriented. Dr. Van Jacobson’s predictions have recently been vindicated by data
from the Arbor [ARBOR] study showing that over 50% of data on the Internet now comes from
mostly content and application providers using distributed computing and storage infrastructure
independent of the traditional carrier Internet backbones.
Note that an end-to-end network should not be confused with the end-to-end principle established by
MIT researchers Saltzer, Reed and Clark, which holds that, whenever possible, communication control operations
should be defined to occur at the end-points of the Internet, or as close as possible to the resource.
Until very recently the textbook model of the Internet was for businesses and consumers to
access the Internet through a last mile provider such as a telephone or cable company. Their traffic
would be sent across the backbone to its destination by an Internet service provider. This model
worked reasonably well in the early days of the Internet, but as new multimedia content such as
video and network applications evolved, this model failed to provide a satisfactory quality
of experience for users in terms of responsiveness and speed. As a result a host of content,
application and hosting companies invested in something that, for the purposes of this paper, I have
collectively labeled Application Content Infrastructure (ACI), which complemented and
expanded the original Internet through the integration of computing, storage and network links.
In the academic world these developments have been underway for some time and the resulting
infrastructure is often referred to as cyber-infrastructure or eInfrastructure. As in the commercial
world, it is creating profound changes in how we think about networking: from a generalized end-to-
end telecommunications service connecting users or institutions, to a facility that is seamlessly
integrated with distributed computing and storage, delivering specific applications or data
solutions for targeted communities. Examples include OptIPuter, TeraGrid, LHCnet, etc.
These investments by ACIs are quite distinct from those made by the networks that transport traffic
across the Internet, even though both use many of the same technologies such as optical
networks, routers, etc. Rather, ACIs are integrated networks and other associated infrastructure
that application and content providers use to host, cache and distribute their services.
Fundamentally ACIs are about the transport of content and applications rather than bits.
Examples of ACIs include large distributed caching networks such as Akamai, cloud service
providers such as Amazon and Azure, Application Service Providers (ASPs) like Google and Apple,
Content Distribution Networks (CDNs) such as Limelight and Hulu, and social networking services
like Facebook and Twitter. Many Fortune 500 companies like banks and airlines have also
deployed their own ACIs as an adjunct to their own private wide area networks in order to
provide secure and timely service to their customers. Most major content and application
organizations have contracted with commercial ACIs or have deployed their own infrastructure.
ACIs also allow the content provider to load balance demand, so that traffic in regions
experiencing excessive loads can be re-directed to nodes where there is spare capacity.
The end result is that, with very little fanfare, the Internet has been transformed so much over
the past decade that virtually all major content and every advanced application on the
Internet is now delivered over an ACI independent of the traditional carrier Internet.
This enormous investment in ACIs has resulted in major benefits not only for the content and
hosting companies but has also significantly improved Internet performance as perceived by the
end user. In many ways it has also relieved the major carriers from having to make major
investments in their own backbone and last mile infrastructure. These benefits and market drivers
for the development of ACIs can be briefly summarized as follows:
(a) They provide a faster and more responsive Internet experience for users, as they
allow an organization’s content and services to be largely delivered locally over the ACI
rather than over a long distance end-to-end network with all the attendant issues of
latency, congestion and packet loss;
(b) They can deliver their data and services much more cheaply than across the traditional
Internet because reliability and redundancy can be achieved in many different ways
through distributed computing and storage at the application layer rather than the
network layer; and
(c) They enable new Internet business models where revenue is not related to the
number of bits a user downloads but is linked to the specific application, such as
advertising, product purchase and eventually such things as energy savings and carbon offsets.
There is growing evidence that ACIs now constitute one of the fastest growing parts of the overall
Internet eco-system and represent a substantial, if not the largest, portion of the investment by
companies in the Internet itself. In some cases their investment in Internet infrastructure equals or
outweighs that of the carriers, and ACIs now girdle the globe and rival the carriers’ infrastructure
in terms of scale and complexity. This major growth in ACIs is also supported by the recent study
by Arbor Networks demonstrating that the majority of Internet traffic is now being
generated by ACIs directly at major peering points. Traffic from ACIs is expected to continue
to grow significantly faster than traditional Internet traffic and will largely
dominate traffic patterns and network architecture in the coming decades.
Some people mistakenly think ACIs provide a "privileged" access channel for major content and
application companies. In fact just the opposite has happened. Many companies now
specialize in providing third party ACI services which results in greater competition and reduced
costs. Content and application providers now have much greater number of choices in how they
want to deliver their product and service to the end consumer.
As a result ACI also helps lower the barriers for small companies and start-ups, enabling
them to grow and compete. It is near-impossible for small providers to quickly deploy and
maintain additional servers during a wave of sudden popularity; at the same time, over-
provisioning and maintaining infrastructure up-front can be very costly. ACI helps solve this
dilemma - small companies can work with third-party ACI providers, and "scale" up their services
efficiently and cost-effectively.
It is not only small businesses but also academic research and education projects that benefit
from third party ACI facilities. A good example is the distribution of HDTV video from the ocean
floor with project Neptune [NEPTUNE] and new Federal government services such as "apps.gov"
[GSA]. In both cases it would be cost prohibitive for a university or government department to
deploy the thousands of servers and purchase Internet access to deliver these services. More
importantly, according to Internet2 [INTERNET2], cost savings of up to 40% on Internet transit
traffic are possible by connecting to ACIs at various Internet Exchange points.
The same ACI drivers of enhanced customer experience and low deployment cost have driven the
market for clouds and ASPs. More importantly, because of the low deployment cost of ACIs,
application and cloud providers can focus on business models where revenue is based on the
service provided rather than charging by bit of data transferred across the network. This is a
fundamental difference in the view of the world compared to end-to-end Internet carriers. Carriers
come largely from a centuries old telephone view of the world where bandwidth was expensive
and had to be carefully rationed by charging users by the minute or by the bit. But the advent of
low cost, high bandwidth optical networks and new data architectures enabled by ACIs allows
them to effectively ignore the cost of bandwidth and distance and instead focus on the value of
the actual service in terms of functionality and ease of use.
New promising business models demonstrate that revenue can be obtained from other value add
services such as cloud computing and eCommerce solutions such as SalesForce. On the horizon
other promising business models for ACIs can be made from energy reduction and the sale of
carbon offsets. For example two recent papers from researchers at MIT [MIT] and Rutgers
[RUTGERS] indicate that an ACI can save up to 35% and 45% respectively in energy costs by
moving data and storage to sites in the network with the lowest instantaneous energy costs.
Some organizations are also investigating how carbon offsets can be used to pay for the
deployment of this type of ACI, which in a small way addresses the challenges of climate change.
ACIs are in many ways dis-intermediating many of the traditional end-to-end telecommunication
services offered by carriers. As the ACI infrastructure reaches ever closer to the end user it may
require new business models for the last mile and will undoubtedly result in renewed debates on
network neutrality. This will be especially true as the mobile Internet develops, with a
corresponding infrastructure to deliver ACI to these devices. As ACI does not require an end-to-
end wireless network, offering wireless data locally through WiFi or new “white space” spectrum
may become the norm.
When the Internet was first conceptualized and implemented in the 1970s and 1980s it was
largely seen as an “end-to-end” network for data similar to the telephone voice network which
was then the dominant form of networking. But instead of making telephone call connections it
was used for the transmission of packets between connected computers.
The telephone network by its very nature is an “end-to-end” network, as it is necessary to set up
connections between caller and receiver who may be separated by thousands of miles. The
telephone companies invested billions of dollars building global networks to allow users to make
calls to virtually any place on the planet where there was a telephone receiver.
Although the Internet used a different network technology, i.e. packets rather than circuit
connections, the network topology was much the same in that it was assumed users (or more
specifically their computers) would want to communicate with other users’ computers and/or
servers which could be many thousands of miles away. For this reason it was easy to adapt the
Internet as an overlay network over the existing telephone infrastructure as they both had to
reach the same homes and businesses around the world.
The original Internet was made up of several different components, identified as the “last mile”,
“regional Internet Service Provider (ISP)”, Internet Exchanges (IXs) and “backbone ISPs”, as
illustrated in the following diagram:
The link from the user’s home or residence is identified as the last mile and is usually provided by
the local telephone or cable company. In some cases the last mile may be a wireless link or even
use optical fiber. The last mile usually terminates at the regional ISP’s hub or aggregation point.
In many cases these days the last mile provider is one and the same as the regional ISP, but
this is not necessarily the case in all jurisdictions. From the regional ISP’s hub or aggregation point
the Internet traffic is carried by the regional ISP to a local or regional IX. IXs are located in just
about every major city in North America. It is at the IX where the regional ISP exchanges traffic
with one or more national backbone carrier ISPs. The backbone carrier ISPs carry the traffic to
another IX elsewhere in the country or around the world where another regional ISP then carries
the traffic across the destination last mile to the designated recipient.
The big disadvantage of this original architecture is that all traffic from a content company or
application must transit all these links, with the attendant probability of congestion and/or packet
loss. Any “end-to-end” network, whether it is the public Internet or the telephone system, also has an
embedded liability, as it is extremely expensive and technically challenging to build a reliable and
robust end-to-end infrastructure of any kind. Each link and node in the network can be a single
point of failure or congestion, the loss of any one of which can disrupt thousands of phone calls or
Internet communication sessions. This poses a major engineering challenge, and the solution is to
have many redundant paths across the network with sophisticated routing and switching protocols
to quickly re-route traffic in the event of congestion or failure at any node or link.
Application Content Infrastructures (ACIs), on the other hand, are specialized infrastructure that
provides advanced services to the global Internet community that are delivered locally. This
avoids all the attendant problems of packet loss and congestion on the backbone network (and
also in the last mile). Invariably ACIs connect to the public Internet at the IXs or at private
peering points located in most cities. Their development was driven by several factors, but
primarily it was to enhance the user’s Internet experience, especially when there is significant
congestion and packet loss in the last mile networks. This new evolving model is shown in the
following diagram. The important thing to note about ACIs is that their service is built around
applications and content and not simply the transport of bits and packets across a network.
The use of distributed computing and storage to support their advanced applications allows ACIs
to keep content local. For example when a user connects to Facebook their traffic is carried over
their last mile cable or telephone broadband connection to the closest public Internet Exchange
point or private peering connection usually located in the same city. Their traffic is then handed
off to the Facebook ACI where a copy of the user’s home page is stored. As a user updates or
adds comments to their page, their posting is distributed simultaneously to their Facebook
friends’ home pages over the Facebook ACI. Other than the last mile connection, virtually none of
their communication on Facebook is carried over the public end-to-end Internet. More importantly,
as perceived by the user all Facebook communication effectively stays local, despite the fact that
the user may have Facebook friends scattered all over the world.
ACIs are not the same as private IP networks, often referred to as Virtual Private Networks (VPNs)
which are extensively used by Fortune 500 companies, governments and other large organizations
for internal traffic purposes. Some ACI networks are global in scale and rival the carrier public
Internet networks in terms of size and complexity. Good examples of such large scale ACIs
include Microsoft MSN’s network, Amazon’s EC2 cloud, Google’s Gmail and YouTube infrastructure
as well as Level3’s Content Distribution Service. Many Fortune 500 companies like banks and
airlines have also deployed their own ACIs as an adjunct to their own internal networks in order
to provide secure and reliable service to their customers.
It is important to note that ACIs constitute a lot more than just network links and routing nodes,
as is typical of a public carrier Internet network. A major, if not the largest, component of most ACIs is
the vast array of computing and storage devices that are located at various nodes throughout the
ACI infrastructure. As well, the network links and elements tend to be more tightly integrated with
the ACI’s computing and storage infrastructure rather than being treated as an
abstract and independent network service. This allows for more creative and systemic solutions in
terms of reliability and load balancing.
The other distinguishing feature between ACIs and the public Internet is the number and types of
routes that are advertised by either network. Generally, because they operate at the application and
service layer, ACIs only advertise a very small subset of the Internet routing table. Instead they
depend on Domain Name Service (DNS) processes and content routing for the forwarding of
traffic. The public Internet, on the other hand, must carry the full routing table (over 400,000
routes) because its primary mandate is to provide end-to-end service. As a result the routing
infrastructure for a public Internet provider tends to be much more complicated and expensive than
that of ACIs. However ACIs generally have made a much larger investment in storage and
computational resources in order to provide the services they offer.
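The contrast between route advertisement and DNS-based content routing can be illustrated with a toy sketch: instead of advertising the full routing table, an ACI's authoritative name server answers each query with the address of a nearby edge node. The region table, names, and addresses below are entirely hypothetical, not any provider's actual implementation:

```python
# Minimal sketch of DNS-based content routing: a hypothetical ACI's
# authoritative resolver maps the requesting client's region to the
# nearest edge node instead of returning a single origin address.

# Hypothetical edge nodes, keyed by region (illustrative only).
EDGE_NODES = {
    "us-west": "203.0.113.10",
    "us-east": "203.0.113.20",
    "eu":      "203.0.113.30",
}
DEFAULT_NODE = "203.0.113.99"  # origin fallback

def region_of(client_ip: str) -> str:
    """Toy geo-lookup. A real ACI would consult a GeoIP database
    plus live measurements of node load and latency."""
    first_octet = int(client_ip.split(".")[0])
    if first_octet < 100:
        return "us-west"
    if first_octet < 180:
        return "us-east"
    return "eu"

def resolve(hostname: str, client_ip: str) -> str:
    """Return the A record for `hostname`, chosen per client, so the
    same name resolves to a nearby edge node for each user."""
    region = region_of(client_ip)
    return EDGE_NODES.get(region, DEFAULT_NODE)

print(resolve("cdn.example.com", "64.12.0.5"))   # a "us-west" client
print(resolve("cdn.example.com", "195.88.1.2"))  # a European client
```

The point of the sketch is that forwarding decisions happen at the name-resolution step, at the application layer, rather than in the routers, which is why an ACI can get by with only a small slice of the global routing table.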
The growth of ACIs is expected to continue to dominate the future Internet. Many organizations still
operate legacy applications on their own servers such as e-mail, web hosting, etc. Increasingly,
these organizations are moving these applications to ACIs as the costs of maintaining their own
servers and applications continue to escalate. As well, many organizations have come to discover
that ACIs provide their customers and users with a much faster and more responsive Internet experience
than if the same service was delivered over the traditional Internet. Many are now contracting
with commercial ACIs to deliver their content and/or applications. Several of the large carrier
Internet providers like Level3 offer such a service. In today’s marketplace, offering your
customers a faster performing service can provide companies with a significant competitive advantage.
In fact some pundits claim that if we were to build the Internet from scratch we would rarely need
“end-to-end” communication services, as most applications such as e-mail, social networking etc.
can easily exist in the “cloud” supported by ACIs rather than being an end-to-end service. Even
supposedly “end-to-end” Internet applications like voice or video can easily be supported by such ACI
services. One of the great Internet pioneers, Van Jacobson [VAN JACOBSON], has been doing
considerable research in this field and there is a famous YouTube video of his vision for the Future
Internet along these very lines. His vision of the future Internet is essentially a global network of
integrated ACIs, in which end-to-end networks essentially reflect yesterday’s thinking.
The Major Components of ACI eco-system
Globally there are now hundreds of companies who provide some or all of these ACI services. As
mentioned previously there are also many third-party ACIs like Akamai and Limelight who
integrate these services in various ways and provide them to application and content companies.
There are also companies who specialize in delivery of medical, education and government
content and applications. In addition, ACIs are in some cases wholly owned by a content or
application company as in the case of large organization such as Google, Microsoft, Amazon, and
others. Rather than leasing services from a third-party, they build their own infrastructure, and
then either own or lease fiber to connect their facilities together.
While these companies do not provide last-mile service to end-users, some last-mile providers,
such as Verizon, are also deploying their own CDN and other ACI.
Hosting and Serving Facilities
Hosting and serving facilities are companies that have built large data centers around the world
where they lease space to content and application companies who want to deploy their own
servers. Sometimes these hosting and serving companies also deploy their own severs and lease
them to content and application providers. Generally the connectivity between separate facilities is
arranged independently by the content and application provider. Many of these facilities are also
collocated with IXs around the world. Examples of commercial data center and hosting providers
include Savvis, QualityTech, OpSource, Switch and Data, Equinix, Rackspace, etc.
Governments and universities also take advantage of hosting and serving facilities. For example
the General Services Administration operates 8 data centers [GSA-CENTER] throughout the US.
Content Distribution Networks (CDN)
As more and more organizations started distributing content and applications over the Internet,
the cost of deploying and managing servers in remote data centers became quite burdensome.
This was particularly true for smaller media and content companies. In response several
companies started deploying what are called Content Distribution Networks (CDNs) where they
undertook complete responsibility for the distribution, replication and load balancing of content
and media to various IXs and other interconnection points throughout the Internet. The
quintessential examples of CDNs are Akamai and Limelight. There are over a dozen companies in
this space, including players such as CDNetworks, EdgeCast, Voxel, Mirror Image, Amazon
CloudFront, BitGravity, CacheFly, etc.
As with the hosting companies there are many content distribution networks that are available for
smaller content organizations and education or medical applications. Examples include Inuk
Networks [INUK], which distributes educational video in the US and the UK.
Cloud service providers
While CDN facilities are useful and cost effective for distributing web pages and streaming video
they could not address the need for computing intensive applications or storage. "Clouds" were
developed to address this need. Clouds are essentially a global ACI facility of tightly coupled and
interconnected computing and storage resources. An entire separate paper could be written on
clouds, but generally they are broken down into three major application areas:
• Infrastructure as a Service (IaaS)
• Platform as a Service (PaaS)
• Software as a Service (SaaS)
Infrastructure as a Service is a cloud ACI where users can configure their own virtual computing
infrastructure and deploy their own applications. The Amazon EC2 cloud is the prototypical
example of this type of cloud ACI. Platform as a Service, on the other hand, removes much of the
complexity of creating virtual machines and allows users to focus on deploying an application.
Microsoft Azure is built along these lines.
The big advantage here for application developers is that they can scale their applications without
having to invest in expensive computer servers and hardware. Users can add and subtract
computation and storage services as they need and they only pay for what they use. It
significantly reduces the cost of implementation as time and resources do not have to be spent
configuring physical servers and systems.
In addition to the big name clouds like Amazon Web Services, Google App Engine, and Azure,
there are many dozens of other cloud ACI facilities, including those from IBM, 3Tera, NetSuite, Joyent, and others.
Software-as-a-Service is a much more specialized type of cloud service. Some companies provide
particular applications that are offered as services for users to access online, rather than to
download to their computer as client software, SalesForce being the poster child for this type of
cloud. These types of cloud services allow organizations to offload labor-intensive services
like e-mail and word processing to the net and make these services easily accessible from
anywhere around the world.
Again, as with content networking there are several clouds being deployed by the research and
education community, such as NASA's Nebula [NASA].
Internet Exchanges (IX)
The key component that ties together hosting companies, CDN facilities and clouds into a
collective ACI service is the Internet Exchange (IX) point. They often serve a dual purpose:
(a) This is where media, content and application companies connect to third party ACI
facilities for distribution of their product; and
(b) This is where ACI facilities connect with last mile access providers for delivery of the
content and applications to the consumer.
There is a whole range and variety of IXs, from large public facilities such as the Palo Alto
Internet Exchange (PAIX) [PAIX] in Palo Alto, California and the Amsterdam Internet Exchange
(AMS-IX) [AMS-IX] to smaller private facilities called peering points where a last mile provider
may directly interconnect with an ACI under terms and conditions that are of mutual benefit to
both parties. Backbone and regional ISPs also connect to these IXs in order to reduce their costs
of carrying Internet traffic.
There can be dozens if not hundreds of media, content and applications companies at any given
single IX. As well there may be a large number of backbone and regional ISPs. For example the
Amsterdam IX has over 345 connected organizations while the PAIX in San Jose has over 75.
The IX can be thought of as a large cross connect switch that allows all these parties to exchange
traffic with each other. In most cases this is done on what is called a settlement free basis i.e.
there is no exchange of money to transmit or receive traffic. The only exception is between
backbone and regional ISPs who historically pay for the exchange of traffic between each other.
As well several R&E networks such as BCnet in British Columbia, KAREN in New Zealand and
UNINETT in Norway provide transit or internet exchange services for smaller communities and
cities which are too small to support a commercial IX.
There are several hundred IXs around the world. A reasonably comprehensive list of these
exchange points and the connected ACIs and Internet backbones can be found at
How ACIs improve delivery of online services
Content and application delivery is a critical issue for many media and information companies. In
today's Internet, traffic volumes can vary widely and unexpectedly in what are called "flash"
events. For instance, President Obama's inauguration generated over 7 million live, simultaneous
video streams delivered via Akamai's infrastructure, with total traffic in excess of 2 terabits per
second [AKAMAI]. Such huge traffic events can also happen unexpectedly, leading a given
service's traffic to surge in a short period of time.
In the past, content and application providers would have to deploy their own server infrastructure
and often over-provision for the worst case in anticipation of the fluctuating traffic volumes on the
Internet. This approach was also susceptible to single points of failure, where any outage on the
Internet could make the site completely unreachable.
By locating servers that host the content in a highly distributed manner at various data centers
around the global Internet close to Internet users, content and application providers could ensure
that there was no single point of failure. If there was a network outage at some point in the
network, content and applications could still be delivered elsewhere in the global Internet. This
architecture also allows content and application providers to load balance demand for their
content and redirect requests for some applications or content to less busy servers somewhere
else on the network.
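The failover and load-balancing behaviour described above can be sketched very simply: skip any replica that is unreachable and redirect the request to the healthy one with the most spare capacity. The replica names, health flags, and load figures below are invented for illustration:

```python
# Sketch of ACI-style request redirection: among the replicas hosting
# a piece of content, ignore any that are down (no single point of
# failure) and send the request to the least-loaded healthy node.

replicas = [
    {"name": "edge-nyc", "healthy": True,  "load": 0.85},
    {"name": "edge-chi", "healthy": False, "load": 0.10},  # outage
    {"name": "edge-sfo", "healthy": True,  "load": 0.40},
]

def pick_replica(replicas):
    healthy = [r for r in replicas if r["healthy"]]
    if not healthy:
        raise RuntimeError("no replica reachable")
    # Redirect to the server with the most spare capacity.
    return min(healthy, key=lambda r: r["load"])

print(pick_replica(replicas)["name"])  # -> edge-sfo
```

Real ACIs make this decision continuously and per request, folding in measured latency and network conditions as well as server load, but the principle is the same.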
How ACIs Improve Consumer Internet Experience
One of the original and still critically important drivers for ACIs is to improve the user's Internet
experience by reducing latency and enhancing application responsiveness. As most congestion
occurs in the last mile networks the impact of such congestion is significantly reduced by moving
content and applications as close as possible to the consumer. Keeping data local and thereby
reducing transmission time can significantly enhance a user's perception of the quality of the
responsiveness of an application – even under congestion and packet loss. As long as there is
sufficient bandwidth to prevent egregious congestion in the last mile virtually all new innovative
content and applications can be delivered over a best effort last mile network if they originate
from an ACI. For example, only a short time ago it seemed unimaginable that one could deliver
HD quality movies over the Internet – but this is done routinely today by many services delivered
using ACI, such as Hulu, Netflix, and YouTube.
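One way to make the benefit of local delivery concrete is the well-known Mathis approximation for steady-state TCP throughput, roughly MSS / (RTT · √p): at a fixed loss rate p, achievable throughput falls in direct proportion to round-trip time, so moving content ten times closer yields roughly ten times the throughput. The RTT and loss figures below are illustrative only:

```python
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. approximation for steady-state TCP throughput:
    MSS / (RTT * sqrt(p)), returned in bits per second."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))

mss = 1460      # typical Ethernet TCP segment size, bytes
loss = 0.0001   # 0.01% packet loss (illustrative)

# ~10 ms round trip to a nearby IX vs ~100 ms across a continent.
local = tcp_throughput_bps(mss, rtt_s=0.010, loss_rate=loss)
distant = tcp_throughput_bps(mss, rtt_s=0.100, loss_rate=loss)

# The nearby edge achieves ~10x the throughput of the distant host.
print(f"local  edge: {local / 1e6:6.1f} Mbit/s")
print(f"distant host:{distant / 1e6:6.1f} Mbit/s")
```

This is only a rule-of-thumb model, but it captures why an ACI node in the same city as the user can stream HD video while the identical server a continent away cannot.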
On the Internet, each time an application on an originating computer sends a packet of data to a
distant user it must be acknowledged by the recipient's computer, which sends a return packet
called an acknowledgement (ACK), before another packet from the originating computer is sent. If the originating
computer does not receive this acknowledgement packet within a specified time it assumes the packet
has been lost somewhere in the network and it retransmits the original packet. This simple
"handshake" protocol ensures that no packets are lost in transmission across the network.
Packet loss also signals to the originating computer there may be congestion on the network.
Presumably the original packet got dropped on the floor somewhere where there was too much
data for the network to handle. If you look closely at Internet traffic you will see it is made up of
billions and billions of these types of data and acknowledgement packets flowing back and forth
across the network.
If the originating computer does not receive an acknowledgement packet within a specified time
frame, not only does it retransmit the original packet, it also immediately slows down its
transmission rate. Then very slowly the originating computer speeds up the transmission of
packets as long as it continues to receive ACKs for every transmitted packet. But if at any time it
does not receive an ACK it immediately throttles back the transmission of packets.
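The ramp-up-and-throttle behaviour described above can be sketched as a toy simulation: the window grows while ACKs arrive and is halved when a loss is detected. This is a simplified model for illustration only; the window sizes, round counts and loss timing are assumptions, not real TCP parameters.

```python
# Toy model of the behaviour described above: the sender's window grows
# while ACKs keep arriving and is halved whenever a loss is detected.
# All numbers are illustrative assumptions, not real TCP parameters.

def simulate(loss_rounds, rounds=10, start=1.0):
    """Return the congestion window after each round trip."""
    window, history = start, []
    for r in range(rounds):
        if r in loss_rounds:
            window = max(1.0, window / 2)  # multiplicative decrease on loss
        else:
            window += 1.0                   # additive increase per ACKed round
        history.append(window)
    return history

# A single loss in round 5 halves the window, which then slowly rebuilds.
print(simulate(loss_rounds={5}))
```

Running this shows the characteristic sawtooth: steady growth, an abrupt 50% cut at the loss, then the slow climb back.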
The time to reach maximum transmission speed is very important and is directly related to the
distance between the originating and receiving computer – the greater the distance the longer it
takes to reach maximum transmission speed. There are a number of other factors that may
control transmission speed, but a basic rule of thumb is that for every doubling of distance, the
time to reach maximum transmission speed also doubles. This slower ramp-up occurs because
greater distances mean it takes much longer to send packets and receive their
acknowledgements.
However if there is a single packet drop then transmission is immediately reduced by up to 50%
and then slowly allowed to ramp back up. The probability of packet loss increases with the
distance between the originating and recipient computers, as there are many network links and
nodes to traverse – any one of which could be briefly overloaded and dropping packets.
Packet loss can occur anywhere along a network from the user’s home, in the last mile and/or
across the carrier backbone network. However a study done by StreamCenter [STREAMCENTER]
indicates that most congestion occurs in the last mile.
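The distance rule of thumb above can be made concrete with a back-of-the-envelope sketch: if the sending rate roughly doubles each round trip during ramp-up, the time to reach a target rate grows linearly with the round-trip time, so doubling the distance doubles the ramp-up time. The RTTs and target window below are assumed values chosen for illustration.

```python
import math

# Back-of-the-envelope sketch: if the window roughly doubles once per round
# trip while ramping up, reaching a target rate takes log2(target) round
# trips, so total ramp-up time scales linearly with RTT (hence distance).
# The RTT values and target window are assumptions for illustration.

def time_to_rate(rtt_ms, target_segments, initial_segments=1):
    """Wall time (ms) for the window to grow from initial to target."""
    round_trips = math.ceil(math.log2(target_segments / initial_segments))
    return round_trips * rtt_ms

# Doubling the distance (hence the RTT) doubles the ramp-up time.
print(time_to_rate(20, 1024))   # nearby server:  200 ms
print(time_to_rate(160, 1024))  # distant server: 1600 ms
```

This is exactly why ACIs serve content from nearby: the same transfer completes its ramp-up an order of magnitude faster from a local node than from a distant origin.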
Regardless of where the congestion and packet loss occurs in the network the effect is immediate
in terms of the responsiveness of the application as perceived by the user. The data transmission
rate is immediately reduced, which is further aggravated if the originating and recipient computers
are a great distance apart. As applications incorporate more multimedia elements such as images
and video, the impact can be quite dramatic in terms of the speed at which the data is received.
The same application, if hosted on a distant server, will take orders of magnitude longer to
download than if it is delivered locally, given typical last mile congestion and packet loss. Internet
users experience this phenomenon every day.
It's worth noting here that some have tried to make a false equivalence between last-mile
broadband providers' plans to prioritize traffic on their networks and the use of ACI to enhance
performance. This comparison is inapt on a number of levels.
As described above, ACIs enhance performance by serving content and applications at a location
that is closer to end-users, and by optimizing within an ACI provider's network for that purpose.
There is no upper bound on how many content and application providers could benefit from this
practice; in theory, everyone could deliver their content and applications from locations nearby to
broadband providers. This practice does not in any way impact the delivery of anyone else's non-
local traffic across the last-mile broadband provider's network, however.
In contrast, a broadband provider prioritizing traffic across its network is ultimately zero-sum. It is
impossible for everyone to take advantage of prioritization at the same time. Prioritization means
that some packets are being moved to the head of a network router's queue; in turn, some packets
are being moved further back in the queue. What's more, only the broadband provider itself has
the ability to prioritize traffic across its last-mile network, in contrast to the many different
providers that already provide ACI.
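The zero-sum point can be demonstrated with a single fixed-rate queue: moving one flow's packets to the head delays the others by exactly the same total amount, and the link empties at the same moment either way. The flows, packet labels and per-packet service time below are illustrative assumptions.

```python
# Sketch of why last-mile prioritization is zero-sum: on a fixed-rate link,
# moving one flow's packets forward pushes the others back by the same
# amount. Flows and the per-packet service time are illustrative.

def completion_times(queue, per_packet_ms=1.0):
    """Time at which each packet leaves a single fixed-rate queue."""
    return {p: (i + 1) * per_packet_ms for i, p in enumerate(queue)}

fifo = completion_times(["a1", "b1", "a2", "b2"])
prioritized = completion_times(["a1", "a2", "b1", "b2"])  # flow "a" first

# Flow "a" finishes sooner, flow "b" correspondingly later; the link itself
# drains at exactly the same moment either way.
print(fifo["a2"], prioritized["a2"])   # 3.0 2.0
print(fifo["b1"], prioritized["b1"])   # 2.0 3.0
print(max(fifo.values()), max(prioritized.values()))  # 4.0 4.0
```

Contrast this with an ACI: adding a local server adds capacity for its users without taking queue positions away from anyone else's traffic.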
How ACIs Enable Innovation and Create New Economic Opportunities
Numerous larger Internet companies have deployed their own ACI, in order to improve delivery of
their services. However, ACIs also have benefits throughout the ecosystem, including to start-ups
and small companies. Contrary to some perceptions ACIs are not a special channel or privileged
service for large media or application providers to deliver their content and services. Although
many large content and application companies have deployed their own ACI, there are many
companies who specialize in providing ACI as a third party service. It is these companies that are
enabling a great deal of new innovation on the Internet by lowering the barriers to entry and
enabling small innovative companies and not-for-profit organizations to scale their services as
demand grows.
Prior to the development of ACIs, any small startup with a creative idea faced a daunting challenge
in scaling up its business as demand accelerated for its product or service. The cost of
deploying hundreds of servers and paying for Internet transit was in many cases a significant
obstacle to the business case for an Internet startup.
But with the advent of ACI facilities such as content distribution networks and clouds, the
economics have completely changed. In order to get off the ground, small startups don't even
have to invest in a single hosting server, network link or virtually any type of physical
infrastructure. By lowering the up-front costs of innovation, ACIs lower the barriers to entry.
For example, Amazon Web Services is used by thousands of entities, who essentially "rent"
server and computing capacity from Amazon. They can operate their own web services on this
platform, without deploying or having to maintain their own infrastructure. Amazon started this
service, because it wanted "to specifically target small businesses such as software developers,
much as it catered to consumers and merchants who sell on Amazon's marketplace."
For instance, as discussed in the Wall Street Journal: "Jeffrey McManus operates a Web site called
Approver.com that lets people share and edit documents online. He started the site in August
2006 and funds it himself, using Amazon's online storage service to help him when there is a
usage spike on his site.
'The key thing about Amazon's storage service is that it's a pay-as-you-go service,' says the 40-
year-old San Francisco resident. 'We don't have to make this giant investment up front.'" [WSJ]
Another good example is a small startup called Renkoo [WSJ] which developed a Facebook
application that lets users send virtual "drinks" to friends. Rather than deploying its own
infrastructure, the company turned to Amazon to "rent" servers and computing capacity. This
allowed the small startup to scale its application as demand warranted. Renkoo's application
became a big hit attracting 30 million users.
As customer demand increases, start-ups can scale up their business incrementally using ACI. One
can think of the ecosystem of ACI as creating a ladder of investment, with virtual services like
AWS at the bottom all the way up to owning one's own ACI and fiber connectivity at the top.
Rather than having to make huge investments up front in order to start their business, start-ups
can invest over time as their service expands, moving up one rung at a time as they grow their
business.
Impact of ACIs on Innovation in Research and Education
Although ACIs are often associated with large media, content and application companies, they are
playing an increasingly vital role in delivering educational material and supporting collaborative
research.
A good example of this approach is the joint Canadian-US research project called Neptune
[NEPTUNE]. Neptune is a multi-million dollar research project to deploy a several-hundred-mile-
long fiber network on the Pacific Ocean floor off the west coast of Canada and the US. The fiber
network will connect a host of undersea instruments, in particular HDTV cameras and undersea
robots, at various locations on the ocean floor. These cameras and equipment will not only be
accessible by researchers but also by students around the world across the global Internet. The
intent is to ultimately create a "virtual aquarium" where students and researchers can view and
interact with undersea aquatic life in real time.
Obviously the network demands of the "virtual aquarium" are huge and so Neptune is using an ACI
provided by Akamai to deliver the hundreds if not thousands of real time video streams around
the world.
Many schools and universities have large archives of educational video which they would like to
distribute to students and educators around the world. But as we have seen previously the cost of
distributing content, especially video, can be very expensive as it requires a host of servers to
deliver multiple streams or downloading of content. In addition, it imposes a huge cost on both
the originating and receiving institution in terms of Internet transit costs. As a result many
institutions are now using ACIs such as Content Delivery Networks and clouds for delivery of their
content.
Other research projects are using ACI for crowd sourcing applications. For instance, there are
"Citizen Science" projects [CITIZEN] that engage with students and the general public to help
them identify exo-planets orbiting far off stars and catalog thousands of previously unknown
species in our forests and oceans. IBM has deployed an ACI facility called the World Community
Grid [WORLD] that taps into computing and storage resources around the world that are voluntarily
provided by many organizations. This ACI is contributing to research in finding new cures for
diseases such as Alzheimer's and AIDS.
And of course computation clouds such as those provided by Amazon EC2 and Microsoft
Azure are having a huge impact on university computational research and the burgeoning field of
what is called "eScience" or "cyber-infrastructure". Cyber-infrastructure is a specialized form of
ACI earmarked for the research and education community but accomplishes many of the same
goals in that it links instruments, computation and storage across the Internet to enable new ways
of doing science.
In the research and education sector good examples of globally scaled ACIs include the distributed
computing and network infrastructure for the Large Hadron Collider (LHC) [LHCNET] based in
Geneva. Another example is the University of California San Diego OptIPuter project [OPTIPUTER].
In many of these examples the ACIs have acquired their own fiber networks, as well as trans-ocean
fiber links.
Some experts such as Dan Reed [REED], Vice President of Microsoft and formerly a member of
the President's Council of Advisors on Science and Technology (PCAST), claim that in
the future most university research will be entirely dependent on ACIs.
ACIs and Innovative Delivery of Government Services
The General Services Administration of the US government several months ago announced a cloud
services portal called "apps.gov". The portal allows agencies and departments to develop
applications using third party commercial ACI facilities such as clouds and social media services.
As the "apps.gov" web site mentions it allows agencies and government departments to scale
their applications without having to invest in expensive computer servers and hardware. Users can
add and subtract computation and storage services as they need and they only pay for what they
use. It significantly reduces the cost of implementation as time and resources do not have to be
spent configuring physical servers and systems. Most importantly it provides enhanced service
quality to citizens who use these services.
How ACIs benefit the rest of the Internet Ecosystem
To understand the impact ACIs are having on the evolving Internet one only has to look at a
recent study by Arbor Networks [ARBOR]. Arbor Networks – in collaboration with the University of
Michigan and Merit Network – presented the largest study of global Internet traffic since the start
of the commercial Internet at a recent Internet engineering conference in October 2009.
The study included analysis of Internet traffic across 110 large and geographically diverse cable
operators, international end-to-end carrier backbones, regional networks and content providers.
Results from the study show that over the last five years, Internet traffic has migrated away from
the traditional Internet core of ten to twelve backbone providers and now flows directly between
large content providers and last mile networks (and then to their end-users). As the study notes
these networks “have become so popular they are changing the shape of the Internet itself such
that many organizations and ISPs are directly peering with content providers rather than
purchasing transit from [backbone] ISPs”.
As a result the costs of Internet transit have dropped dramatically for all the parties in the
Internet eco-system. Major media and content providers now often deliver their content directly to
third party ACIs at major IXs where most traffic is exchanged on a settlement free basis. Regional
ISPs
and last mile providers have then been able to reduce the amount of Internet transit they
purchase from backbone Internet providers by directly connecting to an ACI at an IX. Backbone
ISPs have also benefited in that their costs are also significantly reduced as they can also access
the same traffic delivered by ACIs rather than carrying it on their backbones.
ACIs have not only reduced Internet transit costs but they have also introduced a greater degree
of competition. A content and application provider now has many different ways of delivering its
product to end users. Organizations with a few very small applications and with a limited market
can still choose to host their own servers and purchase Internet transit from a regional or
backbone ISP. Alternatively they can decide to deploy multiple servers at different hosting data
centers or move their content or application to any number of third party ACI companies. As we
have seen there are a plethora of such companies with a wide range of services and options.
ACIs have also significantly reduced the cost of Internet infrastructure by adopting new network
architectures that simplify and reduce the complexity of traditional backbone ISP facilities.
Again this benefits not only the ACI but reduces the cost of Internet delivery to all members of the
Internet eco-system. Because ACIs provide load balancing and redundancy through the wide
distribution of servers at various nodes around the world they do not need complex and highly
redundant network facilities between these nodes. In previous studies the author has estimated
that this type of architecture costs a fraction of a traditional backbone ISP
infrastructure [ST. ARNAUD].
Future Evolution for ACIs
The distinguishing and unifying feature of ACIs is that they are an Internet infrastructure that
closely couples network links with computing, storage and content to enhance the user's Internet
experience by delivering content and applications locally. ACIs are likely to be a major component
of the next generation of wireless and Internet networks as exemplified by research programs
such as GENI and the future "green" or "energy-aware" Internet. These programs generalize the
concept of ACIs as a fundamental feature of the Internet through virtualization of the network,
computing and storage links.
The UCLP [UCLP] initiative extended this concept further by allowing users to deploy and manage
their own ACIs. Although UCLP is often confused with end-to-end circuit switched optical network
solutions as it also uses lightpaths, its original intended purpose was to allow researchers and
businesses to construct their own ACI (also referred to as an Articulated Private Network (APN))
and/or establish direct peering to one or more Internet Exchange points.
The Zero Carbon Internet
Since ACIs achieve redundancy and reliability at the application or content layer they are
inherently more robust and reliable than a traditional end-to-end network which can have many
points of failure. As a consequence they are ideally suited to an environment that has clean but
perhaps unreliable sources of power. As mentioned previously, research by MIT and Rutgers
indicates that ACIs may reduce computing and storage energy costs by as much as 45% compared
to distributing and storing data over the public end-to-end Internet. These energy savings are
possible because it is a fundamental feature of ACIs to support distributed computation, data sets
and information, where data and computation can be quickly moved from one node to another in
the wide area network.
Research projects such as GreenStar [GREENSTAR] are deploying "green" ACIs whose
infrastructure is powered solely by renewable energy; such ACIs may play an important role in
addressing the challenge of climate change. A future Internet may be made up of multiple green
ACIs like
Greenstar whose revenue may be derived not only from traditional content and applications but
through carbon offsets or “gCommerce” applications under a national Cap and Dividend or Cap
and Reward program.[CAP]
ACI, Network Neutrality Challenges and Last Mile Networks
The growing dominant role of ACIs in the Internet infrastructure may pose future challenges for
regulators to ensure that ACIs are able to peer and interconnect with last mile Internet service
providers and that their services are accessible and unconstrained to the ultimate consumer.
As more and more Internet Exchanges are deployed closer to consumers, the future Internet
regulatory battleground will be the last mile.
The inadequate investment in last mile infrastructure, particularly in North America, was one of
the principal drivers to deploy ACI facilities so that content and application companies could
ensure a certain degree of responsiveness and interactivity with users. As ACI facilities are
increasingly bypassing the backbone ISPs, despite the drop in Internet transit costs, the last
redoubt for the cable and telephone companies will be ownership and control of the last mile
infrastructure. But as discussed earlier it is in the last mile where most Internet congestion
occurs. Despite the huge investment made by ACIs in building out their infrastructure as close as
possible to the end consumer, it may not be sufficient to overcome the limitations of today's last
mile infrastructure, particularly with the next generation of high bandwidth applications and as the
carriers increasingly rely on traffic management techniques to address the challenges of
congestion.
Historically, building a last mile infrastructure that was part of a national or global end-to-end
network was very challenging, and large companies were needed to build and maintain this
infrastructure. But with the development of ACIs and the disintermediation of the end-to-end
network, the last mile becomes a stand-alone element which does not require the complexity of
previous end-to-end architectures.
Concepts like “Customer Owned Networks” [CUSTOMER] are thereby easier to imagine and may
represent the future direction of how we deploy and manage last mile infrastructure. The concept
of RPON [RPON] was a further articulation of this idea, allowing consumers to control and manage
their own peering and to multi-home with multiple ACIs at a nearby Internet Exchange point.
The recent announcement by Google to fund a number of last mile fiber to the home pilot projects
may be the necessary catalyst to enable these types of innovative last mile solutions.
ACI and Wireless Internet
The existing cell phone networks are architected very much along the lines of a traditional end-to-
end network as their primary application is voice. 3G/4G networks have very complex
infrastructures as they need to provide a reliable and continuous voice service with a mobile end
device. But as Internet data applications start to dominate with the new generation of wireless
devices the same architectural changes we have seen with wire line networks may also become
evident with wireless networks as well.
It is likely that “mobile” ACIs will soon appear on the scene perhaps using the new “white space”
spectrum. This part of the spectrum as well as other bands such as those earmarked for WiFi and
WiMax may be ideal for mobile ACI services, as opposed to traditional end-to-end voice cell phone
networks which require a much higher grade of uncongested spectrum. New multi-receiver radios
that can share radio spectrum and mobile data platforms as exemplified by the iPhone and
Android will soon be able to connect simultaneously to several different wireless ACI and end-to-
end 3G/4G networks. This will also present new challenges to regulators to ensure that these
mobile data platforms remain open and that users will have unrestricted access to mobile ACIs.
The Kindle or iPhone are early examples of the next generation of mobile Internet devices. But if
they remain locked to one 3G/4G vendor then their practicality and usefulness will be limited
especially as users start to download large content such as textbooks, videos, etc. And this is not
to mention other challenges in terms of the archaic and concerted practice of using Digital
Rights Management tools to limit access and sharing.
R&E networks may be well positioned to offer wireless Internet services to students and faculty at
university and college campuses throughout the country. As universities and colleges migrate
student e-mail away from their own servers, and as social networking tools like Facebook and
texting become the prevalent ways of communicating, the need for high speed wireless access
will be critical both off and on campus.
Most R&E networks are uncongested and have already established peerings with major ACI
facilities, so it would be very easy for them to offer wireless ACI services using WiFi and "white
space" for the next generation of mobile devices as exemplified by the Android and/or iPad.
Deployment of "zero carbon" white space base stations at universities and schools in partnership
with major ACI companies could extend the reach of their networks beyond the campus so
students and faculty could still access high speed data even off campus. It may also offer new
revenue opportunities in terms of both "gCommerce" and traditional applications for R&E networks
and their ACI partners.
ACI and IPv6 addressing
As more and more traffic originates with ACI facilities the bulk of Internet routing will be done
through DNS hacks. The DNS system was never intended to be a routing protocol, but there has
been little research on developing new routing protocols for this type of architecture other than
Service Oriented Architecture (SOA) XML style routing.
In any event the pressure for IPv6 routing may be somewhat mitigated by the development of
ACI. If ACIs continue to expand and dominate Internet traffic we may need to rethink the need for
IPv6 deployment or addressing techniques such as Loc/ID split [LOC/ID]. Instead URI routing and/or
combinations of Delay Tolerant Networking (DTN) with late binding between DNS and IP
forwarding addresses may be a better long term solution.
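The "DNS-type routing" discussed above can be sketched as a GeoDNS-style resolver that answers the same hostname with the address of a nearby ACI node rather than a single global origin. Everything here is hypothetical: the node regions, the addresses (drawn from the RFC 5737 documentation range) and the region-lookup logic are illustrative assumptions, not any real resolver's behaviour.

```python
# Hedged sketch of DNS-based redirection to ACI nodes: the resolver answers
# with the address of a nearby node instead of one global origin address.
# Regions and addresses (RFC 5737 documentation range) are hypothetical.

ACI_NODES = {
    "us-east": "192.0.2.10",
    "eu-west": "192.0.2.20",
    "ap-south": "192.0.2.30",
}

def resolve(hostname, client_region, default="us-east"):
    """Return the address of the ACI node 'nearest' the client, as a
    GeoDNS-style resolver would; fall back to a default region."""
    return ACI_NODES.get(client_region, ACI_NODES[default])

# Two clients asking for the same name get different, local answers.
print(resolve("content.example.com", "eu-west"))   # 192.0.2.20
print(resolve("content.example.com", "ap-south"))  # 192.0.2.30
```

This is why the DNS ends up acting as a de facto routing layer for ACI traffic: the "route" to the content is chosen at name-resolution time, before any IP forwarding decision is made.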
An important area of research, such as that undertaken by Van Jacobson, needs to be pursued
to explore these issues. ACIs present a host of unique challenges in terms of addressing, security
and information redundancy that have yet to be adequately addressed.
Role of R&E networks and ACI
Originally most R&E networks were also designed as end-to-end networks. But the advent of the
academic equivalent to ACI referred to as cyber-infrastructure or eInfrastructure is fundamentally
changing their role in the same way it is affecting traditional Internet backbone providers.
Increasingly R&E networks will have four critical roles with respect to ACI:
(a) Enabling the deployment of ACI or cyber-infrastructure by research discipline or
community of interest;
(b) Enabling researchers and educators to access commercial ACI services by direct
peering;
(c) Hosting transit or Internet exchange points in smaller communities and cities where no
commercial IX exists; and
(d) Perhaps ultimately deploying wireless ACI services as well as content and peering for
not-for-profit organizations to create a National Public Internet (NPI) in the same spirit as
National Public Radio (NPR) or PBS.
The deployment of specialized network facilities supporting cyber-infrastructure is well underway
in many countries and there are several large cyber-infrastructure facilities that integrate
computing, storage and networking. But the challenge for most R&E network operators in this
environment, as we have seen with Internet backbone operators, is to realize that their role as a
general network service provider is being supplanted by the cyber-infrastructure IT staff who
manage and control the cyber-infrastructure including its network components. As mentioned
previously the UCLP technology was developed specifically for this purpose to allow R&E network
operators to hand off control and management of portions of the network to the cyber-
infrastructure management team.
However R&E networks can play an important intermediary role with respect to providing
interconnections to commercial ACIs at IXs that provide a variety of services to the R&E
community. This role will become critically important as major ACI providers follow Microsoft's
recent announcement to provide free access to its computation cloud for the research community.
The Internet 2 and NLR content peering service is a good example of the value of this type of
arrangement.
A more important role for R&E networks is perhaps to extend the benefits of ACIs to smaller
communities that do not have a carrier neutral IX. Several R&E networks already provide this
service such as BCnet in British Columbia, KAREN in New Zealand and UNINETT in Norway.
Deploying these IXs will also enable small not-for-profit content and application providers to
access ACI facilities in larger centers in order to distribute their content globally. R&E networks
can play an important aggregation and intermediary role for these smaller players. Cultural content
organizations in particular don't have the clout and wherewithal to distribute their content
globally or even locally. Providing a service for this community may ultimately be the most
important role for R&E networks, leading to the concept of a National Public Internet.
National Public Internet
One of the recurring drivers for ACI is to overcome the limitations of the last mile infrastructure
by locating content and applications as close as possible to the consumer. While carrier neutral
IXs exist in most major cities they are very rare in smaller communities. Governments and not for
profit organizations who wish to distribute content, especially video, to these smaller communities
over the Internet face a serious challenge. While they can contract with commercial ACIs to
deliver their content to consumers in major centers where there is a local IX, this is not possible in
smaller cities or rural areas.
One possible important role for R&E networks is to extend commercial ACI services to these
smaller centers for the distribution of not for profit content to consumers. As broadcast
organizations such as PBS, CBC or TVO look to have their entire content accessible over the
Internet, they have an obligation to the public to ensure that it is accessible by all citizens,
especially in smaller cities and rural areas. Backhauling this content over commercial networks
may impose
a significant burden to ISPs and huge cost to the content provider as we have witnessed in the
complaints in the UK by BBC’s Internet broadcast service. Using a commercial ACI service for
major centers in combination with R&E networks for smaller cities may provide a workable
solution. The commercial last mile ISPs are still responsible for delivering the content and remain
the customer facing organization, but the R&E network, in effect, becomes an extension of a
commercial ACI for the purposes of distribution of not-for-profit content.
The concept of a National Public Internet will probably generate considerable debate, but as we
have seen in the past the Internet revolution never ceases and with every new development there
will always be those who want to maintain the status quo of outdated business models. But we
cannot ignore the fact that the Internet has now become probably the most important vehicle for
the dissemination and sharing of information and content.
[WSJ] Mangalidan, M., "Small Firms Tap Amazon's Juice", January 15, 2008, Wall St. Journal Online,
[NEPTUNE] Neptune Canada Outreach and Education Program,
[GSA] Vivek Kundra, Federal CIO video speaking about new Federal Government "apps.gov" web
[INTERNET2] Internet 2 Content and Peering Service,
[GARTNER] "Gartner says Cloud market is expected to reach $150.1 billion in 2013.",
[CROWE] Interview with President of Level 3 on future of CDN,
[INUK] Inuk distance education video service, http://www.inuknetworks.com/universities.html
[PAIX] Peering and Internet Exchange Point now operated by Switch and Data, History and
[AMS-IX] The world's largest Internet exchange, http://www.ams-ix.net/
[STREAMCENTER] Cataltepe, Z. Moghe, P., Characterizing nature and location of congestion on
the public Internet, 30 June-3 July 2003, p 741- 746
[AKAMAI] Leighton, T., "Akamai and Cloud Computing: A perspective from the edge of the cloud"
[CITIZEN] Citizen Science, Blog on many examples of how clouds, CDN networks, crowd sourcing
etc help students and citizens participate in scientific discovery http://citizen-
[WORLD] World Community Grid, http://www.worldcommunitygrid.org/
[LHCNET] Global network linking computers and databases for high energy physics,
[OPTIPUTER] Optical network linking computers, visualization servers, etc.
[ST. ARNAUD] St. Arnaud, W., et al. "Architectural and Engineering Issues for Building an Optical
Internet", 1998, www.canarie.ca
[REED] Reed, D., "Outsourcing: Perhaps its time" January 14, 2009
[ARBOR] C. Labovitz, et al "ATLAS Internet Observatory 2009 Annual Report"
[AccuStream] "CDN Market Growth 2006 - 2009"
[UCLP] User controlled LightPaths, http://www.uclp.ca/index.php?
[RPON] Reverse Passive Optical Networks, Free fiber to the home, http://free-fiber-to-the-
[GREENSTAR] Building a zero carbon network, http://www.greenstarnetwork.com/
[CAP] Cap and Dividend versus Cap and Reward
[CUSTOMER] D. Slater, T.Wu, Homes with Tails, What if you could own your Internet connection,
[BABCOCK] Babcock school of Management textbooks for 160 students alone produces
45 Tons CO2 http://www.stewartmarion.com/carbon-footprint/html/carbon-footprint-stuff.html
Bill St. Arnaud was previously Chief Research Officer of CANARIE Inc., an industry-government
consortium to promote and develop information highway technologies in Canada, where he
worked for 15 years. He is currently pursuing new opportunities related to his on-going passion
for Internet networking, especially in the area of developing network and ICT tools to mitigate
climate change.
At CANARIE, Bill St. Arnaud was responsible for the coordination and implementation of
Canada's next generation optical Internet initiative called CA*net 4. Previously he was President
and founder of a network and software engineering firm called TSA ProForma Inc. TSA was a LAN/
WAN software company that developed wide area network client/server systems for use primarily
in the financial and information business fields in the Far East and the United States. Bill is a
frequent guest speaker at numerous conferences on the Internet and optical networking and is a
regular contributor to several networking magazines.