The Internet backbone consists of just over 6000 independent networks that exchange traffic in fashions that are not well understood outside of the backbone networking community. We explain how it works, how it has evolved and how it is continuing to evolve today.
This is a revised and annotated version of material most recently given as an invited presentation at OFC 2014, the optical fiber conference in San Francisco, in March 2014.
To provide higher resolution, I've also uploaded a version without annotations, i.e. just the graphics.
Understanding Remote Peering - Connecting to the Core of the Internet - William Norton
Understanding Remote Peering – The New Wave of Interconnection at the Core of the Internet.
Using real-world case studies, this free webinar explains remote peering and what it means to ISPs, content providers and the global Internet peering ecosystem. Learn from William B. Norton who has presented three popular USTelecom webinars on Internet peering.
Background
The Internet peering ecosystem is going through a historic and rapid paradigm shift.
The largest ISPs and content providers have always interconnected their networks at the core of the Internet using a technique called "Internet peering," the free and reciprocal exchange of access to each other's customers. In this way, networks of scale can exchange enough traffic with one another, for free, to offset the cost of deployment (equipment, colocation, and transport to the colocation center). This is the basis of the business case for peering.
However, a recent trend -- called "remote peering" -- has emerged as a way to get these peering benefits without the cost of additional equipment, transport, or colocation. In the remote peering model, a remote peering provider delivers transport to the customer's router via Virtual Local Area Network (VLAN) extension(s) from the largest exchange points in the world. In this way, the customer gets all of the benefits of peering (performance, control over routing, direct relationships with the peer networks, etc.) without the large initial capital and operational costs.
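The business case above comes down to simple break-even arithmetic, which can be sketched in a few lines. All figures below (transit price, colocation, transport, port, and VLAN fees) are purely illustrative assumptions, not numbers from the webinar; the point is that remote peering lowers the fixed monthly cost, and with it the traffic volume at which peering beats paid transit:

```python
# Hypothetical back-of-the-envelope comparison of transit vs. peering costs.
# All dollar figures are illustrative assumptions.

def monthly_transit_cost(traffic_mbps, price_per_mbps=1.50):
    """Cost of sending traffic via a paid IP transit provider."""
    return traffic_mbps * price_per_mbps

def monthly_peering_cost(colo=1500.0, transport=2000.0, port=1000.0, equipment=800.0):
    """Fixed monthly cost of traditional peering: colocation, transport
    to the colo, IX port fee, and amortized router/equipment cost."""
    return colo + transport + port + equipment

def monthly_remote_peering_cost(vlan_service=1200.0):
    """Remote peering: a single provider-delivered VLAN extension to the
    IX, with no extra colocation or equipment at the exchange."""
    return vlan_service

def break_even_mbps(fixed_monthly_cost, transit_price_per_mbps=1.50):
    """Traffic volume above which peering is cheaper than buying transit."""
    return fixed_monthly_cost / transit_price_per_mbps

print(break_even_mbps(monthly_peering_cost()))         # traditional peering
print(break_even_mbps(monthly_remote_peering_cost()))  # remote peering
```

Under these assumed prices, traditional peering only pays off above roughly 3,500 Mbps of offloaded traffic, while remote peering breaks even around 800 Mbps, which is why it opens the peering ecosystem to much smaller networks.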
This is not just a fringe or small change to peering; it is a fundamental shift in the Internet architecture. Remote peering is a new technique that makes peering accessible to a much larger population. As a result of the cost shift, an increasing percentage of networks are peering across great distances. The old paradigm that "peering keeps local traffic local" no longer holds.
During the free webinar you will hear case studies from the field where medium-sized content companies are able to enter the peering ecosystem and connect to multiple Internet Exchange Points over a single circuit. These companies have graciously allowed their cost numbers to be shared so the traditional peering model can be compared against the emerging remote peering model. Also, the webinar will highlight the strongest arguments on both sides of the debate over whether remote peering is good or bad for the global Internet peering ecosystem.
William B. Norton, Executive Director, DrPeering International, and author of the new 2014 edition of “The Internet Peering Playbook: Connecting to the Core of the Internet,” which includes a new chapter dedicated to remote peering.
PTC'14 Presentation by Steve Wilcox: “The Role IXPs and Peering Play in the Evolution of the Internet” - Nicole White
Presentation given by Stephen Wilcox, CEO of IX Reach at PTC'14, entitled “The Role IXPs and Peering Play in the Evolution of the Internet”.
Excerpt:
"There are over 400 Internet Exchange Points distributed across the world, and growing. The largest and most successful reside in Europe and play a vital role in the growth and evolution of the Internet. There are over 50 IXPs in Europe alone, most promoting local traffic exchange but only a handful recognised as international hubs for interconnections. These particular IXPs continue to increase their value-added services and expand globally - most recently to the US - promoting their not-for-profit and neutral business models in varying and emerging markets. This recent expansion makes it all the more important to consider the role IXPs and peering play in the continuing evolution of the Internet and how network operators should approach peering in a network blend.
The reciprocal interplay between Tier 1, 2 and 3 networks over IXPs, particularly those with an international focus, has become an interesting addition to the global Internet topology, enabling networks to reduce their costs and the Internet to grow in line with end-user demand for high-bandwidth content and mobile usage. However, given that 99.5% of peering agreements are completed on a handshake, and the majority are settlement-free, scenarios such as de-peering can still occur, leading to partial Internet blackouts; events such as these need to be taken into consideration when building redundancy into a network.
This industry briefing will discuss the most common ways different networks value and utilise IXPs in their network blend, what to consider when choosing peering locations, why some participate at an IXP while others do not, and the de-peering scenarios that can occur and the impact this can have on a network's service. It will also touch on the geographical positioning of major IXPs and the trends in peering partners by selected countries."
Big Broadband: Public Infrastructure or Private Monopolies - Wayne Caswell
This paper contrasts the different incentives of incumbent ISPs, municipalities, and other stakeholders, suggesting that the cost of extending fiber closer to premises is high enough to cause ISPs to cherry-pick the most profitable customers, leaving others to fend for themselves. That’s where public broadband comes in, but politics can pose obstacles for municipalities that want their own networks, so this paper also includes a section explaining the fears of various stakeholders. Incumbent phone companies, for example, fear competition from VoIP alternatives and are using their deep pockets and powerful lobbyists to delay competition as long as they can.
Remote Internet Peering vs. IP Transit: A Shift in Internet Architecture - Ruth Plater
An overview explaining the reasons for peering at Internet Exchange Points (IXPs) versus buying IP transit, and the known benefits and uses of both approaches in a network blend.
The presentation brings to light the decline in IP transit pricing, what this means for the financial benefits peering used to offer, and how remote peering (or virtual connectivity at Internet Exchanges) continues to alter this landscape.
It also highlights how remote peering has changed the way network operators exchange traffic and made peering more accessible to small and medium-sized companies and developing markets, by removing the initial legal, billing, and technical barriers and simplifying the whole process of expanding a network.
Developing markets such as the Middle East are also discussed, along with the peering paradigm of 'keeping local traffic local' versus virtually/remotely connecting to Internet Exchanges, and the issues with both.
It touches on the shift in the industry and how Internet Exchanges and Data Centres are behaving more like networks and vice versa, explaining the new Central European Peering Hub project by LU-CIX - with IX Reach as the first carrier partner - whereby they encourage members to join major Internet Exchanges (DE-CIX, LINX, France-IX and AMS-IX) via their Internet Exchange platform.
Internet peering, with annotations
1. Why care? Peering and transit are little understood, and yet this is the heart of Internet infrastructure economics, and it is the growth of Internet infrastructure that ultimately funds most of us here at OFC. I'm going to describe how thousands of independent organizations compete but also exchange traffic and continually grow an ever more distributed Internet infrastructure.
2. There are more than 3 billion Internet users and, with the advent of very low cost Android smartphones ($24 in India in Feb 2014), it's likely that number will double in just a few years. There are also tens of millions of local networks (perhaps more than 100 million, based just on the number of WiFi routers that have been sold). They connect to the Internet through tens of thousands of ISPs, which may be classified as access, aggregation or backbone, or as local, regional, national or international, but, of course, many ISPs cross these boundaries. For today, I'm going to focus on the 6000 or so major ISPs that form today's Internet backbone. But to understand today's complex environment, it's useful to see how it emerged.
3. 25 years ago, there was only one backbone. It was run by the National Science Foundation for the benefit of various researchers and government agencies. Regional networks connected to the NSF backbone but, with only one backbone, there was only one source of addressing and of ultimate routing decisions.
4. As other, commercial networks grew up, they interconnected with the NSFNET to exchange email and data files. They also found other ways to exchange data among themselves, but still relied on the NSFNET as the ultimate authority on addressing and routing.
5. With the advent of the World Wide Web and the Mosaic browser, Internet growth accelerated and the NSF sought a way to get out of the backbone business. Part of this required development of a new routing protocol (BGP, on which more later), which went on within the IETF between 1991 and 1994. Part of this required establishing four Network Access Points (NAPs) where backbone providers would exchange traffic destined for other backbones. Of course, each backbone provider had its own network that enabled all connected users and content providers to communicate with one another. However, users were not interested in communicating just with those other users connected to the same backbone provider. They wanted to communicate with any user and any content provider, regardless of backbone provider. To offer universal connectivity, backbone providers interconnected at NAPs (and elsewhere) to exchange traffic destined for each other's users. It is these interconnections that make the Internet the "network of networks" that it is today. Finally, in April 1995, the NSF stopped providing backbone services and the commercial Internet was born.
6. In order to provide complete Internet access, the backbone providers had to exchange traffic with each other. What's more, the NSFNET backbone had facilitated open traffic exchange at many levels, so there were many peering agreements at first. But the Internet was also growing rapidly, requiring significant capital investments. At a minimum, investors wanted to see a path to a return on their investment.
7. And, with just six or seven full backbone networks in existence, the backbone ISPs began to realize they had the makings of a cartel.
8. As a cartel, none of the backbone operators had to provide free peering to regional, local or other smaller networks. Instead they could sell them "Internet transit" service, a service that delivers packets to the rest of the Internet. Gradually (and sometimes abruptly), peering rules became quite exclusive. To peer with the backbone, you had to be present at all major NAPs, you had to have a significant amount of traffic, and that traffic had to be roughly symmetric. This had an immediate impact on many Tier 2 operators, some of which were growing more rapidly than the backbones.
9. De-peering also impacted cable companies, several major content hosting networks and some large, savvy content providers. These folks realized that, even if they had to buy Internet transit from a backbone provider, they could reduce what they paid the backbone providers by exchanging traffic among themselves.
10. By 2002, donut peering had emerged. The Tier 2 ISPs, cable companies and content providers had built a ring around the cartel, largely rendering the original cartel irrelevant.
11. Indeed, many Tier 2 providers now had international networks and offered lower latency and/or better pricing. By the early 2000s, the Internet was substantially more distributed.
12. The third wave, which started in the early 2000s and is still evolving today, was the advent of Content Distribution Networks (CDNs). CDNs may have limited, private or no communications infrastructure of their own; instead they distribute content servers in what is effectively an overlay network. Akamai and Limelight created early CDNs. Today, Google, Amazon and Level 3 also run content distribution networks, and Netflix has begun deploying its own CDN.
13. Typically, major CDNs supply their servers and remotely manage them, but local ISPs install them and pay for electricity and rack space. This is good business for the local ISP, as it reduces latency for their customers and reduces the amount of upstream Internet transit service they must pay for.
14. The past 20 years have seen enormous turbulence among those providing the core of the Internet. The original backbone networks have survived, but their ownership has gone through a series of bankruptcies, mergers and acquisitions. Meanwhile, the number of networks participating in the Internet backbone has grown from 6 to over 6000.
15. I've been bandying around the terms "peering" and "Internet transit." Let me explain exactly how they differ. Internet transit is a service where the upstream ISP commits to deliver traffic to any valid Internet address. It's typically priced in $/Mbps/month, and the Mbps of traffic is determined by measuring traffic levels every five minutes and then computing the 95th percentile of all those measurements during the month. Now suppose I'm ISP1. I have a router in a regional data center where I buy Internet transit services, but I notice that 4% of my traffic is to my competitor, ISP2, and he happens to have a router in the same regional data center just a few hundred feet away from mine. He's my competitor, but we could each save 4% of our monthly bills for Internet transit if we agree to locally exchange the traffic that's destined for each other's networks.
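The 95th-percentile billing method is easy to sketch. In this minimal Python illustration (the function name, sample data and $/Mbps price are mine, not any carrier's actual billing code), the month's 5-minute samples are sorted, the top 5% of peaks are discarded, and the highest remaining sample becomes the billable rate:

```python
def billable_mbps(samples, percentile=95):
    """Sort a month's 5-minute traffic samples (in Mbps), discard the
    top 5% of peaks, and return the highest remaining sample as the
    billable rate."""
    ordered = sorted(samples)
    keep = int(len(ordered) * percentile / 100)  # samples kept below the cut
    return ordered[max(keep - 1, 0)]

# A 30-day month yields 30 * 24 * 12 = 8640 five-minute samples.
# Here, 95 quiet samples at 40 Mbps plus 5 bursts to 900 Mbps:
samples = [40] * 95 + [900] * 5
rate = billable_mbps(samples)    # the 5 burst samples are "free"
monthly_bill = rate * 3.00       # at a hypothetical $3/Mbps/month
```

Real carriers differ in the details (separate inbound/outbound percentiles, sample alignment, commit levels), so treat this only as the shape of the computation.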
16. Notice that we're only exchanging traffic that originates with a customer of one ISP and terminates with a customer of the other peered ISP.
17. ISP2 may have other connections to other ISPs, but these are not involved in (or even visible to) the peering arrangement with ISP1. That's the key difference. Peering is traffic exchange involving only those addresses that are served by the two peers. Transit involves handling packets that will be passed off to one or more additional networks.
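This distinction shows up directly in routing policy: over a peering session a network announces only its own and its customers' address blocks, while over a transit session it announces reachability to everything it knows. A toy sketch, with relationship names and route sets that are purely illustrative (not any real ISP's configuration):

```python
def routes_to_announce(relationship, own_routes, customer_routes, learned_routes):
    """Return the set of address blocks announced over a session.

    Peers are offered only the announcing network's own and customer
    routes; transit customers are offered everything, including routes
    learned from other networks.
    """
    if relationship == "peer":
        return own_routes | customer_routes
    if relationship == "transit_customer":
        return own_routes | customer_routes | learned_routes
    raise ValueError("unknown relationship: " + relationship)

# What ISP2 would announce to ISP1 under each relationship:
peer_view = routes_to_announce("peer", {"B"}, {"B2"}, {"C", "D"})
transit_view = routes_to_announce("transit_customer", {"B"}, {"B2"}, {"C", "D"})
```

The peer sees only blocks B and B2; the transit customer also sees C and D, the routes learned from other networks.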
18. But whether it's peering or transit, what is actually exchanged and how does it work? Here things are remarkably stable. Operators may exchange other kinds of traffic (MPLS, Carrier Ethernet) for other services, but for Internet traffic, they exchange IP packets (mostly IPv4) and they negotiate routes using the Border Gateway Protocol (BGP). IPv4 has been essentially unchanged for over 30 years, and the current version of BGP has had only minor tweaks since it was deployed 20 years ago. Business arrangements have been turbulent, but the technology has been remarkably stable.
19. To get a better understanding of BGP, suppose I'm running BGP on my edge router there on the left. There are two ISPs I wish to exchange traffic with (either peering or transit). In particular, I'm interested in getting traffic to address blocks A, B and C. My router starts by establishing BGP sessions with the edge routers at each ISP.
20. Once the sessions are up, I get an announcement from the edge router at ISP1 saying it's prepared to deliver traffic to address block A over a route that has three hops and traffic for address block B over a route that has one hop.
21. This is followed by an announcement from ISP2 saying they can deliver traffic to address block B in two hops or to address block C in two hops. Now, I have to make some decisions.
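In its simplest form the decision is "shortest AS path wins." Real BGP consults local preference, MED and several other attributes first, so this Python sketch (with made-up announcement tuples matching the two slides above) captures only that basic tie-breaker:

```python
def best_routes(announcements):
    """Build a per-block forwarding choice, preferring the
    announcement with the shortest AS path (hop count)."""
    table = {}
    for block, neighbor, path_len in announcements:
        if block not in table or path_len < table[block][1]:
            table[block] = (neighbor, path_len)
    return table

announcements = [
    ("A", "ISP1", 3),  # ISP1 reaches block A in three hops
    ("B", "ISP1", 1),  # ISP1 reaches block B in one hop
    ("B", "ISP2", 2),  # ISP2 also reaches block B, in two hops
    ("C", "ISP2", 2),  # ISP2 reaches block C in two hops
]
rib = best_routes(announcements)
# Block B is reachable via both ISPs; the one-hop path via ISP1 wins.
```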
22. First, these announcements come from other organizations, which may or may not be competent. Should I believe ISP1 when he says he can deliver traffic to address block B in just one hop? A classic example of the mistakes that can happen occurred in February 2008, when the government of Pakistan told Pakistan Telecom to block traffic to YouTube because YouTube was hosting blasphemous videos. The engineers at Pakistan Telecom complied by creating a very specific route for just the YouTube addresses (part of a larger Google address block). Request packets that matched this specific route were sent to a "black hole server," i.e. a server that dropped each packet it received. Unfortunately, this black hole route leaked out to the large international carrier, Hong Kong-based PCCW. PCCW didn't have route filtering in place on this particular link, and they passed the black hole route around the world. Over 90 major ISPs erroneously accepted this route, and for more than two hours YouTube was dark while almost all the world's YouTube requests went to the black hole server in Pakistan. So you can't always trust your neighbor, however competent they may have seemed in the past. There are many additional considerations. For example, certain routes may have preferential pricing up to a certain commitment level but become expensive at higher traffic levels. So the choice of which advertised route to use can involve some quite complex considerations.
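The leak worked because routers prefer the most specific matching prefix, so a small block carved out of another network's larger registered block wins everywhere it is accepted. The kind of filter PCCW lacked can be sketched with Python's ipaddress module; the registry contents below are illustrative (real filtering is driven by IRR or RPKI data, not a hand-built set):

```python
import ipaddress

def is_suspicious(announced, registry):
    """Flag an announcement that is a strictly more-specific subnet of
    a registered block -- the shape of the 2008 YouTube black-hole leak."""
    net = ipaddress.ip_network(announced)
    for block in registry:
        registered = ipaddress.ip_network(block)
        if (net != registered and net.version == registered.version
                and net.subnet_of(registered)):
            return True  # a carve-out of someone else's block
    return False

# Illustrative registry: one /22 registered to its rightful origin.
registry = {"208.65.152.0/22"}
is_suspicious("208.65.153.0/24", registry)  # more-specific carve-out: flagged
is_suspicious("208.65.152.0/22", registry)  # the registered block itself: fine
```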
23. To give you a sense of the business trade-offs that go on, I have two examples. The first is a friend of mine who formed a fixed wireless ISP in southeastern Illinois a few years ago. Because he was located in farm country, the only way he could get an Internet connection was by buying Internet transit service (called Direct Internet Access, or DIA) from Ameritech (now AT&T), the local telephone monopoly. His price was more than 100x what Internet transit would have cost him in Chicago, but there were no competing fiber routes through his area, and even if he'd been close to a long distance fiber route (say, between Chicago and St Louis), local connections to long distance fiber are extremely expensive or, more often, just not available. Once his business was up and running, my friend spent many days driving to and from Chicago, looking for tall buildings and talking to building owners. Eventually he built a series of four wireless links (totaling more than 70 miles) which connected him to Chicago. In Chicago, he signed up for a monthly recurring charge for rack space, for roof rights on the Chicago data center and for a cable from his rack to their "meet me room." He'd also promised free high speed Internet service to three building owners, downstate, who gave him roof access on the route to Chicago. But now that he was connected in Chicago, he could purchase Internet transit from any of a dozen competing carriers (at a tiny fraction of what he was paying AT&T). Although he had spent nearly $100K (and untold man hours) putting this wireless route together, he figured his payback was 9 weeks. Location matters! The second thing that happened was, as his total traffic grew, he began to qualify for peering with major content providers like Google and Akamai. This further cut his costs for Internet transit.
24. The second example is only approximate, but representative. I don't have the actual numbers on YouTube's traffic or their costs during the 20 months between their founding in February 2005 and their purchase by Google in October-November 2006, but I can tell you that one of their early employees was a "peering coordinator" who showed up at NANOG meetings early in 2006. In early 2006, there was already great interest in peering with YouTube. By the summer of 2006, YouTube was the 5th most trafficked website in the world. They were still only peering in Palo Alto, but anyone with a router in Palo Alto was interested in offloading their YouTube traffic. And any Tier 1 carrier that didn't peer with YouTube would quickly find traffic ratios going unbalanced on links where they handed off YouTube traffic to someone who was peering with YouTube. I'm not showing YouTube's costs going to zero, but they clearly did not increase (and likely went down) as YouTube's traffic grew!
25. Bill Woodcock and Vijay Adhikari of Packet Clearing House did a very comprehensive survey of backbone ISPs in 2011, getting a remarkable 86% response rate. All the internal indications are that this survey yielded very high quality data. Several interesting things emerged from this data. Most notably, many operators publish a set of peering requirements, and these typically include an NDA. But if you meet the requirements, there are no formal contracts! These are handshake agreements.
26. One interesting thing was that, to the extent there are contracts between operators in different countries (for example, the NDAs), the choice of governing law always favors the country with stable institutions, minimal corruption and a functioning judiciary.
27. In terms of how the Internet backbone is evolving, the most interesting thing to emerge was the rise of multi-lateral peering. These are arrangements that started in Asia and selected locations in Europe. We haven't seen this in the US yet, but there is an organization, "open-ix.org," backed by Google and Amazon among others, that is trying to foster the spread of multi-lateral peering.
28. Multi-lateral peering dramatically reduces the number of BGP sessions one must configure and manage, thus facilitating more peering. With bi-lateral peering, there is a separate BGP session for every peer.
29. In multi-lateral peering, one organization, perhaps a co-op or a vendor, provides a single route server. Each participant establishes a single BGP session to this server. Typically, the route server includes session-specific configuration which allows you some of the flexibility you would have had with N bi-lateral peering sessions, but, to get started, you can ignore all that and just establish one simple BGP session that reaches hundreds of peers.
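The scaling difference is simple arithmetic: a full mesh of bilateral sessions grows quadratically with the number of participants, while a route server needs only one session per participant. A quick sketch (the function names and the 200-participant exchange are mine, for illustration):

```python
def bilateral_sessions(n):
    """Full-mesh bilateral peering: one BGP session per pair of
    the n participants."""
    return n * (n - 1) // 2

def multilateral_sessions(n):
    """Route-server peering: one BGP session per participant."""
    return n

# At a hypothetical exchange with 200 participant networks:
full_mesh = bilateral_sessions(200)            # 19,900 sessions in total
via_route_server = multilateral_sessions(200)  # 200 sessions in total
```

Each individual network's burden shrinks the same way: from up to 199 sessions to configure and monitor down to one.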
30. This graph shows the number of IP addresses handled by various carriers as a function of how many peering agreements those carriers have. You can see that some of the original Tier 1s are still visible in the upper left, but otherwise, the Internet backbone is very distributed. And this graph is based on addresses handled, not on traffic carried.
31. When we look at traffic, the top ISPs are quite distributed. Also, we can see what happens when the largest carrier (Level 3, at the top) buys the second largest carrier (Global Crossing, in grey just below), as happened in April 2012. Both networks immediately saw a drop in traffic, as customers who wanted redundant connections dropped one of their connections to the now merged business. Then, over time, both networks see further drops in traffic as the rest of the players rearrange their networks. Also, note that this traffic diagram only deals with ISPs that offer Internet transit services. The second largest network in the world, by traffic, is Google. If Google were shown on this graph, it would appear between the black and grey lines. So the Internet is very distributed and, as Renesys notes in their report, the relative market share of the backbone carriers as a group has been falling over the past decade. In roughly 20 years of the commercial Internet, no one has been able to gain control of the Internet backbone. In the 1990s, the original gang of about six backbone providers thought they had an oligopoly (a cartel), but by 2002, second tier backbones used "donut peering" to eliminate the original Tier 1s' leverage. Since 2000, we've seen the emergence of multiple CDNs (Akamai, Level 3, Google, Limelight, plus Amazon, Netflix and others in the making), which have further diluted any attempt to monopolize the backbone. Also, over the past 20+ years, we've seen an explosion in the number of buildings where some kind of peering takes place. In short, no one has been able to monopolize the Internet backbone. Now we're seeing the emergence of multi-lateral peering and even more backbone participants.
32. The Internet backbone is a very interesting phenomenon. It's essentially unregulated. IANA (the body that supervises the assignment of addresses and other protocol number assignments) provides only coordination. If IANA withheld or manipulated assignments, its function could be quickly and informally bypassed. Recently we've heard a lot about "regulating the Internet," especially since the revelations of NSA spying. But most such discussion is happening without any understanding of how the Internet backbone actually works. Forecasts are iffy, but the current system is extremely successful and extremely robust, so I am optimistic the Internet will continue to grow indefinitely.