2. Industry and IT Environment Details
• An American multinational technology company that specializes in Internet-related services and products.
• Its products and services include online advertising technologies, a search engine, cloud computing, software, and hardware.
• In 2001, Google acquired Deja News, and in April 2003 it acquired Applied Semantics, a maker of software applications.
• Google APIs are a set of application programming interfaces (APIs) developed by Google that allow communication with Google services and their integration with other services.
• Examples include Search, Gmail, Translate, and Google Maps.
3. Industry and IT Environment Details
• As of 2016, Google owned and operated nine data centers across North and South
America, two in Asia, and four in Europe.
• Google uses a combination of the Quagga open-source routing software and OpenFlow to optimize its data center interconnects.
• Google uses OpenFlow within its own data centers; it calls this SDN network "B4."
• Google's rationale for software-defined networking:
• First, by separating hardware from software, the company can choose hardware based
on required features while being able to innovate and deploy on software timelines.
• Second, it provides logically centralized control that will be more deterministic, more
efficient and more fault-tolerant.
• Third, automation allows Google to separate monitoring, management and operation
from individual boxes.
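The second rationale above, separating a logically centralized control plane from the data plane, can be sketched in a few lines. This is a minimal illustration with hypothetical class and method names, not Google's actual controller: switches only match and forward on a local table, while a controller with a global view computes and installs all forwarding state.

```python
# Minimal sketch of the control/data-plane split (hypothetical names):
# the data plane only does table lookups; the control plane, running
# centrally, computes routes and pushes rules down to every switch.

class Switch:
    """Data plane: forwards purely by flow-table lookup."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination prefix -> output port

    def install_rule(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        # None means "no rule": the packet would be punted to the controller.
        return self.flow_table.get(dst)

class Controller:
    """Control plane: holds the global view, programs all switches."""
    def __init__(self, switches):
        self.switches = switches

    def program_route(self, dst, hops):
        # hops: list of (switch_name, out_port) along the chosen path
        for name, port in hops:
            self.switches[name].install_rule(dst, port)

switches = {n: Switch(n) for n in ("s1", "s2")}
ctl = Controller(switches)
ctl.program_route("10.0.0.0/24", [("s1", 2), ("s2", 1)])
print(switches["s1"].forward("10.0.0.0/24"))  # -> 2
```

Because all rules originate in one place, the controller can reason about end-to-end paths deterministically instead of waiting for per-box protocols to converge, which is the property the slide calls "more deterministic, more efficient and more fault-tolerant."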
4. Industry and IT Environment Details
• At the start of the project, Google built its own switches (see image) using merchant silicon.
• Google built its own hardware because there wasn't any hardware in the market to fulfill its
needs.
• The only way to get well-defined control- and data-plane APIs at that time was for Google to build the hardware itself.
• Built from merchant silicon: hundreds of ports of non-blocking 10GE.
• OpenFlow support
• Open-source routing stacks for BGP and IS-IS
• Multiple chassis per site – fault tolerance & scale to multiple Tbps
• Fully centralized software control
5. Challenges – Solutions
Challenges:
• Unsustainable CAPEX & OPEX
• Decentralized protocols
• Hardware & software bundled together
• Dependence on the IETF
• Absence of a HYBRID topology
Solutions:
• Network-wide visibility & control – one view of the network as a whole
• Direct control – SDN separates the control plane and the data plane
• Centralized controller – a hierarchy of controllers in the network
• HYBRID approach, i.e., one SDN per data center
• Optimization
6. After-effects of Adopting SDN
• Fate-sharing principle
• Improvement because of the centralized scheme
• Distinguishing between high-value & bulk traffic
• SDN-based peering
7. Why SDN?
• SDN ⇏ Cheap Hardware
• SDN = programmatic decomposition of control, data and management planes
• Well defined APIs ⇒ fundamentally easier operational model
• Separation of control and data planes ⇒ much higher uptime
• Network function virtualization ⇒ new functions rolled out in days (vs years)
9. DC: Getting right to the punch line, what do you see as the biggest improvements you've managed to achieve by going with SDN?
AV: Well, as we were saying earlier, through a combination of centralized traffic engineering and quality-of-service differentiation, we've managed to distinguish high-value traffic from the bulk traffic that's not nearly as latency-sensitive. That has made it possible to run many of our links at near 100 percent utilization levels.
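The admission logic behind running links near 100 percent utilization can be illustrated with a toy allocator (the function name and all numbers below are made up for illustration): latency-sensitive, high-value demand is admitted up to link capacity first, and bulk traffic then soaks up whatever headroom remains.

```python
# Illustrative sketch of priority-based link filling: high-value traffic
# is admitted first; bulk traffic fills the remaining capacity, so the
# link can safely run near 100% utilization.

def allocate(link_capacity_gbps, high_value_demand, bulk_demand):
    high = min(high_value_demand, link_capacity_gbps)
    bulk = min(bulk_demand, link_capacity_gbps - high)
    utilization = (high + bulk) / link_capacity_gbps
    return high, bulk, utilization

high, bulk, util = allocate(100, high_value_demand=35, bulk_demand=500)
print(high, bulk, util)  # -> 35 65 1.0
```

The key point is that bulk traffic is elastic: if high-value demand spikes, bulk transfers simply get less bandwidth rather than the operator having to over-provision the link for the worst case.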
19. Benefits of SDN
• Unified view of the network fabric: With SDN we get a unified view of the network,
simplifying configuration, management and provisioning.
• High utilization: Centralized traffic engineering provides a global view of the supply and
demand of network resources. Managing end-to-end paths with this global view results in
high utilization of the links.
• Faster failure handling: failures, whether link, node, or otherwise, are handled much faster. Furthermore, the system converges more rapidly to the target optimum, and its behavior is predictable.
• Faster time to market/deployment: With SDN, better and more rigorous testing is done
ahead of rollout accelerating deployment. The development is also expedited as only the
features needed are developed.
• Hitless upgrades: The decoupling of the control plane from the forwarding/data plane
enables us to perform hitless software upgrades without packet loss or capacity degradation.
20. • High fidelity test environment: The entire backbone is emulated in software which not only
helps in testing and verification but also in running “what-if” scenarios.
• Elastic compute: Compute capability of network devices is no longer a limiting factor as
control and management resides on external servers/controllers. Large-scale computation,
path optimization in our case, is done using the latest generation of servers.
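The "elastic compute" point above means path optimization becomes an ordinary program running on servers over a global topology snapshot, rather than logic embedded in each router. A minimal sketch (the topology and link costs below are invented for illustration) is a shortest-path computation over the whole network:

```python
# Sketch of path optimization as a server-side computation: given a
# global view of the topology, compute the best end-to-end path with
# Dijkstra's algorithm. Nodes and costs here are made-up examples.
import heapq

def shortest_path(graph, src, dst):
    # graph: node -> {neighbor: link cost}
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path by walking predecessors back from dst.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path)), dist[dst]

topology = {
    "us-east": {"us-west": 70, "eu": 90},
    "us-west": {"us-east": 70, "asia": 120},
    "eu": {"us-east": 90, "asia": 200},
    "asia": {"us-west": 120, "eu": 200},
}
path, cost = shortest_path(topology, "us-east", "asia")
print(path, cost)  # -> ['us-east', 'us-west', 'asia'] 190
```

In production this computation would be far richer (traffic demands, multi-path splits, priorities), but the architectural point stands: it scales with server hardware, not with the CPU on any individual network box.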