Multi-Cloud Network leveraging SD-WAN
Reference Architecture
Author: Matsuo Sawahashi
Division: GTS Japan, Solutioning, Chief Architect
Mail: matsuos@jp.ibm.com
Self-introduction
Name: Matsuo Sawahashi
Company: IBM Japan
Division: Global Technology Services
Title: Executive Architect / Chief Architect
Current job:
• Connected Vehicle Project at my client
• Design multi-cloud networking architecture leveraging SD-WAN and Cloud-Exchanges
• Design connected-vehicle platform architecture on Azure based on Zero Trust Security concept
• Design service quality monitoring system based on SRE (Site Reliability Engineering) principle
• GTS Japan Technical Vitalization Community Leader
• Provide mentoring and round-table sessions for junior engineers
• Provide leading-edge technical seminars
• JUAS (Japan System Users Association) part-time instructor
Certifications
• TOGAF9 certification
• The Open Group Distinguished Architect
Publications
• OpenStack Deep Technique Guide
Executive Summary
• Many clients use multiple public clouds as a result of selecting the cloud that best meets each requirement
• When on-premise DCs and clouds are connected with individual lines, the resulting network becomes complicated, inflexible, and expensive
• This document presents a simple, flexible, and low-cost multi-cloud networking reference architecture that leverages a cloud exchange service with SD-WAN (Software Defined Wide Area Network)
Problem Statement
• If we connect on-premise DCs and multiple clouds with individual lines in a mesh topology, we have to purchase a large number of lines, as illustrated in Chart 1 (see also the link-count sketch below the chart)
• We need to reconfigure many devices whenever we want to change bandwidth or routing
• The result is a network that is complicated, inflexible, and expensive
Chart 1. Individual connectivity: on-premise DCs (East Primary, West Backup) connected through carrier network services to AWS, Azure, and IBM Cloud regions (East and West) over individual lines
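To make the line-count problem concrete, here is a minimal Python sketch (illustrative only; the site names are taken from Chart 1) comparing the circuits needed for a full mesh against a hub model in which each site attaches once to a cloud exchange:

```python
# Minimal sketch: compare circuit counts for mesh vs. cloud-exchange (hub) connectivity.
# Site names are illustrative, taken from Chart 1.
dcs = ["OnPrem-East", "OnPrem-West"]
clouds = ["AWS-East", "AWS-West", "Azure-East", "Azure-West",
          "IBM-East", "IBM-West"]

# Mesh: every on-premise DC buys an individual line to every cloud region,
# plus a line between the DCs themselves.
mesh_circuits = len(dcs) * len(clouds) + len(dcs) * (len(dcs) - 1) // 2

# Hub: each DC and each cloud region attaches once to a cloud exchange;
# the cloud-facing legs are ordered as virtual circuits, not leased lines.
hub_circuits = len(dcs) + len(clouds)

print(f"Mesh topology:      {mesh_circuits} physical circuits")   # 13
print(f"Cloud-exchange hub: {hub_circuits} attachment points")    # 8
```

Adding another cloud region grows the mesh by one line per DC, while the hub model only needs one more virtual circuit at the exchange.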
Solution Idea No.1
• A cloud exchange service provides multi-cloud connectivity: it is pre-connected to many public cloud service providers, and we can order virtual circuits to the cloud providers we need
• We can start using multiple clouds quickly, simply by connecting our existing carrier network services to the cloud exchange service's DCs
• Bandwidth for each cloud can be changed immediately by modifying the corresponding virtual circuit (see the sketch below Chart 2)
Chart 2. Use a cloud exchange service: on-premise DCs (East Primary, West Backup) reach AWS, Azure, and IBM Cloud (East and West) through Cloud Exchange Service East/West, attached via existing carrier network services; CPE (Customer Premises Equipment) sits at the exchange and each cloud is reached over a VC (Virtual Circuit)
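As a minimal sketch of why a bandwidth change becomes a single configuration step, the Python below models a virtual circuit whose bandwidth is just an attribute of the exchange-side service. The CloudExchangeClient class, its methods, and the circuit IDs are hypothetical and do not represent the API of any real exchange provider.

```python
from dataclasses import dataclass

# Hypothetical model of a cloud-exchange virtual circuit (VC).
# It only illustrates that resizing a VC is a logical change,
# not re-cabling or swapping physical devices.
@dataclass
class VirtualCircuit:
    vc_id: str
    cloud: str          # e.g. "Azure-East"
    bandwidth_mbps: int

class CloudExchangeClient:
    """Hypothetical client for a cloud exchange service."""
    def __init__(self):
        self._vcs = {}

    def order_vc(self, vc_id: str, cloud: str, bandwidth_mbps: int) -> VirtualCircuit:
        vc = VirtualCircuit(vc_id, cloud, bandwidth_mbps)
        self._vcs[vc_id] = vc
        return vc

    def resize_vc(self, vc_id: str, bandwidth_mbps: int) -> VirtualCircuit:
        # The physical attachment to the exchange is untouched;
        # only the logical circuit definition changes.
        self._vcs[vc_id].bandwidth_mbps = bandwidth_mbps
        return self._vcs[vc_id]

exchange = CloudExchangeClient()
exchange.order_vc("vc-azure-east", "Azure-East", 200)
exchange.resize_vc("vc-azure-east", 500)   # immediate bandwidth change, no new line
```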
Solution Idea No.2
• This architecture leverages the cloud exchange service together with SD-WAN, running on top of the underlying existing network services
• SD-WAN provides simple and flexible routing and bandwidth management (a path-selection sketch follows Chart 3)
Chart 3. Use a cloud exchange service with SD-WAN: the same topology as Chart 2, with SD-WAN routers as CPE at the on-premise DCs and the cloud exchange DCs, forming an SD-WAN overlay on the underlying existing network services
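The sketch below illustrates, in simplified form, the kind of policy-based path selection an SD-WAN controller performs over multiple underlays. The link names, metrics, thresholds, and application classes are assumptions for illustration, not the configuration model of any specific SD-WAN product.

```python
# Minimal sketch of SD-WAN policy-based path selection over two underlays.
# Link names, metrics, and thresholds are illustrative assumptions.
links = {
    "carrier-mpls": {"latency_ms": 18, "loss_pct": 0.0, "cost_per_mbps": 10},
    "internet-vpn": {"latency_ms": 35, "loss_pct": 0.4, "cost_per_mbps": 1},
}

policies = {
    # Application class -> the worst latency/loss it tolerates.
    "vehicle-telemetry": {"max_latency_ms": 25, "max_loss_pct": 0.1},
    "bulk-analytics":    {"max_latency_ms": 100, "max_loss_pct": 1.0},
}

def select_path(app_class: str) -> str:
    """Pick the cheapest link whose measured quality meets the application policy."""
    policy = policies[app_class]
    eligible = [
        name for name, m in links.items()
        if m["latency_ms"] <= policy["max_latency_ms"]
        and m["loss_pct"] <= policy["max_loss_pct"]
    ]
    # Fall back to the lowest-latency link if nothing meets the policy.
    if not eligible:
        return min(links, key=lambda n: links[n]["latency_ms"])
    return min(eligible, key=lambda n: links[n]["cost_per_mbps"])

print(select_path("vehicle-telemetry"))  # carrier-mpls
print(select_path("bulk-analytics"))     # internet-vpn
```

Because this selection is software-driven, changing routing or bandwidth policy is an edit to the policy table rather than a reconfiguration of many devices.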
Architectural Decision Examples
Issue: How to connect on-premise DCs and multi-cloud DCs
Decision: We will develop HUB-DCs with SD-WAN
Status: Decided
Category: Infrastructure – Networking
Assumptions:
• We must use Microsoft Azure as the main platform for the Connected Vehicle Project, AWS as the secondary platform for an external system developed by a 3rd party, and IBM Cloud for big data analysis
• We may use other cloud services, such as Oracle Cloud and Google, for specific requirements
Options:
1. Use leased lines for each connection (mesh topology)
2. Develop HUB-DCs (hub-and-spoke topology)
3. Develop HUB-DCs with SD-WAN (Software Defined WAN)
Arguments (Rationale): see the option comparison below (an illustrative scoring sketch follows the table)
Risk:
• SD-WAN is a rapidly evolving technology
• We have no experience building an SD-WAN
Implications:
• CPE (Customer Premises Equipment) needs to be deployed at the HUB-DCs to realize SD-WAN
• The HUB-DCs must support the major cloud services such as IBM Cloud, Azure, and AWS
Notes: –

Option   Flexibility   Simplicity   Security   Stability   Change Speed
1        Low           Low          Complex    Low         Low – a few months
2        Medium        Medium       Complex    Medium      Medium – a few weeks
3        High          High         Flexible   High        High – a few hours
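As one way to make the qualitative comparison above explicit, the short Python sketch below converts the ratings into numeric scores with weights. The 1–3 numeric scale and the criterion weights are assumptions for illustration and are not part of the original decision record.

```python
# Illustrative scoring of the three connectivity options from the decision table.
# The 1-3 numeric scale and the criterion weights are assumptions.
scale = {"Low": 1, "Medium": 2, "High": 3, "Complex": 1, "Flexible": 3}

options = {
    "1. Leased lines (mesh)":  {"Flexibility": "Low", "Simplicity": "Low",
                                "Security": "Complex", "Stability": "Low",
                                "Change Speed": "Low"},
    "2. HUB-DCs (hub-spoke)":  {"Flexibility": "Medium", "Simplicity": "Medium",
                                "Security": "Complex", "Stability": "Medium",
                                "Change Speed": "Medium"},
    "3. HUB-DCs with SD-WAN":  {"Flexibility": "High", "Simplicity": "High",
                                "Security": "Flexible", "Stability": "High",
                                "Change Speed": "High"},
}

weights = {"Flexibility": 2, "Simplicity": 1, "Security": 2,
           "Stability": 2, "Change Speed": 1}

for name, ratings in options.items():
    score = sum(scale[value] * weights[criterion] for criterion, value in ratings.items())
    print(f"{name}: {score}")
# Option 3 scores highest, consistent with the decision to build HUB-DCs with SD-WAN.
```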
Value propositions
• Flexibility
• We can build a closed network by combining multiple leased lines and the Internet
• Even if we decide to use a new cloud service, we do not need to make major network changes
• We can separate traffic into multiple closed networks according to application and security requirements
• Scalability
• In addition to increasing line speed, we also have the option of adding lines
• Even if a cloud service's utilization changes, we can change the network capacity immediately
• Low cost
• If we find inexpensive lines, we can switch over to them one by one
• Availability can also be improved by combining inexpensive but lower-quality lines (see the availability sketch below)
• Network changes are realized in software, so change costs are reduced
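To illustrate the availability point with simple arithmetic (the 99% per-line availability figure is an assumption for illustration), two independently failing lines that are each down 1% of the time are both down only about 0.01% of the time:

```python
# Availability of parallel, independently failing lines.
# The per-line availability figure is an illustrative assumption.
def combined_availability(per_line_availability: float, lines: int) -> float:
    """Probability that at least one of `lines` independent lines is up."""
    return 1 - (1 - per_line_availability) ** lines

print(combined_availability(0.99, 1))   # 0.99   -> roughly 3.65 days of downtime per year
print(combined_availability(0.99, 2))   # 0.9999 -> roughly 53 minutes of downtime per year
```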
BGP diagram example – Azure
Diagram: the East HUB-DC and West HUB-DC each run a Customer Edge BGP Router connected over ExpressRoute to the Azure East Region and Azure West Region; on-premise routers in the East DC and West DC reach the HUB-DCs over the WAN; within each region, VNET-East1/VNET-East2 and VNET-West1/VNET-West2 are linked by VNET Peering and host servers (SVR)
• In order to detect a BGP route failure and fail over to a detour route, a server must be placed in a VNET that is connected via VNET Peering to the VNET to which the ExpressRoute circuit belongs (a peering sketch using the Azure SDK follows)
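The following is a minimal sketch, assuming the Azure SDK for Python (azure-identity and azure-mgmt-network), of creating the VNET peering that lets a spoke VNET use the ExpressRoute gateway of the hub VNET. The subscription ID, resource group, and VNET names are placeholders, and the flag combination shown (allow_gateway_transit on the hub side, use_remote_gateways on the spoke side) is the generic hub-and-spoke pattern rather than anything specific to this reference architecture.

```python
# Minimal sketch (assumed libraries: azure-identity, azure-mgmt-network).
# Peers a spoke VNET with the ExpressRoute-attached hub VNET so the spoke
# receives routes via the hub gateway. All names and IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SubResource, VirtualNetworkPeering

subscription_id = "<subscription-id>"
resource_group = "rg-east-hub"
hub_vnet, spoke_vnet = "VNET-East1", "VNET-East2"

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)
hub_id = client.virtual_networks.get(resource_group, hub_vnet).id
spoke_id = client.virtual_networks.get(resource_group, spoke_vnet).id

# Hub side: allow the peered spoke to use this VNET's ExpressRoute gateway.
client.virtual_network_peerings.begin_create_or_update(
    resource_group, hub_vnet, "hub-to-spoke",
    VirtualNetworkPeering(
        remote_virtual_network=SubResource(id=spoke_id),
        allow_virtual_network_access=True,
        allow_forwarded_traffic=True,
        allow_gateway_transit=True,
    ),
).result()

# Spoke side: consume the hub's gateway so ExpressRoute-learned routes reach it.
client.virtual_network_peerings.begin_create_or_update(
    resource_group, spoke_vnet, "spoke-to-hub",
    VirtualNetworkPeering(
        remote_virtual_network=SubResource(id=hub_id),
        allow_virtual_network_access=True,
        allow_forwarded_traffic=True,
        use_remote_gateways=True,
    ),
).result()
```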
BGP diagram example – Azure: Considerations
Considerations
• Azure ExpressRoute has a limit on the number of VNET
that can be peered as following table
• If we want to connect VNET that exceeds the limit, we
need to consider Transit VNET
• However, the BGP route information does not reach the
VNET behind of Transit VNET which is not directly
connected to ExpressRoute VNET
• If we take Transit VNET configuration, we need to install a
BGP device on Transit VNET and must configure BGP
setting to connect ExpressRoute
Number of Virtual Networks per ExpressRoute circuit

ExpressRoute Circuit Size   VNET links (Standard)   VNET links (Premium)
50 Mbps                     10                      20
100 Mbps                    10                      25
200 Mbps                    10                      25
500 Mbps                    10                      40
1 Gbps                      10                      50
2 Gbps                      10                      60
5 Gbps                      10                      75
10 Gbps                     10                      100
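As a small planning aid, the sketch below encodes the table above and flags when a planned VNET count would force the Transit VNET design; the planned counts used in the example are assumptions.

```python
# VNET-link limits per ExpressRoute circuit, taken from the table above (as of May 2018).
VNET_LIMITS = {  # circuit size -> (Standard limit, Premium limit)
    "50 Mbps": (10, 20), "100 Mbps": (10, 25), "200 Mbps": (10, 25),
    "500 Mbps": (10, 40), "1 Gbps": (10, 50), "2 Gbps": (10, 60),
    "5 Gbps": (10, 75), "10 Gbps": (10, 100),
}

def needs_transit_vnet(circuit_size: str, sku: str, planned_vnets: int) -> bool:
    """True if the planned VNET count exceeds the circuit's link limit."""
    standard, premium = VNET_LIMITS[circuit_size]
    limit = premium if sku == "Premium" else standard
    return planned_vnets > limit

# Example (planned VNET counts are illustrative assumptions):
print(needs_transit_vnet("1 Gbps", "Standard", 12))  # True  -> Transit VNET needed
print(needs_transit_vnet("1 Gbps", "Premium", 12))   # False -> direct links suffice
```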
BGP diagram example 2 – Azure
Diagram: the same East/West HUB-DC and ExpressRoute layout as the previous example, with an additional VNET-East-Transit / VNET-West-Transit in each region containing a BGP device; VNET-East1, VNET-East2, VNET-West1, and VNET-West2 connect to the Transit VNET via VNET Peering and host servers (SVR)
• If the number of VNETs that need ExpressRoute connectivity exceeds the upper limit, it is necessary to adopt the Transit VNET configuration and install a BGP device in the Transit VNET
Study of Cloud Exchange Service Providers in Japan
Operating Company   Service Name                   Type      DC Location   Azure   AWS   IBM   Oracle
Equinix             Equinix Cloud Exchange (ECX)   DC        Tokyo         O       O     O     -
                                                             Osaka         O       O     -     -
@Tokyo              Direct Connect                 DC        Tokyo         -       O     O     -
NTT-COM             Multi Cloud Connect            Gateway   -             O       O     -     O
SoftBank            Direct Access                  Gateway   -             O       O     -     -
KDDI                Direct Connect                 Gateway   -             O       O     O     -
• We need to choose the Japan HUB-DC along the following axes:
• DC type – Equinix and @Tokyo (gateway-type exchanges lack flexibility: no CPE capability)
• Azure / AWS support – Equinix and others
• DCs in both East and West Japan – Equinix only

Evaluation result – proposed cloud exchange: Equinix
• DC-type exchange (CPE capability)
• DCs in East and West Japan
• Supports Azure, AWS, and others

As of May 2018
Study of Cloud Exchange Service Providers in US (LA)
Operating Company   Service Name             Type      DC Location    Azure   AWS   IBM    Oracle
Equinix             Equinix Cloud Exchange   DC        LA             O       O     O*1    O*1
Zayo                CloudLink                Gateway   LA (Overlay)   O       O     -      -
NTT-COM             Multi Cloud Connect      Gateway   -              O       -     -      -
Megaport            Direct Connect           Gateway   LA (Overlay)   O       -     -      -
CoreSite            Open Cloud Exchange      DC        LA             O       O     O*1    O*1
• We want to choose the LA HUB-DC along the following axes:
• DC type – Equinix / CoreSite (gateway-type exchanges lack flexibility: no CPE capability)
• Azure and AWS support – Equinix / CoreSite

Evaluation result – proposed cloud exchange: Equinix
• DC-type exchange (CPE capability)
• Supports both AWS and Azure
• Tokyo–LA connectivity

*1 … Connectable via a DC other than LA
As of May 2018
