(ARC403) From One to Many: Evolving VPC Design | AWS re:Invent 2014

As more customers adopt Amazon VPC architectures, the features and flexibility of the service are squaring off against increasingly complex design requirements. This session follows the evolution of a single regional VPC into a multi-VPC, multiregion design with diverse connectivity into on-premises systems and infrastructure. Along the way, we investigate creative customer solutions for scaling and securing outbound VPC traffic, managing multitenant VPCs, conducting VPC-to-VPC traffic, running multiple hybrid environments over AWS Direct Connect, and integrating corporate multiprotocol label switching (MPLS) clouds into multiregion VPCs.


  1. 1. November 11, 2014 | Las Vegas, NV Yinal Ozkan, AWS Principal Solutions Architect
  2. 2. version 3
  3. 3. Prod QA Dev
  4. 4. Region Availability Zone VPC Internet Gateway Internet Users Public Subnet Servers
  5. 5. Region Availability Zone VPC NAT Instance Internet Gateway Internet Users Private Subnet 1 Public Subnet Private Subnet 2
  6. 6. NAT Instance Private Subnet 2 Private Subnet 3 Private Subnet 4 Region Availability Zone 1 VPC NAT Instance Internet Gateway Internet Users Private Subnet 1 Public Subnet 1 Availability Zone 2 Public Subnet 2
  7. 7. Availability Zone B Public Subnet Availability Zone A Private Subnet Public Subnet Private Subnet Instance A 10.1.1.11 /24 Instance C 10.1.3.33 /24 Instance B 10.1.2.22 /24 Instance D 10.1.4.44 /24 VPC CIDR: 10.1.0.0 /16 Route Table Destination Target 10.1.0.0/16 local 10.1.1.0/24 Instance B
  8. 8. AWS region Public-facing web app Internal company app What’s next? VPN connection Customer data center
  9. 9. Public-facing web app Internal company app #2 HA pair VPN endpoints Customer data center Internal company app #3 Internal company app #4 Internal company app #1 Internal company Dev Internal company QA AWS region Backup AD, DNS Monitoring Logging
  10. 10. Public-facing web app Internal company app #2 HA pair VPN endpoints Customer data center Internal company app #3 Internal company app #4 Internal company app #1 Internal company Dev Internal company QA AWS region Backup AD, DNS Monitoring Logging Direct Connect Facility Customer Data Center Physical Connection Logical Connections VLANs Logical Connections VLANs
  11. 11. Public-facing web app Internal company app Customer data center AWS Region #1 Public-facing web app Internal company app AWS Region #2 Customer Backbone Acting As Hub
  12. 12. •Security groups and NACLs still apply AWS region Public-facing web app Internal company app #1 HA pair VPN endpoints company data center Internal company app #2 Internal company app #3 Internal company app #4 Services VPC Internal company Dev Internal company QA AD, DNS Monitoring Logging •Security groups still bound to a single VPC
  13. 13. 10.1.0.0/16 10.0.0.0/16 Peer Request Peer Accept •VPCs within the same region •Same or different accounts •IP space cannot overlap •Only one peering connection between any two VPCs
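A minimal AWS CLI sketch of that request/accept handshake (not from the deck; the VPC and peering IDs are placeholders, and --peer-owner-id would be added for a cross-account peer):
# Requester side: propose the peering connection
$ aws ec2 create-vpc-peering-connection --vpc-id vpc-aaaaaaaa --peer-vpc-id vpc-bbbbbbbb
# Accepter side: approve it
$ aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-xxxxxxxx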
  14. 14. 10.1.0.0/16 ↔ 10.0.0.0/16 ✔ (non-overlapping CIDRs can peer); 10.0.0.0/16 ↔ 10.0.0.0/16 ✗ (overlapping CIDRs cannot)
  15. 15. 10.0.0.0/16 10.0.0.0/16 PCX-1 PCX-2 Subnet 1 10.1.1.0/24 Subnet 2 10.1.2.0/24 10.1.0.0/16 Route Table Subnet 1 Destination Target 10.1.0.0/16 local 10.0.0.0/16 PCX-1 Route Table Subnet 2 Destination Target 10.1.0.0/16 local 10.0.0.0/16 PCX-2 A B C
  16. 16. 10.1.0.0/16 10.0.0.0/16 Route Table Destination Target 10.1.0.0/16 local 10.0.0.0/16 PCX-1 Route Table Destination Target 10.0.0.0/16 local 10.1.0.0/16 PCX-1 PCX-1 •No IGW or VGW required A B •No SPoF •No bandwidth bottlenecks
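Traffic only flows once each side adds a route over the peering connection, as in the route tables above; a sketch with placeholder IDs:
# VPC A (10.1.0.0/16): route VPC B's CIDR through the peering connection
$ aws ec2 create-route --route-table-id rtb-aaaaaaaa --destination-cidr-block 10.0.0.0/16 --vpc-peering-connection-id pcx-xxxxxxxx
# VPC B (10.0.0.0/16): mirror route back to VPC A
$ aws ec2 create-route --route-table-id rtb-bbbbbbbb --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-xxxxxxxx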
  17. 17. 10.0.0.0/16 10.2.0.0/16 PCX-1 PCX-2 Subnet 1 10.1.1.0/24 Subnet 2 10.1.2.0/24 10.1.0.0/16 Route Table Subnet 1 Destination Target 10.1.0.0/16 local 10.0.1.11/32 PCX-1 Route Table Subnet 2 Destination Target 10.1.0.0/16 local 10.2.0.0/16 PCX-2 A B C Subnet 3 Route Table Subnet 3 Destination Target 10.0.0.0/16 local 10.1.1.0/24 PCX-1 10.0.1.11 10.0.0.0/16
  18. 18. 10.1.0.0/16 10.4.0.0/16 10.0.0.0/16 10.3.0.0/16 172.16.0.0/16 192.168.0.0/16 10.2.0.0/16 172.17.0.0/16 C A
  19. 19. 10.1.0.0/16 10.4.0.0/16 10.0.0.0/16 10.3.0.0/16 172.16.0.0/16 192.168.0.0/16 10.2.0.0/16 172.17.0.0/16 company data center 10.10.0.0/16
  20. 20. 10.1.0.0/16 10.4.0.0/16 10.0.0.0/16 10.3.0.0/16 172.16.0.0/16 192.168.0.0/16 10.2.0.0/16 172.17.0.0/16 company data center 10.10.0.0/16
  21. 21. 10.4.0.0/16 10.0.0.0/16 172.16.0.0/16 192.168.0.0/16 172.17.0.0/16 10.1.0.0/16 10.2.0.0/16 10.3.0.0/16
  22. 22. How many connections? The number of unique connections in a full mesh of n nodes is the triangular number n(n − 1)/2.
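For example, a full mesh of 8 VPCs needs 8(8 − 1)/2 = 28 peering connections, and adding a ninth VPC means configuring 8 more.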
  23. 23. 10.4.0.0/16 10.0.0.0/16 10.3.0.0/16 172.16.0.0/16 192.168.1.0/24 10.2.0.0/16 172.17.0.0/16
  24. 24. Public-facing web app HA pair VPN endpoints Customer data center Prod QA Dev region
  25. 25. Peer review •Shared infrastructure services moved to a Services VPC •1-to-1 peering = app isolation •Security groups and NACLs still apply AWS region Public-facing web app Internal company app #1 HA pair VPN endpoints company data center Internal company app #2 Internal company app #3 Internal company app #4 Services VPC Internal company Dev Internal company QA AD, DNS Monitoring Logging •Security groups still bound to a single VPC
  26. 26. Public-facing web app Internal company app AWS Region #1 Public-facing web app Internal company app AWS Region #2 VPN Gateway VGW IGW
  27. 27. Public Subnet Availability Zone A Private Subnet Public Subnet Availability Zone B Private Subnet NAT A Public: 54.200.129.18 Private: 10.1.1.11 /24 Instance C 10.1.3.33 /24 Instance B 10.1.2.22 /24 Instance D 10.1.4.44 /24 Internet Amazon S3 Amazon DynamoDB AWS region Other private subnets can share the same routing table and use the NAT. But…
  28. 28. Public Subnet Availability Zone A Private Subnet Public Subnet Availability Zone B Private Subnet NAT A Public: 54.200.129.18 Private: 10.1.1.11 /24 Instance B 10.1.2.22 /24 Internet Amazon S3 Amazon DynamoDB AWS region … you could reach a bandwidth bottleneck if your private instances grow and their NAT-bound traffic grows with them.
  29. 29. Availability Zone A Availability Zone B Private Subnet Internet Amazon S3 Amazon DynamoDB AWS region Public Subnet Public Subnet NAT Customers Public load balancer Web Servers •Image processing app with high outbound traffic to Amazon S3 Direct to Amazon S3 Public Elastic Load Balancing Subnet Private Subnet Public ELB Subnet Multi-AZ Auto Scaling group Auto Scaling group •Public load balancer (Elastic Load Balancing) receives incoming customer HTTP/S requests •Auto Scaling assigns a public IP to new web servers •With public IPs, web servers initiate outbound requests directly to Amazon S3 •NAT device still in place for private subnets
  30. 30. Sample launch configuration (named “hi-bandwidth-public”): $ aws autoscaling create-launch-configuration --launch-configuration-name hi-bandwidth-public --image-id ami-xxxxxxxx --instance-type m1.xlarge --associate-public-ip-address
  31. 31. Availability Zone A Private Subnet Availability Zone B Private Subnet Internet Amazon S3 AWS region Public Subnet Public Subnet NAT •Use Auto Scaling for NAT availability •Create 1 NAT per Availability Zone •All private subnet route tables point to the same-zone NAT •1 Auto Scaling group per NAT with min and max size set to 1 •Let Auto Scaling monitor the health and availability of your NATs •NAT bootstrap script updates route tables programmatically Auto scale HA NAT NAT Amazon DynamoDB
  32. 32. Sample HA NAT Auto Scaling group (named “ha-nat-asg”): $ aws autoscaling create-auto-scaling-group --auto-scaling-group-name ha-nat-asg --launch-configuration-name ha-nat-launch --min-size 1 --max-size 1 --vpc-zone-identifier subnet-xxxxxxxx
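A minimal sketch of what such a NAT bootstrap script does at boot (the deck's actual script is linked on the next slide; the route table ID is a placeholder):
# Discover this instance's ID from instance metadata
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# A NAT must be able to forward traffic it did not originate
aws ec2 modify-instance-attribute --instance-id $INSTANCE_ID --no-source-dest-check
# Point the private subnet's default route at this NAT (create-route on first boot)
aws ec2 replace-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --instance-id $INSTANCE_ID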
  33. 33. Latest version of the script: https://github.com/ralex-aws/vpc
  34. 34. Instance type network performance: t1.micro Very Low; m1.small Low; m1.large Moderate; m1.xlarge, c3.2xlarge High
  35. 35. Public Subnet Availability Zone A Private Subnet Public Subnet Availability Zone B Private Subnet Instance A Public: 54.200.129.18 Private: 10.1.1.11 /24 Instance C 10.1.3.33 /24 Instance B 10.1.2.22 /24 Instance D 10.1.4.44 /24 Route Table Internet Amazon S3 Amazon DynamoDB AWS region AWS outside the VPC
  36. 36. Availability Zone A Private Subnet Private Subnet AWS region Virtual Private Gateway VPN connection Customer data center Intranet App Intranet App Availability Zone B Internal customers All private application Route Table Destination Target 10.1.0.0/16 local Corp CIDR VGW
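A sketch of the static route shown in that route table, assuming the corporate CIDR 10.10.0.0/16 used earlier in the deck and placeholder IDs:
$ aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 10.10.0.0/16 --gateway-id vgw-xxxxxxxx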
  37. 37. Amazon S3
  38. 38. Availability Zone A Private Subnet Private Subnet AWS region VPN connection Customer data center Intranet App Intranet App Availability Zone B Internal customers Controlling the border Internal Load balancer Elastic Load Balancing Private Subnet Elastic Load Balancing Private Subnet ELB Multi AZ Auto Scaling group •Deploy internal Elastic Load Balancing layer across Availability Zones •Add all instances allowed outside access to a security group •Use this security group as the only source allowed access to the proxy port in the load balancer’s security group
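A sketch of that security-group chaining (the group IDs are placeholders, and Squid's default port 3128 is an assumption):
# Internal ELB security group admits the proxy port only from the approved-instances group
$ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 3128 --source-group sg-yyyyyyyy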
  39. 39. Availability Zone A Private Subnet(s) Private Subnet(s) AWS region VPN connection Customer data center Intranet App Intranet App Availability Zone B Internal customers Controlling the border Internal Load balancer Elastic Load Balancing Private Subnet Elastic Load Balancing Private Subnet •Squid Proxy layer deployed between internal load balancer and the IGW border. Proxy Public Subnet Proxy Public Subnet Amazon S3 HTTP/S Multi AZ Auto Scaling group •Only proxy subnets have route to IGW. •Proxy security group allows inbound only from Elastic Load Balancing security group. •Proxy restricts which URLs may pass. In this example, s3.amazonaws.com is allowed. •Egress NACLs on proxy subnets enforce HTTP/S only.
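A sketch of one such egress NACL entry (the ACL ID is a placeholder; a matching rule for port 80 plus inbound ephemeral-port entries for return traffic would accompany it, since NACLs are stateless):
$ aws ec2 create-network-acl-entry --network-acl-id acl-xxxxxxxx --rule-number 100 --egress --protocol tcp --port-range From=443,To=443 --cidr-block 0.0.0.0/0 --rule-action allow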
  40. 40. # CIDR and destination-domain-based allow
# CIDR subnet blocks for internal ELBs
acl int_elb_cidrs src 10.1.3.0/24 10.1.4.0/24
# Destination domain for target S3 bucket
acl s3_v2_endpoints dstdomain $bucket_name.s3.amazonaws.com
# Squid does AND on both ACLs for allow match
http_access allow int_elb_cidrs s3_v2_endpoints
# Deny everything else
http_access deny all
  41. 41. Using Squid Proxy Instances for web service access in Amazon VPC: http://aws.amazon.com/articles/5995712515781075
  42. 42. More on private connections
  43. 43. Customer data center AWS Direct Connect location AWS Direct Connect Private Virtual Interface (PVI) connects to VGW on VPC •1 PVI per VPC •802.1Q VLAN tags isolate traffic across AWS Direct Connect Private fiber connection One or multiple 50–500 Mbps, 1 Gbps, or 10 Gbps pipes Simplify with AWS Direct Connect Public-facing web app AWS region Prod QA Dev
  44. 44. A few bits on AWS Direct Connect… •Dedicated, private pipes into AWS •Create private (VPC) or public interfaces to AWS •Cheaper data-out rates than Internet (data-in still free) •Consistent network performance compared to Internet •At least 1 location to each AWS region (even GovCloud!) •Recommend redundant connections •Multiple AWS accounts can share a connection
  45. 45. Multiple VPCs over AWS Direct Connect: Customer internal network, Customer Switch + Router, sub-interfaces on VLANs 101 / 102 / 103.
PVI 1: Customer Interface 0/1.101, VLAN Tag 101, BGP ASN 65001, Interface IP 169.254.251.6/30, BGP Announce Customer Internal; AWS side VLAN Tag 101, BGP ASN 7224, Interface IP 169.254.251.5/30, BGP Announce 10.1.0.0/16; terminates on VGW 1 / VPC 1 (10.1.0.0/16).
PVI 2: Customer Interface 0/1.102, VLAN Tag 102, BGP ASN 65002, Interface IP 169.254.251.10/30; AWS side BGP ASN 7224, Interface IP 169.254.251.9/30, BGP Announce 10.2.0.0/16; terminates on VGW 2 / VPC 2 (10.2.0.0/16).
PVI 3: Customer Interface 0/1.103, VLAN Tag 103, BGP ASN 65003, Interface IP 169.254.251.14/30; AWS side BGP ASN 7224, Interface IP 169.254.251.13/30, BGP Announce 10.3.0.0/16; terminates on VGW 3 / VPC 3 (10.3.0.0/16).
Customer route table: 10.1.0.0/16 → PVI 1; 10.2.0.0/16 → PVI 2; 10.3.0.0/16 → PVI 3.
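Each PVI above can be provisioned through the Direct Connect API; a sketch for PVI 1 (the connection and VGW IDs are placeholders):
$ aws directconnect create-private-virtual-interface --connection-id dxcon-xxxxxxxx --new-private-virtual-interface virtualInterfaceName=PVI-1,vlan=101,asn=65001,virtualGatewayId=vgw-xxxxxxxx,amazonAddress=169.254.251.5/30,customerAddress=169.254.251.6/30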
  46. 46. AWS Direct Connect in the United States AWS Direct Connect Equinix, San Jose us-west-1 us-west-2 us-east-1 AWS Private Network Disaster Recovery VPN to VGW
  47. 47. Public AWS + VPCs over AWS Direct Connect: Customer internal network, Customer Switch + Router, VLANs 101 / 102 / 103 / 501.
Public Virtual Interface 1: Customer Interface 0/1.501, VLAN Tag 501, BGP ASN 65501 (or public ASN), Interface IP public /30 provided, BGP Announce Customer Public; AWS side VLAN Tag 501, BGP ASN 7224, BGP Announce AWS Regional Public CIDRs, Interface IP public /30 provided; reaches Public US AWS Regions through a NAT / PAT security layer.
VPC 1 (10.1.0.0/16) VGW 1, VPC 2 (10.2.0.0/16) VGW 2, VPC 3 (10.3.0.0/16) VGW 3 as before.
Customer route table: 10.1.0.0/16 → PVI 1; 10.2.0.0/16 → PVI 2; 10.3.0.0/16 → PVI 3; Public AWS → PVI 5.
  48. 48. See what your VGW sees. Before / Enable / After (console screenshots not captured in this transcript)
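If the Before/Enable/After here refers to VGW route propagation into the VPC route table, the console toggle has a CLI equivalent; a sketch with placeholder IDs:
$ aws ec2 enable-vgw-route-propagation --route-table-id rtb-xxxxxxxx --gateway-id vgw-xxxxxxxx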
  49. 49. Customer routers Customer internal network AWS Direct Connect routers AWS region AWS Direct Connect location Multiple physical connections: •Active/Active links via BGP multipathing •Active/Passive also an option •BGP MEDs or local preference can influence routing •Bidirectional Forwarding Detection (BFD) protocol supported
  50. 50. Going global: Customer routers Customer global MPLS backbone network US-East-1 AWS region AWS Direct Connect location: Virginia or NYC Customer routers AWS Direct Connect routers AWS Direct Connect location: Ireland or London EU-West-1 AWS region AWS Direct Connect routers
  51. 51. With AWS regions just another spoke on your global network, it’s easy to bring the cloud down to you as you expand around the world. US customer data center EU-West-1 region EU customer data center Customer MPLS backbone AWS Direct Connect PoP Ireland or London US-West-1 region AWS Direct Connect PoP Virginia or NYC AP-Southeast-1 region AWS Direct Connect PoP Singapore AP customer data center
  52. 52. http://bit.ly/awsevals
