
(NET302) Delivering a DBaaS Using Advanced AWS Networking


Delivering a managed database-as-a-service in a highly secure and simple way can be a challenging problem, especially when your customers have many different network and access requirements. We went through many iterations trying to find a model that was easy to support, but also gave our customers control and visibility. In this session, we explore the incredibly flexible AWS networking solutions that we have used to deliver our services to customers with wildly different architectures. This is an advanced session for those who want deliver services into complex or divergent network architectures, while still maintaining control of the infrastructure that your services are deployed on.



  1. © 2015, Amazon Web Services, Inc. or its Affiliates. All rights reserved. Ben Bromhead, Instaclustr. October 2015. NET302: Delivering a DBaaS Using Advanced AWS Networking
  2. Who am I? • Ben Bromhead, CTO @ Instaclustr. What does Instaclustr do? • Cassandra as a Service • Managing 300+ instances • 95% on Amazon Web Services
  3. What to Expect from the Session • Exploration of the challenges faced delivering a DBaaS • How and when to use AWS networking features to solve these challenges • A (meandering) history of our AWS journey
  4. Some basics. What is Cassandra? • A scalable, highly available OLTP database • Inspired by Amazon's Dynamo and Google's BigTable papers • Tunable consistency • Topology-aware clients. What a Cassandra DBaaS should look like: • High throughput / low latency • Secure • Easy
  5. Challenge #1: multi-tenancy
  6. Our first attempt at multi-tenancy. How we first started: • Multi-tenancy was achieved by deploying resources under our customers' own AWS accounts • A limited-access IAM user • Billing done via Amazon DevPay
  7. Multi-tenancy and Cassandra • Cassandra is a scale-out OLTP / operational database, designed for use cases that grow beyond a single server • There is no point trying to multi-tenant within Cassandra itself • Other than at the app level, 99% of multi-tenant use cases don't make sense for a highly scalable DB like Cassandra • We needed to multi-tenant at the cluster level
  8. Multi-tenancy by AWS account [Diagram: one Cassandra node per Availability Zone (A, B, C) in us-east-1, deployed per customer account, Customer 1 … Customer N]
  9. Multi-tenancy by AWS account. Pros: • Deployed in the customer's account, so access was simple • Billing was simple. Cons: • Changing over to VPC was hard • No two AWS accounts are the same • Billing wasn't flexible • Customers would mess with our stuff • Unable to detect AZ capacity
  10. Time to change! Run everything under our own AWS account!
  11. Multi-tenancy by VPC. Pros: • Reduced support overhead • Flexible billing • Simplified AWS interface • We now know our AZ capacity. Cons: • Had to rewrite everything • Had to do our own billing. We used this opportunity to move across to using VPCs… but how do customers connect?
  12. Multi-tenancy by VPC [Diagram: one VPC per customer (Customer 1, Customer 2, … Customer N) in us-east-1, each with Cassandra nodes spread across three Availability Zones]
  13. Multi-tenancy by VPC. Side effects include: we now have lots and lots of VPCs, and multiple accounts to get around VPC hard limits…
  14. When to multi-tenant with VPC: 1. The service you provide is a network service 2. The service you provide is directly related to resource consumption (CPU, RAM, etc.) 3. The service you deploy leverages a complex network configuration (multi-region, multi-AZ)
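A per-customer VPC layout only works if every tenant gets a unique, non-overlapping CIDR block. A minimal sketch of one way to carve those blocks out of a base range (the base range and helper name are illustrative, not Instaclustr's actual scheme):

```python
import itertools
import ipaddress

# Illustrative base range; a real deployment would record allocations durably.
BASE = ipaddress.ip_network("10.0.0.0/8")

def customer_cidr(index: int) -> ipaddress.IPv4Network:
    """Return the index-th /24 block for a new customer VPC.

    Successive indexes yield non-overlapping subnets, so routing and
    peering between tenant VPCs can never collide.
    """
    return next(itertools.islice(BASE.subnets(new_prefix=24), index, None))
```

For example, customer 0 gets 10.0.0.0/24, customer 1 gets 10.0.1.0/24, and so on.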
  15. Challenge #2: connectivity
  16. Support connectivity from outside AWS • Hybrid clusters that span cloud and private data centers • Support multi-region Cassandra clusters • Support developers connecting from their personal machines • The occasional service running with a different provider. Resulting requirement: • Support connectivity from outside an AWS region
  17. Luckily, Cassandra is awesome… • Cassandra natively understands NAT'd environments • Deploy instances in a subnet with an Internet gateway • Give every node a public IP • Sprinkle in some security group magic and Cassandra authentication. Problem solved!
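The public/private split described above maps onto a handful of `cassandra.yaml` settings. A minimal sketch with placeholder addresses (exact settings vary by Cassandra version):

```yaml
listen_address: 10.0.1.15            # private VPC address the node binds to
broadcast_address: 203.0.113.10      # public IP advertised to peer nodes
broadcast_rpc_address: 203.0.113.10  # public IP advertised to clients
# Alternatively, this snitch derives the public address automatically on EC2
# and keeps intra-region traffic on private IPs:
endpoint_snitch: Ec2MultiRegionSnitch
```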
  18. Cassandra with public IPs [Diagram: three Cassandra nodes, each in its own VPC subnet, behind a security group and an Internet gateway]
  19. When to deliver services via public IP: 1. You want people to actually use your service…
  20. Support Heroku customers. Heroku is a Platform as a Service that runs on top of AWS – we cannot dictate the IP it connects from. Resulting requirement: • Support secure global ingress (aka, allow all)
  21. Cassandra with public IPs [Diagram: the same three-node, public-IP setup]
  22. Cassandra with public IPs [Diagram: the same three-node, public-IP setup]
  23. Luckily, Cassandra is awesome… • Add 0.0.0.0/0 to the security group… • Cassandra supports client-to-node certificate authentication. Problem solved!
  24. Cassandra with public IPs [Diagram: the same three-node, public-IP setup]
  25. When to support universal ingress: 1. Your customers are unlikely to have a static IP 2. Complex / changing access patterns 3. Your service can support robust authentication
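The client-to-node certificate authentication mentioned on slide 23 is configured in `cassandra.yaml`. A sketch, with placeholder keystore paths and passwords:

```yaml
client_encryption_options:
  enabled: true
  keystore: /etc/cassandra/conf/server-keystore.jks
  keystore_password: changeit       # placeholder
  require_client_auth: true         # reject clients without a trusted certificate
  truststore: /etc/cassandra/conf/client-truststore.jks
  truststore_password: changeit     # placeholder
```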
  26. Support private connectivity within AWS • Some customers think that accessing their database over a public IP address is scary • Not all applications have direct Internet access (e.g. an app-layer tier) • This was easy to do with EC2-Classic. Resulting requirement: • Support access to Cassandra via private IP
  27. Support private connectivity within AWS. This could actually have been impossible within a VPC…
  28. Luckily, AWS is awesome… By the time we had started to look at VPCs as our preferred environment, AWS had introduced the last feature we needed: • VPC peering
  29. VPC peering [Diagram: the Instaclustr AWS account's VPCs peered with customer AWS accounts' VPCs in us-east-1]
  30. VPC peering – total control on both sides [Diagram: security groups on both the Instaclustr side and the customer side of each peering connection]
  31. VPC peering is our most-used AWS feature. 70% of our production clusters have one or more VPC peering connections with another account. • Critical to adoption within the enterprise • Critical for multi-tier architectures where the app layer does not have external egress • We almost always need to educate the customer • You still incur inter-AZ traffic charges • Your us-east-1a is not the same as my us-east-1a
  32. When to use VPC peering: 1. The resources accessing your service are located in AWS. 2. You provide a service used by the app / DB tier.
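One operational detail behind the peering model: AWS rejects a VPC peering connection when the two VPCs' CIDR blocks overlap, so it pays to check before provisioning. A minimal sketch:

```python
import ipaddress

def can_peer(cidr_a: str, cidr_b: str) -> bool:
    """VPC peering requires the two VPCs' CIDR ranges not to overlap."""
    return not ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))
```

For example, a tenant VPC on 10.0.1.0/24 can peer with a customer VPC on 172.31.0.0/16, but not with one whose range contains 10.0.1.0/24.
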
  33. Challenge #3: custom solutions
  34. Supporting complex / custom requirements. One crucial component of success with any XaaS business is ensuring uniformity of customer accounts: • Reduces support cost per account • Ensures a consistent experience across customers • One-off solutions still haunt us • But… one-off solutions have also won us accounts and have been rolled into production features (eventually)
  35. Leverage AWS components. We try to always leverage AWS components for one-off solutions within customer VPCs: • Primarily enabled by our VPC multi-tenanting approach – it does not impact other customers • It's always a proven and managed solution • Easy to bring into the fold when we support it properly
  36. Custom solutions: an example. A customer wants access to the underlying Cassandra data files for data sovereignty and offline analytics. • Luckily, we back up all snapshots to Amazon S3 • We didn't want to write a whole snapshot-access UI and service for our website • Instead, we just provided read-only IAM credentials to the S3 bucket containing those snapshots
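The read-only credentials boil down to an IAM policy scoped to the snapshot bucket. A sketch of such a policy (the helper function and bucket name are hypothetical):

```python
def snapshot_readonly_policy(bucket: str) -> dict:
    """Build an IAM policy granting list/read access to a single S3 bucket."""
    arn = f"arn:aws:s3:::{bucket}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            # Listing applies to the bucket ARN; reading applies to object ARNs.
            {"Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": [arn]},
            {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": [arn + "/*"]},
        ],
    }
```

Because the policy names only read actions on one bucket, the customer can pull snapshot files but cannot modify them or touch anything else in the account.
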
  37. Custom solutions: a second example. A customer wants to migrate their existing on-premises cluster to AWS/Instaclustr. • No public IP access to their cluster • Use an AWS virtual private gateway to connect to their VPN concentrator • Let Cassandra's multi-DC support handle the data sync…
  38. Key takeaways • Using a VPC per service simplifies multi-tenancy • VPCs offer a number of connectivity options • Ensure your service supports robust authentication • VPC multi-tenancy allows custom connectivity and functionality without impacting other customers
  39. Thank you!
  40. Remember to complete your evaluations!
