What do you do when your usual setup or turnkey solution isn’t suited for your workload?
Most of the documentation and user feedback you can find about OpenStack is written for the use case of running a public-facing cloud serving many external customers. When you want to host a single tenant with a single application, the problem is completely different: you don't want publicly exposed APIs. You want to ensure optimal resource allocation to maximize your application's performance. You want to leverage the fact that you own the infrastructure layer to optimize your instance placement strategy, using affinity (or anti-affinity) rules to get the best latency and to avoid creating SPOFs.
This talk will focus on what we learned during a two-year journey: from getting OpenStack up and running reliably, to investigating performance bottlenecks, to maximizing the performance of our private cloud.
OpenStack Summit Tokyo 2015 - Building a private cloud to efficiently handle 40 billion requests per day
1. Building a Private Cloud to Efficiently
Handle 40 Billion Requests / Day
October 28th, 2015
Pierre Gohon | Sr. Site Reliability Engineer | pierre.gohon@tubemogul.com
Pierre Grandin | Sr. Site Reliability Engineer | pierre.grandin@tubemogul.com
2. Who are we?
TubeMogul (Nasdaq : TUBE)
● Enterprise software company for digital branding
● Over 27 Billion Ads served in 2014
● Over 40 Billion Ad Auctions per day in Q3 2015
● Bids processed in less than 50 ms
● Bids served in less than 80 ms (inc. network round trip)
● 5 PB of monthly video traffic served
● 1.6 EB of data stored
3. Who are we?
Operations Engineering
● Ensure the smooth day to day operation of the platform
infrastructure
● Provide a cost effective and cutting edge infrastructure
● Provide support to dev teams
● Team composed of SREs, SEs and DBAs (US and UA)
● Managing over 2,500 servers (virtual and physical)
5. ● 6 AWS Regions (us-east*2, us-west*2, europe, apac)
● Physical servers in Michigan / Arizona (Web/Databases)
● DNS served by third party (UltraDNS +Dynect)
● External monitoring using Catchpoint
● CDNs to deliver content
● External security audits
We’re not adding complexity!
Before Openstack: we’re already very “Hybrid”…
9. ● DIY?
○ Small Ops team
■ 12 members in two timezones
■ Only 3 dedicated to OpenStack
○ New challenges
■ Internal training
■ Little external support vs. AWS (really?)
■ Managing data centers (servers, network, …)
OpenStack challenges - Operational aspect
OpenStack challenges - Operational aspect
10. ● Are applications AWS-dependent?
○ Internal ops tools
○ Developers' applications
○ AWS S3, DynamoDB, SNS, SQS, SES, SWF
● Convert developers to the project: we need their support
● OpenStack release cycle (when should we update to the
latest version?)
● Which OpenStack components do we really need?
● How far do we go? (S3 replacement? Network control?
Hardware control?)
OpenStack challenges - Application migration aspect
11. ● Managing our own ASN / IPs (v4/v6)
● Choose best-for-our-needs transit providers (tier 1)
● Better control of routes to/from our endpoints
● Allow dedicated AWS connections (and others)
● Allow direct peerings with ad networks
● Be accountable for our own networking issues
● Cost control
How? Networking - External connectivity
12. ● Applications are already designed for redundancy / the cloud
● Circumvent virtualized networking limitations
● Fine-tune bare-metal nodes for HAProxy
● Future equipment is "cloud ready" (Nexus 5K as
top-of-rack switch)
○ automatic switch configuration
○ Cisco software evolutions?
● 1G for admin, X*10G for public?
● Leverage multicast?
How? Networking - Hybrid physical / virtualized
15. ● If you are not building a multi-thousand-hypervisor cloud,
you don't need it to be complex
● Simplicity eases day-to-day operations
● Home-made Puppet catalog
○ fewer lines of code
○ gentler learning curve
○ easier to tweak settings (ulimit?)
● No need for Horizon
● No need for shared storage
How? Keep it simple
16. ● Affinity / anti-affinity rules
○ Enforce resiliency using anti-affinity rules
○ Improve performance using affinity rules
How? Leverage your knowledge of your infrastructure
{"profile": "OpenStack", "cluster": "rtb-hbase", "hostname": "rtb-hbase-region01", "nagios_host": "mgmt01"}
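In Nova, these placement rules are enforced through server groups and the ServerGroupAffinityFilter / ServerGroupAntiAffinityFilter scheduler filters. The core anti-affinity idea can be sketched in a few lines (a minimal illustration only; `pick_host` and the `placements` map are hypothetical, not OpenStack APIs):

```python
# Minimal sketch of the anti-affinity idea behind Nova server groups:
# never place two instances of the same cluster on one hypervisor.
# All names here are illustrative, not real OpenStack interfaces.

def pick_host(hosts, placements, cluster):
    """Return the first host that runs no instance of `cluster`."""
    for host in hosts:
        if cluster not in placements.get(host, set()):
            return host
    raise RuntimeError("anti-affinity cannot be satisfied: add hypervisors")

hosts = ["hv01", "hv02", "hv03"]
placements = {"hv01": {"rtb-hbase"}, "hv02": set()}

host = pick_host(hosts, placements, "rtb-hbase")
print(host)  # hv02: hv01 already runs an rtb-hbase instance
```

Affinity is the mirror image: for latency-sensitive pairs (e.g. an app and its cache), the scheduler prefers a host that already runs a member of the group.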
18. Infrastructure As Code
● Follow standard development lifecycle
● Repeatable and consistent server
provisioning
Continuous Delivery
● Iterate quickly
● Automated code review to improve code
quality
Reliability
Improve Production Stability
Enforce Better Security Practices
How? Continuous Delivery
19. ● We already have a lot of automation:
● ~10,000 Puppet deployments last year
● Over 8,500 production deployments via Jenkins last year
● On the infrastructure:
○ masterless mode for the deployment
○ master mode once the node is up and running
● On the VMs:
○ Puppet run is triggered by cloud-init, directly at boot
○ from boot to production ready: <5 minutes
Puppet
See also: http://www.slideshare.net/NicolasBrousse/puppet-camp-paris-2015
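Triggering a Puppet run from cloud-init boils down to a user-data fragment along these lines (a hypothetical sketch: the Puppet server name and agent flags are assumptions, not our actual configuration):

```yaml
#cloud-config
# Sketch: run the Puppet agent once at first boot, so the VM converges
# to its role before entering production. Server name is illustrative.
runcmd:
  - puppet agent --onetime --no-daemonize --server puppet.example.com
```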
21. Gerrit, an industry standard: used by OpenStack, Eclipse, Google, Chromium,
WikiMedia, LibreOffice, Spotify, GlusterFS, etc.
Fine Grained Permissions Rules
Plugged into LDAP
Code Review per commit
Stream Events
Integrated with Jenkins, Jira and Hipchat
Managing about 600 Git repositories
Infrastructure As Code - Gerrit Integration
22. Infrastructure As Code - Gerrit in Action
Automatic verify: -1 if the commit doesn't pass Jenkins code validation
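Gerrit's stream-events feed (normally read over SSH with `gerrit stream-events`) is what lets Jenkins react to each pushed commit. A minimal sketch of the consuming side, with illustrative trigger logic:

```python
import json

# Sketch: decide from one Gerrit stream-events JSON line whether Jenkins
# should run validation. The "patchset-created" event type is part of
# Gerrit's event stream; the trigger policy itself is illustrative.

def should_verify(raw_event):
    event = json.loads(raw_event)
    # Validate every newly pushed patch set; ignore comments, merges, etc.
    return event.get("type") == "patchset-created"

sample = '{"type": "patchset-created", "change": {"project": "puppet"}}'
print(should_verify(sample))  # True
```

On a failed validation, the Jenkins job votes Verified -1 on the change, which is what blocks bad commits from being merged.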
26. Infrastructure As Code - Safe upgrade paths
Easy as 1-2-3:
1. Test your upgrades using Jenkins
2. Deploy the upgrade by pressing a
single button*
3. Enjoy the rest of your day
* https://github.com/pgrandin/lcam
fig. 1: N. Brousse, Sr. Director of Operations Engineering,
switching our production workload to OpenStack
28. Monitor as much as you can?
● Existing monitoring (Nagios, Graphite) still in use
● Specific checks for OpenStack
○ check component APIs: performance /
availability / operability
○ check resources: ports, failed instances
● Monitoring capacity metrics for all hardware
● SNMP traps for network equipment
● OpenStack monitoring is just an extension of our
existing monitoring in AWS
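An OpenStack-specific API check can be sketched as a Nagios-style plugin that grades response latency (the thresholds and the probe callable are illustrative assumptions, not our actual check):

```python
import time

# Sketch of a Nagios-style check for an OpenStack API endpoint: time a
# probe (e.g. a GET on the nova API) and map latency to Nagios exit codes.

OK, WARNING, CRITICAL = 0, 1, 2

def check_api(probe, warn=0.5, crit=2.0):
    """Run `probe()` and grade its latency in seconds."""
    start = time.monotonic()
    try:
        probe()
    except Exception:
        return CRITICAL  # API unreachable or returning errors
    elapsed = time.monotonic() - start
    if elapsed >= crit:
        return CRITICAL
    if elapsed >= warn:
        return WARNING
    return OK

print(check_api(lambda: None))  # 0 (OK): the no-op probe returns instantly
```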
29. Monitoring auto-discovery
● New OpenStack nodes are automatically monitored
○ automatically / upon request
○ Nagios detects new hosts (API query)
○ Nagios applies component-related checks by role
○ graphing is also automatically updated
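The auto-discovery step can be sketched as follows: take the instance list a nova API query would return (inlined here as sample data carrying the role metadata shown earlier) and emit Nagios host definitions per role. The hostgroup and template names are assumptions:

```python
# Sketch of monitoring auto-discovery: turn instance metadata (as queried
# from the nova API) into Nagios host definitions, keyed on the cluster
# role. Template and hostgroup names are illustrative.

def nagios_host(meta):
    return (
        "define host {\n"
        f"    host_name  {meta['hostname']}\n"
        f"    hostgroups {meta['cluster']}\n"
        "    use        generic-openstack-host\n"
        "}\n"
    )

instances = [
    {"profile": "OpenStack", "cluster": "rtb-hbase",
     "hostname": "rtb-hbase-region01", "nagios_host": "mgmt01"},
]

config = "".join(nagios_host(m) for m in instances)
print(config)
```

Because the checks are attached to hostgroups, a new host picks up every check for its role the moment the generated config is reloaded.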
38. What does not fit?
Downscaling does not really make sense for us:
CPUs are online and paid for, so we should use them.
Upscaling has its limits: AWS refreshes instance types
every year…
Sometimes a small added feature can have a huge load
impact.
It makes sense to keep the elastic workloads (machine
learning, …) in AWS.
39. ● We can be "double hybrid" (AWS + OpenStack + HAProxy bare
metal)
● A dev environment is needed for OpenStack (new versions /
breaking things)
● Storage is still a big issue due to our volume (1.6 EB)
● Some things may stay "forever" on AWS?
● More dev/ops communication
● OpenStack is flexible
● No need for HA everywhere
● Spikes can be offloaded to AWS
(cloud bursting)
What we've learnt
40. Still a lot left to do
Technical aspect
Need to migrate other AWS Regions
Gain more experience
Version upgrades
Continue to adapt our tooling
Add more alarms for capacity issues
Different Regions, different issues ?
Human aspect
Dev team still thinks in the AWS world
(and sometimes Ops too…)
41. - Ad serving in production since 2015-05
- Bidding traffic in production since 2015-09
- 100% uptime since pre-production (2015-03)
Cost of operation for our current production workload:
- Reduced by a factor of two, including OpEx cost!
Aftermath