OpenStack in the Enterprise
Holly Bazemore & Shilla Saebi
Comcast
August 23, 2016
#wearecomcast #weareopenstack
34+ Regions
700+ Tenants (Projects)
20K Instances
Our Commitment to OpenStack
Vancouver Summit (Kilo): 36,000 lines of code
Austin Summit (Newton): 67,000 lines of code
47% Increase in a Single Year!
Our History with OpenStack
• Started in 2012 with a proof of concept spanning 3 regions running Essex
• Moved to Grizzly for stability in one of our regions
• Abandoned ship: nuked and paved from Essex to Havana across our footprint
• We kept growing: 500% in the first year
Our History with OpenStack
• By the end of 2014 we had 8 regions
• In 2015 we started our upgrades to Icehouse and decided to go big or go home
• We are continuing with our deployments and growing
rapidly
OpenStack Today
• We refreshed our team structure and created a team focused solely on deployments
• We partnered with an internal Comcast team that focuses solely on deployments
• Partnered with our data center and network teams including support from their senior
leadership
• Ditched our previous software and hardware architecture plans and started over to
handle scale at Comcast
• Automated more through Puppet and Ansible
• Documented our processes and tested them on newbies to ensure they were complete
• Currently we have:
• Over 960,000 cores (vCPUs)
• Over 20,000 vms
• 34 multi-tenant regions running Icehouse across the nation
Tools We Use for Deployments
Core OpenStack Services We Use
Nova
Keystone
Horizon
Neutron
Glance
Ceilometer
Cinder
Heat
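For readers less familiar with OpenStack, the role of each core service listed above can be summarized in code. This is a minimal glossary sketch: the descriptions are general OpenStack facts, not Comcast-specific configuration details.

```python
# Roles of the core OpenStack services listed above
# (general OpenStack definitions, not Comcast-specific details).
CORE_SERVICES = {
    "Nova": "compute (instance lifecycle)",
    "Keystone": "identity and authentication",
    "Horizon": "web dashboard",
    "Neutron": "networking",
    "Glance": "image storage",
    "Ceilometer": "telemetry and metering",
    "Cinder": "block storage",
    "Heat": "orchestration (stack templates)",
}

def describe(service: str) -> str:
    """Return a one-line role description for a core service."""
    return f"{service}: {CORE_SERVICES[service]}"

if __name__ == "__main__":
    for name in CORE_SERVICES:
        print(describe(name))
```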
Our OpenStack Footprint
• Currently we have:
• Over 960,000 cores (vCPUs)
• Over 20,000 vms
• 34 multi-tenant regions running Icehouse across the nation
• So far this year:
• CPU growth 61%
• Memory growth 83%
• Storage growth 85%
• By the end of 2016:
• Mitaka deployed in multiple regions
• Three 1,000-node data centers
• A 400-node test bed
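The year-to-date growth figures above are simple percentage deltas between capacity snapshots. As a minimal sketch (the capacity numbers below are hypothetical placeholders, not Comcast's actual figures):

```python
def growth_pct(start: float, end: float) -> float:
    """Percentage growth from a starting capacity to a current one."""
    return (end - start) / start * 100.0

# Hypothetical capacity snapshots (illustrative only).
cpu_start, cpu_now = 100_000, 161_000
print(f"CPU growth: {growth_pct(cpu_start, cpu_now):.0f}%")  # 61%
```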
What Kinds of Workloads Do We Have
• Hadoop/Kafka
• Video encoding
• Networking tools
• RabbitMQ
• Mesos
• ELK
• Cassandra
• Kubernetes
• Email (can we say the technology?)
• Call center tools
• Internal conference bridges
• Big data processing
• X1
• And many more
Contact Us
Holly Bazemore
Director of Strategy & Deployments
Twitter: @hfbazemore
irc: cloudhollyb
Shilla Saebi
Community Development Lead
Twitter: @shillasaebi
irc: shillasaebi

Editor's Notes

  • #2 Since 2012, Comcast has experienced 400% year-over-year growth in demand and capacity for its OpenStack-based Elastic Cloud service. This talk will highlight the areas where OpenStack has delivered business value for Comcast and the challenges ahead in meeting the demand for more complex capabilities.
  • #3 One of the largest Multi-Service Operators (MSOs) in the USA. Tens of millions of customers; hundreds of millions of devices. We most likely touch your home in some way or another, even if you aren’t a subscriber.
  • #4 We are running OpenStack in 34 regions and have more expected in the pipeline We have over 700 tenants We have 20,000 instances
  • #5 Over a year ago at the Vancouver summit, we had contributed 36K lines of code towards the Kilo release of OpenStack. Today we have almost 73,000 lines of code; most of our contributions have been towards documentation, Neutron, and OpenStack-Ansible. We are the founders of the Northern Virginia OpenStack meetup group, which has 500 members and growing. We participate as track chairs for the OpenStack summit. We are part of the Women of OpenStack and try to contribute back to the community wherever we can.
  • #6 Essex – we weren’t sure it was ready for production, but our developers loved it: better than bare metal, faster to get moving, and we were self-service and on-demand. Grizzly – discovered that upgrades sucked. Havana – as demand skyrocketed, we had to learn about scaling. Icehouse – very painful upgrade; took a year and a half for us to code.
  • #8 2016 has been a huge year for us. Ops can’t do everything – separating deployments was our biggest win in speeding up deployments. Docs are still a work in progress. Currently: VMs – we believe this number will at least double in 2017.
  • #9 2016 has been a huge year for us. Ops can’t do everything – separating deployments was our biggest win in speeding up deployments. Docs are still in progress. Currently: VMs – we believe this number will at least double in 2017.
  • #10 The Core OpenStack services we use Nova Keystone Horizon Neutron Glance Heat Cinder Ceilometer
  • #11 Management call out – give thanks for the support
  • #13 We’ll be at the reception later as will several members of our team that did the work we have described here, please find us if you have any questions.