TripleO

Deploy OpenStack using OpenStack

Published in: Technology

Transcript

  • 1. TripleO: OpenStack-on-OpenStack. Deploy OpenStack using OpenStack.
  • 2. Problem: Installing, Upgrading and Operating an OpenStack Cloud.
  • 3. Current Approach
  • 4. Cloud Maintenance Cycle (diagram of recurring concerns): Entropy, H/W Failure, Bugs, Install, Reconfigure, Upgrade, CI/CD, Golden Images, HA Setup.
  • 5. Pitfalls of current approach
    Cannot deploy OpenStack easily and reliably:
    •  Continuous Integration and Continuous Delivery
       Ø  Ability to roll out the cloud during maintenance cycles.
       Ø  Deploy something that is tested, with minimal variations, in multiple environments (Dev/QA/Prod).
    •  Maintenance and installation costs
       Ø  Rolling out the cloud requires more effort, incurring higher costs and fewer benefits (a one-time cost).
    •  Complexity in installation and upgrade processes.
    •  Handling migration from one version of OpenStack to another.
    •  No single tool chain or API
       Ø  The current set of deployment tools has awkward hand-offs and lacks seamless integration.
  • 6. Solution
  • 7. TripleO
    An endeavor to drive down the effort required to deploy an OpenStack cloud, increase the reliability of deployments and configuration changes, and consolidate the disparate operations projects around OpenStack.
    Design Guidelines:
    •  Robust automation to do CI and deployment testing of a cloud at the bare metal layer.
    •  Customize generic disk images for use with Nova Bare Metal using DISKIMAGE-BUILDER.
    •  Orchestrate deployment of these images onto bare metal using HEAT.
    •  Deploy the same tested images to production clouds using NOVA BAREMETAL/IRONIC.
    •  Create configurations on disk and trigger in-instance reconfigurations using OS-*-CONFIG.
    •  Clean interfaces to plug in alternative implementations, e.g. using Puppet/Chef for configuration.
  • 8. Benefits of TripleO
    •  Drives the cost of operations down.
    •  Increases the reliability of deployments and consolidates on a single API for deploying machine images.
    •  Use of gold images allows one to test precisely what will run in the production or test environment, whether virtual or physical, and provides early detection of many issues.
    •  A gold image also ensures that there is no variation between machines in production: no late discovery of version conflicts, for instance.
    •  Using CI/CD testing in the deployment pipeline gives us:
       Ø  The ability to deploy something that has been tested.
       Ø  No means to invalidate the above tests (e.g. kernel version, OpenStack calls, etc.).
       Ø  Room to implement variations in configuration (e.g. network topology could vary between staging and production).
    •  Use of cloud APIs for bare metal deployment permits trivial migration of machines between roles.
    •  A single tool chain to provision and deploy onto hardware is simpler and cheaper to maintain than heterogeneous systems.
  • 9. Hypervisor Driver vs. Bare Metal Driver
    •  The tenant/project has full and direct access to the hardware, and that hardware is dedicated to a single instance.
    •  Nova does not have any means to manipulate a baremetal instance except what is provided at the hardware level and exposed over the network, such as IPMI control.
    •  Some functionality implemented by other hypervisor drivers (instance snapshots, attaching and detaching network volumes to a running instance) is not available via the baremetal driver.
    •  Tenants having direct access to the network creates security concerns (e.g. MAC spoofing, packet sniffing).
       Ø  Other hypervisors mitigate this with virtualized networking.
       Ø  Neutron + OpenFlow can be used if the network hardware supports it.
    •  Public cloud images may not work on some hardware, particularly if your hardware requires additional drivers to be loaded.
  • 10. Nova Bare Metal
    Features:
    •  Hypervisor driver for OpenStack Nova Compute.
    •  Same role as drivers for other hypervisors (KVM, ESXi, etc.).
    •  No hypervisor between the tenants and the physical hardware.
    •  Exposes hardware via OpenStack's API using pluggable drivers:
       Ø  Power control of enrolled H/W via IPMI.
       Ø  PXE boot of the bare metal nodes.
    •  Support for x86_64 & i386 architectures.
    •  Support for flat network environments.
    •  Cloud-init is used to pass user data into the bare metal instances after provisioning.
    (Diagram: with a hypervisor driver, projects reach physical hardware through Nova Compute plus a hypervisor such as KVM/ESXi; with the bare metal driver, Nova Compute drives the physical hardware directly via IPMI and PXE.)
  • 11. Bare Metal Driver is now IRONIC
    Motivation for the split:
    •  Maintain one DB per project.
    •  The Bare Metal Driver needs to store non-Nova details in a separate DB.
    •  Create separation between physical and virtual environments:
       Ø  Remove unnecessary interactions.
       Ø  HW-specific tasks (HW RAID, firmware updates, etc.).
       Ø  Interactions with other projects (Cinder, Quantum, etc.).
    Ironic is still being refactored and is under heavy development. Use Nova Bare Metal for now.
    Future plans (implementation to be done in Ironic):
    •  Improve performance/scalability of the PXE deployment process.
    •  Better support for complex non-SDN environments (e.g. static VLANs).
    •  Better integration with neutron-dhcp.
    •  Support for persistent storage through Cinder.
    •  Support snapshot and migrate of baremetal instances.
    •  Support non-PXE image deployment.
    •  Support other architectures (arm, tilepro).
    •  Support fault tolerance of the baremetal nova-compute node.
  • 12. HEAT
    •  OpenStack's orchestration program.
    •  An orchestration engine to launch multiple composite cloud applications.
    •  Heat Orchestration Templates (HOT) are text files that are treated as code.
    •  Provides an OpenStack-native ReST API plus a CloudFormation-compatible Query API.
    Features:
    •  A Heat template describes the infrastructure for a cloud application in a text file that is readable and writable by humans, and can be checked into version control.
    •  Infrastructure resources that can be described include servers, floating IPs, volumes, security groups and users.
    •  The autoscaling service integrates with Ceilometer (the OpenStack metering service) to include a scaling group as a resource in a template.
    •  Templates also specify the relationships between resources (e.g. this volume is connected to this server).
    •  Manages the whole lifecycle of the application: just modify the template when you need to change your infrastructure.
    •  Heat primarily manages infrastructure, but the templates integrate well with software configuration management tools such as Chef and Puppet.
    •  TripleO-specific templates are maintained as tripleo-heat-templates.
  • 13. Heat Services
    heat
    •  CLI tool that communicates with heat-api to execute the OpenStack-native ReST API or AWS CloudFormation APIs. End developers can also use the Heat REST API directly.
    heat-api
    •  Provides an OpenStack-native ReST API that processes API requests by sending them to the heat-engine over RPC.
    heat-api-cfn
    •  Provides an API that is compatible with CloudFormation and processes API requests by sending them to the heat-engine over RPC.
    heat-engine
    •  Orchestrates the launching of templates and provides events back to the API consumer.
    tripleo-heat-templates
    •  These templates provide the rules describing how to deploy the baremetal undercloud and virtual overclouds.
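As a rough illustration of the native ReST path, the sketch below builds (without sending) a stack-create request against heat-api's /v1/{tenant_id}/stacks endpoint. The endpoint URL, tenant, token and stack name are made-up placeholders, not values from this deck.

```python
import json
from urllib import request

def build_stack_create_request(heat_endpoint, tenant_id, token,
                               stack_name, template, parameters):
    """Build (but do not send) a stack-create call for Heat's
    native ReST API: POST /v1/{tenant_id}/stacks."""
    url = "%s/v1/%s/stacks" % (heat_endpoint.rstrip("/"), tenant_id)
    body = json.dumps({
        "stack_name": stack_name,
        "template": template,     # inline template body; a template URL is also accepted
        "parameters": parameters,
    }).encode("utf-8")
    return request.Request(
        url, data=body,
        headers={"Content-Type": "application/json",
                 "X-Auth-Token": token},
        method="POST")

# Illustrative values only.
req = build_stack_create_request(
    "http://heat.example.com:8004", "tenant123", "token-abc",
    "mystack", {"heat_template_version": "2013-05-23"},
    {"InstanceType": "m1.small"})
print(req.get_full_url())
# http://heat.example.com:8004/v1/tenant123/stacks
```

In a real deployment the token would come from Keystone and the response would carry the new stack's id; the point here is only the shape of the request that heat-api hands to heat-engine.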
  • 14. Heat Architecture (diagram)
    Inputs (a template URL or template file, plus parameter data) go to the Heat CLI, which calls the Heat ReST API over ReST; the API forwards requests to the Heat Engine over AMQP, and outputs flow back to the caller.
  • 15. Heat Engine Architecture (diagram)
    The engine exposes an AMQP engine API on top of a parser API; the parser loads resource plugins (autoscaling, db_instance, loadbalancer, user, subnet, route_table, net, port, router) through a plugin API, and the plugins talk to the other OpenStack projects' ReST APIs via the OpenStack Python clients (python-*-client).
  • 16. Sample HOT

    parameters:
      KeyName:
        type: string
        description: Name of an existing key pair
      ImageId:
        type: string
        description: Image to boot the instance from
      InstanceType:
        type: string
        description: Instance type to create
        default: m1.small
        hidden: false
        constraints:
          - allowed_values: [m1.tiny, m1.small, m1.large]

    resources:
      MyInstance:
        type: OS::Nova::Server
        properties:
          key_name: { get_param: KeyName }
          image: { get_param: ImageId }
          flavor: { get_param: InstanceType }

    outputs:
      InstanceIP:
        description: The IP address of the instance
        value: { get_attr: [MyInstance, first_address] }
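The get_param and get_attr calls in a template like the one above are intrinsic functions that the engine resolves when the stack is created. Below is a conceptual sketch of that resolution, not Heat's actual implementation; the property snippet, parameter values and attribute values are illustrative.

```python
def resolve(snippet, params, attrs):
    """Recursively replace {get_param: ...} and {get_attr: ...}
    dicts in a template snippet with concrete values."""
    if isinstance(snippet, dict):
        if "get_param" in snippet:
            return params[snippet["get_param"]]
        if "get_attr" in snippet:
            resource, attr = snippet["get_attr"]
            return attrs[resource][attr]
        return {k: resolve(v, params, attrs) for k, v in snippet.items()}
    if isinstance(snippet, list):
        return [resolve(v, params, attrs) for v in snippet]
    return snippet

properties = {"flavor": {"get_param": "InstanceType"},
              "ip": {"get_attr": ["MyInstance", "first_address"]}}
print(resolve(properties,
              params={"InstanceType": "m1.small"},
              attrs={"MyInstance": {"first_address": "10.0.0.5"}}))
# {'flavor': 'm1.small', 'ip': '10.0.0.5'}
```

Parameters are known at create time, while attributes such as the instance's address only exist after the resource is created, which is why outputs are resolved last.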
  • 17. Disk Image Builder
    •  Responsible for building disk images, file system and ramdisk images for use with OpenStack (both virtual and bare metal).
    •  Core functionality includes the various operating-system-specific modules for disk/filesystem images, plus deployment and hardware-inventory ramdisks.
    •  Builds an image via a set of hooks: root image, pre-install, install packages, and post-installation steps.
    •  An image build is parameterized by including elements, where an element can be some specific software you want to install or a plugin for a specific task.
    •  During an image build most things get cached, such as pypi packages and yum/apt packages.
    tripleo-image-elements
    •  These elements create build-time specialized disk/partition images for TripleO.
    •  The elements build images with software installed but not configured, and hooks to configure the software with os-apply-config.
    Limitations:
    •  No support for image-based updates yet.
    •  Full HA is not yet implemented.
    •  Bootstrap removal is not yet implemented (depends on full HA).
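The hook ordering above can be pictured as a small pipeline: every element contributes scripts to named phases, and the build runs all of one phase before the next. The phase names mirror diskimage-builder's hook directories, but the element dicts and the build_image function here are an illustrative model, not the tool's real data structures.

```python
# Phases in the order a build runs them (a subset of diskimage-builder's
# hook directories): obtain a root image, pre-install, install, post-install.
PHASES = ["root.d", "pre-install.d", "install.d", "post-install.d"]

def build_image(elements):
    """Run every element's hooks phase by phase; return the execution order."""
    log = []
    for phase in PHASES:
        for element in elements:
            if element.get(phase):          # element provides hooks for this phase
                log.append("%s:%s" % (element["name"], phase))
    return log

# Two hypothetical elements: a base OS and a service to layer on top.
steps = build_image([
    {"name": "ubuntu", "root.d": True, "install.d": True},
    {"name": "nova-compute", "install.d": True, "post-install.d": True},
])
print(steps)
# ['ubuntu:root.d', 'ubuntu:install.d', 'nova-compute:install.d',
#  'nova-compute:post-install.d']
```

The phase-major ordering is the point: every element's install hooks see a filesystem that all root-image hooks have already prepared, which is what lets independent elements compose into one image.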
  • 18. os-*-config
    •  These tools work with metadata delivered by Heat to create configuration files on disk (os-apply-config), and to trigger in-instance reconfiguration, including shutting down services and performing data migrations.
    •  os-apply-config reads a JSON metadata file and generates configuration files from templates. It can be used with any orchestration layer that generates a JSON metadata file on disk.
    •  os-refresh-config subscribes to the Heat metadata and then invokes hooks; it can be used to drive os-apply-config, or Chef/Puppet/Salt or other configuration management tools.
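The os-apply-config idea can be illustrated with a toy renderer: JSON metadata in, config text out. This is a deliberate simplification of the real tool (which processes whole template trees and writes the results to disk), and the database metadata and nova connection string below are made up for the example.

```python
import json
import re

def apply_config(template, metadata_json):
    """Fill {{dotted.key}} placeholders in a template from JSON metadata."""
    metadata = json.loads(metadata_json)
    def lookup(match):
        # Dotted keys walk into nested JSON objects, e.g. "database.host".
        value = metadata
        for part in match.group(1).split("."):
            value = value[part]
        return str(value)
    return re.sub(r"{{\s*([\w.]+)\s*}}", lookup, template)

# Hypothetical metadata as Heat might deliver it, and a config template.
metadata = '{"database": {"host": "10.0.0.7", "port": 3306}}'
template = "connection = mysql://{{database.host}}:{{database.port}}/nova"
print(apply_config(template, metadata))
# connection = mysql://10.0.0.7:3306/nova
```

Because the input is just a JSON file on disk, the same mechanism works under any orchestration layer, which is the portability point the slide makes.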
  • 19. Deploying TripleO*
    Currently, Nova cannot reliably run two different hypervisors in one cloud, hence the split into under cloud and over cloud.
    The seed cloud runs baremetal nova-compute and deploys instances on bare metal; it is hosted in a KVM and used to deploy the under cloud.
    The under cloud also runs baremetal nova-compute and deploys instances on bare metal; it is managed and used by the cloud sysadmins.
    The over cloud runs using the same images as the under cloud, but as a tenant on the undercloud, and delivers virtualised compute machines rather than bare metal machines.
    There is no full HA support yet, hence the need for the seed cloud.
    * Subject to change
  • 20. References
    https://wiki.openstack.org/wiki/TripleO
    https://github.com/openstack/tripleo-incubator
    http://docs.openstack.org/developer/tripleo-incubator/deploying.html
    http://docs.openstack.org/developer/tripleo-incubator/devtest.html
    https://wiki.openstack.org/wiki/Baremetal
    https://wiki.openstack.org/wiki/BaremetalSplitRationale
    https://wiki.openstack.org/wiki/Ironic
    https://wiki.openstack.org/wiki/Heat
    https://github.com/openstack/tripleo-heat-templates
    https://github.com/openstack/diskimage-builder
    https://github.com/openstack/tripleo-image-elements
    https://github.com/openstack/os-apply-config
    https://github.com/openstack/os-refresh-config
    https://github.com/openstack/os-collect-config
    Future development:
    •  Etherpads for Icehouse (next release of OpenStack)
    •  Github code repositories
    •  Launchpad: blueprints & bug tracking