Iris: Inter-cloud Resource Integration System for Elastic Cloud Data Center

  1. Iris: Inter-cloud Resource Integration System for Elastic Cloud Data Center
     CLOSER 2014, April 3, 2014, Barcelona
     Ryousei Takano, Atsuko Takefusa, Hidemoto Nakada, Seiya Yanagita, and Tomohiro Kudoh
     Information Technology Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Japan
  2. Background
     •  Open source Cloud OS
        –  Private/Public/Hybrid clouds
        –  e.g., Apache CloudStack, OpenStack
     •  Inter-cloud federation
        –  Elastic resource sharing among clouds
        –  Assured service availability under disaster and failure
        –  Guaranteed QoS against rapid load increase
        –  e.g., GICTF, IEEE P2302
  3. Two Inter-cloud models
     A.  Overlay model: vIaaS (VI as a Service)
        –  Federation by IaaS users
        –  e.g., RightScale
     B.  Extension model: HaaS (Hardware as a Service)
        –  Federation for IaaS providers
     [Figure: in the overlay model, requester and provider DCs are federated at the Virtual Infrastructure layer (a VI is an IaaS tenant); in the extension model, the requester DC's IaaS extends over the provider DC below the IaaS layer]
  4. Goal
     •  Goal:
        –  A Cloud OS can transparently scale in and out without concern for the boundary of data centers.
     •  Requirements:
        –  Ease of use: no modification to the Cloud OS
        –  Multi-tenancy: Cloud OS-neutral interface
        –  Secure isolation: isolation between a HaaS provider and HaaS users (IaaS providers)
     •  Solution:
        –  Nested virtualization
  5. Contribution
     •  We propose a new inter-cloud service model, HaaS, which enables us to implement an "elastic data center."
     •  We have developed Iris, which constructs a VI over distributed data centers by using nested virtualization technologies, including nested KVM and OpenFlow.
     •  We demonstrate that Apache CloudStack can seamlessly manage resources over multiple data centers in an emulated inter-cloud environment.
  6. Outline
     •  Introduction
     •  Iris: Inter-cloud Resource Integration System
     •  Experiment
     •  Conclusion
  7. HaaS: Hardware as a Service
     [Figure: two IaaS data centers (running CloudStack and OpenStack) host IaaS tenants on their own PMs and extend into a HaaS data center, where each IaaS admin receives L1 VMs on which the tenants' L2 VMs run]
     PMs: Physical Machines; L1 VMs: Layer 1 Virtual Machines; L2 VMs: Layer 2 Virtual Machines
  8. Iris: Inter-cloud resource integration system
     •  Iris provides light-weight resource management with a simple REST API
     •  Nested virtualization
        –  Compute
           •  Nested VMX (KVM)
           •  KVM on LPAR (Virtage)
        –  Network
           •  Full-meshed overlay network for each HaaS tenant
           •  OpenFlow and GRE tunnels (a configuration sketch follows this slide)
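     The slide names the building blocks of the per-tenant overlay but not how it is wired up. Below is a minimal sketch, assuming Open vSwitch is the OpenFlow switch and ovs-vsctl is available on each host, of creating the GRE ports a full mesh needs; the bridge name, port naming, and the use of a per-tenant GRE key are illustrative assumptions, not Iris's actual implementation.

     import subprocess

     def full_mesh_gre(bridge, local_ip, peer_ips, tenant_id):
         """Create one GRE port on an Open vSwitch bridge per remote peer,
         giving this host a full mesh with the other hosts of the tenant.
         Names and the per-tenant GRE key are illustrative assumptions."""
         for i, remote_ip in enumerate(peer_ips):
             port = f"gre-t{tenant_id}-{i}"
             subprocess.run(
                 ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
                  "--", "set", "interface", port, "type=gre",
                  f"options:local_ip={local_ip}",
                  f"options:remote_ip={remote_ip}",
                  f"options:key={tenant_id}"],   # GRE key keeps tenants apart
                 check=True)

     # Example: three hosts in one HaaS tenant, run on 10.0.0.1
     hosts = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
     full_mesh_gre("br-haas", hosts[0], hosts[1:], tenant_id=100)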
  9. Nested Virtualization (Nested VMX)
     •  Trap & emulate VMX instructions
        –  To handle a single L2 exit, the L1 hypervisor does many things: read and write the VMCS, disable interrupts, page table operations, ...
        –  These operations can be trapped, leading to exit multiplication.
        –  Eventually, a single L2 exit causes many L1 exits!
     •  Reduction of exit multiplication
        –  S/W: EPT shadowing
        –  H/W: VMCS shadowing
     [Figure: single-level vs. two-level virtualization (L0/L1/L2); an L2 VM exit triggers a cascade of VM exits and entries handled by L0]
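     As an aside not on the slide: on a Linux/KVM host, whether nested VMX is available to L1 guests can be read from the kvm_intel module parameter. A minimal sketch for Intel hosts (the sysfs path is standard; the surrounding script is illustrative):

     from pathlib import Path

     def nested_vmx_enabled():
         """True if the kvm_intel module reports nested virtualization as
         enabled ('Y' or '1'); False if disabled or the module is not loaded."""
         param = Path("/sys/module/kvm_intel/parameters/nested")
         try:
             return param.read_text().strip() in ("Y", "1")
         except FileNotFoundError:
             return False

     print("nested VMX enabled:", nested_vmx_enabled())
     # If False, nesting can usually be turned on by reloading kvm_intel with
     # the nested=1 module parameter, given hardware and kernel support.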
 10. Iris and GridARS
     [Figure: the IaaS data center (Cloud OS, user VMs) and the HaaS data center (Iris, user VMs) are connected through gateways (GW) and managed by GridARS components: a Resource Coordinator, per-DC Resource Managers, and NW Managers]
     1.  Request HaaS tenant (from the IaaS admin, the requester)
     2.  Request resources
     3.  Request DC resources and a connection
     4.  Configure HaaS tenant over the resources
 11. Iris REST API
     •  Build a HaaS tenant
        POST /iris/haas/deploy  =>  <HaaS ID>
     •  Get the status
        GET /iris/haas/<HaaS ID>  =>  NEW|PREPARED|DESTROYED|ERROR|UNKNOWN
     •  Destroy the HaaS tenant
        DELETE /iris/haas/<HaaS ID>
     JSON tenant description (as shown on the slide):
     {"computer": [
          {"hostName": "host1",
           "cpuNumber": 1,
           "cpuSpeed": 1000,
           "memory": 1048576,
           "disk": 5,
           "ipAddress": "192.168.1.11",
           "netmask": "255.255.0.0",
           "gateway": "192.168.1.1",
           "pubkey": "....."},
          {"hostName": "host2",
           "cpuNumber": 1,
           "cpuSpeed": 1000,
           "memory": 1048576,
           "disk": 5,
           "ipAddress": "192.168.1.12",
           "netmask": "255.255.0.0",
           "gateway": "192.168.1.1",
           "pubkey": "....."}],
      "network": [
          {"iaasIPAddress": "123.45.67.89",
           "haasTunnelMode": "star",
           "haasTunnelProtocol": "GRE",
           "interDCTunnelProtocol": "GRE"}]}
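     The three calls above are enough for a small client. A sketch using Python's requests library, assuming an unauthenticated HTTP endpoint; the slide does not specify the base URL, authentication, or the exact response encoding, so the plain-text parsing below is an assumption.

     import time
     import requests

     BASE = "http://iris.example.org"   # assumed endpoint, not given on the slide

     # Tenant description following the JSON structure shown on the slide
     tenant = {
         "computer": [
             {"hostName": "host1", "cpuNumber": 1, "cpuSpeed": 1000,
              "memory": 1048576, "disk": 5,
              "ipAddress": "192.168.1.11", "netmask": "255.255.0.0",
              "gateway": "192.168.1.1", "pubkey": "ssh-rsa AAAA..."}],
         "network": [
             {"iaasIPAddress": "123.45.67.89", "haasTunnelMode": "star",
              "haasTunnelProtocol": "GRE", "interDCTunnelProtocol": "GRE"}]}

     # Build a HaaS tenant; the response is assumed to carry the HaaS ID
     haas_id = requests.post(f"{BASE}/iris/haas/deploy", json=tenant).text.strip()

     # Poll the status until the tenant reaches a terminal state
     status = "NEW"
     while status not in ("PREPARED", "DESTROYED", "ERROR"):
         time.sleep(5)
         status = requests.get(f"{BASE}/iris/haas/{haas_id}").text.strip()
     print(haas_id, status)

     # Destroy the tenant when it is no longer needed
     requests.delete(f"{BASE}/iris/haas/{haas_id}")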
 12. Outline
     •  Introduction
     •  Iris: Inter-cloud Resource Integration System
     •  Experiment
     •  Conclusion
 13. Experiment
     •  Experiments
        –  User VM deployment [AGC]
        –  User VM migration [AGC]
        –  User VM performance [AGC, HCC]
     •  Experimental settings
        –  AGC (AIST Green Cloud)
           •  Emulated inter-cloud environment
           •  Nested KVM
        –  HCC (Hitachi Harmonious Computing Center)
           •  Real WAN environment
           •  KVM on LPAR (Virtage)
 14. An emulated inter-cloud on AGC
     [Figure: an IaaS data center (8 nodes, CloudStack) and a HaaS data center (5 nodes, Iris) connected through gateways (GW), an M8024 L2 switch (VLANs 1, 3, 100–200), a C4948 L3 switch, and GtrcNET-1 for WAN emulation; bandwidth 1 Gbps, latency 0–100 msec]
     •  Compute node: quad-core Intel Xeon E5540 @ 2.53 GHz x 2
     •  IaaS: Apache CloudStack 4.0.2
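     The 0–100 msec latencies are injected with GtrcNET-1 hardware. For readers reproducing a similar sweep without such hardware, Linux netem is a common software stand-in; the sketch below is my substitution, not what the authors used, and the interface name is an assumption.

     import subprocess

     def emulate_wan(dev, delay_ms, rate_mbit=1000):
         """Impose a one-way delay and a 1 Gbps rate cap on an interface with
         tc/netem, as a software stand-in for the GtrcNET-1 WAN emulator."""
         subprocess.run(
             ["tc", "qdisc", "replace", "dev", dev, "root", "netem",
              "delay", f"{delay_ms}ms", "rate", f"{rate_mbit}mbit"],
             check=True)

     # Sweep the latencies used in the experiments
     for delay in (0, 5, 10, 100):
         emulate_wan("eth1", delay)   # "eth1" is the assumed inter-DC interface
         # ... run the deployment or migration measurement here ...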
 15. User VM Deployment
     Experimental setting: [Figure: CloudStack deploys a user VM (UVM) from the IaaS data center into the HaaS data center; the VM images sit on the IaaS side; bandwidth 1 Gbps, latency 0–100 msec]
     Result: elapsed time of user VM deployment [seconds]

        Latency    IaaS     HaaS
        0 ms       11.88    11.89
        5 ms       -        15.19
        10 ms      -        18.84
        100 ms     -        86.50
 16. User VM Migration
     Experimental setting: [Figure: the same IaaS/HaaS data center pair (1 Gbps, 0–100 msec latency); a user VM (UVM) is migrated along four paths: 1) IaaS -> IaaS, 2) IaaS -> HaaS, 3) HaaS -> IaaS, 4) HaaS -> HaaS]
 17. User VM Migration
     [Figure: VM migration time [seconds] plotted against one-way network latency [milliseconds] for the four paths IaaS -> IaaS, IaaS -> HaaS, HaaS -> IaaS, and HaaS -> HaaS; annotations: baseline 2.6 sec, VM migration over WAN, CloudStack management communication over WAN]
 18. BYTE UNIX benchmark on AGC
     (Relative performance normalized to the BM (bare metal). Higher is better.)

        Benchmark            IaaS UVM   HaaS UVM
        Dhrystone            77.16      57.07
        Whetstone            86.29      70.08
        File copy 256        48.93      37.75
        File copy 1024       45.96      35.51
        File copy 4096       56.87      43.01
        Pipe throughput      49.02      38.49
        Context switching    205.67     9.43
        Execl throughput     157.00     4.71
        Process creation     256.80     4.82
        Shell scripts        95.96      4.18
        System call          29.57      22.73

     The overhead of nested virtualization is high, especially for process creation and context switching.
     [Figure: the IaaS UVM is an L1 VM on KVM over bare metal (BM); the HaaS UVM is an L2 VM on KVM over KVM]
 19. BYTE UNIX benchmark on HCC
     (Relative performance normalized to the BM. Higher is better.)

        Benchmark            HaaS UVM (Virtage)   HaaS UVM (KVM)
        Dhrystone            47.27                48.82
        Whetstone            77.24                74.86
        File copy 256        125.71               125.00
        File copy 1024       119.84               119.10
        File copy 4096       113.65               98.05
        Pipe throughput      128.23               119.91
        Context switching    1146.68              65.21
        Execl throughput     62.44                4.31
        Process creation     177.39               3.19
        Shell scripts        71.99                4.71
        System call          165.04               159.55

     L2 VM on Virtage ≒ L1 VM on KVM ⇒ effect of EPT shadowing
     [Figure: the HaaS UVM (Virtage) is an L2 VM on KVM over a Virtage LPAR; the HaaS UVM (KVM) is an L2 VM on KVM over KVM over bare metal (BM)]
 20. Outline
     •  Introduction
     •  Iris: Inter-cloud Resource Integration System
     •  Experiment
     •  Conclusion
 21. Conclusion and Future Work
     •  We propose a new inter-cloud service model, Hardware as a Service (HaaS).
     •  We have developed Iris and demonstrated the feasibility of our HaaS model: CloudStack can seamlessly deploy and migrate VMs in an inter-cloud environment.
        –  The impact on usability is acceptable when the latency is less than 10 ms.
     •  Future work
        –  More use cases: IaaS migration
        –  More evaluation
 22. Thanks for your attention!
     Acknowledgement:
     This work was partly funded by the FEderated Test-beds for Large-scale Infrastructure eXperiments (FELIX) project of the National Institute of Information and Communications Technology (NICT), Japan.
     We would like to thank the Hitachi Harmonious Computing Center for conducting a performance evaluation of nested virtualization technologies on their equipment.