[OpenStack Day in Korea] Understanding OpenStack from SDN/NV Viewpoint
OpenStack Day in Korea.

Understanding OpenStack from SDN/NV Viewpoint

© All Rights Reserved

[OpenStack Day in Korea] Understanding OpenStack from SDN/NV Viewpoint Presentation Transcript

  • 1. Date: 2014.02.18 Place: Sejong University, Seoul Understanding OpenStack-leveraged Service Realization from SDN/NV Viewpoint OpenStack Day in Korea Dr. JongWon Kim Networked Computing Systems Laboratory School of Information and Communications Gwangju Institute of Science & Technology (GIST)
  • 2. Future Landscape GE Industrial Internet
  • 3. Open Computing + Networking for Future Internet Service Realization Contents Cloud Big Data IoT / Social Service Software Open Platform Open Computing + Networking (Tool) Open Infrastructure (Resource)
  • 4. Open Innovation Platform FI Arch. Open Networking Software R&D CCN & DTN Open Networking Research R&D Industry-Academia-Research R&D Consortium Services DevOps with Testbed Open Networking Testbed R&D
  • 5. Services over Software-Driven Infrastructure
  • 6. Balanced Service Composition based on Programmable (and Virtualized) Resources
  • 7. Software-Defined Data Center: Unified, Programmable & Virtualized Resources Workload Fluidity Workload Abstraction Application-centric Virtual Playground Templates
  • 8. Configuration/Control/Visibility Challenges for Software-Defined Infrastructure Application-centric Virtual Playground with SmartX Boxes (vNetworking/vCompute/vStorage Capabilities) Virtual Playground (Box, Role, Topology) Templates DevOps Compute Networking Storage X Zero-touch Configuration Instant Visibility Collective Analysis Flexible Control (forwarding, …)
  • 9. Testbed: Wild & Organized Playground Provide Playground with Resources (Provider) DevOps (Power User) Play (Experiment) • Configuration • Control • Visibility
  • 10. Testbed: DevOps + Key Components F3 Racing Team = TB Center Meta-Operation Federation White (= Dummy) Box → SmartX Box Control Framework Open SW Community Experiment Control Instrumentation & Measurement Programmable & Virtualized Resource Pool
  • 11. Cloud + SDN/NV Network → Networking → vNetworking
  • 12. SDN & NFV (Network Functions Virtualization) Deployment Targets (v0.6) Content/Application Service Providers Secure Networking Network Functions Virtualization vSwitch Networking Cloud Overlay Virtual Data Center Networking Inter-DC / Multi-layer Optical Networking Mobile / Wireless Networking Network Service Providers (+ Multi-campus Enterprises) Last modified: 11/04/2013
  • 13. Futuristic Multilayer-integrated & Convergent Networks (Cloud WAN Fabric + Service-aware Edge) Service-aware Edge (MiddleBox, …) Wireless + Mobile Wireless + Mobile Cloud Data Centers Cloud WAN Fabric Cloud Data Centers (IP+Optical Integration) Wireless + Mobile Cloud DC Cloud DC Cloud Data Centers Last Modified 11/02/2013 IP??, More Switching + Simpler Routing?
  • 14. Cloud Market Trends Cloud DC Traffic Cisco Global Cloud Index • Amazon AWS • Microsoft Azure • OpenStack
  • 15. NFV PoC VM Infrastructure with Unified Resource Pools
  • 16. ONF’s SDN Architecture (2013 Dec)
  • 17. NFV Architecture Framework & Use Cases
  • 18. Cloud, SDN, NFV: Ericsson
  • 19. 3-Tier Application Pattern @ DC
  • 20. CLOS-based DC Networking
  • 21. Overlay vNetworking for Cloud DC
  • 22. Software-Defined Resource Convergence • Toward Software-Defined Data Center (Computing / Storage): MicroServers, hyper-convergence boxes, networking-specialized (with in-network processing) boxes • Toward SDN-coordinated MiddleBox (Networking)
  • 23. SmartX Box: Design and Prototyping with OpenStack Leverage. Simplified SmartX Rack → SmartX Box. [Diagram: VMs over Open vSwitch / NICs; COMPUTE, NETWORKING, and STORAGE from CPUs/GPUs and SSDs/HDDs; pools of SmartX Boxes give massive scalability and pay-as-you-grow flexibility.]
  • 24. OF@TEIN with SmartX Box vs ON.Lab’s OpenCloud Pilot. [Diagram: virtual playgrounds for experiments A, B, ..., Z at the service (experiment) layer, realized over the OF@TEIN underlay network; a virtual resource layer of per-VM vCPUs and memory; and a physical resource layer of SmartX Boxes #1 to #K, each running OpenStack (Nova, Neutron, Cinder), Open vSwitch, and KVM over CPU, memory, storage (SSD/HDD), and NICs. Keys: DevOps-based templates for virtual playgrounds + OpenStack convergent service APIs + SDN-coordinated vNetworking, compared against the ON.Lab OpenCloud Pilot.]
  • 25. Representing Service Realization Data Model Data Service Engine
  • 26. OpenStack-leveraged Service Realization in OF@TEIN
  • 27. OF@TEIN Infrastructure (2012~2013). [Map: Korean sites (GIST in Gwangju, Postech in Pohang, Korea U and NIA in Seoul, and Jeju) with SmartX Racks (Types B and C), Narinet OpenFlow switches, and experiment nodes (HD camera, traffic generator, networked tiled display, VoD), interconnected over OF@KOREN and OF@TEIN with international sites in the Philippines, Vietnam, Indonesia, Thailand, Malaysia, Japan or the USA, and the EU (SmartFIRE); OF@TEIN SDN tools include the portal, FlowVisor, and an OpenFlow controller over OpenFlow production switches. Last update: 2013-08-18.]
  • 28. [Part 1] OF@TEIN Infrastructure: System & Network Resources
  • 29. [Part 2] Supporting OF@TEIN SDN Experiments. [Diagram: power users get slice configuration, control, and visibility for a virtual playground through the OF@TEIN portal; FlowVisor handles FlowSpace provisioning and management over the networking and FlowSpace resources; system, network, and FlowSpace monitoring provide user experiment visibility; computing resources come from SmartX Racks.]
  • 30. OF@TEIN SmartX Rack (Type B & B+) • 3 Tier Nodes (Capsulator, OF Switch, Worker) • 3 Network Planes: Power + Management / Control / Data. [Diagram: Type B and B+ rack internals: management switch and management VM, worker VMs over Open vSwitch, role agents (SmartX-Rack, monitoring, OpenFlow, MediaX-VT), an NF/OVS capsulator node, a data-plane OF switch, and remote power management; SmartX-Rack DevOps with Chef provides automatic installation and configuration (plus verification), together with storage management, OpenStack, and monitoring agents.]
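Slide 30 mentions SmartX-Rack DevOps with Chef for automatic installation and configuration. As a hedged illustration of that idea (not the lab's actual cookbooks), the sketch below only builds the `knife bootstrap` command lines that would push a Chef role onto each rack node; the node addresses, role names, and SSH user are hypothetical.

```python
# Hypothetical Chef bootstrap helper (illustrative node IPs, roles, SSH user).
def bootstrap_command(address: str, role: str, user: str = "smartx") -> str:
    """Build the knife command that would bootstrap one node into a Chef role."""
    return (f"knife bootstrap {address} "
            f"--ssh-user {user} --sudo "
            f"--run-list 'role[{role}]'")

# Illustrative Type B+ rack layout: one node per tier.
RACK_NODES = {
    "10.0.0.11": "capsulator",
    "10.0.0.12": "of-switch",
    "10.0.0.13": "worker",
}

if __name__ == "__main__":
    for addr, role in RACK_NODES.items():
        print(bootstrap_command(addr, role))
```

In a real Chef setup each role would pin the recipes (OpenStack agents, monitoring, storage management) that the slide lists per node.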
  • 31. OF@TEIN SmartX Box (SmartX Rack Type C): Site Installation. [Diagram: SmartX Boxes C11-C15 at GIST, KOREN NOC, Korea U, Postech, and Jeju Univ, each with worker nodes and br-int/br-tun bridges (one site on Intel ONP, orchestration on an IBM M4), joined over the KOREN network and the Internet; GIST hosts the SmartX Coordinator, Control, and SandBox boxes, the OpenStack orchestration node, the provisioning center node, and a gateway node with br-ex; per-site VLAN IDs 601-603; separate power/management, control, and data planes. Last update: 2013-11-01.]
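Slide 31 shows each site's SmartX Box with br-int and br-tun bridges joined over the KOREN network, and later slides carry the inter-site overlay over GRE/NVGRE tunnels. A minimal sketch of how such br-tun tunnel ports could be wired up with Open vSwitch, assuming hypothetical site names and documentation-range IPs (the real deployment's addressing is not shown):

```python
def gre_port_commands(peers: dict) -> list:
    """Build ovs-vsctl commands adding one br-tun GRE port per remote site."""
    cmds = []
    for peer_site, peer_ip in sorted(peers.items()):
        port = f"gre-{peer_site}"  # one tunnel port per peer site
        cmds.append(
            f"ovs-vsctl add-port br-tun {port} "
            f"-- set interface {port} type=gre options:remote_ip={peer_ip}"
        )
    return cmds

# Example peer sites with documentation-range IPs (not the real addresses).
PEERS = {"postech": "203.0.113.12", "jeju": "203.0.113.15"}

if __name__ == "__main__":
    for cmd in gre_port_commands(PEERS):
        print(cmd)
```

In practice Neutron's agent builds these ports itself; the sketch just makes the per-peer wiring explicit.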
  • 32. Unified and Virtualized Resources for the OF@TEIN Virtual Playground. [Diagram: virtual playgrounds for experiments A, B, ..., Z at the service (experiment) layer over the OF@TEIN underlay network; a virtual resource layer of per-VM vCPUs and memory; and a physical resource layer of SmartX Boxes #1 to #K, each running OpenStack (Nova, Neutron, Cinder), Open vSwitch, and KVM over CPU, memory, storage (SSD/HDD), and NICs.]
  • 33. OF@TEIN Virtual Playground Creation: Autonomic Installation & Configuration with Templates. [Diagram: Box, Role, and Topology templates (VM images, node graphs) drive the configuration, control, and visibility of a virtual playground with roles such as traffic generator, web server, and CCNx; box templates A and B combine OpenStack services (Nova, Neutron, Glance, Swift, Cinder) over Open vSwitch and KVM on SmartX Box (Type C) hardware (CPUs/GPUs, SSDs/HDDs), linked through overlay tunnels and a Narinet Open vSwitch.]
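One way to read the template idea on slide 33 is that a Box/Role/Topology template gets rendered into concrete OpenStack CLI calls. The sketch below is an assumption-laden illustration, not the actual OF@TEIN tooling: the template fields, image names, and flavors are made up, and attaching the VMs to the created network is omitted since that needs the network ID returned at runtime.

```python
# Hypothetical playground template: fields, images, and flavors are made up.
PLAYGROUND = {
    "network": "exp-a-net",
    "boxes": [
        {"name": "traffic-gen", "image": "ubuntu-12.04", "flavor": "m1.small"},
        {"name": "web-server",  "image": "ubuntu-12.04", "flavor": "m1.small"},
    ],
}

def render(template: dict) -> list:
    """Render a playground template into neutron/nova CLI command strings."""
    cmds = [f"neutron net-create {template['network']}"]
    for box in template["boxes"]:
        cmds.append(f"nova boot --image {box['image']} "
                    f"--flavor {box['flavor']} {box['name']}")
    return cmds

if __name__ == "__main__":
    print("\n".join(render(PLAYGROUND)))
```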
  • 34. Dynamic Virtual Playgrounds for SmartX Box (Preliminary for Box/Role/Topology Templates). [Chart: virtual playground templates G-A, G-A', and G-A'' each take roughly 13 + 25 minutes to build. *Installation time varies with the additionally installed components.]
  • 35. Connecting SmartX Racks (Type A*/B/B+) for Static/Multi-user Playground
  • 36. Site Overlay vNetworking: NVGRE Tunneling & Tagging/Steering/Mapping. [Diagram: VLAN-ID tagging for hypervisor VMs, flow steering with the admin SDN controller, and flow mapping with user SDN controllers; OVS and HP OF switches form an OpenFlow network island, reached across the WAN through a gateway router and Narinet capsulator over NVGRE tunnels.]
  • 37. Embedding Virtual Nodes into SmartX Box (Partially for Role/Box/Topology Templates) SmartX Rack (Type B+)
  • 38. Overlay vNetworking: Automatic Site Tunnel Configuration & FlowSpace Management (Partially for Topology/Box/Role Templates). [Diagram: the admin SDN controller keeps the controller list, DPID list, site-capsulator list (IP, port), and allowed flow-to-tunnel mapping list, and drives each site capsulator via Set_DPID(), Set_controller(), Add_gre_tunnel(), Add_flow_table(), and Clear_site(); the capsulator side tracks OVS bridge information, the flow table, GRE tunnel information, and the current bridge/tunnel state; VMs attach through HP 3500/5400 switches; management builds on the OpenStack Neutron ML2 (Modular Layer 2) plugin and the OpenDaylight OVSDB integration.]
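Slide 38 names a small capsulator-facing API (Set_DPID(), Set_controller(), Add_gre_tunnel(), Clear_site()). The sketch below imagines one possible shape for it: each call merely records the Open vSwitch command an admin controller might issue. The bridge name, controller address, and DPID are assumptions, and Add_flow_table() is left out for brevity.

```python
class Capsulator:
    """Records the OVS commands an admin SDN controller might issue per site."""

    def __init__(self, bridge: str = "br-cap"):  # bridge name is an assumption
        self.bridge = bridge
        self.commands = []

    def Set_DPID(self, dpid: str):
        self.commands.append(f"ovs-vsctl set bridge {self.bridge} "
                             f"other-config:datapath-id={dpid}")

    def Set_controller(self, ip: str, port: int = 6633):
        self.commands.append(f"ovs-vsctl set-controller {self.bridge} "
                             f"tcp:{ip}:{port}")

    def Add_gre_tunnel(self, name: str, remote_ip: str):
        self.commands.append(f"ovs-vsctl add-port {self.bridge} {name} "
                             f"-- set interface {name} type=gre "
                             f"options:remote_ip={remote_ip}")

    def Clear_site(self):
        self.commands.append(f"ovs-ofctl del-flows {self.bridge}")

if __name__ == "__main__":
    cap = Capsulator()
    cap.Set_DPID("0000000000000001")
    cap.Set_controller("203.0.113.1")          # example admin controller IP
    cap.Add_gre_tunnel("gre-postech", "203.0.113.12")
    print("\n".join(cap.commands))
```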
  • 39. Running OF@TEIN Experiments. [Diagram: the OF@TEIN portal gives user experiment visibility through FlowSpace, network, and system monitoring plus a visual user experiment console, with open software and demos; play SDN (+ cloud computing) experiments with your own controller over OF@TEIN SmartX Boxes (= SmartX Rack Type C) and SmartX Racks (A*/B/B+).]
  • 40. Supporting Multiple SDN Users with their own Controllers via FlowVisor. [Diagram: FlowVisor (v1.4), driven by an OF@TEIN admin script, partitions the OF@TEIN networking and FlowSpace resources (OF switches, DPID port ranges, VLAN IDs) into VLAN-based FlowRanges, one per user controller (Floodlight, Open Daylight, NOX).]
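Slide 40 slices OF@TEIN among user controllers through VLAN-based FlowSpace in FlowVisor. The sketch below models only the admin-side bookkeeping behind that idea, i.e. which controller owns which VLAN range; the ranges and controller endpoints are hypothetical, and real FlowVisor flowspace entries carry more fields (DPIDs, ports, priorities).

```python
# Hypothetical slice table: VLAN range -> user controller endpoint.
SLICES = {
    (100, 199): "tcp:user-a-floodlight:6633",
    (200, 299): "tcp:user-b-opendaylight:6633",
    (300, 399): "tcp:user-c-nox:6633",
}

def controller_for_vlan(vlan_id: int) -> str:
    """Return the user controller whose FlowSpace covers this VLAN ID."""
    for (lo, hi), ctrl in SLICES.items():
        if lo <= vlan_id <= hi:
            return ctrl
    raise KeyError(f"VLAN {vlan_id} is not assigned to any slice")

if __name__ == "__main__":
    print(controller_for_vlan(250))
```

Because the ranges are disjoint, each tagged packet maps to exactly one user controller, which is what keeps the slices isolated.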
  • 41. Preliminary Design of the OF@TEIN Experiment LifeCycle (based on a simple PING experiment). [Flow: establish the management environment (experiment credential, SU account, remote access with SU key exchange); design the experiment (box registration via Horizon, FlowSpace registration via FlowVisor); provision (resource allocation; box installation & configuration templates via SSH + Chef; topology templates for datapaths and tunnels via OVS + BASH; role installation & configuration templates via SSH + Chef; custom images via Linux + SSH; data and image replication via FTP; host/IP resolution via hostname files); check & execute the experiment (BASH scripts); monitor & analyze (log files via BASH file I/O, experiment output/status via BASH stdout, UI display and display management in JavaScript); clean up resources and finish (BASH).]
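The lifecycle on slide 41 closes by checking a simple PING experiment from BASH stdout. Here is a small sketch of one way to judge that automatically, by parsing the summary line of Linux `ping` output; the sample output is illustrative, not captured from the testbed.

```python
import re

def packet_loss(ping_output: str) -> float:
    """Extract the packet-loss percentage from a ping summary line."""
    m = re.search(r"(\d+(?:\.\d+)?)% packet loss", ping_output)
    if m is None:
        raise ValueError("no ping summary found")
    return float(m.group(1))

# Illustrative ping summary (not captured from the testbed).
SAMPLE = """\
--- 10.0.0.12 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
"""

if __name__ == "__main__":
    loss = packet_loss(SAMPLE)
    print("experiment", "PASS" if loss == 0.0 else "FAIL")
```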
  • 42. OF@TEIN Demonstration @ HSN 2014
  • 43. Gwangju Institute of Science & Technology Thank you! Send inquiries to jongwon@gist.ac.kr http://netmedia.gist.ac.kr