Open stack china_201109_sjtu_jinyh: Presentation Transcript

  • OpenStack@NIC.SJTU Yaohui Jin, Network & Information Center, Shanghai Jiao Tong University © jinyh@sjtu
  • About Me and Team
    - Professor, Deputy Director of NIC.SJTU
    - Email:
    - My research interests: Data Center Network, Big Data Analysis, Converged Broadband Network
    - Team:
      - Engineers: Xuan Luo (Ph.D.), Jianwen Wei (M.Eng.), Qiang Sun (M.Eng.)
      - Ph.D. students: Jianxiong Tang, Xiaming Chen, Pengfei Zhang, Siwei Qiang
      - Master students: Wei Ye, Xin Yang, Xiujie Feng, Xiaosheng Zuo, Zhaohui Zhang
      - Interns: Hongbo Fan and 10+ other undergraduates
  • Agenda
    - Hardware configuration
    - Performance monitoring and measurement
    - Potential applications
  • OpenStack Architecture (diagram courtesy of Dell)
  • Our Testbed: Sept. 2011
  • Testbed Photo
  • Server Details
    - Nova-controller: SuperCloud-R6210-S2; 2x E5620 / 48GB RAM / 2x 1TB SATA (RAID 1) / GE; runs Nova-api, Nova-scheduler, Nova-objectstore, RabbitMQ, MySQL, euca2ools, Dashboard, VNC server, Ganglia
    - Nova-network: SuperCloud-R6210-S2; 2x E5620 / 48GB RAM / 2x 1TB SATA (RAID 1) / 2x 10GE; runs Nova-network
    - Nova-volume: IBM x3650 + IBM DS3512 + EXP3512; 2x E5620 / 48GB RAM / 2x 146GB SAS (RAID 1) / 2x 10GE, plus 96TB SATA (RAID 10); runs Nova-volume
    - Nova-compute: IBM dx360 M3; 2x E5650 / 96GB RAM / 2x 146GB SAS (RAID 1) / 2x 10GE; runs Nova-compute
    - Glance: Dell R610; 2x E5620 / 8GB RAM / 2x 146GB SAS (RAID 1) / 2x 10GE, plus 320GB SSD; runs Glance-api, Glance-registry, Image Store, puppet server
    - Proxy node: SuperCloud-R6210-S2; 2x E5620 / 48GB RAM / 2x 1TB SATA (RAID 1) / 2x 10GE; runs Swauth, Proxy server
    - Storage node: SuperCloud-RE436; 2x E5620 / 48GB RAM / 2x 146GB SAS (RAID 1) / 10GE, plus 34x 2TB SATA desktop disks; runs Account server, Container server, Object server
  • Network Details
    - Data center network: 10GE switches (BNT & H3C) in 2 domains
    - Control and management: GE switch (DCRS)
    - 10GE uplink to the campus network
    - Fat-tree topology; L3: VRRP; L2: LACP + VLAG + MSTP
    - Security control: SSH, NAT, ACL, VLAN
    - NICs: Intel X520-DA2; Chelsio T420E-CR
    - L2-L7 network tester: IXIA XM2
    - L2-L3 network impairment emulator: Apposite Netropy 10G
  • Nova Network Traffic (diagram courtesy of Vishvananda Ishaya)
  • Swift Details
    - Raw storage capacity: 400 TBytes
    - Storage node configuration: no RAID, JBOD, 3 replicas, 6 zones
    - Hardware cost: ~1000 RMB/TB (raw, including servers and switches)
    - Collaboration with StorBridge and SkyCloud Shanghai
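The "3 replicas, 6 zones" configuration means Swift places each object's three copies on devices in distinct zones, so losing any one zone leaves two replicas intact. A minimal Python sketch of that zone-aware placement idea (illustrative only: real Swift builds a partition ring with swift-ring-builder; the device list and object name below are made up):

```python
# Sketch of Swift-style zone-aware replica placement.
# Assumption: a simplified rendezvous-hash stand-in for Swift's
# partition ring; device IDs and the object name are hypothetical.
import hashlib

ZONES = 6       # matches the testbed: 6 zones
REPLICAS = 3    # 3 replicas per object

def placement(obj_name, devices):
    """Pick REPLICAS devices for obj_name, each in a distinct zone."""
    # Rank devices by a per-object hash, then greedily take the
    # highest-ranked device from each not-yet-used zone.
    ranked = sorted(
        devices,
        key=lambda d: hashlib.md5((obj_name + d["id"]).encode()).hexdigest(),
    )
    chosen, used_zones = [], set()
    for dev in ranked:
        if dev["zone"] not in used_zones:
            chosen.append(dev)
            used_zones.add(dev["zone"])
        if len(chosen) == REPLICAS:
            break
    return chosen

# Two devices per zone, as in a small 12-device deployment.
devices = [{"id": "d%d" % i, "zone": i % ZONES} for i in range(12)]
replicas = placement("photos/0001.jpg", devices)
# Three replicas, each landing in a different zone.
```

Because the ranking is derived from the object name, the same object always maps to the same devices, while different objects spread across the cluster.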
  • Nova Cluster Monitoring (monitored with Ganglia)
  • VM Provisioning Time (VM: Windows 7; image size: 20GB)
  • Nova I/O Throughput (tested with ATTO)
  • VM Network Throughput, compared across four scenarios:
    - Co-located in a single physical machine (CSM)
    - Connected by a single switch (CSS)
    - Distributed in multiple physical machines (DMM)
    - Connected by multiple switches (CMS)
  • Swift Testing
    - Scalability
      - Adding a storage node to an existing zone
      - Adding a storage node as a new zone
      - No influence on the functions of Swift
    - Reliability
      - Disk failure/recovery
      - Storage node failure/recovery
      - Fault duration: 10 min & 1 hour
      - No influence on the functions of Swift
    - Performance testing (ongoing): throughput, response time, concurrency
    - Collaboration with Intel Shanghai
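The ongoing performance testing (throughput, response time, concurrency) can be sketched as a small concurrent GET benchmark against the Swift proxy. The endpoint URL, object path, and auth token below are placeholders, not the testbed's real values:

```python
# Sketch of a concurrent Swift GET-latency benchmark.
# Assumptions: PROXY and TOKEN are hypothetical placeholders; a real
# run would obtain a token from Swauth first.
import threading
import time
from urllib.request import Request, urlopen

PROXY = "http://proxy.example:8080/v1/AUTH_test"  # placeholder endpoint
TOKEN = "AUTH_tk_placeholder"                     # placeholder token

def timed_get(path, latencies, lock):
    """Fetch one object and record the wall-clock latency."""
    req = Request(PROXY + path, headers={"X-Auth-Token": TOKEN})
    t0 = time.time()
    urlopen(req).read()
    with lock:
        latencies.append(time.time() - t0)

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    ranked = sorted(samples)
    idx = max(0, int(round(p / 100.0 * len(ranked))) - 1)
    return ranked[idx]

def run_benchmark(path, concurrency=16):
    """Issue `concurrency` parallel GETs; return (median, p99) latency."""
    latencies, lock = [], threading.Lock()
    threads = [
        threading.Thread(target=timed_get, args=(path, latencies, lock))
        for _ in range(concurrency)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return percentile(latencies, 50), percentile(latencies, 99)
```

Sweeping the `concurrency` parameter exercises all three metrics at once: aggregate throughput, per-request response time, and behavior under concurrent load.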
  • Nova Potential Applications
    - Infrastructure as a service (either private or public)
    - VM management for DevOps in an IT service department
    - Big data analysis and tools, e.g. NoSQL and Map/Reduce
    - Elastic provisioning of web services, particularly for burst requests from Web 2.0 or mobile applications
    - Next-generation high-performance computing: virtual cluster provisioning with middleware
  • Syslog Analysis
    - Raw mirrored traffic into DPI: ~6 Gbit/s
    - Syslog into MongoDB: ~4 MBytes/s (~12,000 records/s)
    - MongoDB grows by ~400 GBytes/day
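The three figures on this slide are mutually consistent, which a quick back-of-envelope check confirms: ~4 MB/s over ~12,000 records/s implies roughly 350-byte records, and sustaining 4 MB/s for a day accumulates ~340 GB of raw data, i.e. ~400 GB/day once indexes and storage overhead are included:

```python
# Back-of-envelope check of the syslog ingest numbers.
RECORDS_PER_SEC = 12000
INGEST_BYTES_PER_SEC = 4 * 1024 * 1024          # ~4 MBytes/s into MongoDB
SECONDS_PER_DAY = 86400

# Average record size implied by the two rates (~350 bytes).
avg_record_bytes = INGEST_BYTES_PER_SEC / float(RECORDS_PER_SEC)

# Raw growth per day (~340 GB; ~400 GB/day on disk with index overhead).
gb_per_day = INGEST_BYTES_PER_SEC * SECONDS_PER_DAY / 1024.0 ** 3
```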
  • MongoDB Components (diagram courtesy of 10gen)
    - mongod: holds the actual data; needs RAM + disk I/O; can also run as an arbiter (no data, just votes to elect the primary)
    - Config server: stores the sharding configuration; small amount of data, infrequently queried/updated
    - mongos: stateless router; typically runs on app servers
  • MongoDB Dataset Provisioning and Primary Results
    - Cluster in OpenStack:
      - 1 config server (2 CPU + 8GB MEM + 100GB HDD)
      - 1 mongos (2 CPU + 8GB MEM + 10GB HDD)
      - 4 mongod (2 CPU + 24GB MEM + 2TB HDD)
      - No replication
      - Both volume size and compute nodes can be changed dynamically
      - No service interruption and no significant performance degradation as data grows
    - Primary performance (to be significantly improved):
      - Aggregating 10 min of traffic (~7M records): MongoDB Map/Reduce takes less than 4 minutes
      - Querying "time + 5-tuple" over 900M records: MongoDB returns results in 10 seconds
    - Target (hopefully not a dream): query a 30-day dataset in less than 1 second
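The 10-minute traffic aggregation could be expressed as a MongoDB map/reduce job submitted through pymongo. This is a sketch under assumptions: the collection name (`syslog`), field name (`src_ip`), and output collection (`traffic_10min`) are made up, and `Collection.map_reduce` is the pymongo-era API, not necessarily what the team ran:

```python
# Sketch of a per-source-IP record count as MongoDB map/reduce.
# Assumptions: field `src_ip` and output collection `traffic_10min`
# are hypothetical names for illustration.

def mapreduce_js():
    """Return the (map, reduce) JavaScript bodies as plain strings."""
    mapper = "function () { emit(this.src_ip, 1); }"
    reducer = "function (key, values) { return Array.sum(values); }"
    return mapper, reducer

def run(collection):
    """Run the job via pymongo against a `syslog`-style collection."""
    from bson.code import Code  # pymongo's BSON JS-code wrapper
    mapper, reducer = mapreduce_js()
    # Results are written to the `traffic_10min` output collection.
    return collection.map_reduce(Code(mapper), Code(reducer), "traffic_10min")
```

Because the reduce phase runs in parallel on each mongod shard, adding shards (as the elastic Nova cluster allows) shortens the reported 4-minute aggregation roughly in proportion.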
  • Swift Potential Applications
    - Similar to Amazon S3, so there are many potential applications, such as Dropbox, Slideshare, Netflix, ...
    - Sector-specific uses, such as medicine, education, media, ...
    - Korea Telecom commercial deployment (CloudScaling)
    - Current gaps: lack of monitoring; no quota restriction; limited automated deployment
    - Our testing:
  • Acknowledgement
    - Network & Information Center; State Key Lab of Optical Communication
    - Intel; IBM/BNT; H3C; Dell; SkyCloud; StorBridge; IXIA; Apposite; Netronome; Chelsio; Fusion-io
    - OpenStack Community