Open stack china_201109_sjtu_jinyh

  1. OpenStack@NIC.SJTU
     Yaohui Jin (jinyh@sjtu.edu.cn)
     Network & Information Center, Shanghai Jiao Tong University
     © jinyh@sjtu
  2. About Me and Team
     • Professor, Deputy Director of NIC.SJTU
     • Email: jinyh@sjtu.edu.cn
     • Research interests: data center networks, big data analysis, converged broadband networks
     • Team:
       • Engineers: Xuan Luo (Ph.D.), Jianwen Wei (M.Eng.), Qiang Sun (M.Eng.)
       • Ph.D. students: Jianxiong Tang, Xiaming Chen, Pengfei Zhang, Siwei Qiang
       • Master students: Wei Ye, Xin Yang, Xiujie Feng, Xiaosheng Zuo, Zhaohui Zhang
       • Interns: Hongbo Fan and 10+ other undergraduates
  3. Agenda
     • Hardware configuration
     • Performance monitoring and measurement
     • Potential applications
  4. OpenStack Architecture (courtesy of Dell)
  5. Our Testbed: Sept. 2011
  6. Testbed Photo
  7. Server Details
     • Nova-controller (SuperCloud-R6210-S2): 2× E5620, 48GB RAM, 2× 1TB SATA (RAID 1), GE.
       Runs Nova-api, Nova-scheduler, Nova-objectstore, RabbitMQ, MySQL, euca2ools, Dashboard, VNC server, Ganglia.
     • Nova-network (SuperCloud-R6210-S2): 2× E5620, 48GB RAM, 2× 1TB SATA (RAID 1), 2× 10GE.
       Runs Nova-network.
     • Nova-volume (IBM x3650 + IBM DS3512 + EXP3512): 2× E5620, 48GB RAM, 2× 146GB SAS (RAID 1), 2× 10GE, plus 96TB SATA (RAID 10).
       Runs Nova-volume.
     • Nova-compute (IBM dx360 M3): 2× E5650, 96GB RAM, 2× 146GB SAS (RAID 1), 2× 10GE.
       Runs Nova-compute.
     • Glance (Dell R610): 2× E5620, 8GB RAM, 2× 146GB SAS (RAID 1), 2× 10GE, plus 320GB SSD.
       Runs Glance-api, Glance-registry, image store, puppet server.
     • Proxy node (SuperCloud-R6210-S2): 2× E5620, 48GB RAM, 2× 1TB SATA (RAID 1), 2× 10GE.
       Runs Swauth, proxy server.
     • Storage node (SuperCloud-RE436): 2× E5620, 48GB RAM, 2× 146GB SAS (RAID 1), 10GE, plus 34× 2TB SATA desktop disks.
       Runs account server, container server, object server.
  8. Network Details
     • Data center network: 10GE switches (BNT & H3C) in 2 domains
     • Control and management: GE switch (DCRS)
     • 10GE uplink to the campus network
     • Fat-tree topology; L3: VRRP; L2: LACP + VLAG + MSTP
     • Security controls: SSH, NAT, ACL, VLAN
     • NICs: Intel X520-DA2; Chelsio T420E-CR
     • L2-L7 network tester: IXIA XM2
     • L2-L3 network impairment emulator: Apposite Netropy 10G
  9. Nova Network Traffic (courtesy of Vishvananda Ishaya)
  10. Swift Details
      • Raw storage capacity: 400 TB
      • Storage node configuration: no RAID, JBOD, 3 replicas, 6 zones
      • Hardware cost: ~1000 RMB/TB (raw, including servers and switches)
      • Collaboration with StorBridge and SkyCloud Shanghai
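The "3 replicas, 6 zones" placement above can be illustrated with a toy sketch. Note this is rendezvous hashing, not Swift's actual partition ring (Swift builds a ring of weighted devices with `swift-ring-builder`); the device names are made up for illustration.

```python
import hashlib

# Six hypothetical devices, one per zone (names are invented).
DEVICES = ["z1-dev0", "z2-dev0", "z3-dev0", "z4-dev0", "z5-dev0", "z6-dev0"]
REPLICAS = 3

def place(obj_name, replicas=REPLICAS):
    """Rank devices by hash(device, object); keep the top `replicas`.

    Deterministic, so every proxy computes the same placement without
    coordination; distinct devices imply distinct zones here.
    """
    ranked = sorted(
        DEVICES,
        key=lambda dev: hashlib.md5(f"{dev}/{obj_name}".encode()).hexdigest(),
    )
    return ranked[:replicas]

print(place("container/photos/cat.jpg"))
```

Any client that knows the device list can recompute the same three replica locations, which is the property the real ring provides at scale.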
  11. Nova Cluster Monitoring (with Ganglia)
  12. VM Provisioning Time (VM: Windows 7; image size: 20GB)
  13. Nova I/O Throughput (tested with ATTO)
  14. VM Network Throughput
      Test scenarios:
      • Co-located in a single physical machine (CSM)
      • Connected by a single switch (CSS)
      • Distributed across multiple physical machines (DMM)
      • Connected by multiple switches (CMS)
  15. Swift Testing
      • Scalability
        • Adding a storage node to an existing zone
        • Adding a storage node as a new zone
        • No influence on Swift's functions
      • Reliability
        • Disk failure/recovery
        • Storage node failure/recovery
        • Fault duration: 10 min & 1 hour
        • No influence on Swift's functions
      • Performance testing (ongoing): throughput, response time, concurrency
      • Collaboration with Intel Shanghai
  16. Nova Potential Applications
      • Infrastructure as a service (private or public)
      • VM management for DevOps in an IT service department
      • Big data analysis and tools, e.g. NoSQL and MapReduce
      • Elastic provisioning of web services, particularly for bursty requests from Web 2.0 or mobile applications
      • Next-generation high-performance computing: virtual cluster provisioning with middleware
  17. Syslog Analysis
      • Raw mirrored traffic into DPI: ~6 Gbit/s
      • Syslog into MongoDB: ~4 MB/s (12,000 records/s)
      • MongoDB grows by ~400 GB/day
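The slide's rates are roughly self-consistent, as a quick back-of-the-envelope check shows: 4 MB/s of raw ingest is about 340 GB/day, so the observed ~400 GB/day is plausible once index and storage overhead are added. The only inputs below are the figures from the slide.

```python
# Cross-check of the slide's syslog figures (values taken from the slide).
ingest_mb_per_s = 4.0        # syslog ingest rate into MongoDB
records_per_s = 12_000       # record rate from the slide
seconds_per_day = 24 * 3600

daily_gb = ingest_mb_per_s * seconds_per_day / 1024          # raw GB/day
avg_record_bytes = ingest_mb_per_s * 1024 * 1024 / records_per_s

print(f"~{daily_gb:.0f} GB/day raw, ~{avg_record_bytes:.0f} bytes/record")
```

The ~350-byte average record size is also a reasonable figure for a syslog line stored as a BSON document.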
  18. MongoDB Components (courtesy of 10gen)
      • mongod (shard server): holds the actual data; needs RAM + disk I/O
      • Arbiter: a mongod that holds no data and just votes to elect the primary
      • Config server: stores the sharding configuration; small amount of data, infrequently queried/updated
      • mongos: stateless router; typically run on app servers
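The division of labor above can be sketched in a few lines: config servers hold a small chunk table mapping shard-key ranges to shards, and mongos does a stateless lookup in that table to route each query. The chunk boundaries and shard names below are invented for illustration; real mongos caches this metadata from the config servers.

```python
import bisect

# Hypothetical chunk table, as config servers would hold it:
# (shard-key lower bound, owning shard). Values are made up.
CHUNKS = [(0, "shard0"), (1000, "shard1"), (2000, "shard2")]

def route(shard_key):
    """mongos-style stateless lookup: find the chunk covering the key."""
    bounds = [lo for lo, _ in CHUNKS]
    i = bisect.bisect_right(bounds, shard_key) - 1
    return CHUNKS[max(i, 0)][1]

print(route(42), route(1500), route(9999))
```

Because the table is tiny and rarely changes, routers stay stateless and cheap, which is why mongos can be co-located with the application servers.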
  19. MongoDB Dataset Provisioning and Preliminary Results
      • Cluster in OpenStack:
        • 1 config server (2 CPUs + 8GB RAM + 100GB HDD)
        • 1 mongos (2 CPUs + 8GB RAM + 10GB HDD)
        • 4 mongod (2 CPUs + 24GB RAM + 2TB HDD each)
        • No replication
      • Both volume sizes and compute nodes can be changed dynamically
      • No service interruption and no significant performance degradation as data grows
      • Preliminary performance (to be significantly improved):
        • Aggregating 10 min of traffic (~7M records): MongoDB map/reduce takes less than 4 minutes
        • Querying "time + 5-tuple" over 900M records: MongoDB returns results in 10 seconds
      • Target (hopefully not a dream): query a 30-day dataset in less than 1 second
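The map/reduce aggregation mentioned above follows the usual emit-then-combine pattern. A pure-Python sketch of that pattern, grouping traffic bytes by source address, is below; the record layout (`src`, `bytes` fields) is assumed for illustration, not taken from the deck.

```python
from collections import defaultdict

# Tiny sample of the kind of per-flow records the syslog feed produces
# (field names are invented for this sketch).
records = [
    {"src": "10.0.0.1", "bytes": 1200},
    {"src": "10.0.0.2", "bytes": 800},
    {"src": "10.0.0.1", "bytes": 300},
]

def map_fn(rec):
    # Like the JavaScript map function in MongoDB's mapReduce: emit(key, value).
    yield rec["src"], rec["bytes"]

def reduce_fn(key, values):
    # Combine all emitted values for one key.
    return sum(values)

groups = defaultdict(list)
for rec in records:
    for k, v in map_fn(rec):
        groups[k].append(v)
totals = {k: reduce_fn(k, vs) for k, vs in groups.items()}
print(totals)  # {'10.0.0.1': 1500, '10.0.0.2': 800}
```

In MongoDB the same map and reduce functions would be written in JavaScript and passed to the `mapReduce` command, which runs them on each shard in parallel.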
  20. Swift Potential Applications
      • Similar to Amazon S3, so there are many potential applications, such as Dropbox, SlideShare, Netflix, ...
      • Sector-specific uses, such as medicine, education, media, ...
      • Korea Telecom commercial deployment (with Cloudscaling)
      • Current gaps: lack of monitoring; no quota restriction; limited automated deployment
      • Our testing:
  21. Acknowledgement
      • Network & Information Center; State Key Lab of Optical Communication
      • Intel; IBM/BNT; H3C; Dell; SkyCloud; StorBridge; IXIA; Apposite; Netronome; Chelsio; Fusion-io
      • OpenStack Community
