Data center 2.0: Data center built for private cloud by Mr. Cheng Che Hoo of CUHK


  1. Data center 2.0: Data center built for private cloud
  2. The Chinese University of Hong Kong (CUHK)
     • Established in 1963
     • Shatin Campus: 1.37 km², 150+ buildings
     • 20,000+ students (UG & PG); adding 3,000+ UG students starting 2012
     • 6,000+ staff
     • 8 Faculties + Graduate School
     • 9 Colleges
     • 200+ departments/units
     • 100,000+ alumni
  3. Challenges
     • Highly decentralized
       - Not enough sharing of resources, so costs are higher
       - One central data center, but many server rooms and small data centers around campus
     • No DR site for the data center
     • Out of data center space
     • 2012: more buildings, more colleges, more students and staff
     • Must support teaching, learning, research and administration
  4. Internal Services Provider
     • Subscription model
       - Users pay a rental charge to use the services
       - Users do not need to buy hardware and the relevant software
     • Optional to users
       - Need to compete with others
       - Have to be competitive
       - Have to be market/user oriented
       - Have to make continuous improvements
  5. Service Catalog
     • VM/PM + SAN + Backup
       - With or without virtual firewall
     • Server racks
     • IP Phone + data ports
       - Support for departmental WiFi access points
     • Physical security
       - Central door access / carpark / CCTV / burglar alarms
  6. Key Characteristics
     • For sharing among multiple users
       - Less wastage
     • Charge by "usage"
     • Can support users' dynamic resource demand
     • Less customization
  7. 7. Design & Support Standardization Scalability Keep things simple Automation Work Order Tracking Inventory / Accounting / Charging Need to keep a balance between support of dynamic demand and simplicity Go for simple charging model 7
  8. Data Center in Pi Chiu Building
     • 24 x 7 operations
     • 30+ years old, < 400 m²
     • Non-standard raised floor: 24" x 24", 10" deep
     • UPS: 300 kVA + 400 kVA (a rough power-budget sketch follows this slide)
     • Gen set: 300 kVA + 800 kVA
     • CRAC units: 6 (+1 by end of 2011)
     • FM200 fire suppression
     • Housing ~1,000 physical servers, so running out of space
     • Data center management improvements over the last few years:
       - Stricter physical security
       - Keep track of all machines installed inside
       - All cabinets locked
       - No more new data cabling underneath the raised floor, to improve airflow
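     A back-of-the-envelope power budget from the slide's UPS figures
     (300 + 400 kVA). The power factor, cooling overhead and per-server
     figures are assumptions added here for illustration, not numbers from
     the presentation.

         # Rough IT power budget for a room fed by 700 kVA of UPS capacity.
         ups_kva = 300 + 400        # from the slide
         power_factor = 0.9         # assumed kW delivered per kVA
         cooling_share = 0.4        # assumed: cooling adds ~40% on top of IT load

         it_kw = ups_kva * power_factor / (1 + cooling_share)
         servers = 1000             # "~1,000 physical servers" from the slide
         print(f"IT budget ~{it_kw:.0f} kW, ~{it_kw * 1000 / servers:.0f} W per server")

     On these assumptions the room offers roughly 450 W per server, so it is
     power-constrained as well as space-constrained.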
  9. Twin Site
     • New data center of < 800 m², to be ready in early 2013
     • 800 kVA capacity
     • Will use the latest standards (Tier 3 as target); will be more advanced than Pi Chiu
     • Active-active, mutual backup
     • Will be manned; need to think about how to distribute manpower between the 2 data centers
     • HKIX will have a POP there
     • Will support departmental server rack requirements
  10. Greener Data Center
     • More efficient power & cooling (see the PUE sketch after this slide)
     • Equipment placement
       - Airflow issues
       - Cold aisle / hot aisle
     • Careful location selection for CRAC units and cold-air outlets
     • Airflow underneath the raised floor should not be blocked
       - No data cables underneath
       - Data cables will be laid over cable trays above the racks
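     "More efficient power & cooling" is usually tracked with PUE (Power
     Usage Effectiveness): total facility power divided by IT equipment
     power, ideally approaching 1.0. The slide gives no measured numbers,
     so the figures below are illustrative only.

         def pue(total_facility_kw: float, it_kw: float) -> float:
             """PUE = total facility power / IT equipment power."""
             return total_facility_kw / it_kw

         # Hot/cold aisle separation and unblocked underfloor airflow cut the
         # cooling share of total power, pulling PUE toward 1.0 (illustrative):
         print(pue(700, 400))  # 1.75 before airflow improvements
         print(pue(560, 400))  # 1.40 after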
  11. Cabling and Networking for Data Centers
     • Too many data cables underneath the raised floor are blocking airflow in Pi Chiu; this has to change
     • Unified fabric to support the main communications needs within the data centers: Ethernet (for data) and Fibre Channel (for storage)
     • Top of rack: unified networking equipment, Cat5e within the rack
     • End of row / data center core: 10G links to individual racks, MMF across racks (an oversubscription sketch follows this slide)
     • Only MMF is to be laid above the racks
     • Gradual migration for Pi Chiu
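     One way to sanity-check a top-of-rack design like this is the
     oversubscription ratio from server-facing bandwidth to uplink
     bandwidth. The port counts below are assumptions for illustration; the
     slides specify only Cat5e within the rack and 10G links per rack, not
     switch models.

         # Oversubscription = total server-facing bandwidth / uplink bandwidth.
         server_ports = 48      # assumed 1G Cat5e ports per ToR switch
         uplinks = 2            # assumed 10G uplinks to the end-of-row layer

         ratio = (server_ports * 1) / (uplinks * 10)
         print(f"Oversubscription {ratio:.1f}:1")  # 2.4:1 with these assumptions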
  12. VM + SAN + Backup
     • Server consolidation by virtualization (a provisioning sketch follows this slide)
       - Look for scalability and easy expansion
       - Fast deployment of new VMs
       - High availability requirements
       - VMware site license
     • Storage integration
       - Consolidation of SAN networks
       - Start using FCoE
       - Central management
     • Centralized backup
       - Offsite backup
     • Development VM platform to replace development servers in workshops
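     A sketch of the "fast deployment of new VMs" workflow the slide
     implies: clone from a standard template, then enroll the VM in the
     centralized backup. The function bodies are placeholders standing in
     for calls to the virtualization platform's API (the slide names VMware
     but no specific API), so every name here is hypothetical.

         from dataclasses import dataclass

         @dataclass
         class VMRequest:
             name: str
             vcpus: int
             ram_gb: int
             san_gb: int

         def clone_from_template(req: VMRequest) -> str:
             # Placeholder for the platform API call that clones a
             # standardized template (standardization keeps deployment fast).
             return f"vm-{req.name}"

         def enroll_in_backup(vm_id: str) -> None:
             # Placeholder for registering the VM with centralized/offsite backup.
             print(f"{vm_id} enrolled in central backup")

         def provision(req: VMRequest) -> str:
             vm_id = clone_from_template(req)
             enroll_in_backup(vm_id)
             return vm_id

         print(provision(VMRequest("dept-web01", vcpus=2, ram_gb=8, san_gb=100)))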
  13. New Infrastructure Service Ideas
     • Central equipment/servers which can scale up to support many users around campus
     • For consolidation and cost savings, use the existing structured cabling system
     • Everything over the same structured cabling system with a star topology
       - Cat5e for horizontal and vertical runs
       - SMF for inter-building (and vertical, if necessary)
       - MMF (high-end 50 μm) inside data centers
     • Everything over IP / Ethernet
       - Have to enhance redundancy throughout the network infrastructure, including power redundancy
     • New service under consideration: digital signage
  14. Other Issues to Keep in Mind
     • IPv6 (a quick readiness check follows this slide)
     • DNSSEC
     • Multicast
     • IAM
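     A quick IPv6 readiness check for any campus service, using only the
     Python standard library; the hostname is just an example.

         import socket

         def has_ipv6(hostname: str) -> bool:
             """True if the hostname resolves to at least one AAAA (IPv6) address."""
             try:
                 return bool(socket.getaddrinfo(hostname, None, socket.AF_INET6))
             except socket.gaierror:
                 return False

         print(has_ipv6("www.cuhk.edu.hk"))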
  15. Our Advantages
     • HKIX
     • A choice of fixed network operators
     • One of the two HARNET backbone hubs
  16. Considerations of Public Cloud
     • Will use public cloud services if/when necessary
     • More comfortable if the service is provided by servers/storage situated in HK
     • For better connectivity, the chosen public cloud network has to be connected to HKIX, unless a dedicated connection (with extra cost) is used
  17. Thank you!
