Data Center 2.0: Data Center Built for Private Cloud, by Mr. Cheng Che Hoo of CUHK

Presentation Transcript

    • The Chinese University of Hong Kong (CUHK)
       Established in 1963
       Shatin campus: 1.37 km², 150+ buildings
       20,000+ students (UG & PG)
         Adding 3,000+ UG students starting 2012
       6,000+ staff
       8 Faculties + Graduate School
       9 Colleges
       200+ departments/units
       100,000+ alumni
    • Challenges
       Highly decentralized
         Not enough sharing of resources, so higher cost
         One central data center, but many server rooms and small data centers around campus
       No DR site for the data center
       Out of data center space
       2012: more buildings, more colleges, and more students and staff
       Must support teaching, learning, research and administration
    • Internal Services Provider
       Subscription model
         Users pay a rental charge to enjoy the services
         Users do not need to buy hardware and the relevant software
       Optional to users
         Need to compete with others
         Have to be competitive
         Have to be market/user oriented
         Have to make continuous improvements
    • Service Catalog
       VM/PM + SAN + Backup
         With or without virtual firewall
       Server racks
       IP phone + data ports
         Support for departmental WiFi access points
       Physical security
         Central door access / carpark / CCTV / burglar alarms
    • Key Characteristics
       For sharing among multiple users
         Less wastage
       Charge by "usage"
       Can support users' dynamic resource demand
       Less customization
    • Design & Support Standardization Scalability Keep things simple Automation Work Order Tracking Inventory / Accounting / Charging Need to keep a balance between support of dynamic demand and simplicity Go for simple charging model 7
    • Data Center in Pi Chiu Building
       24 x 7 operations
       30+ years old
       <400 m²
       Non-standard raised floor: 24" x 24" tiles, 10" deep
       UPS: 300 kVA + 400 kVA
       Gen set: 300 kVA + 800 kVA
       CRAC units: 6 (+1 by end of 2011)
       FM-200 fire suppression
       Houses ~1,000 physical servers -> running out of space (rough power arithmetic below)
       Data center management improvements over the last few years:
         Stricter physical security
         Keep track of all machines installed inside
         All cabinets locked
         No new data cabling underneath the raised floor, to improve airflow
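    A rough back-of-the-envelope check shows how tight the power budget is. Only the UPS capacity and server count come from the slide; the power factor and derating are my assumptions.

        # Pi Chiu power budget per server, using the slide's UPS capacity
        # and server count; power factor and derating are assumed values.
        ups_kva = 300 + 400    # installed UPS capacity (from the slide)
        servers = 1000         # approximate server count (from the slide)
        power_factor = 0.9     # assumed, typical for modern server PSUs
        derating = 0.8         # assumed headroom: don't run the UPS flat out

        usable_kw = ups_kva * power_factor * derating
        print(f"~{usable_kw:.0f} kW usable, ~{1000 * usable_kw / servers:.0f} W per server")
        # ~504 kW usable, ~504 W per server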
    • Twin Site
       New data center of <800 m²
       To be ready in early 2013
       800 kVA capacity
       Will use the latest standards (Tier 3 as target)
         Will be more advanced than Pi Chiu
       Active-active, mutual backup
       Will be manned
         Need to think about how to distribute manpower between the two data centers
       HKIX will have a POP there
       Will support departmental server rack requirements
    • Greener Data Center
       More efficient power & cooling (a PUE example follows below)
       Equipment placement
         Airflow issues
         Cold aisle / hot aisle
       Careful location selection for CRAC units and cold-air outlets
       Airflow underneath the raised floor should not be blocked
         No data cables underneath
         Data cables will be laid on cable trays above the racks
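    The "more efficient power & cooling" goal is commonly tracked with Power Usage Effectiveness (PUE). PUE is my framing here, not a metric named on the slide, and the kW figures below are invented for illustration.

        # PUE = total facility power / IT equipment power (1.0 is the ideal).
        it_load_kw = 400      # servers, storage and network gear (assumed)
        cooling_kw = 180      # CRAC units (assumed)
        overhead_kw = 40      # UPS losses, lighting, etc. (assumed)

        pue = (it_load_kw + cooling_kw + overhead_kw) / it_load_kw
        print(f"PUE = {pue:.2f}")  # PUE = 1.55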
    • Cabling and Networking for Data Centers
       Too many data cables underneath the raised floor block airflow in Pi Chiu
         Have to change this
       Unified fabric to support the main communications needs within the data centers: Ethernet (for data) and Fibre Channel (for storage)
       Top of Rack: unified networking equipment; Cat5e within the rack
       End of Row / Data Center Core: 10G links to individual racks; MMF across racks (a rough oversubscription check follows below)
       Only MMF is to be laid above the racks
       Gradual migration for Pi Chiu
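    One ratio worth checking in a Top of Rack design is uplink oversubscription: how much server-facing bandwidth shares each 10G uplink. The server count and uplink count below are assumptions, not figures from the talk.

        # Oversubscription at a hypothetical top-of-rack switch.
        servers_per_rack = 40   # assumed
        server_link_gbps = 1    # Cat5e within the rack implies 1GbE per server
        uplinks = 2             # assumed number of 10G uplinks to End of Row
        uplink_gbps = 10        # per the slide: 10G links to individual racks

        ratio = (servers_per_rack * server_link_gbps) / (uplinks * uplink_gbps)
        print(f"{ratio:.0f}:1 oversubscription")  # 2:1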
    • VM + SAN + Backup
       Server consolidation by virtualization
         Look for scalability and easy expansion
         Fast deployment of new VMs (see the deployment sketch below)
         High-availability requirements
         VMware site license
       Storage integration
         Consolidation of SAN networks
         Start using FCoE
         Central management
       Centralized backup
         Offsite backup
       Development VM platform to replace development servers in workshops
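    To make "fast deployment of new VMs" concrete, here is a minimal sketch using pyVmomi, VMware's Python SDK for the vSphere API, to clone a VM from a template. The host, credentials, template and resource pool names are placeholders, and error handling and certificate verification are omitted for brevity.

        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        def find_by_name(content, vimtype, name):
            """Return the first inventory object of the given type with this name."""
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vimtype], True)
            try:
                return next(o for o in view.view if o.name == name)
            finally:
                view.Destroy()

        ctx = ssl._create_unverified_context()  # lab only; verify certs in production
        si = SmartConnect(host="vcenter.example.edu", user="admin",
                          pwd="secret", sslContext=ctx)
        try:
            content = si.RetrieveContent()
            template = find_by_name(content, vim.VirtualMachine, "linux-template")
            pool = find_by_name(content, vim.ResourcePool, "Teaching")

            spec = vim.vm.CloneSpec(
                location=vim.vm.RelocateSpec(pool=pool), powerOn=True)
            task = template.Clone(folder=template.parent, name="dept-web-01", spec=spec)
            # task is a vim.Task; poll task.info.state until 'success' or 'error'
        finally:
            Disconnect(si)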
    • New Infrastructure Service Ideas
       Central equipment/servers that can scale up to support many users around campus
       For consolidation and cost savings, reuse the existing structured cabling system
       Everything over the same structured cabling system, in a star topology
         Cat5e for horizontal and vertical runs
         SMF for inter-building runs (and vertical if necessary)
         MMF (high-end 50 μm) inside data centers
       Everything over IP / Ethernet
         Have to enhance redundancy throughout the network infrastructure, including power redundancy
       New service under consideration: digital signage
    • Other Issues to Keep in Mind
       IPv6
       DNSSEC (a quick validation check is sketched below)
       Multicast
       IAM
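    As a small illustration of the DNSSEC item, this sketch uses the dnspython library to ask a validating resolver whether an answer was DNSSEC-validated (the AD flag in the response). The resolver address and query name are placeholders.

        import dns.flags
        import dns.resolver

        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = ["8.8.8.8"]        # any DNSSEC-validating resolver
        resolver.use_edns(0, dns.flags.DO, 1232)  # set the DO bit to request DNSSEC

        answer = resolver.resolve("cuhk.edu.hk", "A")  # placeholder query name
        validated = bool(answer.response.flags & dns.flags.AD)
        print("DNSSEC validated:", validated)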
    • Our Advantages
       HKIX
       A choice of fixed network operators
       One of the two HARNET backbone hubs
    • Considerations of Public Cloud
       Will use public cloud services if/when necessary
       More comfortable if the service is provided from servers/storage situated in HK
       For better connectivity, the chosen public cloud network has to be connected to HKIX, unless a dedicated connection is used (extra cost involved)
    • Thank you!