
Dcn invited ecoc2018_short

ECOC 2018 Workshop DC2


  1. High Performance Networks Group Exploring multi-dimensional optical switching technologies in DCN. Shuangyi Yan, High Performance Networks Group, University of Bristol. Shuangyi.yan@bristol.ac.uk
  2. High Performance Networks Group Outline • Scaling up the DCN requires new switching technologies • Explore possible deployments of optical switching technologies • Optical time-slot switching / OTDM • Recent work from the University of Bristol • Conclusion: opportunities for future DCNs?
  3. High Performance Networks Group Hyperscale Data Centers • Hyperscale data centers will grow from 259 in number at the end of 2015 to 485 by 2020. They will represent 47 percent of all installed data center servers by 2020. • Traffic within hyperscale data centers will quintuple by 2020. Hyperscale data centers already account for 34 percent of total traffic within all data centers and will account for 53 percent by 2020. Source: Cisco Global Cloud Index: Forecast and Methodology, 2015–2020
  4. High Performance Networks Group Size of a typical hyperscale DCN • Server numbers: from 50,000 to as many as 80,000 servers • Servers are clustered to perform a task • Size: e.g., the SuperNAP in Las Vegas, with an area of 2.2M sq. ft. • An SMF reach of 500 m was once more than adequate for nearly all data centers, but that is no longer the case: the minimum reach needs to be 1 km, with a target of 2 km. A 2 km reach will also address building-to-building applications. More servers, better connections!
  5. High Performance Networks Group Edge computing & cloud computing • Leave latency-sensitive applications in the edge DCN (driven by 5G) • SDN-based inter-DCN connections • Intelligent content provision [Figure: edge-computing data centers at the metro network, cloud-computing data centers at the core network]
  6. High Performance Networks Group Link bandwidth: ready for 400G
  7. High Performance Networks Group 400G in DCN • 400G Ethernet is being deployed in data center networks • Ethernet-based connections for DCI • Agema® AGC032: 12.8 Tb/s, 32-port 400GbE whitebox switch. D. Chowdhury, “Incredible speed and the fat pipe: 400 Gigabits Ethernet,” Dhiman Deb Chowdhury’s Blog, 22-Jun-2018.
  8. High Performance Networks Group Scale up for large-scale DCNs • Scaling up the electrical switch • Switch capacity • Port count (radix) [Figure: Clos architecture for a large-capacity switch, with line chips (PHY/PP/BUF/DB/FIC) fanning into an array of switch chips] The ITRS projections for signal-pin count and per-pin bandwidth are nearly flat over the next decade, so single-chip bandwidth saturates (a sketch of the resulting scaling limit follows). Source: George Papen, OFC 2017, M3K.1
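To make the scaling limit concrete, here is a minimal Python sketch (numbers illustrative, not from the slide) of how host count in a three-tier folded-Clos (fat-tree) grows with switch-chip radix k, namely k^3/4 hosts, so flat per-chip bandwidth caps the whole fabric:

    # Illustrative: host capacity of a 3-tier folded-Clos (fat-tree)
    # built from k-port switch chips is k**3 / 4, so total scale grows
    # only polynomially with chip radix.
    def fat_tree_hosts(k: int) -> int:
        """Max hosts in a 3-tier fat-tree of k-port switches (k even)."""
        assert k % 2 == 0
        return k ** 3 // 4

    for k in (32, 64, 128):
        print(f"{k}-port chips -> {fat_tree_hosts(k):,} hosts")
    # 32 -> 8,192; 64 -> 65,536; 128 -> 524,288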
  9. High Performance Networks Group Any chance for optical switching technologies? • Challenges in scaling up electrical switches • Power consumption and cost of O/E/O conversion • Extra latency introduced (multiple tiers, more hops) • Could optical switching offer some solutions? • Hyperscale DCNs • Hybrid solutions with electrical packet switching • Optical space switching • Optical wavelength switching • Optical packet/slot switching
  10. High Performance Networks Group Optical space switching – port switching • Polatis beam-steering technologies: space switching • Dark-fiber switching • High-radix port switching • Potentially compact with multi-core fiber • Deterministic switching time • Drawbacks: long switching time, low connectivity (OCS) N. Parsons, A. Hughes, and R. Jensen, “High Radix All-Optical Switches for Software-Defined Datacentre Networks,” in ECOC 2016; 42nd European Conference on Optical Communication, 2016, pp. 1–3.
  11. High Performance Networks Group Optical circuit switching – wavelength switching • Wavelength-selective switching • Arrayed-waveguide grating router (AWGR) • Switching time: several µs • High device cost • Passive device • Can be further combined with fast tunable transmitters or wavelength converters to realize packet/slot switching (see the sketch below) • Challenges for the fast tunable transmitter. Bringing colors into the DCN should be done with caution! Source: George Papen, OFC 2017, M3K.1
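As a rough illustration of why the AWGR pairs well with fast tunable transmitters, here is a small Python sketch of the cyclic routing rule commonly used to model an N × N AWGR; the modular formula is the standard textbook abstraction, not a specification of any particular device:

    # Cyclic routing model of an N x N AWGR: light entering input port i
    # on wavelength index w leaves on output port (i + w) mod N, so a
    # fast tunable transmitter selects the destination purely by
    # retuning -- no active switch element is needed.
    N = 4  # AWGR port count (illustrative)

    def awgr_output(i: int, w: int, n: int = N) -> int:
        return (i + w) % n

    def wavelength_for(i: int, dst: int, n: int = N) -> int:
        """Wavelength a tunable laser on input i must use to reach dst."""
        return (dst - i) % n

    for i in range(N):
        print([awgr_output(i, w) for w in range(N)])  # routing table row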
  12. High Performance Networks Group Optical time (packet/slot) switching • PLZT-based optical switching: 4 × 4 PLZT switching system, switching time around 10 ns • SOA-based packet switching K. Nashimoto et al., “High-speed PLZT optical switches for burst and packet switching,” in 2nd International Conference on Broadband Networks, 2005, pp. 1118–1123, vol. 2.
  13. High Performance Networks Group Switching granularity [Chart: maximum port count vs. switching granularity, from 10 ns to 1 s] • EPS: 1024 ports at 50GbE, 62 ports at 100GbE, 32 ports at 400GbE • Optical fiber switching: 384 ports • Wavelength switching: 32 ports • TDM/OPS (AWGR + fast tunable laser): 16 ports
  14. High Performance Networks Group Optical time-slot switching (OTSS) • OTSS provides finer switching granularity than optical circuit switching • Simpler slot control than optical packet switching • OTSS and OPS both need centralized control (there is no optical buffer) • Scheduling and time synchronization are needed • Limiting the network scale reduces the implementation complexity • Different slot sizes offer different deterministic latencies (see the sketch below)
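A minimal Python sketch of what centralized OTSS control might look like, assuming a repeating frame of precomputed contention-free permutations (the frame contents, slot duration, and names are illustrative assumptions):

    SLOT_NS = 100  # slot duration in ns; smaller slots = finer granularity
    FRAME = [      # one frame = 3 slots on a 3-port switch (illustrative)
        {0: 1, 1: 2, 2: 0},  # slot 0: contention-free input -> output map
        {0: 2, 1: 0, 2: 1},  # slot 1
        {0: 0, 1: 1, 2: 2},  # slot 2
    ]

    def slots_until(in_port: int, out_port: int, now: int) -> int:
        """Slots to wait from 'now' until the frame connects the pair."""
        for d in range(len(FRAME)):
            if FRAME[(now + d) % len(FRAME)][in_port] == out_port:
                return d
        raise ValueError("pair never scheduled in this frame")

    # The wait is bounded by one frame, i.e. the latency is deterministic
    # and scales with the slot size:
    print(slots_until(0, 2, now=0) * SLOT_NS, "ns")  # -> 100 ns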
  15. High Performance Networks Group Inter-cluster DCN architecture S. Yan et al., Journal of Lightwave Technology, vol. 33, no. 8, pp. 1586–1595, Apr. 2015.
  16. High Performance Networks Group Algorithms for slot scheduling Fastpass: a TDM-only, zero-queue DCN with a central arbiter • A fast and scalable timeslot allocation algorithm • A fast and scalable path assignment algorithm • A replication strategy for the central arbiter to handle network and arbiter failures The software arbiter implementation scales to multiple cores and handles an aggregate data rate of 2.21 Tb/s, so it is possible to provide a scheduling algorithm for a large network (a simplified sketch follows). J. Perry, A. Ousterhout, H. Balakrishnan, D. Shah, and H. Fugal, “Fastpass: a centralized ‘zero-queue’ datacenter network,” in Proc. ACM SIGCOMM, 2014, pp. 307–318.
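The paper's arbiter pipelines allocation across cores; the following single-threaded Python sketch only captures the core idea, a greedy maximal matching per timeslot, and none of Fastpass's actual engineering:

    # Per slot, admit demands whose source and destination are both still
    # free; conflicting demands spill into later slots. This yields a
    # maximal (not maximum) matching per slot.
    def allocate(demands, num_slots):
        """demands: list of (src, dst) pairs, one packet-slot each."""
        schedule = {t: [] for t in range(num_slots)}
        pending = list(demands)
        for t in range(num_slots):
            busy_src, busy_dst, leftover = set(), set(), []
            for s, d in pending:
                if s not in busy_src and d not in busy_dst:
                    schedule[t].append((s, d))
                    busy_src.add(s); busy_dst.add(d)
                else:
                    leftover.append((s, d))
            pending = leftover
        return schedule, pending  # pending = demands that missed all slots

    sched, unmet = allocate([("A", "B"), ("A", "C"), ("D", "B")], 2)
    print(sched)  # {0: [('A', 'B')], 1: [('A', 'C'), ('D', 'B')]}
    print(unmet)  # [] -- every demand got a slot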
  17. High Performance Networks Group A DPDK-based online timeslot allocator for TDM • sFlow-based elephant flow detection (see the sketch below) • Online timeslot allocator (Data Plane Development Kit) • The timeslot scheduling algorithm achieves almost the same throughput (384 Gbps) as the max-min fair algorithm Artificial intelligence may have a chance! B. Guo, S. Li, S. Yin, and S. Huang, “TDM based optical bypass for intra-rack elephant flow with a DPDK based online timeslot allocator,” in 2017 Optical Fiber Communications Conference and Exhibition (OFC), 2017, pp. 1–3.
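For flavor, a tiny Python sketch of threshold-based elephant-flow detection from packet samples in the spirit of the sFlow approach above; the sampling ratio, window, and cutoff are assumptions, not values from the paper:

    # Scale sampled bytes by the sampling ratio to estimate per-flow
    # rate, then flag flows above a cutoff for TDM optical bypass.
    SAMPLING = 1000      # 1-in-N packet sampling ratio (assumed)
    THRESH_BPS = 1e9     # flows above 1 Gb/s treated as elephants (assumed)

    def elephants(samples, window_s):
        """samples: {flow_id: sampled_bytes_in_window}."""
        return {f for f, b in samples.items()
                if b * SAMPLING * 8 / window_s > THRESH_BPS}

    print(elephants({"f1": 2_000_000, "f2": 5_000}, window_s=1.0))
    # f1: ~16 Gb/s estimated -> elephant; f2: ~40 Mb/s -> mouse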
  18. High Performance Networks Group DC data & control plane integration (ECOC postdeadline 2015) • Full integration and demo of the LIGHTNESS system • Programmable transport and switching of data flows over the optical flat DCN • Full integration of the SDN control plane and the optical data plane • Optical switch configuration and monitoring through OpenFlow (see the sketch below) • End-to-end all-optical network testbed • OF-enabled Polatis OCS switch, OPS switch, FPGA-based hybrid NIC • OpenDaylight SDN controller • VDC composition application and monitoring VNF
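Along the same lines, a hedged Python sketch of how a flow rule might be pushed to an OpenFlow switch via OpenDaylight's RESTCONF config datastore; the URL layout and JSON body follow older inventory-based ODL releases and vary by version, and the controller address, node id, and credentials are placeholders, not the LIGHTNESS setup itself:

    import requests

    # Placeholder controller address, node id, table and flow ids.
    URL = ("http://odl-controller:8181/restconf/config/"
           "opendaylight-inventory:nodes/node/openflow:1/table/0/flow/1")
    flow = {"flow": [{
        "id": "1", "table_id": 0, "priority": 200,
        "match": {"in-port": "1"},                    # match traffic on port 1
        "instructions": {"instruction": [{
            "order": 0,
            "apply-actions": {"action": [{
                "order": 0,
                "output-action": {"output-node-connector": "2"}  # forward to port 2
            }]}
        }]}
    }]}
    resp = requests.put(URL, json=flow, auth=("admin", "admin"))
    print(resp.status_code)  # 200/201 on success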
  19. High Performance Networks Group Demonstration of data center virtualization • TDM/OCS hybrid ToR (TSON) • 2 × 2 4-core MCF switch • 4 × 4 OCS fiber switch • OpenStack platform as the orchestrator • OpenFlow-enabled TSON and OCS • TSON supports DCN virtualization … demonstrated fully SDN-controlled and orchestrated TSON and OCS, enabling granular bandwidth provisioning from the orchestration layer.
  20. High Performance Networks Group Conclusion • Electrical switches cannot keep scaling up, due to the limited switching capacity and edge bandwidth of switching chips • Optical switching could provide solutions for hybrid optical/electrical DCNs, especially optical slot/time switching • AI will have opportunities in network configuration and in task allocation between cloud and edge DCNs • SDN integration with DCI over metro/core networks Thank you for your attention!
