LCU13: High Performance Networking - Deep dive

Resource: LCU13
Name: High Performance Networking - Deep dive
Date: 29-10-2013
Speaker: Jarmo Hillo / Raj Murali Srinivasan
Video: http://www.youtube.com/watch?v=QTtKOoIteaY

Transcript

  • 1. High end telecom networking – deep dive (LCU 2013). ©2013 Nokia Solutions and Networks. All rights reserved.
  • 2. 1000x Packet Traffic: Technologies are Maturing
    • … times more capacity
    • 3D silicon process for maximum efficiency in minimum space
    • Optimal architectural split for a variety of payloads
    • Ultrafast interfaces reaching 100 Gbit/s
  • 3. Single core networking
    Only priority queuing is doable with a single core:
    • Static priorities
    • Highest priority served first, with starvation-prevention logic
    • Packet order is not an issue
    • Does not scale
    (Figure: classifier feeding a single core at 100% load)
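A minimal C sketch of the strict-priority scheme described on this slide. The queue type, capacities and the credit-based anti-starvation rule are all hypothetical illustrations, not anything specified in the talk.

```c
#include <stdbool.h>
#include <stddef.h>

#define NUM_PRIO     4     /* hypothetical number of priority classes         */
#define QCAP         256   /* ring capacity per class                         */
#define STARVE_LIMIT 64    /* top-class wins before a lower class gets a turn */

struct pkt;                                     /* opaque packet handle */

struct pktq {                                   /* tiny FIFO ring of packet pointers */
    struct pkt *buf[QCAP];
    unsigned head, tail;                        /* free-running counters */
};

static bool pktq_empty(const struct pktq *q) { return q->head == q->tail; }

static struct pkt *pktq_pop(struct pktq *q)
{
    return pktq_empty(q) ? NULL : q->buf[q->head++ % QCAP];
}

/* Strict priority with a crude anti-starvation rule: after STARVE_LIMIT
 * consecutive wins by the highest class, the lowest backlogged class is
 * served once. Class 0 is the highest priority. */
static struct pkt *dequeue_next(struct pktq prio_q[NUM_PRIO])
{
    static unsigned top_wins;

    if (top_wins >= STARVE_LIMIT) {
        top_wins = 0;
        for (int p = NUM_PRIO - 1; p > 0; p--)
            if (!pktq_empty(&prio_q[p]))
                return pktq_pop(&prio_q[p]);
    }

    for (int p = 0; p < NUM_PRIO; p++) {
        if (!pktq_empty(&prio_q[p])) {
            if (p == 0)
                top_wins++;
            return pktq_pop(&prio_q[p]);
        }
    }
    return NULL;                                /* all queues empty */
}
```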
  • 4. Multicore SoC load sharing
    Multicore SoC is great for load sharing:
    • Static resource allocation
    • Queue selection typically by a hash of the incoming packet header, or round robin
    • HW-aware applications
    • Unpredictable, uneven core load
    (Figure: hash-based distribution; per-core loads of 95%, 5%, 60% and >100%)
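A sketch of the hash-based queue selection the slide mentions. The 5-tuple key and FNV-1a hash are illustrative choices (real SoCs usually compute an RSS-style hash in hardware); the point is that the mapping is static, which is what produces the uneven core loads shown in the figure.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative 5-tuple flow key; a real classifier parses these fields
 * from the packet headers, and SoCs typically compute the hash in HW. */
struct flow_key {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

/* FNV-1a, applied field by field to avoid hashing struct padding bytes. */
static uint32_t fnv1a(uint32_t h, const void *data, size_t len)
{
    const uint8_t *p = data;
    while (len--) {
        h ^= *p++;
        h *= 16777619u;
    }
    return h;
}

static uint32_t flow_hash(const struct flow_key *k)
{
    uint32_t h = 2166136261u;                   /* FNV offset basis */
    h = fnv1a(h, &k->src_ip,   sizeof k->src_ip);
    h = fnv1a(h, &k->dst_ip,   sizeof k->dst_ip);
    h = fnv1a(h, &k->src_port, sizeof k->src_port);
    h = fnv1a(h, &k->dst_port, sizeof k->dst_port);
    h = fnv1a(h, &k->proto,    sizeof k->proto);
    return h;
}

/* Static load sharing: the hash pins each flow to one queue/core, which
 * preserves packet order within a flow but cannot react to how heavy the
 * flows behind each queue actually are. */
static unsigned select_queue(const struct flow_key *k, unsigned num_queues)
{
    return flow_hash(k) % num_queues;
}
```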
  • 5. Multicore SoC load balancing
    Multicore SoC is great for load balancing:
    • Dynamic resource allocation with rules
    • Fast per-packet decisions
    • Event Machine model: each thread asks for a new job after the current one is finished
    • Automatic scaling with a “single thread” programming model
    (Figure: scheduler distributing load evenly, ~65% per core)
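A minimal sketch of the “single thread” run-to-completion model described above: every worker runs the same loop, asks a shared scheduler for the next event, processes it fully, and asks again. The scheduler and event interfaces here are hypothetical placeholders, not the Open Event Machine API.

```c
struct event;                                   /* one unit of work, e.g. a packet   */
struct scheduler;                               /* shared HW- or SW-backed scheduler */

/* Hypothetical scheduler interface: blocks until an event is available and
 * hands it to exactly one worker, honouring queue priorities and any
 * ordering/atomicity rules configured on the queues. */
extern struct event *sched_next_event(struct scheduler *s);
extern void          event_done(struct scheduler *s, struct event *ev);
extern void          app_handle(struct event *ev);   /* application code */

/* Every core runs this identical loop, so the application is written as if
 * it were single-threaded; adding cores simply adds copies of the loop. */
void worker_main(struct scheduler *s)
{
    for (;;) {
        struct event *ev = sched_next_event(s); /* ask for the next job */
        app_handle(ev);                         /* run it to completion */
        event_done(s, ev);                      /* report back, then ask again */
    }
}
```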
  • 6. Hiccups with multicore scaling
    Relative speedup does not scale linearly with the number of cores: by Amdahl's law, 64 cores with a 1% serial fraction give less than a 40x speedup.
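The "64 cores, 1% serial, under 40x" figure follows directly from Amdahl's law with serial fraction s and core count N:

```latex
S(N) = \frac{1}{s + \frac{1 - s}{N}}
\qquad\Rightarrow\qquad
S(64) = \frac{1}{0.01 + \frac{0.99}{64}} \approx \frac{1}{0.0255} \approx 39.3 < 40
```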
  • 7. Case study: WCDMA HSPA
    Mobile traffic for a large number of cells; find the maximum throughput, with a QoS threshold for adding more users.
    Static core allocation:
    • Users allocated per core based on resource availability, to maintain enough peak reserve
    • High headroom required
    • Low average load (~50% in the figure)
    Dynamic core allocation:
    • Users allocated to a resource pool of cores
    • Statistical multiplexing gain
    • Low headroom
    • High average load (~90% in the figure), roughly an 80% capacity gain
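The 80% gain quoted on the slide is simply the ratio of the usable average loads under the two allocation schemes, using the approximate figures from the diagram:

```latex
\text{gain} \approx \frac{\text{average load (dynamic)}}{\text{average load (static)}} - 1
           \approx \frac{0.90}{0.50} - 1 = 0.80 \;\;(\approx 80\%)
```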
  • 8. Case study: Linux userspace PMD vs. socket packet test
    • Dataplane in userspace scales linearly with the number of cores
    • A HW load balancer further improves the scaling
    • The kernel (socket) approach saturates at about ten cores
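A hedged sketch of the two receive paths being compared. The kernel path uses a standard AF_PACKET socket; the user-space poll-mode path busy-polls a device descriptor ring, shown here with purely hypothetical rx_ring_* helpers since the real PMD interface depends on the SoC and driver.

```c
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Kernel path: one syscall (and packet copy) per received frame. */
static void socket_rx_loop(void)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    uint8_t buf[2048];

    if (fd < 0)
        return;

    for (;;) {
        ssize_t len = recv(fd, buf, sizeof buf, 0);
        if (len > 0) {
            /* process(buf, len); */
        }
    }
}

/* User-space PMD path: busy-poll a memory-mapped RX descriptor ring;
 * no syscalls and no context switches on the fast path. The rx_ring_*
 * helpers are hypothetical. */
struct rx_ring;

extern void *rx_ring_next(struct rx_ring *r, size_t *len);   /* NULL if empty */
extern void  rx_ring_release(struct rx_ring *r, void *frame);

static void pmd_rx_loop(struct rx_ring *ring)
{
    for (;;) {
        size_t len;
        void *frame = rx_ring_next(ring, &len);
        if (frame) {
            /* process(frame, len); */
            rx_ring_release(ring, frame);
        }
    }
}
```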
  • 9. HW abstraction of a networking SoC
    Logical functionality and packet flow in a typical networking SoC: intelligent, near-zero-latency packet scheduling to the cores.
  • 10. SW abstraction of a networking SoC
    Open Event Machine SW architecture.
  • 11. What do we need?
    Abstraction, scalability, efficiency.
  • 12. Abstraction is the key for portability
    • No need to (re)program as SoC or core resources increase: a “single core” programming model
    • Maintain the same architectural split across platforms
    • Agnostic to the programming model
    • Application portability
    • Common terminology
    (Figure: applications on top of functional and resource APIs over the SoC cores)
  • 13. Scalability for optimal capacity
    Optimal number of resources, HW offload, and interfaces:
    • Optimal balance of cores, accelerators and interfaces
    • Eliminates the single-core bottleneck
    • Allows dynamic load balancing
    • Minimizes serialization
    • Ready for ultrafast interface technologies
    • No packet drop before classification
  • 14. Efficiency for maximum density
    Compute efficiency and power efficiency:
    • User space direct HW access, no context switching
    • Idle-time power saving
    • Moore’s Law and FinFET (22 nm towards 7 nm)
    • Dedicated processing units
  • 15. OpenDataPlane™ (ODP)
    • ODP will be the de facto data plane programming model
    • Common API for networking SoCs
    • Abstraction with performance
    • Application portability across platforms; faster time to market
    • Scaling to any number of cores and accelerators
    • Designed with efficiency in mind
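For orientation, a minimal sketch of what a scheduled worker loop looks like against the ODP API. The calls below (odp_schedule(), odp_packet_from_event(), etc.) come from later public ODP releases than the version being announced in this talk, so names and signatures may differ; all setup (odp_init_global(), pktio, pools, queues) is omitted.

```c
#include <odp_api.h>

/* Scheduled, run-to-completion worker: the platform scheduler hands the
 * core one event at a time; the worker processes it fully and asks again. */
static int worker(void *arg)
{
    (void)arg;

    for (;;) {
        odp_queue_t src;
        odp_event_t ev = odp_schedule(&src, ODP_SCHED_WAIT);

        if (odp_event_type(ev) == ODP_EVENT_PACKET) {
            odp_packet_t pkt = odp_packet_from_event(ev);
            /* ... classify / modify / forward the packet here ... */
            odp_packet_free(pkt);
        } else {
            odp_event_free(ev);
        }
    }
    return 0;
}
```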
  • 16. How does ODP map to OpenStack? (diagram)
  • 17. How does ODP map to NFV? (diagram)
  • 18. How does ODP map to OpenDaylight? (diagram)
  • 19. Summary
    • Resource and functional abstraction
    • Scalability for a variety of payloads
    • Compute efficiency
    … get ready for 1000x packet compute.
  • 20. 1 GB of personalized data per user per day means 1000x capacity compared to today. Questions?