
Design and Implementation of Distributed Mobility Management Entity on OpenStack


Presentation slides at PhD Consortium, IEEE CloudCom 2015 (Dec 3, 2015). Further information available at https://users.aalto.fi/~premsag1



  1. Design and Implementation of a Distributed Mobility Management Entity on OpenStack. Gopika Premsankar, Kimmo Ahokas, Sakari Luukkainen. PhD Consortium, CloudCom 2015, December 3, 2015
  2. Agenda
     • Introduction
       - Motivation and contribution
     • Implementation
       - Architecture choices
       - 1:N mapping / 3-tier architecture
       - Testbed
     • Results
     • Conclusion and future work
  3. INTRODUCTION
  4. Evolved Packet Core (EPC)
     • [Diagram of the EPC: the UE connects via the E-UTRAN to the EPC (MME, S-GW, P-GW, HSS) and onward to IP services on the Internet; interfaces shown: S1-MME, S1-U, S11, S6a, S5, SGi]
  5. Motivation and contribution
     • How to harness cloud computing benefits?
     • New architecture for the MME
     • Build resilience into the architecture
  6. Architecture choices for MME
     • [Diagram comparing the options]
     • 1:1 mapping: standalone MME, UE context stored on local storage
     • 1:N mapping / 3-tier architecture: front end, multiple workers, and a state database that stores the UE context
  7. IMPLEMENTATION
  8. Functions of front end
     • Maintain 3GPP interfaces
       - How to identify the correct node?
     • Balance requests to workers
       - Round-robin balancing
     • Initiate creation or deletion of worker nodes
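The round-robin policy on the front end can be illustrated with a short sketch. Everything below (the worker_pool structure and pick_next_worker function) is hypothetical; the slides only state that the front end distributes incoming requests over the currently active workers in round-robin order.

```c
/* Minimal sketch of round-robin worker selection on the front end.
 * Structure and function names are illustrative, not from the slides. */
#include <stddef.h>

struct worker {
    int  id;
    char address[64];          /* address of the worker VM */
};

struct worker_pool {
    struct worker workers[16]; /* active workers */
    size_t        count;       /* number of active workers */
    size_t        next;        /* index of the worker to use next */
};

/* Return the worker that should handle the next request,
 * cycling through the active workers in order. */
static struct worker *pick_next_worker(struct worker_pool *pool)
{
    if (pool->count == 0)
        return NULL;
    struct worker *w = &pool->workers[pool->next];
    pool->next = (pool->next + 1) % pool->count;
    return w;
}
```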
  9. Functions of worker
     • Implements the actual processing logic
     • Procedures of interest
       - Attach
       - Detach
     • Stateless operation
       - When to store the UE context? After the call flow is complete
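Keeping the worker stateless means the UE context is written to the state database only once the call flow has finished, rather than being held in worker memory between messages. A minimal sketch using the hiredis client library; the key scheme ("ue:<IMSI>") and the serialized context are assumptions, since the slides do not specify them.

```c
/* Sketch: persist the UE context to the state database once the attach
 * call flow completes. Key scheme and serialization are assumptions. */
#include <hiredis/hiredis.h>

int store_ue_context(redisContext *c, const char *imsi,
                     const void *ctx, size_t ctx_len)
{
    /* SET ue:<imsi> <serialized context> */
    redisReply *reply = redisCommand(c, "SET ue:%s %b", imsi, ctx, ctx_len);
    if (reply == NULL)
        return -1;                              /* connection error */
    int ok = (reply->type == REDIS_REPLY_STATUS);
    freeReplyObject(reply);
    return ok ? 0 : -1;
}
```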
  10. State database
     • Redis cluster
       - Data sharded across master nodes
       - Very low latency (in-memory data)
       - High availability
     • Different configurations possible
       - Tradeoff between persistence of data and latency
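On the read path, a worker fetches the UE context back from the cluster; because the data is held in memory, this is one network round trip plus a key lookup. The persistence/latency tradeoff refers to Redis's snapshotting and append-only-file options, which the slides leave open. A sketch of the retrieval side with hiredis (the address, timeout and key scheme are assumptions):

```c
/* Sketch: connect to a Redis node and read back a stored UE context.
 * The address, timeout and key scheme are illustrative assumptions. */
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <hiredis/hiredis.h>

char *load_ue_context(redisContext *c, const char *imsi, size_t *len)
{
    redisReply *reply = redisCommand(c, "GET ue:%s", imsi);
    if (reply == NULL)
        return NULL;                        /* connection error */

    char *ctx = NULL;
    if (reply->type == REDIS_REPLY_STRING) {
        ctx = malloc(reply->len);
        memcpy(ctx, reply->str, reply->len);
        *len = (size_t)reply->len;
    }
    freeReplyObject(reply);
    return ctx;
}

redisContext *connect_to_state_db(void)
{
    struct timeval timeout = { 1, 0 };      /* 1 second */
    return redisConnectWithTimeout("192.0.2.10", 6379, timeout);
}
```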
  11. System architecture
  12. Testbed
  13. RESULTS
  14. Experimental evaluation
     • Attach latency
     • UE context retrieval
     • Demonstration of scaling
  15. Attach latency
     • Measured on the eNodeB
     • Latency = time between sending the Attach Request and receiving the Attach Accept
     • Original MME: average latency 8.399 ms (95% confidence interval 0.563)
     • Distributed MME: average latency 12.782 ms (95% confidence interval 0.208)
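The measurement itself amounts to timestamping at the eNodeB emulator just before the Attach Request is sent and just after the Attach Accept arrives. A minimal sketch, assuming hypothetical send_attach_request() and wait_for_attach_accept() helpers in the test program:

```c
/* Sketch of the latency measurement at the eNodeB emulator.
 * send_attach_request() and wait_for_attach_accept() are hypothetical
 * stand-ins for the actual message handling in the test program. */
#include <time.h>

extern void send_attach_request(void);
extern void wait_for_attach_accept(void);

double measure_attach_latency_ms(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    send_attach_request();
    wait_for_attach_accept();
    clock_gettime(CLOCK_MONOTONIC, &end);

    return (end.tv_sec - start.tv_sec) * 1e3 +
           (end.tv_nsec - start.tv_nsec) / 1e6;
}
```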
  16. Impact of placement on attach latency
     • Worker and FE on different OpenStack clouds: average latency 12.914 ms (95% confidence interval 0.222)
     • Worker and FE on the same compute host in the same OpenStack cloud: average latency 12.368 ms (95% confidence interval 0.505)
     • Worker and FE on different compute hosts in the same OpenStack cloud: average latency 13.065 ms (95% confidence interval 0.288)
  17. Distribution of attach latency
  18. Time taken to retrieve UE context
     • Measured on the MME
     • On the distributed MME: includes the time to send a request to and receive a response from the Redis server
     • On the original MME: time to query local storage (uthash, a hash table library for C structures)
     • Original MME: average latency 20.700 us (95% confidence interval 0.675)
     • Distributed MME: average latency 1256.724 us (95% confidence interval 18.028)
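On the original MME the lookup is a local uthash hash-table access on a C structure, which explains the microsecond-scale figure above. A minimal sketch of such a table (field names are illustrative; the actual UE context layout is not shown in the slides):

```c
/* Sketch of the local UE context table on the original (standalone) MME,
 * using uthash. Field names are illustrative. */
#include <stdlib.h>
#include "uthash.h"

struct ue_context {
    int            mme_ue_s1ap_id;   /* hash key */
    /* ... remaining per-UE state ... */
    UT_hash_handle hh;               /* makes this structure hashable */
};

static struct ue_context *ue_table = NULL;

void add_ue_context(struct ue_context *ue)
{
    HASH_ADD_INT(ue_table, mme_ue_s1ap_id, ue);
}

struct ue_context *find_ue_context(int id)
{
    struct ue_context *ue = NULL;
    HASH_FIND_INT(ue_table, &id, ue);
    return ue;
}
```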
  19. Demonstration of scaling
  20. CONCLUSION
  21. Conclusion and future work
     • Presented a novel 3-tier architecture for a vMME
     • Leverages cloud computing benefits in a vEPC
     • Future work: evaluate the effect of Redis persistence policies
     • Future work: evaluate performance with a hybrid cloud
  22. Thank you! Questions?
  23. Attach Procedure
  24. Detach Procedure
  25. Components of testbed
     • Two OpenStack installations
       - Icehouse release 2014.1.3
       - All services on identical blade servers: 2 compute hosts, 1 controller, 1 networking node
       - NFS shared storage
     • Blade server specifications
       - CPU: 2 x Intel Xeon E5-2665 (2.4 GHz, 64-bit, 8 cores, Hyper-Threading enabled)
       - RAM: 128 GB DDR3 1600 MHz
       - Hard disk space: 150 GB
       - Networking: 10GbE interconnect
  26. Software components of testbed
     • MME components
       - FE and Worker on different VMs
       - Redis cluster with 3 master nodes, each on a different VM
     • eNodeB
       - C program which sends the required messages sequentially
     • Collocated S-GW and P-GW
       - nwEPC - EPC SAE Gateway
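The eNodeB emulator drives each test by issuing the messages of a call flow one after another. A sketch of such a sequential driver, with the attach and detach handling hidden behind hypothetical helpers (their names, and the attach-then-detach sequence per emulated UE, are assumptions):

```c
/* Sketch of the eNodeB emulator's sequential driver loop. The helpers
 * below are hypothetical; they stand for building, sending and awaiting
 * the messages of each call flow towards the MME front end. */
#include <stdio.h>

extern int perform_attach(int ue_index);   /* Attach Request -> Attach Accept */
extern int perform_detach(int ue_index);   /* Detach Request -> Detach Accept */

int run_test(int num_ues)
{
    for (int i = 0; i < num_ues; i++) {
        if (perform_attach(i) != 0) {
            fprintf(stderr, "attach failed for UE %d\n", i);
            return -1;
        }
        if (perform_detach(i) != 0) {
            fprintf(stderr, "detach failed for UE %d\n", i);
            return -1;
        }
    }
    return 0;
}
```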
  27. Characteristics of VMs
     • Small flavor: 1 VCPU, 2048 MB RAM, 10 GB disk space
     • Medium flavor (used for Redis): 2 VCPU, 4096 MB RAM, 20 GB disk space
