
FUTURE CARRIER NETWORKS
Clouds of Virtual Machines in Edge Networks
Antonio Manzalini and Roberto Minerva, Telecom Italia
Franco Callegati, Walter Cerroni, and Aldo Campi, University of Bologna
IEEE Communications Magazine • July 2013

ABSTRACT

This article addresses the potential impact of emerging technologies and solutions, such as software defined networking and network function virtualization, on carriers' network evolution. It is argued that standard hardware advances and these emerging paradigms can bring the most impactful disruption at the network's edge, enabling the deployment of clouds of nodes using standard hardware: it will be possible to virtualize network and service functions, which are provided today by expensive middle-boxes, and move them to the edge, as close as possible to users. Specifically, this article identifies some of the key technical challenges behind this vision, such as dynamic allocation, migration, and orchestration of ensembles of virtual machines across wide areas of interconnected edge networks. This evolution of the network will profoundly affect the value chain: it will create new roles and business opportunities, reshaping the entire ICT world.

INTRODUCTION

The original Internet paradigm for reaching a given final destination focused on packet forwarding based on IP addresses. This is no longer the case: in current IP networks, packets are processed in intermediate nodes not only for looking up addresses, but also for performing a number of additional functions, such as network address translation, packet filtering, application acceleration over WANs, network monitoring, QoS management, and load balancing. Each middle-box (closed and quite expensive) typically supports a limited set of special functions (layer 4 or higher) and is predominantly built on dedicated hardware platforms.

Middle-boxes are deployed along most of the paths from sources to destinations: that is why networks have lost the initial end-to-end characteristic of the Internet, where packets used to be just forwarded (routed). Beyond this, middle-boxes have also represented a significant fraction of network capital and operational expenses, mostly due to network management complexities.

The ossification of the Internet makes it difficult for operators to develop and deploy new network functionalities, services, management policies, and so on, which are essential to cope with the increasing complexity and dynamicity of networks. Today, the launch of new services requires lengthy and expensive processes, which hinder the rapid take-off of new revenues in current dynamic markets. The innovation cycles of operators' networks should be simplified by improving network flexibility and adaptability to market dynamics. Future networks should reduce operational expenditures (OPEX) and capital expenditures (CAPEX).

For instance, automated management and configuration of network equipment may reduce the need for human intervention, thus limiting the likelihood of wrong configurations, whereas flexible provisioning of network functionalities on top of an optimally shared physical infrastructure may reduce equipment costs and postpone further network investments. Improved performance of standard hardware and emerging technologies such as software defined networking (SDN) and network function virtualization (NFV) may help fulfill the above requirements.

This article argues that future network infrastructures will be made of a huge number of resources (compute, storage, and network I/O) being controlled dynamically, based on users' demands, quality of service (QoS) and business objectives, or any other changing condition. Data analytics systems and methods will allow a comprehensive autonomic loop to be exploited that is capable of orchestrating virtual functions allocated to such a fabric of resources. The edges of networks are where this innovation wave will take place, for several reasons: for example, migration of intelligence toward the edges; pervasiveness of embedded communications, computing, and storage power in users' devices; and fewer legacies. In less than a decade, edge networks will create distributed environments made of clouds of virtual resources (even operated by diverse players) interconnected by a simpler and less hierarchical core network. The core network will become stateless (like the Internet's basic protocol, where each packet travels entirely on its own without reference to any other packet), and edge networks (and data centers) will be the only stateful parts of the networks. This transformation will enable new roles and business opportunities, completely reshaping the value chains in the entire telco-information and communications technologies (ICT) world.

In essence, the key enablers of this evolution are the advances in processing, storage, and networking technologies that, in the short term, will allow the development of network nodes based on standard off-the-shelf hardware, cheap but powerful enough to run virtualized network functions and services. Edge networks will encompass a huge number of inexpensive nodes and users' devices capable of collapsing the Open Systems Interconnection (OSI) layers (e.g., from L2 to L7) onto standard hardware solutions.

One of the main challenges behind this vision is the capability of dynamically instantiating, orchestrating, and relocating multiple virtual machines (VMs) across the providers' transport networks. Ensembles of VMs will be strictly related and intertwined to implement sets of virtual functions and services that network operators, and even users, must be able to configure and program.

The rest of this article is organized as follows. Trends and enabling technologies are described. Examples of network scenarios are discussed. A survey of related work is reported. We present some experimental results of a use case pointing out the technical challenges related to the live migration of VMs across the WAN. We elaborate on the role of network operators in the envisioned future scenarios and draw some conclusions.

TRENDS AND ENABLING TECHNOLOGIES

Progress on the SDN paradigm has recently sparked significant industrial interest in rethinking network architectures, control, and management. The SDN vision consists of decoupling the control plane logic from the forwarding hardware and moving the network states to a component called a controller.
This basic idea is not novel; however, for the first time, processing, storage, and network throughput performance may realistically support this disruption for carrier-grade services. It should be noted that purpose-built hardware can still outperform general-purpose hardware, but the performance gap is becoming smaller and smaller.

Virtualization of physical resources will also have a significant impact on network evolution. In the IT field, virtualization is already well known and widely deployed in data centers to execute multiple isolated instances of a software entity on top of a single physical server. Virtualization has several benefits; for example, it increases resource utilization and improves state encapsulation. The extension of IT virtualization principles to network equipment (e.g., routers and switches) offers several advantages in terms of optimal usage of physical resources and deeper integration of IT and network resources [1]. Moreover, the implementation of network processing functions in software, which is already possible today, allows standard hardware to execute them. As a consequence of this evolution, several tasks or activities normally carried out in data centers, such as allocation, migration, and cloning of virtual resources and functions (for server consolidation, load balancing, etc.), could also be performed in the network. This means that it should be possible to leverage the enhanced management tools used in data centers today.

Network virtualization, for example, allows operators to collocate multiple instances of network functions in the same hardware, where each function is executed by one or multiple VMs, as illustrated in Fig. 1. As a result, network operators may dynamically instantiate, activate, and re-allocate resources and functions, as well as program them according to dynamic needs, requirements, and policies.
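The dynamic instantiation and re-allocation just described can be illustrated with a toy model of the generalized node of Fig. 1. This is only a sketch under our own assumptions: the class, method, and function names below are hypothetical, not a real hypervisor or operator API.

```python
# Toy model of a generalized node hosting virtualized network functions
# (illustrative only; names are hypothetical, not a real hypervisor API).

class GeneralizedNode:
    def __init__(self, name, cpu_cores):
        self.name = name
        self.free_cores = cpu_cores
        self.vms = {}  # function name -> cores assigned to its VM

    def instantiate(self, function, cores):
        """Spin up a VM running an L2-L7 function if capacity allows."""
        if cores > self.free_cores:
            return False
        self.free_cores -= cores
        self.vms[function] = cores
        return True

    def release(self, function):
        """Tear down a function VM and reclaim its cores."""
        self.free_cores += self.vms.pop(function)

# An operator collocates several function instances on one standard server:
node = GeneralizedNode("edge-node-1", cpu_cores=8)
assert node.instantiate("virtual-router", 2)
assert node.instantiate("firewall", 2)
assert node.instantiate("wan-accelerator", 4)
assert not node.instantiate("load-balancer", 1)  # node is full
node.release("wan-accelerator")                  # re-allocate on demand
assert node.instantiate("load-balancer", 1)
```

The point of the sketch is the lifecycle, not the bookkeeping: functions are instantiated, released, and re-allocated on shared standard hardware instead of living permanently in dedicated middle-boxes.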
It should be noted that NFV is complementary to SDN and does not depend on it: the two concepts should not be mixed, even though they can be combined in several ways that can potentially create great value.

To make such a combination feasible and exploitable, some technical problems remain to be solved. The capability of moving and orchestrating sets of VMs across wide area connections (not just locally, as in data centers) is one of those. Available virtualization tools, currently used for intra-data-center applications, offer only limited support for live migration of VMs in WANs, due to the lack of signaling and control tools spanning multiple technologies and domains, and because of the strict constraints in terms of throughput and delay. This is a point of weakness on which future research and development activities should focus. Some experimental results of a use case addressing these issues and showing its potential feasibility and limitations are presented later in this article.

Figure 1. Example of generalized node architecture: multiple VMs, each with a mini OS, run L2-L7 function processing (e.g., from virtual switch/router to apps) on top of a virtualization layer, a VM manager, and standard hardware (processing, memory, packet forwarding).

Furthermore, network operators should be able to cope with the increasing complexity of network management and control, exacerbated by the aforementioned needs. This will require
the integration of autonomic and cognitive capabilities within virtualization solutions [2]. As an example, one may imagine an edge network capable of self-learning, that is, extracting knowledge from the environment and using such knowledge to improve performance. This means embedding into its nodes and devices a number of autonomic and cognitive functions with a set of local rules capable of changing, for instance, the "characteristics" of the local interconnections of a node with the immediate neighbors.

APPLICATION SCENARIOS

This section reports some application scenarios that could be of particular interest for network operators, all fitting into the reference network scheme depicted in Fig. 2.

One example is network portability: network services and functions could be initially deployed and tested using a certain network and cloud environment (e.g., in a given domain or country); in a second phase, all services and functions (i.e., network and server configurations, states, etc.) could be moved to another physical network and cloud environment (e.g., in another domain or another country), even by leasing processing and network physical resources from other local cloud and infrastructure providers.

Another example is network federation: different virtual networks (putting together virtual IT and communication resources) can be seamlessly federated, in spite of being geographically remote. This is the ideal scenario for an operator wishing to provide de-perimeterized services across different domains or countries.

Network partitioning is another possible scenario: a virtual network, providing certain services, can be seamlessly partitioned into smaller subnetworks to simplify administrative tasks. Maintenance can be performed on a subset of the infrastructure without causing any noticeable downtime in the provided services.
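The federation and partitioning scenarios above can be sketched as simple set operations on the functions a virtual network hosts. This is a minimal illustration under our own assumptions (the `VirtualNetwork` class and its methods are hypothetical, not an actual operator interface):

```python
# Toy model of virtual-network federation and partitioning
# (illustrative; class and method names are hypothetical).

class VirtualNetwork:
    def __init__(self, name, functions):
        self.name = name
        self.functions = set(functions)  # virtual functions it hosts

    def federate(self, other, name):
        """Federation: expose two geographically remote virtual
        networks as a single seamless one."""
        return VirtualNetwork(name, self.functions | other.functions)

    def partition(self, subset, name):
        """Partitioning: carve out a subnetwork, e.g., to perform
        maintenance without downtime for the remaining services."""
        assert subset <= self.functions
        self.functions -= subset
        return VirtualNetwork(name, subset)

eu = VirtualNetwork("edge-eu", {"vRouter", "vFirewall"})
us = VirtualNetwork("edge-us", {"vCDN"})
world = eu.federate(us, "edge-global")           # seamless federation
maint = world.partition({"vCDN"}, "edge-maint")  # isolate for maintenance
```

The model deliberately ignores state transfer and signaling, which are exactly the hard parts discussed in the rest of the article; it only shows how the three scenarios relate at the level of resource sets.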
From a longer-term perspective, this network transformation will create the conditions whereby users will literally "decide and drive" future ICT networks and services. This will have a big impact. This floating "fog" of ICT resources at the edge will give rise to new business models based on new forms of competition and cooperation between existing providers and new players entering the arena, including utilities, car manufacturers, consumer electronics companies, public administrations, communities, and so on.

A galaxy of new ecosystems will be created, rewarded directly by the market itself, which will be essential encouragement for further investments. We already see that the declining costs of computation, communication, and storage are moving the means of information and entertainment production from a limited number of companies to hundreds of millions of people around the planet.

Hence, ideally, at the edge it will be possible to create, program, instantiate, or migrate dynamically different types of virtual functionalities and services, as well as alternatives of the same. No more ossified architectures, but a sort of ephemeral (temporary) virtual network of resources capable of self-adapting elastically and flexibly to human dynamics.

Also, we should not forget the rise of large-scale cooperative efforts in the form of open source software and hardware development and production, which might soon create a further ripple in the telco-ICT vendor markets. These trends, in turn, will influence the network transformation itself by making open source software and hardware available for carrier-class equipment.

RELATED WORK AND OPEN CHALLENGES

SURVEY OF RELATED WORK

VM live migration should be essentially transparent to applications: in principle, this is already supported by most virtualization platforms for data centers [3].
For example, most virtualization environments support live migration, allowing administrators to move a running VM between physical hosts within a LAN (e.g., XenMotion and VMotion).

Figure 2. Future network scenario: multiple edge networks and users' resource networks interconnected by a stateless core network, with data centers and data analytics orchestrating virtual functions in edge networks and data centers.

However, when considering moving a VM across WANs, low bandwidth and high latencies over network connections may dramatically reduce the performance of the VM migration and, consequently, the QoS/quality of experience (QoE) of applications. Some commercial solutions were recently announced for WAN migration, but they are viable only under very constrained conditions (i.e., 622 Mb/s link bandwidth and less than 5 ms network delay).

The challenge of live migration of VMs across WANs was analyzed in [4], where the proposed solution (CloudNet) interconnected local networks of multiple data centers at layer 2 so that WAN-based cloud resources looked like local LAN resources, thus allowing LAN-based protocols to seamlessly operate across WAN sites. An overlay approach to create private groups of VMs across multiple grid computing sites was also investigated in [5], but it remains to be seen how this approach would scale for NFV in a carrier-class network.

A WAN migration system focusing on efficiently synchronizing disk state during migration was described in [6]. In this work, the Xen block driver is modified to support storage migration,
and VM disk accesses are limited when write requests occur faster than the network allows.

A solution for high fault tolerance using asynchronous VM replication was reported in [7]. This method implements a quick filter of clean pages and maps the entire physical memory of the guest domain to reduce the mapping overhead.

Finally, a scheme for VM migration in a federated cloud environment was presented in [8]. This solution is used to detect overloaded servers and automatically initiate migration to a new location in the cloud, thus eliminating hot spots and balancing the load considering CPU, memory, and network as a whole.

In summary, as the prior art in the literature and commercially available solutions still show insufficient performance for carrier-grade networks, we argue that the vision of an edge network made of clouds of VMs is presumably a viable solution, although there are still open challenges to be addressed by the research and development communities.

OPEN CHALLENGES

One of the main challenges is to make the migration of VMs as seamless as possible, without deteriorating the QoE. Such a migration can be seen from two different perspectives. The first relates to the migration of VMs running an application (e.g., video streaming, interactive multimedia gaming). Motivations to migrate a VM running an application could be intra- or inter-data-center load balancing (e.g., to avoid performance degradation due to hot spots), or following users moving to other network attachment points (e.g., for QoE optimization). The second considers the migration of VMs running a virtual middle-box function (fully implemented in software); this might be even more challenging, especially when the migration is executed while traffic is flowing.
Motivations could again be network load balancing, traffic engineering (avoiding performance degradation due to hot spots and congestion), energy consumption optimization, and so on.

Typically, moving a VM between two hosts involves the following steps:

1. Establish connectivity (e.g., layer 2 for intra-data-center operations) between the hosts.
2. Transfer the whole disk state.
3. Transfer the memory state of the VM to the target host as the source continues running without interruption.
4. Once the disk and most of the memory state have been transferred, freeze the VM execution for the final transfer of the remaining memory dirty pages and processor state to the target server.

Implementation and performance issues in successfully completing these actions are well understood for LANs, but not for WANs, where bandwidth constraints and latency still adversely affect steps 2 and 3, and IP address consistency is still an issue in step 4.

It is important to match the performance parameters (e.g., total migration time and total downtime) of moving a VM running applications characterized by a given dirty page rate with the network performance indicators (throughput, latency, etc.). For example, the total migration time is roughly given by the number of dirty pages (depending on the VM workload) expected to be sent during the whole migration process divided by the available connection bandwidth. Another challenge is presented by applications sensitive to the duration of the required pause or to state changes occurring during the live migration.

Furthermore, the migration of a batch of VMs is also very challenging. The scenario considered here envisages networks of intertwined VMs implementing IT and network resources. Besides this, VM live migration requires homogeneous virtualization solutions. A comprehensive analysis of the above technical problems is still missing and requires further investigation.
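The back-of-the-envelope relation between dirty page rate, bandwidth, and migration time described above can be sketched as an iterative pre-copy model: each round transfers the pages dirtied during the previous round, and the VM is frozen only when the residual dirty set is small enough. This is an illustrative model under simplifying assumptions (constant dirty rate and bandwidth), not tied to any specific hypervisor:

```python
# Back-of-the-envelope pre-copy live-migration model (illustrative;
# assumes a constant dirty-page rate and constant effective bandwidth).

def estimate_migration(mem_bytes, page_size, dirty_pages_per_s,
                       bw_bytes_per_s, max_downtime_s=0.3, max_rounds=30):
    """Return (total_migration_time_s, downtime_s) for iterative pre-copy."""
    dirty_bytes_per_s = dirty_pages_per_s * page_size
    total_time = 0.0
    to_send = mem_bytes  # round 1 transfers the whole RAM image
    for _ in range(max_rounds):
        t = to_send / bw_bytes_per_s
        total_time += t
        dirtied = dirty_bytes_per_s * t  # pages dirtied while sending
        if dirtied / bw_bytes_per_s <= max_downtime_s:
            downtime = dirtied / bw_bytes_per_s  # final stop-and-copy
            return total_time + downtime, downtime
        to_send = dirtied  # next round resends what was dirtied
    # non-converging workload (dirty rate ~ bandwidth): force stop-and-copy
    downtime = to_send / bw_bytes_per_s
    return total_time + downtime, downtime

# A 512-Mbyte VM dirtying 10,000 4-KB pages/s over a ~1 Gb/s link:
total, down = estimate_migration(mem_bytes=512 * 2**20, page_size=4096,
                                 dirty_pages_per_s=10_000,
                                 bw_bytes_per_s=125_000_000)
```

The model also makes the WAN problem visible: if the dirty-byte rate approaches the available bandwidth, the rounds stop shrinking and the loop never converges, so the downtime is bounded only by a forced stop-and-copy.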
Moreover, network operators will have to face the increased complexity of management and control, as well as the orchestration of the virtual functions and resources previously pre- sented. This will require, among other important functionalities, the exploitation of autonomic “local vs. global” capabilities, in order to create a sort of “network operating system.” Ultimately, an application layer with an open application programming interface (API) for programming the network at various levels will complete the vision. USE CASE AND EXPERIMENTAL RESI;TS This section discusses, through a practical exam- ple, the degree of feasibility of one of the possi- ble scenarios previously introduced. Based on this reference case, a testbed supporting the net- work function migration was developed. Experi- mental results show that the orchestration function may be successfully implemented by extending existing protocols (Session Initiation Protocol, SIP, in this case) spanning different logical layers and network functions, whereas the migration of VMs still represents a performance bottleneck for carrier grade operations, which require further investigation and engineering effort. CASE STUDY The basic scenario is a simple example of net- work portability and federation, as depicted in Fig. 3. A user is watching a video available on a video server, through both a physical (wireless or wired network access) and a virtual network infrastructure (virtual router). The former is responsible for the pure connectivity between the user and the data center where the service is hosted; the latter is responsible for the addition- al service profiling that may be required by the specific application, that is, bandwidth reserva- tion, traffic shaping and/or isolation, and so on. It is assumed that the network operator, at a certain point, considers it more efficient to migrate the network service, state, and virtual infrastructure to a different data center. 
This could be motivated by many factors: for example, initially the user could be watching the video on the move using mobile access, while at a later time, he/she reaches home and connects to the fixed broadband access network. In this case, the operator decides to move the virtual resources, which were hosted in a data center serving the mobile network, to another data center optimally connected to the fixed user access network. This will allow, for example, the user to take advantage of the larger bandwidth now available and watch the video stream at a higher definition.

The main actions necessary to perform this migration are depicted in Fig. 3. Basically, what is needed is a kind of orchestration of the migration procedure, involving the whole set of VMs that compose the virtual network infrastructure and must be moved from one data center to another over the WAN, without affecting the end-user experience. In other words, we expect the migration to be completely transparent to the user, who should keep enjoying the same video stream from the same video server at the same network address.

Such orchestration requires a signaling platform able to carry cross-layer information, which is used for coordinating all of the tasks to be performed. An example of this orchestration was recently reported in [9], where a video server is migrated while reconfiguring the underlying network in order to keep the bandwidth reservation sufficient for good customer QoE. The use case reported here exploits the same signaling, but with the additional complexity of orchestrating the migration of a whole set of VMs and the related network state. Our use case focuses on both network and IT virtualization. Nonetheless, SDN could be part of this scheme: for instance, in the case of a more complex physical network topology, SDN can be the key component for the orchestration of network resources, which must be properly reconfigured to successfully complete the migration.

EXPERIMENTAL SETUP

The experimental testbed for the proof of concept was deliberately implemented using off-the-shelf technologies.
The aim was to understand the weak points, if any, and the areas where ad hoc development and engineering would be required to meet carrier-grade standards in the overall operation.

The signaling platform for service management and orchestration is implemented using SIP, just as an example (for more details see [10]). In summary, taking advantage of the SIP session management capability, it is possible to create and maintain the network state necessary to guarantee full consistency during the migration, while the body of SIP messages carries the information used to specify what has to be migrated, to where, and when. The signaling scheme is supported by a SIP proxy implemented with the OpenSIPS platform, with minor add-ons to allow the correct stripping of the message body with the additional information. The signaling terminals at the user and network operator sides consist of SIP user agents implemented as web applications using the PHP language.

The hosting infrastructure is implemented by two multicore servers equipped with a Linux CentOS distribution running VirtualBox as the VM hypervisor. The VMs used in the experiment include two virtual single-core Linux boxes, one acting as the video server, the other as the access router connecting the user to the server. In order to keep the migration latency as small as possible, the two VMs were dimensioned with the minimum amount of memory (512 Mbytes for both) and disk space (7.2 Gbytes for the video server, 1.3 Gbytes for the access router) needed to perform their functions. Live migration of the whole network infrastructure is performed through the VM teleporting function natively available in VirtualBox.

Figure 3. Schematic example of the service and network migration experiment: the virtual infrastructure (video server and access router VMs) is migrated from one host to another across the WAN as the user moves from mobile to fixed broadband access.

The two hosting servers emulating two remote data centers are connected by an ad hoc link between interfaces that are separate from those used for communicating with the user. This setup emulates the WAN interconnection between data centers. In the experiment reported here, this link is implemented with a Gigabit Ethernet (1000BASE-T) point-to-point interconnection with negligible propagation delay. This choice again is motivated by the fact that the goal of this preliminary experiment is to prove that the concept is feasible with reference to the overall system architecture. A detailed investigation of the role of the WAN parameters in the performance is left for further study.

In the experiment, the signaling platform executes the full live migration of network and IT resources, in the sense that both the video server and access router VMs are migrated together with the corresponding virtual network. Once the migration is completed, the same signaling platform triggers a live reconfiguration of the stream from low quality (LQ) to high quality (HQ), since the user is now connected to the video source with broader bandwidth.

EXPERIMENTAL RESULTS

Figure 4 reports the video throughput (i.e., the measured video stream bit rate) as seen by the final user during the whole experiment. The blue solid line represents the variable bit rate (VBR) LQ video stream, which fluctuates around 1 Mb/s, whereas the red dotted line represents the VBR HQ stream, with an average bit rate between 2 and 3 Mb/s, although some peaks reach 6 Mb/s. The inset in Fig. 4 shows a zoomed-in view of the interval when the live migration occurs.
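The sequence executed by the signaling platform — migrate the video server VM, then the access router VM, then switch the stream from LQ to HQ — can be sketched as an ordered plan in which each step must confirm before the next one starts. This is illustrative pseudologic with hypothetical names; the actual testbed drives these steps through SIP signaling rather than direct function calls:

```python
# Illustrative orchestration plan for the migration use case
# (hypothetical names; the testbed drives these steps via SIP signaling).

def orchestrate_migration(migrate_vm, reconfigure_stream):
    """Run the migration steps in order; each step must confirm before
    the next starts, so the user-visible state stays consistent."""
    log = []
    for vm in ("video-server", "access-router"):  # ensemble, in order
        if not migrate_vm(vm):                    # live-migrate over the WAN
            raise RuntimeError(f"migration of {vm} failed, aborting")
        log.append(f"migrated {vm}")
    reconfigure_stream("HQ")                      # broader bandwidth now
    log.append("stream switched to HQ")
    return log

# Stub callbacks standing in for hypervisor and streaming control:
events = orchestrate_migration(lambda vm: True, lambda quality: None)
```

Keeping the steps strictly ordered is what produces the outage pattern measured below: connectivity is lost between the end of the server migration and the end of the router migration, because the two VMs are briefly on different hosts.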
The vertical arrows indicate the time instants when the live migration of the video server starts (T1) and ends (T2), when the live migration of the router starts (T3) and ends (T4), and when the stream switches from LQ to HQ (T5). When the migration starts at T1, the user receives the LQ stream from the video server VM instance still running at the source host. However, as soon as the video server migration is complete at T2, the LQ stream is interrupted, because there is no connectivity between the access router still running on the source host and the video server now running on the destination host. The interruption lasts for about 5 s, which is the time needed in our testbed to complete the migration of the access router VM (from T3 to T4) and restore the connectivity between the user and the video server. Then the LQ stream resumes, as shown by the blue solid curve rising again after T4. After a few seconds, at T5, the stream switches from LQ to HQ, as shown by the end of the blue solid curve as it is replaced by the red dotted one.

Figure 4. The video data flow as seen by the user terminal: video stream bit rate (b/s) versus time (s), showing the LQ and HQ streams and the migration instants T1-T5.

Figure 5. Packet capture of a continuous ping session from the video server to the user terminal during live VM migration.

The system outage caused by the live migration in another experiment is reported in detail in Fig. 5, which shows the packet capture of a ping session between the video server and the user terminal, where ICMP ECHO messages are sent every 100 ms. This capture shows that the packets with sequence numbers from 216 to 245 are missing. These 30 packets are lost during the VM migration, which accounts for a network outage of about 3 s. It is worth noting that, apart from this, the ping packet flow is exactly the same before and after
the migration, showing that IP addresses and network state are kept unchanged.

Finally, Fig. 6 shows a capture of the SIP signaling that triggers the VM migration. The capture shows the dialog between the SIP user agent at the source (UDP port 50601, controlled by either the end user or the network/service manager), the SIP proxy (UDP port 5060), and the SIP user agent at the destination (UDP port 5060).

CONCLUSIONS AND OPEN CHALLENGES FOR TELCO OPERATORS

In this article, it is argued that future network infrastructures will be made up of a huge number of virtual resources (compute, storage, and network I/O) being controlled dynamically based on users' demands, QoS, and business objectives, as well as any other changing conditions. In particular, standard hardware advances and emerging paradigms, such as SDN and NFV, will enable this remarkable disruption at the edge of current networks. Virtualized network and service functions, supported today by expensive middle-boxes, will run at the edge of the network, as close as possible to the users.

The amazing increase in smart nodes and devices at the edges will globally make available enough processing power, data storage capacity, and communications bandwidth to provide several services with local edge resources. Actually, we are already witnessing this evolution if we consider the current shift of value from the network to the terminals. Services are more and more often provided at the edge of the network, and this trend will continue in the future. This transformation will turn the edge into a business arena composed of a multiplicity of interacting subdomains, operated by diverse (both private and public) players and user communities.
Network operators should closely follow up on this invaluable transformation and evolve their business models accordingly: pursuing the traditional approach, adopting walled gardens and being conservative, will be detrimental to their business in the long run, limiting their offering to mere pipe connectivity. On the other hand, enabling open virtual environments at the edge will offer several business opportunities: in fact, future services and data will be broadly delivered through multiple devices, machines, and objects, mostly by using local resources.

Network operators could also play the role of infrastructure providers of edge networks, in conjunction with public administrations or other players willing to cooperate. This role is to help reduce complexity and act as an "anchor" around which to organize complex and dynamic edge systems.

In this scenario, operations normally carried out in data centers, such as the allocation, migration, and cloning of virtual resources and functions, could be advantageously applied at network edges. This means that the techniques and management tools commonly used today in data centers, properly enhanced, should be applicable at the edge as well. Nevertheless, this implies the need to overcome several technical challenges, among them the seamless allocation and migration of VMs across multiple distributed servers.

In this sense, it will also be possible to overcome routing processing limitations by optimizing the use of the huge amount of computing and storage power available today in large data centers and accessible tomorrow at the edges of networks. Software router architectures, for example, may be capable of parallelizing routing functionality across multiple servers.

In principle, this would change the (economic) equation of the network: overprovisioning connectivity rather than just overprovisioning

Figure 6. SIP signaling sequence triggering the VM migration.
The source SIP user agent is at, the SIP proxy is at, and the destination SIP user agent is at.
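To make such an exchange concrete, the sketch below composes the kind of SIP request that could carry a migration trigger between these parties. It is a minimal illustration under stated assumptions: the MESSAGE method, the body syntax, the helper name, and the example IP addresses (RFC 5737 documentation range) are hypothetical and are not the signaling actually captured in Fig. 6; only the UDP ports (50601 for the source user agent, 5060 for the proxy) are taken from the text above.

```python
# Hypothetical sketch of a SIP-carried VM-migration trigger. Header
# layout follows RFC 3261 (MESSAGE method from RFC 3428), but the body
# syntax, helper name, and example addresses are illustrative assumptions,
# NOT the actual signaling used in the authors' testbed.

def build_migration_trigger(source_ua, proxy, dest_ua, vm_id):
    """Compose a SIP MESSAGE whose body requests live migration of vm_id."""
    body = f"action=migrate;vm={vm_id};dest={dest_ua}"
    lines = [
        f"MESSAGE sip:migrator@{dest_ua} SIP/2.0",
        # The source user agent listens on UDP port 50601, as in the capture
        f"Via: SIP/2.0/UDP {source_ua}:50601;branch=z9hG4bK776md",
        # Route the request through the SIP proxy (UDP port 5060)
        f"Route: <sip:{proxy}:5060;lr>",
        f"From: <sip:controller@{source_ua}>;tag=49583",
        f"To: <sip:migrator@{dest_ua}>",
        "Call-ID: a84b4c76e66710",
        "CSeq: 1 MESSAGE",
        "Content-Type: text/plain",
        f"Content-Length: {len(body)}",
        "",
        body,
    ]
    return "\r\n".join(lines)

# Example with documentation-range addresses (purely illustrative):
msg = build_migration_trigger("", "", "", "videoserver-vm")
```

In a real deployment the proxy would forward this request to the destination user agent, which would start the live migration and answer with a SIP response, mirroring the dialog shown in Fig. 6.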
bandwidth (capacity). The former pays off better than the latter: it would become possible to create a very large number of topologies from which to choose, even almost randomly, or to program and control QoS at higher levels. As of today, overprovisioning connectivity in a network is more expensive than overprovisioning capacity, but tomorrow this equation may change.

The feasibility of this vision was demonstrated using an ad hoc testbed, under some practical application scenarios, taking advantage of existing technologies to implement the session-based signaling platform required to maintain the network state while migrating the virtual resources. The experimental results attained using this proof of concept showed that future edge networks made of clouds of VMs (running virtualized network functions and services) are potentially feasible, provided the performance limitations imposed by current technology are overcome.

It is left for further study how data analytics may enable a global autonomic loop (complementing local actions) for orchestrating clouds of VMs at the edges and in the data centers.

BIOGRAPHIES

ANTONIO MANZALINI received an M.Sc. degree in electronic engineering from the Politecnico of Turin. In 1990 he joined CSELT, which then became Telecom Italia Lab, where he started research and development activities on technologies and architectures for future optical networks. In this R&D area, he was actively involved in leading positions in several EURESCOM and EC-funded projects (e.g., MWTN, LION, NOBEL, CASCADAS). He chaired two ITU-T Questions on transport networks. He is the author of a book on network synchronization (for SDH), and his R&D results are published in more than 80 papers. He holds five patents on networks and systems. He has been a member of technical and program committees of several IEEE conferences.
He currently works in the Strategy Department of Telecom Italia, addressing R&D activities mainly concerning technologies and solutions for future telco-ICT networks and services (e.g., software-defined networks, network function virtualization, and autonomic/cognitive self-networks).

ROBERTO MINERVA received a Laurea in computer science cum laude from the University of Bari in 1987. He works in Telecom Italia's Future Center, where he leads the Innovative Architecture Group. He is also a contract professor at the Politecnico of Turin, where he teaches a course on mobile services. He has worked in industrial research for more than 25 years, dealing with topics such as network intelligence, SIP, service architectures, and next-generation networks. His current research topics include edge networks, peer-to-peer systems, autonomic and cognitive systems, personal data, and the Internet of Things. He has served on the TPCs of many international conferences, and has published more than 40 papers in peer-reviewed international conferences.

FRANCO CALLEGATI [M'98, SM'11] is an associate professor of telecommunication networks at the University of Bologna, Italy. His research interests are in the field of teletraffic modeling and performance evaluation of telecommunication networks. He has well-established research expertise in optical networking, optical packet and burst switching, service-oriented networks, autonomic networks, and network security. He has been active in EU-funded research projects since FP4, where he has led activities and participated in various steering committees.

ALDO CAMPI holds a post-doctoral position at the Center for Industrial Research on Information and Communication Technologies of the University of Bologna. In 2007 he spent 10 months at the University of Essex, United Kingdom, as a visiting researcher working on application-aware networking.
His research interests include optical networks, scheduling algorithms, SIP, grid networking, service-oriented networks, NGN architectures, and network infrastructures for cloud computing.

WALTER CERRONI [M'01] is an assistant professor of telecommunication networks at the University of Bologna. Previously, he was a research associate at the Italian National Inter-University Consortium for Telecommunications (CNIT). In 2008 he was a visiting assistant professor at the School of Information Sciences, University of Pittsburgh, Pennsylvania. His research interests include architectures and performance of dynamic optical networks, next-generation cognitive and programmable networks, software-defined networking, and network security.