Now you’ve got to think about:
- different resource needs (e.g. CPU- vs. memory-bound)
- different SLAs (fast vs. slow)
- different load (request rate)
- different reliability requirements
- different clients (e.g. end users vs. API clients)
(http://203.softover.com/2015/03/16/services-declaration-of-independence/)
We’ve been working on a project called XIFI since 2013, which is becoming a capability in its own right this year.
This was part of the FI-PPP (Future Internet Public-Private Partnership). The pre-production service is called FIWARE Lab, and there are announcements of production services:
http://atos.net/en-us/home/we-are/news/press-release/2015/pr-2015_03_03_01.html
Service Providers (e.g., NRENs) offering network connectivity between the XIFI DCs need to protect their operational networks from external applications. As a consequence, the XIFI network controller is not allowed to directly control the NRENs’ network resources. Therefore, specific switching resources allocated at the DCs will be deployed as part of the XIFI network, acting as demarcation points for XIFI connectivity services. They will be fully managed by the XIFI network controller by means of OpenFlow. These OpenFlow-based XIFI switches will connect to the NRENs’ infrastructure, enabling E2E inter-DC connectivity services.

In order to separate and isolate the different traffic flows of end users, the connectivity service provides overlay networks on top of the physical infrastructure among the XIFI demarcation points (Figure 1). Since the connectivity between the NRENs is IP-based, a tunneling mechanism is needed to build such overlay networks. XIFI proposes two potential solutions, based either on layer 2 (e.g., VLAN, Q-in-Q) or layer 3 (e.g., IP-in-IP, GRE) tunnels. The final choice will depend on the capabilities offered by the NREN providing connectivity to the DC, and on the functionality supported by the OpenFlow version used. For instance, the XIFI switch could deliver VLAN-tagged frames to the NREN access router, which encapsulates the traffic in a GRE tunnel with a remote DC as its destination.

Thus, the overall interconnection topology among DCs can be built as a “mesh of tunnels” among the involved sites, so that different paths can be selected by the centralized XIFI network controller to accommodate the end-user flows according to their needs.
http://ieeexplore.ieee.org/xpls/icp.jsp?arnumber=6702561
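The “mesh of tunnels” topology is easy to reason about: a full mesh over n demarcation points needs n·(n−1)/2 tunnels, one per pair of sites. A minimal sketch of how a controller might enumerate those overlay links (site names and addresses below are invented for illustration, not the real XIFI deployment):

```python
from itertools import combinations

# Hypothetical DC demarcation points and their tunnel endpoint addresses.
sites = {
    "dc-a": "198.51.100.1",
    "dc-b": "198.51.100.2",
    "dc-c": "198.51.100.3",
    "dc-d": "198.51.100.4",
}

def mesh_of_tunnels(endpoints):
    """Enumerate one overlay tunnel (e.g. GRE) per pair of sites."""
    return [
        {"name": f"gre-{a}-{b}", "local": endpoints[a], "remote": endpoints[b]}
        for a, b in combinations(sorted(endpoints), 2)
    ]

tunnels = mesh_of_tunnels(sites)
print(len(tunnels))  # 4 sites -> 4*3/2 = 6 tunnels
```

With the full mesh in place, the controller’s job reduces to picking which tunnel (or sequence of tunnels) carries each end-user flow.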
Coordinated by a number of national research and education networks (NRENs) and GEANT, the GN3plus SA3T3 MD-VPN is a multi-domain virtual private network built from a combination of federated layer 2 and layer 3 VPN technologies.
In TSSG’s case, our MD-VPN access is provided by HEAnet services and TSSG site equipment. This allows our XIFI infrastructure to talk directly to other XIFI sites over this private network – at present primarily for the exchange of management traffic. The MD-VPN also allows us, as a project, to define our own shared IP space.
This is based on a combination of MPLS (Multiprotocol Label Switching) and BGP (Border Gateway Protocol). Not as sexy as OpenFlow, but software-defined networking technology all the same – and a scalable solution available on almost all boxes right now.
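To see why this lets a project define its own shared IP space: in BGP/MPLS VPNs (RFC 4364), every route is keyed by a route distinguisher (RD) plus a prefix, so the same private prefix can live in different VPNs without clashing. A toy sketch of that idea (the RD values and router names below are invented):

```python
# Routes are keyed by (route distinguisher, prefix), so overlapping
# address space across different VPNs stays distinct.
vpn_routes = {}

def advertise(rd, prefix, next_hop):
    """Install a VPN route; the RD keeps overlapping prefixes apart."""
    vpn_routes[(rd, prefix)] = next_hop

# Two VPNs reuse 10.0.0.0/24 without conflict:
advertise("65001:100", "10.0.0.0/24", "pe-router-1")  # hypothetical XIFI MD-VPN RD
advertise("65001:200", "10.0.0.0/24", "pe-router-2")  # some other customer VPN

print(len(vpn_routes))  # 2 distinct routes for the same prefix
```

In the real network the PE routers do this with MPLS labels and BGP extended communities, but the keying principle is the same.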
WIT hosts the XIFI Ireland node. WIT is also part of FINESCE and is helping the Phase III Accelerators.
XIFI is providing us with a number of SDN related opportunities:
We’ve got two Pica8 SDN switches as part of our core node deployment (http://www.pica8.com/open-switching/1gbe-10gbe-40gbe-open-switches.php). We’ve also participated in an SDN use case with HEAnet.
While a lot of the traffic for managing XIFI goes over the public internet, it also has a private network running over a multi-domain VPN (next slide).