My Ph.D. Defense - Software-Defined Systems for Network-Aware Service Composition and Workflow Placement
Software-Defined Systems for
Network-Aware Service Composition and Workflow Placement
Supervisors: Prof. Luís Veiga
Prof. Peter Van Roy
Good afternoon everyone. I am Pradeeban Kathiravelu. Today, I am presenting my Ph.D. thesis on “Software-Defined Systems for Network-Aware Service Composition and Workflow Placement.”
Service providers and tenants in the cloud ecosystem.
– Challenges in interoperability and control.
Network Softwarization: Management, Control, & reusability.
Network Softwarization typically focuses on a single provider.
Network-awareness for multi-domain workflows.
The cloud ecosystem consists of several service providers.
Tenants, the third-party end users, consume these services
rather than hosting and managing their own services on-
premise. However, these providers lack interoperability
among themselves. Furthermore, they provide limited control
and flexibility to the tenants. These factors prevent the tenants from efficiently composing a workflow spanning multiple providers.
Network softwarization makes networks programmable through
software constructs, by separating the networks into network
infrastructure, network control, and network services. Network
Softwarization aims to resolve several challenges in network
management, control, and reusability.
However, network softwarization typically limits its focus to a
single domain, a network managed by a single provider. A
tenant workflow placement across multiple providers requires
network-awareness beyond a single domain.
Network Functions Virtualization (NFV)
– Network middleboxes → Virtual Network Functions (VNFs)
Software-Defined Systems (SDS)
– Storage, Security, Data center, ..
– Improved configurability
Software-Defined Networking and Network Functions Virtualization are two core enablers of network softwarization.
SDN unifies the control of the network devices into a logically
centralized controller. The controller has a global view and
control of the data plane devices such as network switches
and routers. The controller is typically developed in a high-level programming language such as Java or Python. Therefore, SDN supports
efficient management of the networks.
NFV, on the other hand, turns network middleboxes such as load balancers and firewalls into virtual network functions and lets the users host them on servers.
While SDN limits its focus to networks, Software-Defined Systems, or SDS, expand the scope to various aspects such as storage, security, and data centers. SDS either extend SDN or follow an approach inspired by SDN. SDS improve the configurability of the environment by separating the mechanisms from the policies.
Enhanced control for tenants in service workflow placements.
– Tenant Policies and Service Level Objectives (SLOs)
Address workflow challenges: technical, economic, and policy
We need to bring the control of the service workflow
executions back to the tenant user, despite sharing
the infrastructure with several tenants. The tenants
have their policies and Service Level Objectives
for their workflows. The workflows should satisfy
these user-defined policies and ensure quality of
service to the tenant users.
We aim to address the technical, economic, and policy challenges in efficiently composing and placing tenant workflows beyond the data center networks.
Service Composition and Workflow Placement
The goal of this thesis is to facilitate network-aware
service composition and workflow placement on
environments of a varying scale: from intra-domain
networks, multi-domain networks, and edge
environments, to the Internet.
Our contributions are at the intersection of network
softwarization, service-oriented architecture, and
big data. We propose to make the wide area multi-
domain networks programmable, by extending
SDN with SOA.
We identify a set of research questions and build
software-defined systems to address them. Next,
we look into our research questions individually.
Q1: Execution Migration Across Environments
Can we efficiently deploy, scale, and migrate network applications across development and deployment stages?
(CoopIS’16, SDS’15, and IC2E’16)
First, we look into the potential for an efficient deployment and migration of network applications and architectures across multiple execution environments, by extending network softwarization.
We aim for seamless deployment and scaling of networks
across the development stages such as simulations and
emulations, and various deployment environments in a
cluster or a data center.
Q2: Economic & Performance Benefits
Can network softwarization offer economic and performance benefits to the end users?
Data center → Inter-cloud
(Networking’18 and IM’17)
Second, we look into how such a network softwarization can offer economic and performance benefits to the end users, from data centers to inter-cloud environments.
Q3: Service Chain Placement
Can we efficiently use service instances of multiple edge and cloud providers to compose tenant workflows, by federating the SDN deployments of the providers, using SOA?
(ETT’18, ICWS’16, and SDS’16)
Third, we look into the potential to compose tenant
workflows from service instances of multiple edge
and cloud providers.
We federate the SDN deployments with SOA to compose tenant workflows spanning several networks, and place them in multi-domain edge environments.
Q4: Interoperability of Multi-Domain Service Workflows
Can we enhance the interoperability of the service workflows?
Data center → Multi-domain and edge environments
(CLUSTER’18, DAPD’19, SDS’17, and CoopIS’15)
Fourth, we look into the interoperability of the network
application executions from the scale of data
centers to multi-domain and edge environments.
Can we improve the communication and coordination
across the diverse distributed applications by
exploiting network softwarization and SOA, and
consequently, enhance their interoperability?
Q5: Application to Big Data
Can we improve the performance, modularity, and reusability of big data applications?
Data center → Internet
(CCPE’19 and SDS’18)
Finally, we look into how our contributions apply to big data processing, from data centers to the Internet.
Specifically, we seek to improve the performance,
modularity, and reusability of big data applications
by leveraging network softwarization and SOA.
Q1: Seamless Development & Deployment of cloud networks
Q2: Economic & Performance Benefits:
Q3: Service Chain Placement:
Q4: Interoperability of multi-domain service workflows
Q5: Application to Big Data
Cloud-Assisted Networks as an
Alternative Connectivity Provider.
Network Service Chain Orchestration at the Edge.
Our contributions address these 5 research questions. Today we look into the 2 core contributions among them. First, the economic and performance benefits of cloud-assisted networks. Second, service chain placement at the edge.
I) Cloud-Assisted Networks as an
Alternative Connectivity Provider
Kathiravelu, P., Chiesa, M., Marcos, P., Canini, M., Veiga, L.
Moving Bits with a Fleet of Shared Virtual Routers.
In IFIP Networking 2018. May 2018. pp. 370 – 378.
Now we discuss our first contribution: Cloud-Assisted Networks as an alternative connectivity provider.
Increasing demand for bandwidth.
Decreasing bandwidth prices.
Pricing Disparity. E.g., IP Transit price per Mbps, 2014:
– USA: $0.94
– Kazakhstan: $15
– Uzbekistan: $347
What about latency?
The demand for bandwidth keeps increasing. At the same
time, bandwidth pricing keeps decreasing.
Although this is a promising trend, there is still a significant
pricing disparity between geographical regions. For
example, consider the IP transit price per Mbps. As of 2014,
it was less than a dollar in the USA, 15 dollars in
Kazakhstan and 347 dollars in Uzbekistan.
Such a disparity is not limited to the cost. The developing
Internet regions rely on long-haul Internet links to the major
Internet hubs to connect with the other regions.
Consequently, the developing Internet regions also suffer
from high latency. This state of affairs makes them inefficient
for latency-sensitive web applications such as online
gaming, high-frequency trading, and remote surgery.
Dedicated connectivity* of the cloud providers.
– Increasing geographical presence.
– Well-provisioned network → Low latency network links.
– Can a network overlay built over cloud instances be a
better connectivity provider?
* James Hamilton, VP, AWS (AWS re:invent 2016).
Major cloud providers such as Amazon web services, typically
manage their own global backbone network. Hence, they
avoid having to route their cloud traffic, including those
between their regions, through the public Internet. Their
geographical presence keeps increasing with the number of regions, availability zones, and points of presence. By using
their own well-provisioned network exclusively, each major
cloud provider manages to offer low-latency network links
among their VMs.
Cloud-Assisted Networks refer to overlay networks that are
built on top of the cloud VMs. We ask, whether an overlay
network of cloud VMs can operate as a better connectivity
provider. Specifically, such a provider should provide better
performance and cost-effectiveness than the current
connectivity providers, such as the Internet service
providers, enterprise MPLS networks, and transit providers.
Our Proposal: NetUber
• A Cloud-Assisted Network as a third-party virtual
connectivity provider with no fixed infrastructure.
– Better network paths compared to the Internet.
We propose NetUber, a cloud-assisted overlay network that
functions as a third-party virtual connectivity provider, with
no fixed infrastructure. NetUber aims for a better control
over the network path, compared to the Internet paths.
Each VM in a cloud-assisted network such as NetUber functions as a virtual router and routes the network traffic of
the end users among each other. A cloud user can build
such a cloud-assisted network on top of VMs of multiple
cloud providers, and offer it as an alternative connectivity
option for the end users. The end user can then use
NetUber to efficiently send data between their origin server
and the destination server.
Each cloud region of NetUber consists of at least a broker
instance. Based on the bandwidth demand from the
NetUber end users and the current instance pricing, the
brokers scale the NetUber overlay by purchasing more
instances in their respective regions. Hence, the brokers ensure that the region has sufficient VMs for the data transfers.
NetUber Application Scenarios
• Cheaper data transfers between two endpoints.
• Higher throughput and lower latency.
• Network services.
• Alternative to Software-as-a-Service replication.
We identify several application scenarios for NetUber,
in addition to cheaper, yet high throughput and low
latency data transfers between two endpoints.
NetUber could deploy network services such as compression and encryption on its cloud VMs. NetUber can then optionally execute these network services on its network flows, to improve the data transfer efficiency or as an on-demand value-added service.
NetUber also provides an alternative to Software-as-a-Service replication. We will look into that next.
NetUber Inter-Cloud Architecture
• Deploy SaaS applications in one or a few regions.
– Fast access from more regions with NetUber.
As an inter-cloud architecture, NetUber builds its overlay on top of
VMs from multiple cloud providers.
This architecture enables a better alternative to typical Software-as-
a-Service replications across multiple cloud regions. NetUber lets
us deploy the cloud applications on one or a few regions and then
access them from the other regions via its cloud overlay. This
approach avoids the need for the cloud user to replicate and
manage their service instances in multiple regions, while still
offering low-latency to their end users.
Cloud providers have a few overlapping regions, and a few regions
are covered by just one provider. The inter-cloud architecture
enables low-latency access to all the cloud regions of the
underlying cloud infrastructures. For example, Amazon Web Services, AWS, has a presence in the Ohio and London regions, whereas Google Cloud Platform, GCP, has a presence in London and
Belgium. So we can build a low-latency overlay network spanning
regions Ohio, London, and Belgium on top of AWS and GCP, by a
direct interconnection between the VMs of both cloud providers in
London. Thus, NetUber enables low-latency network connectivity
between Ohio and Belgium, which would be impossible with just
one of the cloud providers. Consequently, NetUber offers the end
users low-latency access to more regions.
Monetary Costs to Operate NetUber
A. Cost of Cloud VMs (per second).
– Spot instances: volatile, but up to 90% savings.
B. Cost of Bandwidth (per transferred data volume).
C. Cost to connect to the cloud provider (per port-hour).
NetUber is a third-party service not affiliated with any cloud provider.
Therefore, we must consider the operational costs of the overlay, paid to
the cloud provider.
A: the cost of cloud VMs. Cloud providers charge the users per second for
their VM usage. This amount is still high. Therefore, NetUber uses spot
instances. Spot instances are volatile, but are otherwise identical to the
regular on-demand cloud instances. Using spot instances saves up to 90%
of the cost to acquire the cloud instances. The AWS EC2 spot instances
have fluctuating pricing with different prices across the availability zones, of
any given region. Availability zones are physically separated cloud data
centers from the same region that are connected by low-latency links. We
cannot predict the AWS Spot instance pricing over time. NetUber acquires
spot instances from the cheapest availability zone of each region at the
moment and maintains the cheap ones over time.
B: the cost of bandwidth. Cloud providers charge the bandwidth use per
transferred data volume. This is very high, and unfortunately, there is no
cheaper alternative similar to the spot instances.
C: the cost to connect the end user’s on-premise server to the cloud. Typically
the end user pays to the cloud provider directly to connect their on-premise
servers to the cloud via a Direct connect. The cloud providers charge the
end user per port-hour for the Direct Connect – for example, how many
hours a 10 Gbps Ethernet port is used.
The NetUber end user must incur lower total cost compared to what they
spend for their existing connectivity providers, with better performance, to
consider NetUber economically viable.
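The three cost components above can be sketched as a simple additive model. This is an illustrative sketch only: the prices and usage figures below are hypothetical placeholders, not actual AWS rates or the values used in the evaluation.

```python
def monthly_cost(vm_hours, spot_price_per_hour,
                 egress_tb, price_per_gb,
                 port_hours, port_price_per_hour):
    """Sum the three operational cost components of a NetUber deployment.

    A: spot VM usage, B: bandwidth egress, C: Direct Connect port.
    """
    vm_cost = vm_hours * spot_price_per_hour          # A: cost of cloud VMs
    bandwidth_cost = egress_tb * 1024 * price_per_gb  # B: cost of bandwidth
    connect_cost = port_hours * port_price_per_hour   # C: cost to connect
    return vm_cost + bandwidth_cost + connect_cost

# Hypothetical example: one VM for a 730-hour month, 10 TB of egress,
# and one Direct Connect port.
cost = monthly_cost(vm_hours=730, spot_price_per_hour=0.64,
                    egress_tb=10, price_per_gb=0.09,
                    port_hours=730, port_price_per_hour=2.25)
```

The deployment is economically viable only when this total stays below what the end user would pay their existing connectivity provider for the same traffic.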
• NetUber prototype with AWS r4.8xlarge spot instances.
• Cheaper point-to-point connectivity.
Better throughput and reduced latency & jitter.
– Origin: RIPE Atlas Probes and our distributed servers.
– Destination: VMs of multiple AWS regions.
Network Services: Compression
We evaluate the cost and performance of NetUber. We use
AWS as the cloud provider for our evaluations. We use
r4.8xlarge memory-optimized spot instances, each with a 10 Gbps network interface, to build our NetUber cloud overlay.
We benchmark NetUber against 2 enterprise connectivity
providers for its cost-effectiveness.
We then benchmark NetUber against ISPs for latency,
throughput, and jitter. We send data from RIPE Atlas
Probes and our distributed servers, towards the AWS spot
instances from multiple regions. The RIPE Atlas gives us
access to physical nodes across the Internet.
We also evaluate the potential for network services – specifically, data compression.
1) Cheaper Point-to-Point Connectivity
• Cost for 10 Gbps flat connectivity: from EU & USA.
– Cheaper for data transfers <50 TB/month.
First, we benchmark the cost for 10 Gbps flat connectivity for
data transfers from the EU and the USA.
We benchmark NetUber regular deployment, and a
deployment with a 75% compression on data transfers,
against two connectivity providers.
Provider 1 uses an overlay on its large global infrastructure to provide connectivity: a basic connectivity option, and a more expensive premium one that provides faster Internet routes by interconnecting with premium networks. Provider 2 is a transit provider.
We observe that NetUber is cheaper for data transfers up to
50 terabytes per month, compared to the considered 2
connectivity providers for the same regions. With data
compression on the network flows, NetUber can remain
cheaper for larger volumes of data.
2) Low Latency with Cloud Routes
• NetUber data transfer A → Z via the path A → B → Z.
– Cloud region B is closer to the origin server A.
– B and Z are cloud VMs connected by NetUber overlay.
Next, we benchmark the latency of NetUber against
the ISP-based Internet paths for data transfer
between two endpoints. NetUber relies on the
nearest cloud region to route its traffic through. In
this sample scenario, cloud region B is closer to
A, and B is connected to the cloud region Z by the
Netuber overlay. Here when we send data from A
to Z using NetUber, we send the data from A to
the cloud region B first via ISP, and then from B to
Z using the NetUber cloud overlay. We compare
the latency of this NetUber data transfer against
sending data from A to Z directly using the public
Internet, with the ISP network connectivity.
In this example, we have Vladivostok as the origin, and São Paulo as the destination region. Tokyo is the nearest cloud region to the origin.
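The routing decision in this example amounts to picking the cloud entry region that minimizes the total path latency. A minimal sketch; the latency figures and region names below are hypothetical, chosen only to loosely mirror the Vladivostok → Tokyo → São Paulo example.

```python
# origin -> cloud region over the ISP (ms), and
# cloud region -> destination over the NetUber overlay (ms).
isp_to_region = {"tokyo": 30, "n_virginia": 160}
overlay_to_destination = {"tokyo": 130, "n_virginia": 120}

def best_entry_region(to_region, via_overlay):
    """Pick the entry region minimizing origin -> region -> destination latency."""
    return min(to_region, key=lambda r: to_region[r] + via_overlay[r])

entry = best_entry_region(isp_to_region, overlay_to_destination)
# tokyo: 30 + 130 = 160 ms; n_virginia: 160 + 120 = 280 ms -> "tokyo"
```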
Ping times – ISP vs. NetUber
(via region, % Improvement)
• NetUber cuts Internet latencies by up to 30%.
• Direct Connect would make NetUber even faster.
We evaluated the latency of data transfers between
several pairs of origin and destination. This table
lists the ping time latencies via ISP-based public
Internet, and NetUber – together with the cloud
region which NetUber uses to route its traffic
through for each transfer, as well as the
percentage reduction in latency with NetUber. We observe that NetUber cuts the Internet latencies by up to 30%.
We highlight that the use of Direct connect would
make NetUber even faster.
3) Throughput: ISP, NetUber, and
Selectively Using NetUber
Better throughput with NetUber via near cloud region.
– Selective use of overlay when no proximate region.
We then benchmark the NetUber throughput against the ISP-
based public Internet. We first connect our origin server in
Atlanta to the NetUber overlay via ISP. Our nearest cloud
region is North Virginia. We observe that sending data with
NetUber via the nearest cloud region can be more stable
and offer high throughput, rather than sending the data to
the destination via the public Internet paths. NetUber avoids
slow long-haul Internet links as it covers a significant portion
of the data transfer network path.
We then repeat the experiment across multiple regions of
origin and destination. Using a cloud overlay may not
always provide better throughput, especially if there is no
cloud region near to the origin or destination. As shown in this case, the end-user device can be configured to use the NetUber overlay selectively – using it only when it provides better performance, and the public Internet paths otherwise.
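The selective-use policy amounts to a simple per-transfer decision on the end-user device. A minimal sketch, with hypothetical throughput measurements:

```python
def choose_path(measured_throughput_mbps):
    """Route over whichever path currently measures the highest throughput."""
    return max(measured_throughput_mbps, key=measured_throughput_mbps.get)

# Origin near a cloud region: the overlay wins.
near_region = choose_path({"netuber_overlay": 850, "public_internet": 90})
# No proximate cloud region: fall back to the public Internet.
no_region = choose_path({"netuber_overlay": 60, "public_internet": 90})
```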
4) Low Jitter with Cloud Overlay
NetUber for latency-sensitive web applications.
We finally benchmark the jitter of NetUber against
that of ISP as latency variations. In case of
NetUber, we connect the origin and destination
endpoints with the cloud overlay using 2
approaches – through the ISP-based public
Internet, and through a simulated Direct Connect,
modeled with realistic latency values.
We observe minimal latency variations with NetUber.
We note that latency variation in NetUber in most
cases is due to the variations in the ISP’s network
connecting the user endpoint servers with the
nearest cloud region. The cloud Direct Connects
promise a fixed dedicated connectivity to the end
users to connect their on-premise servers to the
cloud. Therefore, latency variations are negligible in
the Cloud Direct Connects.
Consequently, latency variations in NetUber with a
direct connect represent the actual jitter caused by
the NetUber overlay. The minimal jitter observed in
NetUber highlights its suitability for latency-
sensitive web applications.
• Connectivity provider that does not own the infrastructure
– Low latency cloud-assisted overlay network.
– Better data rate than ISPs.
• Previous research does not consider economic aspects.
– A cheaper alternative (< 50 TB/month).
• Similar industrial efforts.
– Voxility, an alternative to transit providers.
– Teridion, Internet fast lanes for SaaS providers.
Finally, to summarize:
NetUber is a connectivity provider that does not own the
infrastructure. NetUber offers low latency end-to-end data
transfer through its cloud-assisted network. We observed up to
30% reduction in latency, even without using the Direct
Connects. We observe that the ISPs typically limit their data rate to 100 Mbps, often with a data cap as well. NetUber can provide a better data rate for end users compared to the ISPs.
Previous research works on cloud-assisted networks do not
consider economic aspects. We looked in detail on the
economics of using a cloud-assisted network as a connectivity
provider. NetUber is cheaper than the considered connectivity
providers for up to 50 Terabytes per month. There are a few
companies that follow an approach similar to NetUber. Voxility
operates as an alternative to transit providers using an overlay
network built on top of its global infrastructure. Teridion offers
internet fast lanes for Software-as-a-Service providers. We conclude that cloud-assisted networks are growing in popularity in research and enterprise, and NetUber provides a first look into their potential as a connectivity provider, from both technological and economic perspectives.
II) Network Service Chain
Orchestration at the Edge
Kathiravelu, P., Van Roy, P., & Veiga, L.
Composing Network Service Chains at the Edge: A Resilient and Adaptive Software-Defined Approach.
In Transactions on Emerging Telecommunications Technologies (ETT). Aug. 2018. Wiley. pp. 1 – 22.
Now we discuss our second contribution: Network Service Chain Orchestration at the Edge.
Network Services: On-Premise vs. Centralized Cloud? Edge!
Network Service Chaining (NSC)
Finding optimal service chain at the edge abiding by the tenant SLOs.
Cloud environments mitigate the resource scarcity on-premise to execute
complex user workflows. However, centralized clouds suffer from high
latency. Edge environments provide a balance – low latency with sufficient
resources. Therefore, more and more service providers choose to deploy
their network services at the edge of the network, close to their users.
Network service chaining refers to a workflow of network services chained
together, with the output of one or more services sent as the input to the
next services in the chain. Consider this sample service chain: The
Internet traffic reaches the user through a chain of network services – video
optimizer, cache, anti-virus, and finally the firewall. But when it comes to a
child accessing the Internet, we have a slightly different workflow – The
traffic goes through parental control first before reaching the other services
and then the child.
Selecting the optimal service instances to compose service workflows at the
edge is challenging due to the volume and variety of the service instances
and the number of tenant users. We should find the optimal service chain
for the user workflow, abiding by its service level objectives. Such service
chain placement is considered to be an NP-Hard problem.
Geographical proximity is a deciding factor in service deployment at the edge.
But consider this sample service chain: the edge nodes n1 and n2 are
close to the user. However, the related services next in line in the service
chain are not available in the same nodes. Therefore, choosing n1 and n2
to host the service workflow leads to more inter-node data flow, and
consequently high latency. On the other hand, although n3 and n4 are
farther from the user, n4 consists of 3 related services in the workflow.
Therefore, choosing n3 and n4 reduces the inter-node communication
overheads. These are additional constraints specific to the service
workflows that do not apply to stand-alone service executions.
Our Proposal: Évora
Graph-based algorithm to incrementally construct
user workflows as service chains at the edge.
SDN With Message-Oriented Middleware (MOM).
– For multi-domain edge environments.
– Place and migrate user service chains.
Adhering to the user policies.
We propose Evora, a graph-based algorithm to incrementally construct and deploy user workflows as service chains at the edge.
Evora's architecture extends SDN to multi-domain edge environments with message-oriented middleware. It enables placing and migrating user service chains, adhering to the user policies.
Distributed execution: Orchestrator in each user device.
In the Evora deployment, each user device consists of an
orchestrator. The orchestrator executes the Evora
algorithms to place and migrate service chains, in a
decentralized and distributed manner.
A few edge nodes are equipped with an SDN controller,
extended with a message broker. The controller centrally
manages its network domain, while communicating and
coordinating with the other controllers at the edge. Each
user device and edge node consists of an Event Manager.
The Event Manager publishes the status of the node and
the respective services to the broker as event notifications.
It also receives the relevant status details of the edge nodes
and services, from the broker. Therefore it functions as both
an event publisher and an event subscriber.
The black lines indicate static network links across the edge
nodes. The dotted red lines indicate dynamic links among
the edge nodes as well as between the edge nodes and the
user devices. These dynamic links are enabled by messages through the public Internet.
1) Initialize Orchestrator in each Device
Construct a service graph in the user device.
― As a snapshot of the service instances at the edge.
Evora orchestration consists of 3 major steps.
First, a one-time initialization of the Orchestrator for
each user device.
During the initialization, the orchestrator constructs a
service graph in the device, as a snapshot of the
available service instances at the edge.
2) Identify Potential Workflow
Construct potential chains incrementally.
– Subgraphs from service graph to match user chain.
– Noting individual service properties.
A complete match?
– Save as a potential service chain placement.
Then, identifying the potential workflow placements for each
user service chain. The orchestrator traverses the service
graph and incrementally identifies potential workflow
placements by matching its subgraphs against the user-defined service chain. The orchestrator also notes each service's properties, such as monetary cost, throughput, and end-to-end latency, for the potential chains that it constructs.
user service chain, are identified as the potential
candidates for the service chain placement and saved in
the user device.
The algorithm halts its execution once it has completely
traversed all the service graph nodes.
Subsequent executions of the same workflow require no repeated traversal of the service graph.
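The matching step can be sketched as a recursive walk over the service graph. This is a simplified sketch, not the thesis algorithm: the node names and service assignments are illustrative, loosely mirroring the earlier n1–n4 example, and consecutive services may share a node or sit on neighbouring nodes.

```python
def find_placements(graph, services, chain):
    """Enumerate node sequences that can host the user chain in order.

    graph:    node -> set of neighbouring nodes.
    services: node -> set of service types hosted on that node.
    chain:    ordered list of service types in the user workflow.
    """
    def extend(prefix, remaining):
        if not remaining:
            yield prefix            # complete match: a potential placement
            return
        # The next service may stay on the same node or move to a neighbour.
        for nxt in {prefix[-1]} | graph[prefix[-1]]:
            if remaining[0] in services[nxt]:
                yield from extend(prefix + [nxt], remaining[1:])

    for node in graph:              # traverse every possible starting node
        if chain[0] in services[node]:
            yield from extend([node], chain[1:])

# Toy topology: n4 hosts several related services, as in the talk's example.
graph = {"n1": {"n2"}, "n2": {"n1", "n4"}, "n3": {"n4"}, "n4": {"n2", "n3"}}
services = {"n1": {"cache"}, "n2": {"firewall"},
            "n3": {"cache"}, "n4": {"firewall", "antivirus"}}
placements = list(find_placements(graph, services,
                                  ["cache", "firewall", "antivirus"]))
# Two candidates: ["n1", "n2", "n4"] and ["n3", "n4", "n4"].
```

The second candidate keeps two consecutive services on n4, reducing the inter-node communication, which the penalty step can then reward through its latency term.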
3) Service Chain Placement
Calculate a penalty value for potential placements.
– Normalized values: Cost, Latency, and Throughput.
– α,β,γ ← User-specified weights.
Place NSC on composition with minimal penalty value.
– Mixed Integer Linear Problem.
– Extensible with powers and more properties.
The orchestrator computes a penalty value for each potential chain, using normalized values for the service properties and the user-assigned weights for each property. It then places the user service workflow on the service composition with minimal penalty value among the potential service compositions. The workflow placement is solved as a mixed integer linear problem.
We can extend the workflow placement with more
properties and powers. Evora also migrates the
workflows upon changes in the edge nodes or the
services. When a service becomes unavailable or
unresponsive, the orchestrator chooses the next
potential service composition for the affected
workflow, and schedules the subsequent requests
to the workflow accordingly.
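The penalty computation can be sketched as a weighted sum over normalized properties. This is a minimal sketch under assumed conventions (min–max normalization, with throughput inverted because higher is better); the thesis formulates the actual placement as a mixed integer linear problem, and all numbers below are hypothetical.

```python
def normalize(values):
    """Min-max scale a list of values into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def pick_chain(chains, alpha, beta, gamma):
    """Index of the candidate chain with the minimal weighted penalty.

    chains: list of (cost, latency, throughput) per candidate placement.
    alpha, beta, gamma: user-specified weights for cost, latency, throughput.
    """
    costs = normalize([c for c, _, _ in chains])
    lats = normalize([l for _, l, _ in chains])
    thrs = normalize([t for _, _, t in chains])
    penalties = [alpha * c + beta * l + gamma * (1 - t)  # high throughput is good
                 for c, l, t in zip(costs, lats, thrs)]
    return penalties.index(min(penalties))

candidates = [(100, 50, 800), (40, 20, 300), (60, 30, 900)]
equal = pick_chain(candidates, alpha=1, beta=1, gamma=1)        # -> 2
cost_heavy = pick_chain(candidates, alpha=10, beta=3, gamma=3)  # -> 1
```

Raising one weight shifts the choice toward the property it emphasises: with cost weighted 10, the cheapest candidate wins even though its throughput is the lowest.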
Model sample edge environment.
– Service nodes and a user device.
– User policies for the service workflow.
Microbenchmark Évora workflow placement.
– Effectiveness in satisfying user policies.
– Efficacy in closeness to optimal results.
– Lower penalty value → Higher quality of experience.
We model an edge environment with service nodes and a user device. The user composes service workflows with their policies and uses the Evora orchestrator to find the optimal workflow placement at the edge.
We evaluate the effectiveness of Evora in satisfying
those user policies in workflow placement. We also
assess the efficacy in closeness to optimal results.
Workflow placements with minimized penalty values
should offer the user a high quality of experience.
User Policies with Two Properties
Equal weights to 2 properties among C, L, and T.
Darker circles – compositions with minimal penalty.
– The ones that Évora chooses (circled).
Panels: T↑ and C↓; T↑ and L↓; C↓ and L↓.
We first evaluate Evora with user policies consisting of
two properties among cost, end-to-end latency, and
throughput – with equal weights. The location of the
circles in the plots indicate the values of the properties
among potential service chains. Darker circles indicate
the chains with minimal penalty values, the ones the
user prefers. The chains that Evora chooses are
indicated by the pink circles in these plots.
First, the user defines her policies, preferring high throughput and low cost. As we observe, the darkest circles were indeed among those with high throughput and relatively lower cost – a trade-off, considering both properties. The second successfully chose the service compositions with both the highest throughput and the lowest latency. The third indeed chose the ones with the lowest cost and lowest latency.
One property given more prominence (weight = 10) than the other two (weight = 3).
Radius of the circles – monthly cost.
Next, we evaluate Evora with all three properties – but one of
the properties is given prominence with a weight of 10 while
the other 2 have a weight of 3. The radius of the circles
indicates the monthly cost in these plots, with the x-axis
representing throughput and y-axis representing latency.
First, we maximize throughput. The far right shows the darkest
circle, as desired – choosing the chain placements with the
highest throughput. Evora also has chosen the composition
with low latency.
Second, we minimize the cost. We notice that Evora has chosen the composition with the lowest cost and also the lowest latency. However, as the priority was given to cost, we note that the chosen service compositions suffer from low throughput.
Third, we minimize latency. Here we observe that Evora has chosen the compositions with the lowest latency and the highest throughput.
Two properties given more prominence (weight = 10) than the third (weight = 3).
Évora satisfies the user policies – multiple properties with different weights.
We repeated the experiment, this time giving more
prominence to 2 properties equally among the 3.
First, maximize throughput and minimize cost. We note that
the compositions with high throughput have been chosen.
We also note the preference for the cheaper service chains
as can be seen from the dark smaller circles.
Then we maximize throughput and minimize latency. We
note, the dark circles are in the bottom right, correctly
choosing the workflow placements with the highest
throughput and the lowest latency.
Finally, we minimize both cost and latency. We observe, the
smallest circles in the bottom left have been chosen. Here
we observe that while minimizing both cost and latency,
Evora has chosen compositions with lower throughput. This
is a trade-off we had to make due to the higher cost of the
high throughput service instances.
From the position of the dark circles, we observe that Evora adequately satisfies the user policies in the service chain placements.
Bring control back to the users for edge workflows.
Previous research focuses on a single NSC provider.
Évora: efficient workflow placement.
– Abiding by the user policies.
– Multi-domain edge with multiple providers.
– Extending SDN with MOM to wide area networks.
Network-aware execution from user devices.
– Decentralized and distributed.
Finally, to summarize:
We should bring the control of the user workflows
back to the users, to efficiently compose workflows
using network services from multiple service
providers at the edge. Previous works mostly focus
on a single provider for the entire workflow
Evora proposes an efficient workflow placement for
the multi-domain edge environments with multiple
providers, abiding by the user policies. Evora
extends SDN with message-oriented middleware to
wide area networks. Evora executes its network-
aware workflow placement algorithms from the user
devices in a decentralized and distributed manner.
Seamless migration across development and deployments.
A case for Cloud-Assisted Networks as a connectivity provider.
Composing & placing workflows in multi-domain networks.
Increased interoperability with network softwarization & SOA.
Applicability of our contributions in the context of Big Data.
NetUber as an enterprise connectivity provider.
Adaptive network service chains on hybrid networks.
Thank you! Questions?
In this dissertation, we proposed a set of Software-Defined Systems to address several shortcomings in the current services ecosystem.
First, we enabled a seamless migration of network algorithms and architectures across development stages and deployment environments.
Second, we demonstrated cloud-assisted networks as a cost-efficient and high-performance connectivity provider.
Third, we composed and placed user workflows in multi-domain
networks with network softwarization and SOA.
Fourth, we highlighted how our network softwarization approach, extended with SOA, increases the interoperability in the services ecosystem.
Finally, we discussed the applicability of our contributions in the
context of big data.
As future work, we propose to deploy NetUber on more cloud
providers and evaluate on more regions. We also propose to look
into the feasibilities of NetUber as an enterprise connectivity
provider, in practice – including the challenges and opportunities.
Further, we also propose to research adaptive network service
chains on hybrid networks with hardware middleboxes in addition
to VNFs. As NFV is still not widely adopted in several enterprise networks, supporting hybrid networks will enable service compositions at Internet scale, with several service and connectivity providers.
Thank you for your attention, and now I open the session for questions.