2. Attribution
• The material contained in this document is intended for teaching.
• This document is licensed under the CC BY-NC-SA license.
• Relevant sources are listed on the following References slide.
• All figures and text borrowed from these sources remain the property of
their respective owners.
3. References
• ETSI GS NFV 001 V1.1.1 (2013-10)
Network Functions Virtualisation (NFV); Use Cases
• ETSI GS NFV 002 V1.2.1 (2014-12)
Network Functions Virtualisation (NFV); Architectural Framework
• ETSI GS NFV 003 V1.2.1 (2014-12)
Network Functions Virtualisation (NFV); Terminology for Main Concepts
• ETSI GS NFV-INF 001 V1.1.1 (2015-01)
Network Functions Virtualisation (NFV); Infrastructure Overview
• ETSI GS NFV-SWA 001 V1.1.1 (2014-12)
Network Functions Virtualisation (NFV); Virtual Network Functions
Architecture
4. Table of Contents
1. Define the architectural framework, the terminology, and network
services
2. Understand functional blocks and reference points
3. Describe four use cases (virtualisation of the mobile core network and
IMS, virtualisation of the home environment, of CDNs, and of fixed
access networks)
4. Understand the NFVI architectural overview, virtualisation and
associated interfaces, multiplicity and decomposition
5. Understand the VNF architecture, VNF design patterns, and VNF
states and transitions
8. NFV Terminology
• Network Function (NF): functional block within a network
infrastructure that has well-defined external interfaces and well-
defined functional behaviour
• Virtualised Network Function (VNF): implementation of an NF that
can be deployed on a virtualisation infrastructure
• Network Functions Virtualisation Infrastructure (NFVI): totality of all
hardware and software components that build up the environment in
which VNFs are deployed
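• To make these three terms concrete, the following is a minimal illustrative Python sketch; the class and field names are assumptions made for this module, not ETSI-defined data models.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NetworkFunction:
    """Functional block with well-defined external interfaces and behaviour."""
    name: str
    external_interfaces: List[str]

@dataclass
class VNF(NetworkFunction):
    """Implementation of an NF deployable on a virtualisation infrastructure."""
    sw_image: str = "vnf-image.qcow2"   # hypothetical software image reference

@dataclass
class NFVI:
    """Hardware and software environment in which VNFs are deployed."""
    compute_nodes: List[str] = field(default_factory=list)
    deployed_vnfs: List[VNF] = field(default_factory=list)

    def deploy(self, vnf: VNF) -> None:
        # A real NFVI would allocate a virtualisation container via the VIM;
        # here we only record the deployment.
        self.deployed_vnfs.append(vnf)

nfvi = NFVI(compute_nodes=["server-1", "server-2"])
nfvi.deploy(VNF(name="vFirewall", external_interfaces=["SGi"]))
print([v.name for v in nfvi.deployed_vnfs])  # ['vFirewall']
```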
9. NFV Objectives
• Improve capital efficiencies by using commercial-off-the-shelf hardware to
provide NFs through software virtualisation techniques.
• Improve flexibility in assigning VNFs to hardware thus allowing software to
be located at the most appropriate places.
• Provide rapid service innovation through software-based service
deployment.
• Improve operational efficiencies by using common automation and
operating procedures.
• Reduce power usage by migrating workloads and powering down unused
hardware.
• Standardize open interfaces between VNFs, the infrastructure, and
associated management entities so that such decoupled elements can be
provided by different vendors.
10. Differences in network service provisioning
• Decoupling software from hardware
• As the network element is no longer a collection of integrated hardware and
software entities, the software can progress separately from the hardware, and vice
versa.
• Flexible network function deployment
• The detachment of software from hardware helps reassign and share the
infrastructure resources, so that hardware and software can perform different
functions at various times. Network operators can also deploy new network
services faster over the same physical platform.
• Dynamic operation
• The decoupling of the functionality of the NF into instantiable software components
provides greater flexibility to scale the actual VNF performance in a more dynamic
way and with finer granularity, for instance, according to the actual traffic for which
the network operator needs to provision capacity.
11. High-Level NFV Framework
• NFV envisages the implementation of NFs as
software-only entities that run over the NFVI.
• Three main working domains are identified in
NFV
• The NFVI includes the diversity of physical
resources and how these can be virtualised. The
NFVI supports the execution of the VNFs.
• A VNF is a software implementation of a NF
which is capable of running over the NFVI.
• NFV Management and Orchestration covers the
orchestration and lifecycle management of
physical and/or software resources that support
the infrastructure virtualisation, and the lifecycle
management of VNFs. NFV Management and
Orchestration focuses on all virtualisation-
specific management tasks necessary in the NFV
framework.
12. Forwarding Graph
• A NF Forwarding Graph (NF-FG) is a graph of logical links connecting
NF nodes for the purpose of describing traffic flow between these NFs
• A VNF Forwarding Graph (VNF-FG) is a NF Forwarding Graph where at
least one node is a VNF
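• As a rough illustration (the data layout below is a hypothetical sketch, not an ETSI information model), a forwarding graph can be represented as nodes plus logical links, and the VNF-FG condition checked directly.

```python
# Hypothetical sketch: an NF Forwarding Graph as nodes and logical links.
nf_fg = {
    "nodes": {"FW": {"virtualised": False},
              "NAT": {"virtualised": True},    # implemented as a VNF
              "DPI": {"virtualised": False}},
    "logical_links": [("FW", "NAT"), ("NAT", "DPI")],
}

def is_vnf_fg(graph: dict) -> bool:
    """A VNF-FG is an NF-FG in which at least one node is a VNF."""
    return any(node["virtualised"] for node in graph["nodes"].values())

print(is_vnf_fg(nf_fg))  # True: the NAT node is a VNF
```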
13. Network Services in NFV
• A network service is a composition of NFs defined by its functional and
behavioural specification.
• A network service can be viewed as a forwarding graph of NFs
interconnected by a supporting network infrastructure.
• These NFs can be implemented in a single operator network or interwork
between different operator networks.
• The network service behaviour is a combination of the behaviour of its
constituent functional blocks, which can include individual NFs and NF
Forwarding Graphs.
• A tenant domain combines VNFs into network services and is
responsible for their management and orchestration, including their
functional configuration and maintenance.
14. End-to-end Network Service
• The figure illustrates the representation of an end-
to-end network service that includes a nested NF
Forwarding Graph as indicated by the network
function block nodes in the middle of the figure
interconnected by logical links.
• The end points are connected to NFs via network
infrastructure, resulting in a logical interface
between the end point and a network function.
• These logical interfaces are represented in the
figure with dotted lines.
• The outer end-to-end network service is made up
of End Point A, the inner NF Forwarding Graph, and
End Point B, while the inner NF Forwarding Graph
is composed of network functions NF1, NF2 and
NF3. These are interconnected via logical links
provided by the Infrastructure Network 2.
15. End-to-end network service with VNFs and
nested forwarding graphs
• The figure shows an example of an end-to-end
network service and the different layers that are
involved in its virtualisation process.
• The VNF-FG corresponding to the previous NF-FG is
shown here.
• The decoupling of hardware and software in NFV is
realized by a virtualisation layer which abstracts
hardware resources of the NFVI.
• The NFVI Points of Presence (PoPs) include
resources for computation, storage and networking
deployed by a network operator.
• The figure also depicts the case of a nested VNF-FG
(i.e., VNF-FG-2) constructed from other VNFs.
• A VNF instance can be implemented on different
physical resources and/or be geographically
dispersed as long as its overall end-to-end service
performance and other policy constraints are met.
19. Overview of the Functional Blocks
• Operations and Business Support Systems (OSS/BSS) of the operator.
• Element Management (EM) units perform the management functionality
for one or several VNFs.
• The NFV Orchestrator is in charge of the orchestration and management of
NFV infrastructure and software resources, and of realizing network
services on NFVI.
• VNF Manager(s) are responsible for VNF lifecycle management (e.g.
instantiation, update, query, scaling, termination).
• A VNF Manager may be deployed for each VNF, or may serve multiple VNFs.
• Virtualised Infrastructure Manager(s) control and manage the interaction
of a VNF with computing, storage and network resources under its
authority, as well as their virtualization.
20. Virtualised Network Function (VNF)
• A VNF is a virtualisation of a network function in a legacy non-virtualised
network.
• Examples of NFs are 3GPP Evolved Packet Core (EPC) network elements (e.g. Mobility
Management Entity (MME), Serving Gateway (SGW), Packet Data Network Gateway
(PGW)); elements in a home network (e.g. Residential Gateway (RGW)); and
conventional network functions (e.g. Dynamic Host Configuration Protocol (DHCP)
servers, firewalls, etc).
• Functional behaviour and state of a NF are largely independent of whether
the NF is virtualised or not. The functional behaviour and the external
operational interfaces of a Physical Network Function (PNF) and a VNF are
expected to be the same.
• A VNF can be composed of multiple internal components.
• In some cases, one VNF can be deployed over multiple VMs, where each VM hosts a
single component of the VNF. In other cases, the whole VNF can be deployed in a
single VM as well.
21. NFV Infrastructure (NFVI)
• The NFVI is the totality of all hardware and software components which build up the environment
in which VNFs are deployed, managed and executed.
• The NFVI can span across several locations (i.e. places where NFVI-PoPs are operated). The
network providing connectivity between these locations is regarded to be part of the NFVI.
• From the VNF's perspective, the virtualisation layer and the hardware resources look like a single
entity providing the VNF with desired virtualised resources.
• In NFV, the physical hardware resources include computing, storage and network that provide
processing, storage and connectivity to VNFs through the virtualisation layer (e.g. hypervisor).
• Computing hardware is assumed to be COTS as opposed to purpose-built hardware.
• Storage resources can be differentiated between shared network attached storage (NAS) and storage that
resides on the server itself.
• The virtualisation layer abstracts the hardware resources and decouples the VNF software from
the underlying hardware, thus ensuring a hardware independent lifecycle for the VNFs.
22. Service, VNF and Infrastructure Description
• This data-set provides information regarding the VNF deployment
template, VNF Forwarding Graph, service-related information, and
NFVI information models.
• These templates/descriptors are used internally within NFV
Management and Orchestration.
• The NFV Management and Orchestration functional blocks handle
information contained in the templates/descriptors and may expose
(subsets of) such information to applicable functional blocks, as
needed.
23. Virtualised Infrastructure Manager(s)
• The Virtualised Infrastructure Manager (VIM) controls and manages the interaction of a VNF with
computing, storage and network resources under its authority, as well as their virtualisation.
• The VIM performs the following resource management functions
• Inventory of software (e.g. hypervisors), computing, storage and network resources dedicated to NFV
infrastructure.
• Allocation of virtualisation enablers (e.g. VMs onto hypervisors, compute resources, storage, and relevant
network connectivity).
• Management of infrastructure resource allocation (e.g. increasing resources allocated to VMs, improving
energy efficiency, and reclaiming resources).
• The VIM performs the following operations
• Visibility into and management of the NFVI.
• Root cause analysis of performance issues from the NFVI perspective.
• Collection of infrastructure fault information.
• Collection of information for capacity planning, monitoring, and optimization.
• Multiple VIM instances may be deployed.
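• The VIM responsibilities listed above can be sketched as a minimal Python class; the method names and structures are illustrative assumptions, not a standardized VIM API.

```python
class VirtualisedInfrastructureManager:
    """Illustrative VIM sketch (hypothetical names, not an ETSI-defined API)."""

    def __init__(self):
        self.inventory = {"compute": [], "storage": [], "network": [], "hypervisors": []}
        self.allocations = {}

    def register_resource(self, kind: str, resource_id: str) -> None:
        # Inventory of software, computing, storage and network resources.
        self.inventory[kind].append(resource_id)

    def allocate_vm(self, vm_id: str, hypervisor: str, vcpus: int, storage_gb: int) -> None:
        # Allocation of virtualisation enablers, e.g. a VM onto a hypervisor.
        self.allocations[vm_id] = {"hypervisor": hypervisor, "vcpus": vcpus,
                                   "storage_gb": storage_gb}

    def scale_vm(self, vm_id: str, extra_vcpus: int) -> None:
        # Management of infrastructure resource allocation, e.g. increasing VM resources.
        self.allocations[vm_id]["vcpus"] += extra_vcpus

    def collect_fault_information(self) -> list:
        # Collection of infrastructure fault information (stubbed here).
        return []

vim = VirtualisedInfrastructureManager()
vim.register_resource("hypervisors", "kvm-host-1")
vim.allocate_vm("vm-1", hypervisor="kvm-host-1", vcpus=2, storage_gb=40)
vim.scale_vm("vm-1", extra_vcpus=2)
print(vim.allocations["vm-1"]["vcpus"])  # 4
```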
24. Reference points
• The main (named) reference points and execution reference points
are shown by solid lines and are in the scope of NFV. These are
potential targets for standardization.
• The dotted reference points are available in present deployments but
might need extensions for handling network function virtualisation.
However, the dotted reference points are not the main focus of NFV
at present.
25. Reference points
• Virtualisation Layer - Hardware Resources (Vl-Ha)
• This reference point interfaces the virtualisation layer to hardware resources to create an
execution environment for VNFs, and collect relevant hardware resource state information
for managing the VNFs without being dependent on any hardware platform.
• VNF - NFV Infrastructure (Vn-Nf)
• This reference point represents the execution environment provided by the NFVI to the VNF.
It does not assume any specific control protocol. It is in the scope of NFV in order to
guarantee hardware independent lifecycle, performance and portability requirements of the
VNF.
• NFV Orchestrator - VNF Manager (Or-Vnfm)
• This reference point is used for resource-related requests by the VNF Manager (e.g. authorization,
validation, reservation, allocation), for sending configuration information to the VNF
Manager so that the VNF can be configured appropriately to function within the VNF
Forwarding Graph in the network service, and for collecting state information of the VNF
necessary for network service lifecycle management.
26. Reference points
• Virtualised Infrastructure Manager - VNF Manager (Vi-Vnfm)
• This reference point is used for resource allocation requests by the VNF Manager and
virtualised hardware resource configuration and state information (e.g. events)
exchange.
• NFV Orchestrator - Virtualised Infrastructure Manager (Or-Vi)
• This reference point is used for resource reservation and/or allocation requests by
the NFV Orchestrator and virtualised hardware resource configuration and state
information (e.g. events) exchange.
• NFVI - Virtualised Infrastructure Manager (Nf-Vi)
• This reference point is used for specific assignment of virtualised resources in
response to resource allocation requests, forwarding of virtualised resources state
information and hardware resource configuration and state information (e.g. events)
exchange.
27. Reference points
• OSS/BSS - NFV Management and Orchestration (Os-Ma)
• This reference point is used for
• Requests for network service lifecycle management.
• Requests for VNF lifecycle management.
• Forwarding of NFV related state information.
• Policy management exchanges.
• Data analytics exchanges.
• Forwarding of NFV related accounting and usage records.
• NFVI capacity and inventory information exchanges.
• VNF/EM - VNF Manager (Ve-Vnfm)
• This reference point is used for
• Requests for VNF lifecycle management.
• Exchanging configuration information.
• Exchanging state information necessary for network service lifecycle management.
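• A compact way to keep these reference points straight is a simple lookup of each named point and the two entities it connects; the table below is an illustrative summary of the preceding slides.

```python
# Hypothetical lookup table summarising the reference points described above.
REFERENCE_POINTS = {
    "Vl-Ha":   ("Virtualisation Layer", "Hardware Resources"),
    "Vn-Nf":   ("VNF", "NFVI"),
    "Or-Vnfm": ("NFV Orchestrator", "VNF Manager"),
    "Vi-Vnfm": ("Virtualised Infrastructure Manager", "VNF Manager"),
    "Or-Vi":   ("NFV Orchestrator", "Virtualised Infrastructure Manager"),
    "Nf-Vi":   ("NFVI", "Virtualised Infrastructure Manager"),
    "Os-Ma":   ("OSS/BSS", "NFV Management and Orchestration"),
    "Ve-Vnfm": ("VNF/EM", "VNF Manager"),
}

def endpoints(name: str) -> str:
    a, b = REFERENCE_POINTS[name]
    return f"{name}: {a} <-> {b}"

print(endpoints("Or-Vnfm"))  # Or-Vnfm: NFV Orchestrator <-> VNF Manager
```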
29. Objectives
• Show a subset of the use cases presented in the normative document
• Use Case #1: Virtualisation of Mobile Core Network and IMS
• Use Case #2: Virtualisation of the Home Environment
• Use Case #3: Virtual Content Delivery Network (vCDN)
• Use Case #4: Fixed Access Network Functions Virtualisation
30. Use Case #1: Virtualisation of Mobile Core
Network and IMS
• Mobile networks are populated with a
large variety of proprietary hardware
appliances.
• NFV aims at reducing the network
complexity and related operational issues
by leveraging standard IT virtualisation
technologies to consolidate different
types of network equipment onto
industry standard high volume servers,
switches and storage. Such consolidation
of hardware is expected to reduce Total
Cost of Ownership (TCO).
• Flexible allocation of NFs on such
hardware resource pool could highly
improve network usage efficiency in day-
to-day network operation.
31. Description
• The use case is composed of the following steps
1. The network operator’s orchestration platform detects a resource shortage
on vEPC network functions.
2. The network operator’s orchestration platform expands the capacity of the
vEPC. This results in the allocation of more resources to the vEPC.
3. End users continue to connect to the mobile network without noticing any
congestion on the network.
• The following NFs need to be virtualised
• Mobile Core NFs
• EPC Core & Adjunct NFs (e.g. MME, S/P-GW, PCRF, etc).
• 3G/EPC Interworking NFs (e.g. SGSN, GGSN, etc).
• All IMS NFs (e.g. P/S/I-CSCF, MGCF, AS).
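• The three steps above amount to a monitor-and-scale loop; the sketch below is a hypothetical simplification (the field names and the 80 % threshold are assumptions), not an actual orchestration platform API.

```python
# Hypothetical orchestration loop for the vEPC scaling steps above.
def check_utilisation(vnf: dict) -> float:
    # Step 1: the orchestration platform detects a resource shortage.
    return vnf["load"] / vnf["capacity"]

def expand_capacity(vnf: dict, extra_units: int) -> None:
    # Step 2: allocate more resources to the vEPC (e.g. additional instances).
    vnf["capacity"] += extra_units

vepc = {"name": "vMME", "load": 95, "capacity": 100}
if check_utilisation(vepc) > 0.8:
    expand_capacity(vepc, extra_units=50)
# Step 3: end users keep connecting without noticing congestion.
print(vepc)  # {'name': 'vMME', 'load': 95, 'capacity': 150}
```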
32. Coexistence of Virtualised and Non-
Virtualised Network Functions
• The NFV-based virtualised mobile
core network will coexist with
the non-virtualised mobile core
network, as the mobile core
networks already deployed are
not based on NFV.
• Network operators should have
the freedom to choose the NFV
deployment according to their
desired migration plan from a
non-virtualised network to an
NFV-based virtualised network.
33. Partial virtualisation of mobile core network
• The virtualisation of some
components of the mobile core
network is illustrated.
• In this case only some NFs are
virtualised. They can be EPC
control functions (e.g.
MME/SGSN), HSS or IMS nodes
(e.g. CSCF).
34. Service specific mobile core network
virtualisation
• In the case of coexistence of
virtualised and non-virtualised
mobile core networks, the operator
deploys a complete virtualised core
network while still having the non-
virtualised one.
• The virtualised core can be used for
specific services and/or devices
(e.g. machine-to-machine) or for
traffic exceeding the capacity of
the non-virtualised network.
35. Use Case #2: Virtualisation of the Home
Environment
• Current network operator provided home services are architected
using network-located backend systems and dedicated devices
located as part of the home network.
• The availability of high bandwidth access (such as offered by fibre)
and the emergence of NFV facilitate the virtualisation of the home
environment, requiring only simple physical connectivity and low-cost,
low-maintenance physical devices at the customer premises.
36. No Home Virtualisation
• These Customer Premises Equipment (CPE) devices mark the operator and/or
service provider presence at the customer premises and usually include a
Residential Gateway (RGW) for Internet and VoIP services, and a Set Top Box (STB)
for media services normally supporting local storage for PVR services.
37. Home Virtualisation functionality
• NFV facilitates the virtualisation of services and the functionality migration from home
devices to the NFV Cloud on the service provider side. This use case creates virtualised
replicas of the original devices (i.e. the RGW migrates into a vRGW and STB into vSTB) in
order to maintain the original interfaces to the virtualised devices.
38. Description
• The use case is composed of the following steps
1. The service provider deploys vRGW or vSTB in its cloud by orchestrating NFVI
resources.
2. The service provider provisions vRGW or vSTB.
3. The service provider either deploys or replaces simplified layer 2 device at
customer premises.
4. The customer remotely configures its vRGW or vSTB by logging in, in the same way
as for the physical device located in the home.
• The virtualisation of the home will result in three disaggregated forms of
functional components
• vRGW or vSTB in the form of software deployed in the service provider NFV cloud
• layer 2 physical device still residing at customer premises and functioning as a bridge
• a logical point-to-point link connecting the physical device and the virtual
functionality (i.e. vRGW, vSTB)
39. Coexistence with Non-Virtualised Network
Functions
• The figure shows a use case where both RGW and STB for Home #2 are
virtualised. The vSTB uses a Public IP address to communicate with the
vRGW and its service platforms (IPTV or Internet platforms via the BNG).
40. Use Case #3: Virtual Content Delivery
Network (vCDN)
• Delivery of video content is a major challenge for network operators due to the massively growing
amount of traffic to be delivered to the end customers of the network.
• The growth of video traffic is driven by the shift from broadcast to unicast delivery via IP, by the
variety of devices used for video consumption and by the increased quality of video delivered via
IP networks in terms of resolution and frame rate.
• Integrating Content Delivery Network (CDN) nodes into operator networks can be an effective and
cost-efficient way to answer the challenges of video delivery. Producing the content streams out
of compute/storage nodes nearer to the end customer saves interior network resources and
allows delivering streams with higher bandwidth and more reliable quality.
• Operators are using CDNs integrated into their own networks to deliver their own managed video
services but also to offer wholesale CDN services and to address Over the Top (OTT) video traffic
via transparent caching.
• When CDN providers ask operators to deploy their proprietary cache nodes into the ISP network,
the challenge is that eventually the operators will host a zoo of different cache devices side by
side in their premises.
41. Description
• The use case is composed of the following steps
1. The provider decomposes the request and identifies what NFs are needed to
support the vCDN service.
2. The provider responds back to the consumer with an indication that the vCDN
service has been instantiated.
3. Alternatively, given that the vCDN service instance may take some time to create, an
order mechanism may be used. In this case, the provider would initially respond to
the consumer with an indication that an order has been created and provide a
handle to the order object (so that the consumer can check on the progress of the
order).
4. If the vCDN provider does not handle the application specific aspects of the NFs,
the service might not yet be fully configured. In this case, the vCDN consumer
needs to send additional configuration requests to another functional block that
can handle application-specific configuration for the supporting NFs.
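• The order mechanism in step 3 can be sketched as a simple asynchronous request/handle pattern; the function names and status values below are illustrative assumptions, not a defined vCDN interface.

```python
import uuid

# Hypothetical order mechanism for vCDN instantiation: the provider returns
# an order handle the consumer can poll while the service is being created.
ORDERS = {}

def request_vcdn_service(required_nfs: list) -> str:
    order_id = str(uuid.uuid4())
    ORDERS[order_id] = {"nfs": required_nfs, "status": "IN_PROGRESS"}
    return order_id  # handle to the order object

def check_order(order_id: str) -> str:
    return ORDERS[order_id]["status"]

def complete_order(order_id: str) -> None:
    # Invoked once the vCDN service instance has actually been created.
    ORDERS[order_id]["status"] = "INSTANTIATED"

handle = request_vcdn_service(["vCDN-controller", "vCache-node"])
print(check_order(handle))   # IN_PROGRESS
complete_order(handle)
print(check_order(handle))   # INSTANTIATED
```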
42. Virtualisation Target
• The CDN controller objective is to select a cache node (or a pool of
cache nodes) for answering the end-user request, and then redirect
the end-user to the selected cache node.
• The Cache Node shall answer the end-user request and deliver the
requested content to the end user. The CDN controller is a centralized
component, and CDN cache nodes are distributed within the network
PoPs.
• All components of a CDN could be virtualized, but virtualizing the
cache nodes would have the highest impact on improving
performance.
43. Different vCDN cache nodes deployment
• Deploying CDN nodes as VNFs on a standardized environment shall overcome most of the
challenges mentioned above
• Resources can be allocated to other VNFs during weekdays and business hours.
• Operational processes for resources of different parties can be harmonized.
• It is easy to replace or add VNFs in case of new requirements in content delivery.
• Running CDN nodes as VNFs on an operator owned infrastructure will allow a new kind of wholesale business
towards CDN providers if there is a standardized way to deploy and operate 3rd party CDN nodes in a controlled
way inside the operator network.
44. Use Case #4: Fixed Access Network Functions
Virtualisation
• The main costs and bottlenecks in a network occur in the access. For fixed access
networks, the prevalent broadband access technology is ADSL2+ which has a
maximum downstream bit rate of ~26Mb/s.
• Access network virtualisation moves complex processing to the head-end and
simplification of the remote node reduces cost and power consumption
• The Optical Line Terminal (OLT) terminates FTTH traffic, and performs computing
tasks that can be virtualized as a “Virtual OLT” (VOLT). The VOLT can virtualize
functions including VLAN tagging, layer 2/3 forwarding or SDN control,
discovery/initialization, QoS enforcement, and traffic management.
• Current access network equipment is normally owned and operated by a single
organisational entity. Virtualisation supports multiple tenancy, where more than
one organisational entity can either be allocated, or given direct control of, a
dedicated partition of a virtual access node. This enables multiple Virtual
Network Operators to share a single access infrastructure.
45. New Access Technologies
• The trend is to replace exchange-based equipment with
equipment based on VDSL2 in street cabinets (FTTcab, upper
left) or G.fast in distribution point terminals (FTTdp, lower
right) by using fibre backhaul.
• VDSL2 can provide bit rates of up to ~100Mb/s.
• ITU-T/G.fast will provide very high data rates of up to ~1Gb/s
on the existing short copper drop wires connecting end-user
premises (FTTdp).
• Both FTTcab/VDSL2 and FTTdp/G.fast systems require
electronic systems to be deployed in remote nodes located in
the street or in multiple-occupancy buildings.
• These systems need to be small and energy efficient to
minimise thermal problems and to allow novel powering
schemes including reverse powering from the customer
premises.
• These new low-power remote nodes and the corresponding
customer modems need to be as simple as possible, with
particular regard to OAM, and have a long service life.
46. Virtualised and Legacy Access Networks
• Legacy and virtual access nodes
can co-exist and share the fibre
access network and the common
aggregation and service platforms
• In NFV networks (upper), all access
nodes have their control plane
virtualised in the CO or datacentre.
• In the hybrid case (middle), the
access node supports both legacy
(xDSL & FTTH) and Virtualised (FTTdp)
access nodes whose control functions
are implemented in the CO.
• In legacy access networks (bottom),
each network function contains its
own control plane.
47. Access Network Virtualisation and Open
Interfaces
• Marked in yellow are network
elements whose management &
control plane functionality may
be separated and run in a NFV
enabled Central Office (CO).
• The use of general purpose
processors versus dedicated
hardware will need to be
considered when identifying the
appropriate demarcation
between VNFs and PNFs.
48. Higher-Layer Virtualizable Functions
• The possible virtualizable higher-layer functions are
• The access node focuses on basic connectivity while VLAN tagging is performed in the NFV
system.
• Per subscriber and per service Quality of Service (QoS) and Class of Service (CoS)
enforcement (e.g. policing or shaping)
• Initialization, sign-on, address assignment, authentication, authorization and accounting.
• Traffic management, traffic filtering, traffic shaping, flow control, load balancing.
• Traffic steering and forwarding, multicast group control.
• Network slicing with data sharing for multi-operator network control and management. Here
an abstraction layer connects the physical network to multiple virtual access node VNFs, each
of which allows control and data dissemination to each Virtual Network Operator (VNO) for
particular functions.
• Control and configuration, where each Virtual Network Operator (VNO) controls and
configures its own virtual access node dataset of configuration objects.
• State information where each VNO accesses virtual functions providing test, diagnostic,
performance, and status information.
49. Layer 1-2 Virtualizable Functions
• The possible virtualizable Layer 1-2 functions are
• The control of Dynamic Rate Allocation (DRA) which schedules traffic, such as G.fast
Dynamic Time Assignment (DTA), or PON Dynamic Bandwidth Allocation (DBA). The
configuration of DRA such as setting traffic triggers can involve complex non-real
time trade-offs in policy and subscriber management that can be virtualized.
• The dynamic resource assignment.
• The on-line reconfiguration management and Dynamic Spectrum Management
(DSM) for DSLs and G.fast.
• Line diagnostics and optimization.
• The Power Control Entity (PCE), a cross-layer low-power mode control, for G.fast that
involves thresholds and other settings for low-power modes of individual
transceivers that can be determined by a virtualized PCE.
• Vectoring control and TDD scheduling for G.fast vary in a complex set of
dependencies that affects performance, throughput, and power usage, thus a virtual
access node function can select a good trade-off.
52. Definitions
• NFVI: totality of all hardware and software components which build
up the environment in which VNFs are deployed.
• The NFVI can span across several locations. The network providing
connectivity between these locations is regarded to be part of the NFVI.
• NFVI-Node: physical device deployed and managed as a single entity
providing the NFVI functions required to support the execution
environment for VNFs
• NFVI-PoP: single geographic point of presence (i.e. location) where a
number of NFVI-Nodes are located.
53. Definitions cont’d
• compute domain: domain within the NFVI that includes compute and
storage nodes (i.e., servers)
• virtualisation container: partition of a compute node that provides
an isolated virtualized computation environment
• network domain: domain within the NFVI that includes all
networking that interconnects compute/storage infrastructure
• network controller: functional block that centralizes some or all of
the control and management functionality of a network domain and
may provide an abstract view of its domain to other functional blocks
via well-defined interfaces
54. Objectives of the NFV Infrastructure
• The objective of the NFVI is to support the
NFV ecosystem. The NFV Use Cases document
identifies 9 fields of use cases for NFV.
• The NFVI is deployed as a distributed set of
NFVI-nodes in various locations to support
the locality and latency requirements of the
different use cases, and the NFVI provides the
physical platform on which the diverse set of
VNFs is executed.
• From a functional perspective, the NFVI
provides the technology platform with a
common execution environment for the NFV
use cases.
• The NFVI provides the infrastructure that
supports one or more of these use cases
simultaneously and is dynamically
reconfigurable between these use cases
through the installation of different VNFs.
55. Architectural Principles
• Two of the three functional blocks have been
implemented as a virtualised network function
executing on a host function in the NFVI.
• The VNF depends on the host function for its
existence, and if the host function were to be
interrupted, or even disappear, then the VNF will
also be interrupted or disappear. Likewise, the
container interface reflects this existence
dependency between a VNF and its host function.
• The VNF is an abstract view of the host function
when the host function is configured by the VNF.
• The NFV architecture is therefore defined using not
just functional blocks and their associated
interfaces, but is defined using the following
entities:
• Host functions with their associated container
interfaces and associated infrastructure interfaces.
• VNFs with their associated used container interfaces
and virtualized interfaces.
56. Management and Orchestration When
Network Functions are Virtualised
• The objective of NFV is to separate
the VNFs from the infrastructure,
and this includes their
management.
• As shown, the management and
orchestration (M&O) functions are
divided between the M&O of the
NFVI and the M&O of the VNFs.
• The M&O of the NFVI is an integral
and essential part of the NFV
framework and is specified within
the GS NFV MANO documentation
(see related modules).
57. Domain Architecture
• The next figure illustrates the application of the principle of domains to the NFVI
and exhibits the following points:
• The architecture of the VNFs is separated from the architecture hosting the VNFs (i.e. NFVI).
• The architecture of the VNFs may be divided into a number of domains, with consequences for
the NFVI and vice versa.
• Given the current technology and industrial structure, compute (including storage),
hypervisors, and infrastructure networking are already largely separate domains and are
maintained as separate domains within the NFVI.
• Management and orchestration tends to be sufficiently distinct from the NFVI as to warrant
being defined as its own domain, however, the boundary between the two is often only
loosely defined with functions such as element management functions in an area of overlap.
• The interface between the VNF domains and the NFVI is a container interface and not a
functional block interface.
• The Management and Orchestration functions are also likely to be hosted in the NFVI and
therefore also likely to sit on a container interface.
59. A single compute platform supporting a
multiplicity of VNFCs
• One host function can host more
than one virtual function.
• The figure shows a compute
node which hosts a hypervisor
able to host many virtual
machines (VMs), each of which
can host a VNF.
• A single VNF hosted directly on a
single VM (a one to one
mapping) is called a VNF
Component (VNFC).
60. A composed, distributed VNF hosted across a
multiplicity of compute platforms
• The hypervisors hosted on the compute nodes
provide VM container interfaces while the
infrastructure network provides infrastructure
connectivity container interfaces.
• These infrastructure connectivity container
interfaces provide connectivity services such as E-
Line and E-LAN services as defined by the Metro
Ethernet Forum. These services are virtual
functions hosted on the infrastructure network.
• VM container interfaces and virtual network
container interfaces together provide a distributed
NFVI container interface which can host distributed
VNFs.
• The figure shows a composite, distributed VNF
hosted on the composite, distributed NFVI
container interface. It also shows the constituent
VNFCs hosted on the constituent VM container
interfaces and the virtual interfaces of the VNFCs
hosted on virtual network container interfaces.
61. Decomposition of VNFs and Relationships
between VNFs
An individual constituent VNF can have the
following deployment cases
• a 1:1 implementation of a single Network
Element's NF by a single VNF, as shown in the figure.
• an N:1 case where there are N parallel
constituent VNFs implementing the capacity of a
single Network Element's NF.
• a 1:N case where N Network Elements' NFs are
implemented by a single VNF.
62. N:1 Implementation of a Network Element by
Parallel VNFCs
• A VNF may be decomposed as a set of
parallel VNFCs. This may be done within a
VNF by vendor implementation to
improve efficiency, scaling, and/or
performance.
• The instances of VNFa may be executing
in different NFVI nodes in different NFVI-
PoPs.
• The combination of NFVI nodes 1 and 2
and the split/merge functions are
equivalent to external interfaces i1 to in.
• In this example the splitting and merging
(load balancing) functions are allocating
traffic across instances of the same type
of VNF (VNFa).
63. 1:N Multiplexed Implementation of Multiple
Network Elements by a Single VNF
• The 1:N case is where N NFs are
implemented by a single VNF and
where each individual NF is defined
by at least its individual state.
• The figure provides an example of
three identical NFs implemented in
different NEs in different locations.
• The equivalent VNF supports a
larger number of interfaces (ij to ik)
from a single instance.
66. Internal Architecture of a VNF
• A Virtualised Network Function (VNF) is a
Network Function capable of running on an
NFV Infrastructure and being orchestrated by
a NFV Orchestrator (NFVO) and VNF Manager.
• It has well-defined interfaces to other NFs (via
SWA-1), to the VNF Manager, to its EM, and to the NFVI,
as well as a well-defined functional behaviour.
• A VNF may implement a single network entity
with interfaces and behaviour defined by
standardisation organizations while another
VNF may implement groups of network
entities.
• When a group of entities is implemented the
internal interfaces between them do not need
to be exposed.
67. VNF Components
• When designing and developing
the software that provides the VNF,
VNF providers may structure it into
software components and package
those components into one or
more images.
• These VNF provider defined
software components are called
VNF Components (VNFCs).
• VNFs are implemented with one or
more VNFCs and it is assumed that
a VNFC instance maps 1:1 to the
NFVI Virtualised Container
interface
68. VNF and VNFC Instances
• A VNF is an abstract entity that allows the software contract
to be defined, and a VNF Instance is the runtime instantiation
of the VNF.
• A VNFC is a VNF provider specific component of a VNF, and
VNFC Instances (VNFCIs) are the executing constituents which
make up a VNF Instance.
• In order to instantiate a VNF, the VNF Manager creates one or
more VNFCIs, where each VNFCI is in its own virtualisation
container.
• These VNFCIs provide the functionality of the VNF, and
expose whatever interfaces are provided by that VNF.
• Each VNF has exactly one associated VNF Descriptor (VNFD)
and the requirements for initial deployment state are
described in the VNFD, including the connections between
VNFCIs which are internal to the VNF, and not visible to
external entities at the VNF level.
• Post-deployment operation capabilities, such as migration of
the VMs containing VNFCIs, scale up/down/in/out, changes
to network connections, etc., are also described in the VNFD.
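• A minimal sketch of this instantiation flow, assuming a much-simplified VNFD structure (the field names are hypothetical, not the ETSI information model): the VNF Manager creates one VNFC instance per descriptor entry, each in its own virtualisation container.

```python
# Illustrative instantiation sketch; not the normative ETSI descriptor format.
vnfd = {
    "vnf_name": "example-vnf",
    "vnfcs": [
        {"type": "control-plane", "vcpus": 2, "image": "cp.qcow2"},
        {"type": "data-plane",    "vcpus": 8, "image": "dp.qcow2"},
    ],
    "internal_links": [("control-plane", "data-plane")],
}

def instantiate_vnf(descriptor: dict) -> dict:
    vnfc_instances = []
    for i, vnfc in enumerate(descriptor["vnfcs"]):
        vnfc_instances.append({
            "id": f"{descriptor['vnf_name']}-vnfci-{i}",
            "type": vnfc["type"],
            "container": f"vm-{i}",   # each VNFCI in its own virtualisation container
        })
    return {"vnf": descriptor["vnf_name"], "vnfcis": vnfc_instances,
            "links": descriptor["internal_links"]}

vnf_instance = instantiate_vnf(vnfd)
print(len(vnf_instance["vnfcis"]))  # 2
```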
69. VNF Interfaces
• An interface is a point of interaction
between two entities. The entities can be
software and/or hardware services
and/or resources.
• Software interfaces separate the
specification of how software entities
communicate from the implementation
of the software. They are used to create
well-defined points of interaction
between software entities, and restrict
communication between software
entities to those interaction points.
• The NFV architectural framework defines
a reference point as an external view of a
function block. It is synonymous with
interface.
70. Relevant interfaces for VNFs
• SWA-1: interfaces that enable communication between
various NFs within the same or different network
services. They may represent data and/or control
plane interfaces of the NFs. A VNF may support
more than one SWA-1 interface.
• SWA-2: refers to VNF-internal interfaces, i.e. for
VNFC-to-VNFC communication. These interfaces
are defined by VNF providers, and are therefore
vendor-specific.
• SWA-3 (Ve-Vnfm-vnf): interfaces the VNF with the
NFV management and orchestration, specifically
with the VNF Manager.
• SWA-4 is used by the EM to communicate with a
VNF.
• SWA-5 (Vn-Nf) corresponds to VNF-NFVI interfaces
which provide access to a virtualised slice of the
NFVI resources allocated to the VNF. Thus the
SWA-5 interface describes the execution
environment for a deployable instance of a VNF.
71. VNF Internal Structure
• A VNF may be composed of one or multiple
components, called VNFCs. A VNFC in this case
is a software entity deployed in a
virtualisation container, as shown in the figure.
• A VNF realised by a set of one or more VNFCs
appears to the outside as a single, integrated
system.
• The same VNF may be realized differently by
each VNF Provider. For example, one VNF
Provider may implement a VNF as a
monolithic, vertically integrated VNFC (left),
another VNF Provider may implement the
same VNF using separate VNFCs, say one for
the control plane, one for the data plane and
one for element management (right).
• VNFCs of a VNF are connected in a graph. For
a VNF with only a single VNFC, that internal
connectivity graph is the null graph.
72. VNF Parallelization
• Each VNFC of a VNF is either
parallelizable or non-
parallelizable:
• If it is parallelizable, it may be
instantiated multiple times per
VNF Instance, but there may be a
constraint on the minimum and
maximum number of parallel
instances (right).
• If it is non-parallelizable, it shall be
instantiated exactly once per VNF
Instance (left).
73. VNFC States
• Each VNFC of a VNF may need to handle state
information.
• A VNFC that does not have to handle state
information is a stateless VNFC (left).
• A VNFC that needs to handle state
information may be implemented either as a
stateful VNFC (middle) or as a stateless VNFC
with external state (i.e. state data is held in a
data repository external to the VNFC, right).
• Statefulness creates another level of
complexity; e.g. session (transaction)
consistency has to be preserved and taken
into account in procedures such as load
balancing.
• The data repository holding the externalized
state may itself be a stateful VNFC in the same
VNF.
• The data repository holding the externalized
state may be an external VNF.
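• The externalised-state option can be sketched as follows (a plain dict stands in for the external data repository; all names are illustrative): the VNFC itself keeps no state between invocations, so any instance can handle any request.

```python
# Sketch of a stateless VNFC with externalised state: session state lives in an
# external repository, so any VNFC instance can serve any request.
state_repository = {}          # external state (could itself be a stateful VNFC)

def handle_packet(session_id: str, payload: str) -> str:
    session = state_repository.get(session_id, {"packets": 0})   # fetch external state
    session["packets"] += 1
    state_repository[session_id] = session                       # write state back
    return f"session {session_id}: {session['packets']} packets, last payload {payload!r}"

print(handle_packet("s1", "hello"))
print(handle_packet("s1", "world"))   # state survives across stateless invocations
```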
74. VNF Load Balancing Models
• VNF-internal Load Balancer (top)
• 1 VNF instance seen as 1 logical NF by a Peer NF.
The VNF has at least one VNFC that can be
replicated and an internal load balancer (which is
also a VNFC) that scatters/collects
packets/flows/sessions to/from the different
VNFC instances.
• VNF-external Load Balancer (bottom)
• N VNF Instances seen as 1 logical NF by a Peer
NF. A load balancer external to the VNF (which
may be a VNF itself) scatters/collects
packets/flows/sessions to/from the different
VNF instances (not the VNFCs!).
• If the VNFCs are stateful, then the LB shall
direct flows to the VNFC instance that has the
appropriate configured/learned state.
• E2E and infrastructure network LBs also exist.
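• For stateful VNFCs the load balancer must pin each flow to the instance holding its state; below is a minimal sticky-dispatch sketch (the round-robin assignment and all names are assumptions).

```python
# Hypothetical load-balancer sketch: pin each flow to the VNFC instance that
# already holds its configured/learned state.
flow_table = {}                      # flow_id -> instance, learned on first packet
instances = ["vnfc-1", "vnfc-2", "vnfc-3"]

def dispatch(flow_id: str) -> str:
    if flow_id not in flow_table:
        # New flow: pick an instance (simple round-robin by flow count here).
        flow_table[flow_id] = instances[len(flow_table) % len(instances)]
    return flow_table[flow_id]       # existing flow: stick to the stateful instance

print(dispatch("flow-a"))  # vnfc-1
print(dispatch("flow-b"))  # vnfc-2
print(dispatch("flow-a"))  # vnfc-1 again (sticky)
```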
75. VNF Scaling Models
• Auto scaling, where the VNF Manager
triggers the scaling of the VNF according to
the rules in the VNFD (top).
• On-demand scaling, in which a VNF
Instance or its EM monitors the state of
the VNF Instance's constituent VNFC
Instances and triggers a scaling operation
through an explicit request to the VNF
Manager to add/remove VNFC instances
or VNFCI resources (bottom left).
• Manually triggered scaling (e.g. by NOC
operators) or OSS/BSS triggered scaling
according to the rules in the VNFD by
issuing requests to the NFVO via an
appropriate interface (bottom right).
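• An auto-scaling decision of the first kind can be sketched as below, assuming hypothetical VNFD rule fields and thresholds, and respecting the min/max parallel-instance constraints discussed on the parallelization slide.

```python
# Illustrative auto-scaling decision based on rules read from the VNFD.
# Thresholds and field names are assumptions for this sketch.
vnfd_rules = {"scale_out_load": 0.8, "scale_in_load": 0.3,
              "min_instances": 1, "max_instances": 5}

def auto_scale(current_instances: int, load: float, rules: dict) -> int:
    if load > rules["scale_out_load"] and current_instances < rules["max_instances"]:
        return current_instances + 1   # add a VNFC instance
    if load < rules["scale_in_load"] and current_instances > rules["min_instances"]:
        return current_instances - 1   # remove a VNFC instance
    return current_instances

print(auto_scale(2, 0.9, vnfd_rules))  # 3 (scale out)
print(auto_scale(2, 0.1, vnfd_rules))  # 1 (scale in)
```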
76. VNF Component Re-Use
• A common VNFC B* is factored out of
VNFs X and Y and turned into a proper
VNF (by adding a VNFD to it) that may or
may not come from a different VNF
Provider. All of those VNFs are then
handled like any other VNF:
• VNFs X and Y do not remain the same
function as a result, but become new
functions A and C.
• The internal interface SWA2 becomes an
external interface SWA1.
• Most importantly, the SLAs of VNFs X and Y
towards the NFVO change: A and C are no
longer responsible for the performance (or
lack of performance) of VNF B.
• This is the only valid model for
"Component Reuse" in the context of ETSI
NFV.
77. VNF States and Transitions
• A VNF can assume a number of
internal states to represent the
status of the VNF.
• Transitions between these states
provide architectural patterns for
VNF functionalities.
• Before a VNF can start its lifecycle,
it is a prerequisite that the VNF has been
on-boarded (the process of registering
the VNF with the NFVO and
uploading the VNF data: VNFD, SW
images, etc.).
78. VNF Instance State Transitions
• Transition actions and
reverse actions
• Instantiate/Terminate
• Configure
• Start/Stop
• Scale out/Scale in
• Scale up/Scale down
• Update/update rollback
• Upgrade/upgrade rollback
• Reset
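• A reduced sketch of such transitions as a state table; the state names and allowed actions below are a simplified assumption covering only a subset of the actions listed above, not the normative state model.

```python
# Minimal sketch of VNF instance state transitions.
TRANSITIONS = {
    ("Null",         "instantiate"): "Instantiated",
    ("Instantiated", "configure"):   "Configured",
    ("Configured",   "start"):       "Running",
    ("Running",      "stop"):        "Configured",
    ("Running",      "scale_out"):   "Running",
    ("Configured",   "terminate"):   "Null",
}

def apply(state: str, action: str) -> str:
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"action {action!r} not allowed in state {state!r}")

state = "Null"
for action in ("instantiate", "configure", "start", "scale_out", "stop", "terminate"):
    state = apply(state, action)
print(state)  # Null
```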
79. VNF Instantiation Example
• A VNF is made up of 4 VNFCs. All VMs
used by each VNFC instance are located
on the same VLAN. In step #1, the VNFM
creates 4 complete VNFC instances. The
VNF, consisting of 4 interconnected VNFC
instances, is created during step #2.
• Each VNFC instance uses a function that
broadcasts messages to other VNFC
instances looking for a VNFC instance that
implements a master function that will
organize the VNFC instances, coordinate
their actions, and make them act as one
functional unit.
• The big smiley in the figure represents the
master function which uses the Ve-Vnfm-
vnf interface to interact with the VNF
Manager.
80. VNF Descriptor's Role in VNF Instantiation
• The VNFD is a specification template provided by
the VNF Provider for describing virtual resource
requirements of a VNF. It is used by the NFV
Management and Orchestration functions to
determine how to execute VNF lifecycle
operations.
• The figure shows a VNF instance that is made up of
4 VNFC instances, which are of 3 different types:
'A', 'B' and 'C'. Each VNFC type has its own
requirements on the operating system (OS) and
the execution environment (e.g. the virtual
machine).
• The VNFD describes the requirements for virtual
resources and their interconnectivity, as well as
unambiguous references to VNF binaries, scripts,
configuration data, etc., that are necessary for the
NFV Management and Orchestration functions to
configure the VNF properly.
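• A hypothetical VNFD fragment for the example above might look as follows (the field names are illustrative, not the normative ETSI descriptor format); the NFV Management and Orchestration functions would read such a template to select suitable virtual resources before creating the VNFC instances.

```python
# Illustrative VNFD fragment: three VNFC types with their own OS and
# virtual-machine requirements, plus references to images and artifacts.
vnfd = {
    "vnf_name": "example-vnf",
    "vnfc_types": {
        "A": {"os": "linux-x", "vcpus": 2, "ram_gb": 4,  "image": "vnfc_a.qcow2"},
        "B": {"os": "linux-y", "vcpus": 4, "ram_gb": 8,  "image": "vnfc_b.qcow2"},
        "C": {"os": "linux-x", "vcpus": 8, "ram_gb": 16, "image": "vnfc_c.qcow2"},
    },
    "topology": {"instances": {"A": 2, "B": 1, "C": 1},
                 "internal_links": [("A", "B"), ("B", "C")]},
    "artifacts": {"config_script": "bootstrap.sh"},
}

# Resolve the per-type requirements the orchestration functions would act on.
for vnfc_type, count in vnfd["topology"]["instances"].items():
    req = vnfd["vnfc_types"][vnfc_type]
    print(f"type {vnfc_type}: {count} x {req['vcpus']} vCPU / {req['ram_gb']} GB RAM")
```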