Tungsten Fabric is an open-source network virtualization solution that provides connectivity and security for virtual, containerized, or bare-metal workloads. Savannah covers the overall architecture of Tungsten Fabric and of the DPDK vRouter, which performs packet forwarding and enforces network and security policies.
2. Agenda
- About Me
- Overview of Tungsten Fabric
  - Community & Code
- Tungsten Fabric
  - Features, Use Cases
  - Architecture
- vRouter Architecture
  - DPDK vRouter
  - vRouter Datapath
3. About Me
Savannah Loberger
- Computer Science student at OSU
- Interning with the NPG Arch team at Intel, currently working on vRouter/TF optimizations
- Experience working with DPDK-related technologies
- Active volunteer with local robotics and STEM outreach opportunities
4. TF Mission
Build the world’s most ubiquitous, easy-to-use, scalable, secure,
and cloud-grade SDN stack, providing a network fabric connecting
all environments, all clouds, all people.
5. Community & Code
Resource: https://tungsten.io/community/
Join the Community!
• tungsten.io/slack
• tungsten.io/community
• Bug fixes and new features are driven by the community.
9. TF Architecture
[Architecture diagram: the TF controller, API & GUI run as scale-out control and management container micro-services. Orchestration nodes talk to the controller over REST via TF orchestration plug-ins, and the controller exchanges routes over XMPP with the TF vRouter on each compute node (Compute Node 1, Compute Node 2, ..., each with its compute runtime). All nodes sit on an Ethernet/IP underlay network, with Layer-2/Layer-3 network federation toward legacy compute nodes and the WAN.]
10. TF Controller and vRouter
Two key software components of TF:
• Tungsten Fabric Controller – a set of software services that maintains a model of networks and network policies, typically running on several servers for high availability
• Tungsten Fabric vRouter – installed in each host that runs workloads (virtual machines or containers); the vRouter performs packet forwarding and enforces network and security policies
https://tungstenfabric.github.io/website/Tungsten-Fabric-Architecture.html
14. Further Information
TF Website:
https://tungsten.io
TF Architectural Overview:
https://tungstenfabric.github.io/website/Tungsten-Fabric-Architecture.html
Deploying Tungsten Fabric:
https://github.com/Juniper/contrail-ansible-deployer/wiki
OpenContrail Documentation:
http://www.opencontrail.org/opencontrail-architecture-documentation/
Editor's Notes
Broad goal of open-source SDN with multi-orchestrator support
TF was developed in response to telco/NFV/service chaining:
- How do you replace those big appliances and create software-driven services in networks?
Completely open-sourced code base
Bugs and blueprints are driven by the community
The following are common use cases:
Enable Platform-as-a-Service and Software-as-a-Service with high scalability and flexibility in OpenStack-managed datacenters
Virtual networking with Kubernetes container management system, including with Red Hat OpenShift
Allow new or existing virtualized environment running VMware vCenter to use Tungsten Fabric virtual networking between virtual machines
Connect Tungsten Fabric virtual networks to physical networks, either via gateway routers with BGP peering into the network overlays, or directly through the data center underlay network
Visualizing TF
TF was previously just a hypervisor-based SDN solution; it now supports hypervisors, container hosts, and public clouds.
It uses overlay tunnels to keep underlay and overlay state separate, so service endpoints can be shifted around freely.
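For illustration, here is a rough C layout of one of those overlay encapsulations, MPLS over UDP (RFC 7510); the struct name and field commentary are illustrative, not vRouter code.

#include <stdint.h>

/* Rough sketch of an MPLS-over-UDP overlay packet (RFC 7510).
 * Outer Ethernet/IP headers are elided; underlay routers only ever
 * see the outer headers, which is what keeps overlay state separate. */
struct mplsoudp_encap {
    struct {
        uint16_t src_port; /* derived from a hash of the inner flow,
                            * giving the underlay entropy for ECMP */
        uint16_t dst_port; /* 6635, the MPLS-in-UDP port (RFC 7510) */
        uint16_t len;
        uint16_t cksum;
    } udp;
    uint32_t mpls_lse;     /* label(20) | TC(3) | S(1) | TTL(8); the
                            * label picks the destination VM interface */
    /* the workload's inner packet follows */
} __attribute__((packed));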
It is also a flow-based and flow-aware SDN solution: we can keep track of all types of flows between the interfaces that connect the VMs/containers. Because of this, we can implement flow validation and rich load balancing.
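To make that flow awareness concrete, a minimal sketch of per-flow state keyed on the classic 5-tuple could look like this (hypothetical names, not the vRouter's actual structures; the counters are what analytics and load balancing consume):

#include <stdint.h>

/* Hypothetical per-flow state, keyed on the inner packet's 5-tuple. */
struct flow_key {
    uint32_t src_ip, dst_ip;      /* inner IPv4 addresses */
    uint16_t src_port, dst_port;  /* inner L4 ports */
    uint8_t  proto;               /* IP protocol: TCP, UDP, ... */
};

struct flow_entry {
    struct flow_key key;
    uint8_t  action;              /* e.g. forward, drop, NAT */
    uint64_t packets, bytes;      /* counters exported to analytics */
    uint32_t ecmp_index;          /* member chosen for load balancing */
};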
It does a lot more than L2/L3 forwarding.
Analytics is a huge part of TF: you can see traffic flows plus further information that can be used for troubleshooting, which is especially important when scaling over large workloads.
Visualizing TF, here is an example of a simple deployment
Traditional SDN-like architecture with separate control and forwarding planes
In each compute/control node there is a vRouter that controls the dataplane
For connected VMs and containers, the vRouter replaces the Linux bridge and traditional IP stack; all forwarding and security policy application is controlled through the vRouter
The vRouter uses XMPP (XML-based) to connect to the controller and exchange data with it
The software-accelerated dataplane allows us to:
1. Control the flow of traffic
2. Apply security policies
3. Establish overlay tunnels
Forwarding tables:
- Interface table
- Nexthop table
- htable (flow table)
- mtries (routing table/FIB)
- Bridge table (one per interface)
Tunnels/overlays: MPLSoGRE/MPLSoUDP and VXLAN, working together with the physical gateway
Control plane: XMPP (XML-based)
A sketch of a flow-table lookup on the DPDK datapath follows below.
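Since the flow table above is a hash table (the "htable") consulted on the DPDK datapath, a lookup can be sketched with DPDK's rte_hash library as below; this is a minimal illustration of the technique, not the vRouter's actual code.

#include <stdint.h>
#include <rte_hash.h>
#include <rte_jhash.h>

/* Same 5-tuple key as sketched earlier. */
struct flow_key {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

static struct rte_hash *flow_table;

void flow_table_init(void)
{
    struct rte_hash_parameters p = {
        .name = "flow_htable",
        .entries = 1 << 20,            /* capacity for ~1M concurrent flows */
        .key_len = sizeof(struct flow_key),
        .hash_func = rte_jhash,
        .socket_id = 0,                /* rte_socket_id() in real code */
    };
    flow_table = rte_hash_create(&p);
}

/* A hit returns the cached per-flow action; a miss means the packet is
 * punted to the vRouter agent, which evaluates policy and installs a
 * new entry. */
void *flow_lookup(const struct flow_key *key)
{
    void *entry = NULL;
    if (rte_hash_lookup_data(flow_table, key, &entry) < 0)
        return NULL;                   /* miss: hand off to the agent */
    return entry;
}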
The vRouter agent runs in the user space of the host operating system, while the forwarder can run as a kernel module, in user space when DPDK is used, or in a programmable network interface card, also known as a "smart NIC". These options are described in more detail in the section [Deployment Options for vRouter]. The more commonly used kernel module option is illustrated here.
Each VRF has its own forwarding and flow tables, while the MPLS and VXLAN tables are global within the vRouter. The forwarding tables contain routes for both the IP and MAC addresses of destinations and the IP-to-MAC association is used to provide proxy ARP capability. The values of labels in the MPLS table are selected by the vRouter when VM interfaces come up, and are only locally significant to that vRouter. The VXLAN Network Identifiers are global across all the VRFs of the same virtual network in different vRouters within a Tungsten Fabric domain.
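To make that scoping concrete, here is a rough C sketch (illustrative types and field names, not the vRouter's) of per-VRF tables alongside the vRouter-global MPLS and VXLAN maps:

#include <stdint.h>

struct route_table;   /* opaque: per-VRF IP FIB (an mtrie) */
struct mac_table;     /* opaque: per-VRF bridge table */
struct flow_table;    /* opaque: per-VRF flow table */
struct vm_interface;  /* opaque: a VM/container interface */
struct vxlan_map;     /* opaque: VNI -> VRF hash (24-bit VNI space) */

struct vrf {
    struct route_table *inet_rt;  /* IP routes; also holds the IP-to-MAC
                                   * associations used for proxy ARP */
    struct mac_table   *bridge;
    struct flow_table  *flows;
};

#define MAX_VRFS   4096
#define MAX_LABELS (1 << 20)          /* 20-bit MPLS label space */

struct vrouter {
    struct vrf *vrfs[MAX_VRFS];
    /* Global MPLS map: labels are chosen by this vRouter when a VM
     * interface comes up and are only locally significant. */
    struct vm_interface *mpls[MAX_LABELS];
    /* Global VXLAN map: a VNI is consistent across all vRouters in
     * the same TF domain. */
    struct vxlan_map *vxlan;
};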
Each virtual network has a default gateway address allocated to it, and each VM or container interface receives that address in the DHCP response it gets when initializing. When a workload sends a packet to an address outside its subnet, it will ARP for the MAC corresponding to the IP address of the gateway IP, and the vRouter responds with its own MAC address. Thus, the vRouters support a fully distributed default gateway function for all the virtual networks.
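A minimal sketch of that distributed default gateway behavior, with hypothetical names, might look like this:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Each workload learned its gateway IP from the DHCP response at boot;
 * when it ARPs for that IP, the local vRouter answers with its own MAC. */
struct arp_request { uint32_t target_ip; };
struct vm_if       { uint32_t gateway_ip; uint8_t vrouter_mac[6]; };

/* Returns true and fills reply_mac when the vRouter should reply on
 * behalf of the gateway. */
bool gateway_proxy_arp(const struct vm_if *vif,
                       const struct arp_request *req,
                       uint8_t reply_mac[6])
{
    if (req->target_ip != vif->gateway_ip)
        return false;      /* not the gateway: normal resolution */
    memcpy(reply_mac, vif->vrouter_mac, 6);
    return true;           /* gateway function is fully local */
}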
Kernel Module – the default deployment mode
DPDK – forwarding acceleration is provided using an Intel library
SR-IOV – provides direct access to the NIC from a VM
Smart NIC – the vRouter forwarder is implemented in a programmable NIC
When a packet arrives from the physical network, the vRouter first checks whether the packet has a supported encapsulation. If not, the packet is sent to the host operating system. For MPLS over UDP and MPLS over GRE, the label identifies the VM interface directly, but VXLAN requires that the destination MAC address in the inner header be looked up in the VRF identified by the VXLAN Network Identifier (VNI). Once the interface is identified, the vRouter can forward the packet immediately if no policy flag is set for the interface (indicating that all protocols and all TCP/UDP ports are permitted). Otherwise, the 5-tuple is used to look up the flow in the flow table, and the same logic as described for an outgoing packet is used.
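That ingress logic can be summarized in a short C sketch; all function and type names below are illustrative stand-ins, not the vRouter's real API:

#include <stdbool.h>
#include <stdint.h>

struct vrf;
struct vm_if;
struct packet { struct vm_if *vif; /* parsed headers elided */ };

enum encap { ENCAP_NONE, ENCAP_MPLS_GRE, ENCAP_MPLS_UDP, ENCAP_VXLAN };

/* Helpers assumed to exist; every name here is illustrative. */
enum encap      classify_encap(const struct packet *p);
uint32_t        mpls_label(const struct packet *p);
uint32_t        vxlan_vni(const struct packet *p);
const uint8_t  *inner_dst_mac(const struct packet *p);
struct vm_if   *label_to_vif(uint32_t label);
struct vrf     *vni_to_vrf(uint32_t vni);
struct vm_if   *bridge_lookup(struct vrf *vrf, const uint8_t *mac);
bool            policy_enabled(const struct vm_if *vif);
void            send_to_host_os(struct packet *p);
void            forward(struct packet *p);
void            flow_process(struct packet *p);

void rx_from_fabric(struct packet *pkt)
{
    switch (classify_encap(pkt)) {
    case ENCAP_NONE:
        send_to_host_os(pkt);     /* unsupported encap: punt to host */
        return;
    case ENCAP_MPLS_GRE:
    case ENCAP_MPLS_UDP:
        /* The MPLS label identifies the VM interface directly. */
        pkt->vif = label_to_vif(mpls_label(pkt));
        break;
    case ENCAP_VXLAN: {
        /* The VNI selects a VRF; the inner destination MAC is looked
         * up in that VRF's bridge table. */
        struct vrf *vrf = vni_to_vrf(vxlan_vni(pkt));
        pkt->vif = bridge_lookup(vrf, inner_dst_mac(pkt));
        break;
    }
    }
    if (!policy_enabled(pkt->vif))
        forward(pkt);             /* all protocols/ports permitted */
    else
        flow_process(pkt);        /* 5-tuple flow-table path, as for
                                   * outgoing packets */
}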
You can check out the TF website, as well as documentation on GitHub, that walks you through various deployments:
You can install using Ansible, Helm, OpenShift, Docker, and more.
There is also a link to the old OpenContrail site, which has a lot of good information. The documentation is still being migrated since the project joined the Linux Foundation last year.