Building a secure multi-tenant cloud necessitates proper tenant isolation and access control. Key network and security functions must scale independently based on the dynamic resource requirements of each tenant. Additionally, on-demand, self-service provisioning is required to achieve operational efficiencies. Robust, dynamic, and elastic software abstractions are imperative to support applications built to run in such complex environments.
This slide deck covers the following, as applied in building the OpenStack cloud at Symantec:
• Architectural design choices
• Implementation blueprints
• Operational best practices
1. ARCHITECTING AND BUILDING A SECURE MULTI-TENANT CLOUD FOR SAAS APPLICATIONS
Dilip Sundarraj
Cloud Solutions Architect, Juniper Networks
April 8th, 2015
3. What is Network Virtualization?
• Independent of physical network location or state
• Logical networks span any server, any rack, any cluster, any data center
• Virtual machines can migrate without requiring any reworking of security policies, load balancing, etc.
• New workloads or networks should not require provisioning of the physical network
• Nodes in the physical network can fail without any disruption to workloads
• Full isolation for multi-tenancy and fault tolerance
• MAC and IP addresses are completely private per tenant
• Failures or configuration errors by one tenant do not affect other applications or tenants
• Failures in the virtual layer do not propagate to the physical layer
4. OpenContrail
• Open-source network virtualization platform for the cloud
• Primary use cases:
  • Cloud networking – IaaS, VPCs for cloud service providers, private cloud for enterprises or SPs
  • NFV in SP networks – value-added services for SP edge networks
6. Contrail Controller Architecture (diagram)
• Orchestrator: handles compute/storage orchestration across x86 hosts running hypervisors.
• Configuration node: accepts and converts orchestrator requests for VM creation, translates the requests, and creates the network (network orchestration).
• Control node: interacts with network elements for VM network provisioning and ensures uptime; communicates with vRouters over a bi-directional real-time message bus using XMPP; uses a standard protocol (MP-BGP) to talk with other Contrail controller instances.
• Analytics: real-time analytics engine that collects, stores, and analyzes network element data.
• vRouter: virtualized routing element that handles localized control-plane and forwarding-plane work on the compute node.
• Gateway: MX Series (or other router) serves as the gateway to the Internet/WAN and legacy infrastructure (VLAN, etc.), improving scale and performance.
• All of this runs over the existing physical IP network (no changes).
8. OpenStack Integration (diagram)
Components: Horizon, Nova API, Nova Scheduler, Neutron plugin/driver, Nova Compute (compute driver, virtual-IF driver), and, on the virtual router, the Contrail agent and vRouter (kernel), backed by the Contrail configuration and control nodes.
Workflow:
1. Create an instance (VM info, network, IPAM, policies, etc.)
2. Schedule the instance on the compute node
3. Fetch the VM network properties
4. Create the VM interface
5. Add the port
6. Publish the VM interface on IF-MAP
7. Push the VM interface config over XMPP
9. OpenContrail – Control Node
• All control-plane nodes are active/active.
• Each vRouter uses XMPP to connect with multiple control-plane nodes for redundancy.
• Each control-plane node connects to multiple configuration nodes for redundancy.
• Control-plane nodes federate using BGP.
(Diagram: compute nodes connect to control nodes over XMPP; control nodes are IF-MAP clients of the configuration nodes; the control node's BGP module proxies XMPP and peers over iBGP with other control nodes, gateway routers, and service nodes.)
11. Compute Node – Hypervisor, vRouter (diagram)
• The vRouter forwarding plane runs in the kernel; the vRouter agent runs in user space and holds the config, VRFs, and policy table.
• Per-tenant routing instances (tenants A, B, C), each with its own FIB and flow table, serve that tenant's virtual machines.
• VMs attach via tap interfaces (vif); the agent and forwarding plane exchange packets over pkt0.
• The agent connects to the (JunosV) Contrail controllers over XMPP.
• Overlay tunnels (MPLS over GRE or VXLAN) exit through the physical NICs (eth0…ethN) toward the top-of-rack switch.
12. Compute Node – Forwarding/Tunneling (diagram)
Two compute nodes (physical addresses Phy-IP1 and Phy-IP2) each run a vRouter forwarding plane with a routing instance (flow table + FIB) and host VMs (VN-IP1, VN-IP2) attached via tap interfaces (vif). In the physical underlay, the virtual packet (Virtual-IP2 / payload) is carried inside an MPLS/VNI label with an outer header addressed to Phy-IP2, over overlay tunnels (MPLS over GRE or VXLAN).
1. The guest OS ARPs for a destination within its subnet or for the default GW.
2. The vRouter receives the ARP and responds with the VRRP MAC.
3. The guest OS sends traffic to the VRRP MAC; the vRouter encapsulates the packet with the appropriate MPLS/VNI tag and GRE header.
4. The physical fabric routes on the physical IP address.
5. Returning packets are forwarded to the appropriate routing instance by the MPLS/VNI tag.
6. The vRouter decapsulates the packet and forwards it to the guest OS.
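The encapsulation and decapsulation steps above can be sketched in plain Python. This is purely illustrative: the real vRouter does this in its kernel forwarding plane, and the label value, instance name, and dictionary shapes here are invented for the example.

```python
# Illustrative sketch of vRouter overlay forwarding (MPLS over GRE).
# Labels, names, and addresses are invented; the real data path lives
# in the vRouter kernel module.

def encapsulate(payload, virtual_dst, label_for_vm, phys_for_vm):
    """Wrap a tenant packet in an MPLS label plus an outer header."""
    return {
        "outer_ip_dst": phys_for_vm[virtual_dst],  # compute node hosting the VM
        "gre_proto": "MPLS",
        "mpls_label": label_for_vm[virtual_dst],   # selects the routing instance
        "inner_ip_dst": virtual_dst,
        "payload": payload,
    }

def decapsulate(frame, instance_for_label):
    """Strip the outer headers; pick the routing instance by MPLS label."""
    instance = instance_for_label[frame["mpls_label"]]
    return instance, frame["inner_ip_dst"], frame["payload"]

# Matching the slide: VN-IP2 lives behind Phy-IP2, reachable via label 17.
label_for_vm = {"VN-IP2": 17}
phys_for_vm = {"VN-IP2": "Phy-IP2"}
instance_for_label = {17: "tenant-a-routing-instance"}

frame = encapsulate(b"hello", "VN-IP2", label_for_vm, phys_for_vm)
instance, dst, payload = decapsulate(frame, instance_for_label)
```

The key point the sketch captures is that the MPLS/VNI label, not the outer IP header, is what demultiplexes returning traffic into the right per-tenant routing instance.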
14. DNSaaS
Contrail offers four different DNS modes:
• Default DNS server – the host OS's configured DNS server
• Tenant DNS server – tenants can use their own DNS servers (different from the host OS's DNS server)
• Virtual DNS server – the Contrail controller provides a per-tenant DNS server
• None – VMs don't have any DNS resolution capability
One of these modes is selected when an IPAM instance is created for a domain.
15. Contrail Virtual DNS
DNS record creation:
• Each IPAM has virtual DNS servers configured.
• Virtual networks and VMs in an IPAM use the DNS domain of the virtual DNS server specified in that IPAM.
• When a VM is spawned, A and PTR records are added into the vDNS server of the virtual network's IPAM.
NOTE:
• DNS records can also be added statically.
• A, CNAME, PTR, and NS records are supported.
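The record-creation step above can be sketched as a small helper that derives the forward (A) and reverse (PTR) entries for a newly spawned VM. The hostname and zone are invented for the example; in Contrail the records are written by the API/DNS machinery, not by application code like this.

```python
# Sketch: the A and PTR records added when a VM is spawned.
# Hostname, IP, and domain are example values only.

def records_for_vm(hostname, ip, domain):
    """Return the forward (A) and reverse (PTR) records for a new VM."""
    fqdn = f"{hostname}.{domain}"
    # PTR names reverse the octets and append the in-addr.arpa zone.
    reverse = ".".join(reversed(ip.split("."))) + ".in-addr.arpa"
    return {"A": (fqdn, ip), "PTR": (reverse, fqdn)}

recs = records_for_vm("web01", "6.6.6.10", "contrail.us")
```

Here `records_for_vm("web01", "6.6.6.10", "contrail.us")` yields an A record for `web01.contrail.us` and a PTR record under `10.6.6.6.in-addr.arpa`, mirroring the `contrail.us` and `6.6.6.in-addr.arpa` zones shown on the next slide.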
16. Contrail Virtual DNS
DNS resolution:
1. DNS requests from the VM are trapped by the vRouter agent on the hypervisor.
2. The vRouter agent forwards the DNS request to the controllers (which run BIND) for resolution.
3. BIND has the concept of views, and every virtual DNS instance has its own isolated view:
view "default-domain-contrailtestdns" {
    rrset-order {order random;};
    forwarders {172.16.70.254; };
    zone "6.6.6.in-addr.arpa." IN {
        type master;
        file "/etc/contrail/dns/default-domain-contrailtestdns.6.6.6.in-addr.arpa.zone";
        allow-update {127.0.0.1;};
    };
    zone "contrail.us" IN {
        type master;
        file "/etc/contrail/dns/default-domain-contrailtestdns.contrail.us.zone";
        allow-update {127.0.0.1;};
    };
};
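A per-vDNS view stanza like the one above is mechanical to generate, so it could be templated from the vDNS instance's parameters. A minimal sketch follows; the file-path layout and forwarder come from the example config, while the template function itself is an assumption (Contrail's actual generator is internal to the DNS/named integration).

```python
# Sketch: templating a per-vDNS BIND view like the example above.
# The render function is hypothetical; only the shape of the output
# follows the real example config.

VIEW_TEMPLATE = """\
view "{view}" {{
    rrset-order {{order random;}};
    forwarders {{{forwarder}; }};
    zone "{zone}" IN {{
        type master;
        file "/etc/contrail/dns/{view}.{zone}.zone";
        allow-update {{127.0.0.1;}};
    }};
}};
"""

def render_view(view, zone, forwarder):
    """Render one isolated BIND view for a virtual DNS instance."""
    return VIEW_TEMPLATE.format(view=view, zone=zone, forwarder=forwarder)

conf = render_view("default-domain-contrailtestdns", "contrail.us",
                   "172.16.70.254")
```

Because each tenant's records live in a separate view with its own zone files, one BIND process can serve many tenants without any leakage between their namespaces.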
17. DNS & IPAM Relationship
• A Neutron network maps to a Contrail virtual network.
• network-ipam and virtual-DNS are Contrail-specific constructs.
• A virtual-DNS object has a domain as its parent.
• A network-ipam object has a project as its parent.
• So: virtual-network ==refers-to==> network-ipam ==refers-to==> virtual-DNS
18. Contrail Virtual DNS @SYMC
• By default, the Contrail API server creates a default-network-ipam object under the default-domain -> default-project hierarchy.
• However, using the Contrail API hooks mechanism, we automatically:
  • Create a default-network-ipam object within each newly created project
  • Create a default-virtual-DNS object within each newly created domain
  • Link them to provide vDNS functionality
• So when a new virtual-network is created, it is automatically linked to the project-specific default-network-ipam and the corresponding virtual DNS object.
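The hook-driven auto-creation described above can be sketched as a callback that builds and links the two default objects. The function name and dictionary shapes are invented; the real mechanism is a Contrail API server hook with its own signature, not this code.

```python
# Sketch of the per-project/per-domain defaults described above.
# on_project_created is a hypothetical stand-in for a Contrail API
# server hook; the object shapes are simplified to plain dicts.

def on_project_created(project_name, domain_name):
    """Create default IPAM and vDNS objects and link them."""
    vdns = {"name": "default-virtual-DNS", "parent_domain": domain_name}
    ipam = {
        "name": "default-network-ipam",
        "parent_project": project_name,
        # Link ipam -> vDNS so new virtual-networks in this project
        # automatically get vDNS functionality.
        "virtual_dns": vdns["name"],
    }
    return ipam, vdns

ipam, vdns = on_project_created("tenant-a", "default-domain")
```

The point of the hook is that tenants never have to wire up IPAM and vDNS themselves: every new project arrives with the chain virtual-network -> network-ipam -> virtual-DNS already in place.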
19. Floating IPs
• Neutron supports the concept of floating IP (routable IP).
• Instances are unaware of their Floating IP.
• Every Virtual Network maps to a Routing Instance
• Routing Instances
• Define network connectivity between VMs in the Virtual Network
• Contain routes only for VMs in the Virtual Network
• Two Routing Instances (Virtual Networks) can be connected using
• The Neutron L3 agent
• A Contrail Network Policy (explained later)
• By default, Virtual Networks do not have access to a “public” (routable) network
• A gateway must be used to provide a virtual-network with connectivity to the "public" network
• Floating IP support can be provided with
• Simple Gateway – an x86-based software gateway
• A routing device such as a Juniper MX
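Because instances are unaware of their floating IPs, the gateway performs a 1:1 NAT between floating and fixed addresses. A toy sketch of that mapping (all IPs made up):

```python
# Sketch of the 1:1 NAT a gateway performs for floating IPs.
# Instances only ever see their fixed (private) address; the gateway
# rewrites addresses in both directions. All IPs below are examples.
nat_table = {
    "203.0.113.10": "10.1.1.5",   # floating IP -> fixed IP
    "203.0.113.11": "10.1.1.6",
}

def inbound(dst_ip):
    """Traffic arriving at a floating IP is rewritten to the fixed IP."""
    return nat_table.get(dst_ip, dst_ip)

def outbound(src_ip):
    """Replies from the fixed IP are rewritten back to the floating IP."""
    reverse = {fixed: floating for floating, fixed in nat_table.items()}
    return reverse.get(src_ip, src_ip)
```

Traffic not matching a NAT entry passes through unchanged; the VM keeps using its fixed IP throughout.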
20. Floating IP using Neutron L3 Router
• Create an external network
• neutron net-create public --router:external True
• Create a router
• neutron router-create router1
• Add interfaces from Virtual network to this router
• neutron router-interface-add router1 SUBNET1_UUID
• Set the gateway on the router instance
• This connects the router to an external network, enabling it to act as a NAT gateway for external connectivity
• neutron router-gateway-set router1 EXT_NET_ID
21. Multiple Floating IPs per VM @SYMC
[Diagram: two spine-leaf fabrics, one in the Mountain View DC and one in the Boston DC, each with compute nodes and bare-metal servers (BMS) attached to the leaves. Each site is fronted by a Juniper MX router carrying a Public VRF (toward the Internet) and an Intra-site VRF; the two MX routers are joined by an intra-site VPN. VMs in the MTV DC carry two floating IPs: (1) an Internet-routable floating IP and (2) an IP routable from the Boston DC.]
22. LBaaS
• LBaaS enables a pool of VMs servicing an application to be accessible via a virtual IP
• Contrail LBaaS features:
• Load balancing of traffic from clients to a pool of backend servers; the load balancer proxies all connections to its virtual IP
• Load balancing for HTTP, HTTPS, and TCP
• Health monitoring capabilities for applications
• Floating IP association to the virtual IP for public access to the backend pool
24. Contrail LBaaS Implementation
• Supports OpenStack LBaaS Neutron APIs
• Creation of virtual-ip, loadbalancer-pool, loadbalancer-member, and
loadbalancer-healthmonitor.
• A Service Instance is created when a loadbalancer-pool is associated with a virtual-ip object
• The service scheduler launches a namespace and spawns HAProxy in it
• HAProxy parameters are obtained from the load balancer objects
• HA of namespaces/HAProxy: active/standby on two different compute nodes
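Conceptually, the agent renders the load balancer objects into an HAProxy configuration for the service namespace. A toy rendering under assumed field names (the real agent writes a much fuller config):

```python
def render_haproxy_cfg(vip, pool_members, protocol="HTTP"):
    """Toy rendering of LBaaS objects into an HAProxy config fragment.
    vip and pool_members are simplified stand-ins for the Neutron
    virtual-ip and loadbalancer-member objects."""
    lines = [
        "frontend vip-frontend",
        f"    bind {vip['address']}:{vip['port']}",
        f"    mode {'http' if protocol in ('HTTP', 'HTTPS') else 'tcp'}",
        "    default_backend pool-backend",
        "backend pool-backend",
        "    balance roundrobin",
    ]
    for i, m in enumerate(pool_members):
        # 'check' enables HAProxy health checks, mirroring the
        # loadbalancer-healthmonitor association.
        lines.append(f"    server member{i} {m['address']}:{m['port']} check")
    return "\n".join(lines)

cfg = render_haproxy_cfg(
    vip={"address": "10.1.1.100", "port": 80},
    pool_members=[{"address": "10.1.1.5", "port": 8080},
                  {"address": "10.1.1.6", "port": 8080}],
)
```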
25. Link Local Services
Provides VMs access to specific services on the IP fabric infrastructure.
• @SYMC
• Keystone, GitHub, NTP, logging, monitoring, and metering services
Once a link local service is configured, VMs can access the service using its link local address.
• The OpenStack Metadata Service on 169.254.169.254:80 is also implemented using a Link Local Service
(169.254.169.XXX, service port) <-> (destination IP, service TCP/UDP port)
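The translation above can be sketched as a lookup table the vRouter consults when a VM sends traffic to a link-local address. The fabric addresses and the nova-api port below are illustrative assumptions:

```python
# Sketch of the link-local translation performed by the vRouter:
# (link-local IP, port) -> (fabric destination IP, port).
# Fabric IPs/ports are made-up examples.
link_local_map = {
    ("169.254.169.254", 80): ("10.84.5.1", 8775),   # OpenStack metadata service
    ("169.254.169.10", 123): ("10.84.5.20", 123),   # NTP on the fabric
}

def translate(dst_ip, dst_port):
    """Return the fabric (ip, port) a VM's link-local request is
    forwarded to, or None if no link-local service is configured."""
    return link_local_map.get((dst_ip, dst_port))
```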
26. Contrail Network Policy
• Defines connectivity and enforces policy between Virtual Networks
• Follows the 5-tuple semantics
• SRC/DST Virtual Network, SRC/DST Port, Protocol
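The 5-tuple semantics can be sketched as a simple match function of the kind the vRouter flow table applies conceptually (rule and flow shapes are illustrative):

```python
# Sketch of 5-tuple policy matching:
# (src VN, dst VN, src port, dst port, protocol).
def matches(rule, flow):
    """A rule field of 'any' matches every value in the flow."""
    return all(rule[k] in ("any", flow[k]) for k in rule)

# Hypothetical rule: allow frontend VMs to reach backend on TCP/8080.
allow_web = {"src_vn": "frontend", "dst_vn": "backend",
             "src_port": "any", "dst_port": 8080, "protocol": "tcp"}

flow = {"src_vn": "frontend", "dst_vn": "backend",
        "src_port": 54321, "dst_port": 8080, "protocol": "tcp"}
```

A flow to any other destination port would fall through this rule and be handled by the next rule (or the default deny).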
27. Contrail Network Policy
• Connectivity between two Virtual Networks is established by leaking routes between their Routing Instances when a network policy interconnecting the two VNs is created
• Policy is enforced for specific traffic types by flow-table programming in every vRouter that hosts the relevant Virtual Networks
[Diagram: a compute node. The vRouter forwarding plane runs in the kernel and holds one Routing Instance per tenant (A, B, C), each with its own FIB and Flow Table. VMs attach via tap interfaces (vif); the fabric attaches via Eth0/Eth1…EthN. The user-space vRouter Agent holds the Config, VRFs, Policy Table, and Flow Table state, and talks to the forwarding plane over the pkt0 interface.]
28. Environments & Operations
Environments
• Lab: > 10 nodes
• CI/CD test environment for SDN related features and functions
• Staging: > 50 nodes
• True IaaS for PaaS applications
• Production: > 250 nodes
• PaaS for end-user applications
Operations:
• Monitoring & Troubleshooting
• Contrail Analytics feeds into OpsView & LMM
• Upgrade
• Phased upgrades during maintenance windows without application downtime.
30. DEVSTACK + OPENCONTRAIL
• WHAT?
• Run OpenStack and OpenContrail on your laptop or in a VM
• WHY?
• Use to build & test OpenStack and OpenContrail code
• Just play with OpenStack/OpenContrail features
• HOW?
• An Ubuntu server or VM with 4 GB RAM and access to GitHub
Tenants can use their own DNS servers using this mode; a list of servers can be configured in the IPAM. The DNS domain is received by VMs via the DHCP DOMAIN-NAME option.
Each record takes the type (A / CNAME / PTR / NS), class (IN), name, data and TTL values.
While the core network resource in Neutron maps to virtual-network in Contrail, network-ipam and virtual-DNS are resources introduced by Contrail. network-ipam is also defined as a Neutron extension and can be used via the Neutron API, as Horizon does here. virtual-DNS will also be added as a Neutron extension in the future.
Simple Gateway is a restricted gateway implementation that can be used for experimental purposes; it provides virtual-networks with access to the "public" network.
Metadata service is also a link-local service, with a fixed service name (metadata), a fixed service address (169.254.169.254:80), and a fabric address pointing to the server where the OpenStack Nova API server is running. All of the configuration and troubleshooting procedures for Contrail link-local services also apply to the metadata service.
However, for the metadata service, the flow is always set up to the compute node, so the vRouter agent updates and proxies the HTTP request. The vRouter agent listens on a local port to receive metadata requests. Consequently, the reverse flow has the compute node as the source IP, the agent's local listening port as the source port, and the instance's metadata IP as the destination IP address.