This white paper describes Brocade's cloud-optimized data center fabric architectures. It explains how data center networking has evolved from traditional three-tier designs, built for north-south traffic, to modern scale-out designs optimized for the east-west traffic of cloud environments. It then outlines Brocade's solutions, including network virtualization options, data center interconnect fabrics, and automation tools, to help architects and engineers design networks that meet their technical and business needs.
WHITE PAPER
Brocade Data Center Fabric Architectures
Building the foundation for a cloud-optimized data center
Based on the principles of the New IP, Brocade is building on the proven success of the Brocade® VDX® platform by expanding the Brocade cloud-optimized network and network virtualization architectures and delivering new automation innovations to meet customer demand for higher levels of scale, agility, and operational efficiency.
The scalable and highly automated Brocade data center fabric
architectures described in this white paper make it easy for infrastructure
planners to architect, automate, and integrate with current and future data
center technologies while they transition to their own cloud-optimized
data center on their own time and terms.
This paper helps network architects,
virtualization architects, and network
engineers to make informed design,
architecture, and deployment decisions
that best meet their technical and
business objectives. The following topics
are covered in detail:
••Network architecture options for scaling
from tens to hundreds of thousands
of servers
••Network virtualization solutions
that include integration with leading
controller-based and controller-less
industry solutions
••Data Center Interconnect (DCI) options
••Server-based, open, and programmable
turnkey automation tools for rapid
provisioning and customization with
minimal effort
Evolution of
Data Center Architectures
Data center networking architectures
have evolved with the changing require-
ments of the modern data center and
cloud environments.
Traditional data center networks were
a derivative of the 3-tier architecture,
prevalent in enterprise campus
environments. (See Figure 1.) The tiers
are defined as Access, Aggregation,
and Core. The 3-tier topology was
architected with the requirements of an
enterprise campus in mind. A typical
network access layer requirement of
an enterprise campus is to provide
connectivity to workstations. These
enterprise workstations exchange traffic
with either an enterprise data center for
business application access or with
the Internet. As a result, most traffic in this network traverses in and out through the tiers in the network. This traffic pattern is commonly referred to as north-south traffic.

TABLE OF CONTENTS
Evolution of Data Center Architectures
Data Center Networks: Building Blocks
Building Data Center Sites with Brocade VCS Fabric Technology
Building Data Center Sites with Brocade IP Fabric
Building Data Center Sites with Layer 2 and Layer 3 Fabrics
Scaling a Data Center Site with a Data Center Core
Control Plane and Hardware Scale Considerations
Choosing an Architecture for Your Data Center
Network Virtualization Options
DCI Fabrics for Multisite Data Center Deployments
Turnkey and Programmable Automation
About Brocade
When compared to an enterprise campus
network, the traffic patterns in a data
center network are changing rapidly
from north-south to east-west. Cloud
applications are often multitiered and
hosted at different endpoints connected
to the network. The communication
between these application tiers is a major
contributor to the overall traffic in a data
center. In fact, some of the very large data
centers report that more than 90 percent
of their overall traffic occurs between the
application tiers. This traffic pattern is
commonly referred to as east-west traffic.
Traffic patterns are the primary reasons
that data center networks need to evolve
into scale-out architectures. These scale-
out architectures are built to maximize
the throughput for east-west traffic.
(See Figure 2.) In addition to providing
high east-west throughput, scale-out
architectures provide a mechanism to
add capacity to the network horizontally,
without reducing the provisioned capacity
between the existing endpoints. An
example of scale-out architectures is a
leaf-spine topology, which is described in
detail in a later section of this paper.
In recent years, with the changing
economics of application delivery, a
shift towards the cloud has occurred.
Enterprises have looked to consolidate
and host private cloud services.
Meanwhile, application cloud services,
as well as public service provider
clouds, have grown at a rapid pace. With
this increasing shift to the cloud, the
scale of the network deployment has
increased drastically. Advanced scale-
out architectures allow networks to be
deployed at many multiples of the scale of a leaf-spine topology (see Figure 3).

Figure 1: Three-tier architecture: Ideal for north-south traffic patterns commonly found in client-server compute models.

Figure 2: Scale-out architecture: Ideal for east-west traffic patterns commonly found with web-based or cloud-based application designs.
In addition to traffic patterns, as server
virtualization has become mainstream,
newer requirements of the networking
infrastructure are emerging. Because physical servers can now host several virtual machines (VMs), the scale requirements for the control and data planes for MAC addresses, IP addresses, and Address Resolution Protocol (ARP) tables have multiplied.
Also, large numbers of physical and
virtualized endpoints must support much
higher throughput than a traditional
enterprise environment, leading to an evolution in Ethernet standards toward 10 Gigabit Ethernet (GbE), 40 GbE, 100 GbE, and beyond. In addition, the
need to extend Layer 2 domains across
the infrastructure and across sites to
support VM mobility is creating new
challenges for network architects.
For multitenant cloud environments, providing traffic isolation at the networking layers and enforcing security and traffic policies for cloud tenants and applications are priorities. Cloud-scale deployments also require the networking infrastructure to be agile in provisioning new capacity, tenants, and features, as well as in making modifications and managing the lifecycle of the infrastructure.
Figure 3: Example of an advanced scale-out architecture commonly used in today’s large-scale data centers.
The remainder of this white paper
describes data center networking
architectures that meet the requirements
for building cloud-optimized networks
that address current and future needs for
enterprises and service provider clouds.
More specifically, this paper describes:
•• Example topologies and deployment
models demonstrating Brocade VDX
switches in Brocade VCS fabric or
Brocade IP fabric architectures
•• Network virtualization solutions that
include controller-based virtualization
such as VMware NSX and controller-
less virtualization using the Brocade
Border Gateway Protocol Ethernet
Virtual Private Network (BGP-EVPN)
•• DCI solutions for interconnecting
multiple data center sites
•• Open and programmable turnkey
automation and orchestration tools that
can simplify the provisioning of
network services
Data Center Networks:
Building Blocks
This section discusses the building blocks
that are used to build the appropriate
network and virtualization architecture for
a data center site. These building blocks
consist of the various elements that fit into
an overall data center site deployment.
The goal is to build fairly independent
elements that can be assembled together,
depending on the scale requirements of
the networking infrastructure.
Networking Endpoints
The first building blocks are the
networking endpoints that connect to
the networking infrastructure. These
endpoints include the compute servers
and storage devices, as well as network
service appliances such as firewalls and
load balancers.
Figure 4 shows the different types of
racks used in a data center infrastructure
as described below:
••Infrastructure and Management Racks:
These racks host the management
infrastructure, which includes any
management appliances or software
used to manage the infrastructure.
Examples of this are server virtualization
management software like VMware
vCenter or Microsoft SCVMM,
orchestration software like OpenStack
or VMware vRealize Automation,
network controllers like the Brocade
SDN Controller or VMware NSX, and
network management and automation
tools like Brocade Network Advisor.
Examples of infrastructure rack components are physical or virtual IP storage appliances.
••Compute racks: Compute racks host
the workloads for the data centers.
These workloads can be physical
servers, or they can be virtualized
servers when the workload is made up
of Virtual Machines (VMs). The compute
endpoints can be single-homed or multihomed to the network.
••Edge racks: The network services
connected to the network are
consolidated in edge racks. The
role of the edge racks is to host the
edge services, which can be physical
appliances or VMs.
These definitions of infrastructure/management racks, compute racks, and edge racks are used throughout this white paper.
Single-Tier Topology
The second building block is a single-
tier network topology to connect
endpoints to the network. Because of the
existence of only one tier, all endpoints
connect to this tier of the network. An
example of a single-tier topology is shown
in Figure 5. The single-tier switches are
shown as a virtual Link Aggregation
Group (vLAG) pair.
The topology in Figure 5 shows the
management/infrastructure, compute
racks, and edge racks connected to a pair
of switches participating in multiswitch
port channeling. This pair of switches is
called a vLAG pair.
The single-tier topology scales the least
among all the topologies described in
this paper, but it provides the best choice
for smaller deployments, as it reduces
the Capital Expenditure (CapEx) costs
for the network in terms of the size of the
infrastructure deployed. It also reduces
the optics and cabling costs for the
networking infrastructure.
Design Considerations for a
Single-Tier Topology
The design considerations for deploying
a single-tier topology are summarized in
this section.
Oversubscription Ratios
It is important for network architects to
understand the expected traffic patterns
in the network. To this effect, the
oversubscription ratios at the vLAG
pair should be well understood and
planned for.
Figure 4: Networking endpoints and racks.

Figure 5: Ports on demand with a single networking tier.
The north-south oversubscription at the
vLAG pair is described as the ratio of the
aggregate bandwidth of all the downlinks
from the vLAG pair that are connected
to the endpoints to the aggregate
bandwidth of all the uplinks that are
connected to the edge/core router
(described in a later section). The
north-south oversubscription dictates
the proportion of traffic between the
endpoints versus the traffic entering and
exiting the data center site.
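The ratios described here are simple bandwidth arithmetic. The minimal Python sketch below makes that explicit; the port counts and speeds are illustrative assumptions, not validated design guidance.

```python
# Oversubscription as described above: the ratio of aggregate downlink
# bandwidth to aggregate uplink bandwidth. All values are assumptions.

def oversubscription(downlinks: int, downlink_gbps: int,
                     uplinks: int, uplink_gbps: int) -> float:
    """Return N for an N:1 oversubscription ratio."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# Example: a vLAG pair with 96 x 10 GbE downlinks to endpoints and
# 8 x 40 GbE uplinks toward the core/edge routers:
print(oversubscription(96, 10, 8, 40))  # -> 3.0, that is, 3:1
```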
It is also important to understand the
bandwidth requirements for the
inter-rack traffic. This is especially true
for all north-south communication
through the services hosted in the edge
racks. All such traffic flows through the
vLAG pair to the edge racks and, if the
traffic needs to exit, it flows back to the
vLAG switches. Thus, the aggregate
ratio of bandwidth connecting the
compute racks to the aggregate ratio of
bandwidth connecting the edge racks is an
important consideration.
Another consideration is the bandwidth
of the link that interconnects the vLAG
pair. When endpoints are multihomed and no links have failed, this link should not be used for data plane forwarding. However,
if there are link failures in the network,
then this link may be used for data plane
forwarding. The bandwidth requirement
for this link depends on the redundancy
design for link failures. For example, a
design to tolerate up to two 10 GbE link
failures has a 20 GbE interconnection
between the Top of Rack/End of Row
(ToR/EoR) switches.
Port Density and Speeds for Uplinks
and Downlinks
In a single-tier topology, the uplink and
downlink port density of the vLAG pair
determines the number of endpoints that
can be connected to the network, as well
as the north-south oversubscription ratios.
Another key consideration for single-tier
topologies is the choice of port speeds
for the uplink and downlink interfaces.
Brocade VDX Series switches support
10 GbE, 40 GbE, and 100 GbE interfaces,
which can be used for uplinks and
downlinks. The choice of platform for the
vLAG pair depends on the interface speed
and density requirements.
Scale and Future Growth
A design consideration for single-tier
topologies is the need to plan for more
capacity in the existing infrastructure and
more endpoints in the future.
Adding more capacity between existing
endpoints and vLAG switches can be
done by adding new links between them.
Also, any future expansion in the number
of endpoints connected to the single-
tier topology should be accounted for
during the network design, as this requires
additional ports in the vLAG switches.
Another key consideration is whether to
connect the vLAG switches to external
networks through core/edge routers and
whether to add a networking tier for
higher scale. These designs require
additional ports at the ToR/EoR. Multitier
designs are described in a later section of
this paper.
Ports on Demand Licensing
Ports on Demand licensing allows you
to expand your capacity at your own
pace, in that you can invest in a higher
port density platform, yet license only
a subset of the available ports on the
Brocade VDX switch, the ports that you
are using for current needs. This allows for
an extensible and future-proof network
architecture without the additional upfront
cost for unused ports on the switches. You
pay only for the ports that you plan to use.
Leaf-Spine Topology (Two-Tier)
The two-tier leaf-spine topology has
become the de facto standard for
networking topologies when building
medium-scale data center infrastructures.
An example of leaf-spine topology is
shown in Figure 6.
The leaf-spine topology is adapted from Clos telecommunications networks. This topology is also known as the “3-stage folded Clos”: the ingress and egress stages proposed in the original Clos architecture are folded together to form the leaves, while the middle stage forms the spine.
Figure 6: Leaf-spine topology.
The role of the leaf is to provide
connectivity to the endpoints in the
network. These endpoints include
compute servers and storage devices,
as well as other networking devices like
routers and switches, load balancers,
firewalls, or any other networking
endpoint—physical or virtual. As all
endpoints connect only to the leaves,
policy enforcement including security,
traffic path selection, Quality of Service
(QoS) markings, traffic scheduling,
policing, shaping, and traffic redirection
are implemented at the leaves.
The role of the spine is to provide
interconnectivity between the leaves.
Network endpoints do not connect to the
spines. As most policy implementation
is performed at the leaves, the major role
of the spine is to participate in the control
plane and data plane operations for traffic
forwarding between the leaves.
As a design principle, the following
requirements apply to the leaf-spine
topology:
••Each leaf connects to all the spines in
the network.
••The spines are not interconnected with
each other.
••The leaves are not interconnected with
each other for data plane purposes.
(The leaves may be interconnected
for control plane operations such as
forming a server-facing vLAG.)
These are some of the key benefits of a
leaf-spine topology:
••Because each leaf is connected to every
spine, there are multiple redundant paths
available for traffic between any pair of
leaves. Link failures cause other paths in
the network to be used.
••Because of the existence of multiple
paths, Equal-Cost Multipathing (ECMP)
can be leveraged for flows traversing
between pairs of leaves. With ECMP, each leaf has as many equal-cost routes to destinations behind other leaves as there are spines in the network.
••The leaf-spine topology provides a basis
for a scale-out architecture. New leaves
can be added to the network without
affecting the provisioned east-west
capacity for the existing infrastructure.
••The role of each tier in the network is
well defined (as discussed previously),
providing modularity in the networking
functions and reducing architectural and
deployment complexities.
••The leaf-spine topology provides
granular control over subscription
ratios for traffic flowing within a rack,
traffic flowing between racks, and traffic
flowing outside the leaf-spine topology.
Design Considerations for a
Leaf-Spine Topology
There are several design considerations
for deploying a leaf-spine topology.
This section summarizes the key
considerations.
Oversubscription Ratios
It is important for network architects
to understand the expected traffic
patterns in the network. To this effect,
the oversubscription ratios at each
layer should be well understood and
planned for.
For a leaf switch, the ports connecting
to the endpoints are defined as downlink
ports, and the ports connecting to the
spines are defined as uplink ports. The
oversubscription ratio at the leaves is
the ratio of the aggregate bandwidth for
the downlink ports and the aggregate
bandwidth for the uplink ports.
For a spine switch in a leaf-spine
topology, the east-west oversubscription
ratio is defined per pair of leaf switches
connecting to the spine switch. For a
given pair of leaf switches connecting to
the spine switch, the oversubscription ratio
is the ratio of aggregate bandwidth of the
links connecting to each leaf switch. In a
majority of deployments, this ratio is 1:1,
making the east-west oversubscription
ratio at the spine nonblocking.
Exceptions to the nonblocking east-
west oversubscriptions should be well
understood and depend on the traffic
patterns of the endpoints that are
connected to the respective leaves.
The oversubscription ratios described
here govern the ratio of traffic bandwidth
between endpoints connected to
the same leaf switch and the traffic
bandwidth between endpoints connected
to different leaf switches. As an
example, if the oversubscription ratio is
3:1 at the leaf and 1:1 at the spine, then the
bandwidth of traffic between endpoints
connected to the same leaf switch
should be three times the bandwidth
between endpoints connected to
different leaves. From a network
endpoint perspective, the network
oversubscriptions should be planned
so that the endpoints connected to the
network have the required bandwidth for
communications. Specifically, endpoints
that are expected to use higher bandwidth
should be localized to the same leaf
switch (or same leaf switch pair—when
endpoints are multihomed).
The ratio of the aggregate bandwidth of
all the spine downlinks connected to the
leaves to the aggregate bandwidth of all
the downlinks connected to the border
leaves (described in the edge services
and border switch section) defines the
north-south oversubscription at the spine.
The north-south oversubscription dictates
the traffic destined to the services that are
connected to the border leaf switches and
that exit the data center site.
Leaf and Spine Scale
Because the endpoints in the network
connect only to the leaf switches, the
number of leaf switches in the network
depends on the number of interfaces
required to connect all the endpoints.
The port count requirement should also
account for multihomed endpoints.
Because each leaf switch connects to
all the spines, the port density on the
spine switch determines the maximum
number of leaf switches in the topology.
A higher oversubscription ratio at
the leaves reduces the leaf scale
requirements, as well.
The number of spine switches in the
network is governed by a combination
of the throughput required between the
leaf switches, the number of redundant/
ECMP paths between the leaves, and
the port density in the spine switches.
Higher throughput in the uplinks from the
leaf switches to the spine switches can
be achieved by increasing the number
of spine switches or bundling the uplinks
together in port channel interfaces
between the leaves and the spines.
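Because every leaf consumes one port on every spine, the maximum size of a leaf-spine fabric follows directly from port counts, and the spine count is also the ECMP fan-out between leaves. The Python sketch below works through that arithmetic; the port layouts are assumptions chosen to match the Table 1 figures later in this paper.

```python
# Scale arithmetic for a 3-stage leaf-spine (folded Clos) fabric in
# which each leaf has exactly one link to every spine. Port layouts
# here are assumptions, not platform documentation.

def leaf_spine_scale(spine_ports: int, leaf_uplinks: int,
                     leaf_access_ports: int):
    """Return (max leaf count, spine count, total access ports)."""
    max_leaves = spine_ports        # one spine port consumed per leaf
    spines = leaf_uplinks           # one leaf uplink consumed per spine
    return max_leaves, spines, max_leaves * leaf_access_ports

# Example: a 36-port spine, with leaves offering 4 x 40 GbE uplinks
# and 48 x 10 GbE access ports:
print(leaf_spine_scale(36, 4, 48))  # -> (36, 4, 1728); compare Table 1
```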
Port Speeds for Uplinks and Downlinks
Another consideration for leaf-spine
topologies is the choice of port speeds
for the uplink and downlink interfaces.
Brocade VDX switches support 10 GbE,
40 GbE, and 100 GbE interfaces, which
can be used for uplinks and downlinks.
The choice of platform for the leaf and
spine depends on the interface speed and
density requirements.
Scale and Future Growth
Another design consideration for leaf-
spine topologies is the need to plan
for more capacity in the existing
infrastructure and to plan for more
endpoints in the future.
Adding more capacity between existing
leaf and spine switches can be done by
adding spine switches or adding new
interfaces between existing leaf and spine
switches. In either case, the port density
requirements for the leaf and the spine
switches should be accounted for during
the network design process.
If new leaf switches need to be added
to accommodate new endpoints in
the network, then ports at the spine
switches are required to connect the
new leaf switches.
In addition, you must decide whether
to connect the leaf-spine topology to
external networks through border leaf
switches and also whether to add an
additional networking tier for higher scale.
Such designs require additional ports at
the spine. These designs are described in
another section of this paper.
Ports on Demand Licensing
Remember that Ports on Demand
licensing allows you to expand your
capacity at your own pace in that you can
invest in a higher port density platform,
yet license only the ports on the Brocade
VDX switch that you are using for current
needs. This allows for an extensible and
future-proof network architecture without
additional cost.
Deployment Model
The links between the leaf and spine can
be either Layer 2 or Layer 3 links.
If the links between the leaf and spine are
Layer 2 links, the deployment is known
as a Layer 2 (L2) leaf-spine deployment
or a Layer 2 Clos deployment. You can
deploy Brocade VDX switches in a Layer
2 deployment by using Brocade VCS®
Fabric technology. With Brocade VCS
Fabric technology, the switches in the
leaf-spine topology cluster together and
form a fabric that provides a single point
for management, distributed control plane,
embedded automation, and multipathing
capabilities from Layers 1 to 3. The
benefits of deploying a VCS fabric are
described later in this paper.
If the links between the leaf and spine are
Layer 3 links, the deployment is known as
a Layer 3 (L3) leaf-spine deployment or a
Layer 3 Clos deployment. You can deploy
Brocade VDX switches in a Layer 3
deployment by using Brocade IP fabrics.
Brocade IP fabrics provide a highly
scalable, programmable, standards-
based, and interoperable networking
infrastructure. The benefits of Brocade IP
fabrics are described later in this paper.
Data Center Points of Delivery
Figure 7 shows a
building block for a data center site. This
building block is called a data center point
of delivery (PoD). The data center PoD
consists of the networking infrastructure
in a leaf-spine topology along with
the endpoints grouped together in
management/infrastructure and compute
racks. The idea of a PoD is to create a
simple, repeatable, and scalable unit for
building a data center site at scale.
Optimized 5-Stage Folded Clos
Topology (Three Tiers)
Multiple leaf-spine topologies can be
aggregated together for higher scale
in an optimized 5-stage folded Clos
topology. This topology adds a new
tier to the network, known as the super-
spine. The role of the super-spine is to
provide connectivity between the spine
switches across multiple data center
PoDs. Figure 8 shows four super-spine
switches connecting the spine switches
across multiple data center PoDs.
The connections between the spines and the super-spines follow the Clos principles:
••Each spine connects to all the super-
spines in the network.
••Neither the spines nor the super-spines
are interconnected with each other.
Similarly, all the benefits of a leaf-spine
topology—namely, multiple redundant
paths, ECMP, scale-out architecture and
control over traffic patterns—are realized
in the optimized 5-stage folded Clos
topology as well.
Figure 7: A data center PoD.

Figure 8: An optimized 5-stage folded Clos with data center PoDs.
With an optimized 5-stage Clos topology,
a PoD is a simple and replicable unit. Each
PoD can be managed independently,
including firmware versions and network
configurations. This topology also
allows the data center site capacity to
scale up by adding new PoDs or scale
down by removing existing PoDs without
affecting the existing infrastructure—
providing elasticity in scale and isolation
of failure domains.
This topology also provides a basis for
interoperation of different deployment
models of Brocade VCS fabrics and
IP fabrics. This is described later in
this paper.
Design Considerations for Optimized
5-Stage Clos Topology
The design considerations of
oversubscription ratios, port speeds and
density, spine and super-spine scale,
planning for future growth, and Brocade
Ports on Demand licensing, which were
described for the leaf-spine topology,
apply to the optimized 5-stage folded Clos
topology as well. Some key considerations
are highlighted below.
Oversubscription Ratios
Because the spine switches now
have uplinks connecting to the super-
spine switches, the north-south
oversubscription ratios for the spine
switches dictate the ratio of aggregate
bandwidth of traffic switched east-west
within a data center PoD to the aggregate
bandwidth of traffic exiting the data center
PoD. This is a key consideration from
the perspective of network infrastructure
and services placement, application tiers,
and (in the case of service providers)
tenant placement. In cases of north-south
oversubscription at the spines, endpoints
should be placed to optimize traffic within
a data center PoD.
At the super-spine switch, the east-west
oversubscription defines the ratio of
bandwidth of the downlink connections for
a pair of data center PoDs. In most cases,
this ratio is 1:1.
The ratio of the aggregate bandwidth of
all the super-spine downlinks connected
to the spines to the aggregate bandwidth
of all the downlinks connected to the
border leaves (described in the section
of this paper on edge services and
border switches) defines the north-south
oversubscription at the super-spine. The
north-south oversubscription dictates the
traffic destined to the services connected
to the border leaf switches and exiting the
data center site.
Deployment Model
Because of the existence of the Layer
3 boundary either at the leaf or at the
spine (depending on the Layer 2 or Layer
3 deployment model in the leaf-spine
topology of the data center PoD), the links
between the spines and super-spines are
Layer 3 links. The routing and overlay
protocols are described later in this paper.
Layer 2 connections between the spines and super-spines are an option for smaller scale deployments, due to the inherent scale limitations of Layer 2 networks. These Layer 2 connections would be IEEE 802.1q-based, optionally over Link Aggregation Control Protocol (LACP) aggregated links. However, this design is not discussed in this paper.
Edge Services and
Border Switches
For two-tier and three-tier data center
topologies, the role of the border switches
in the network is to provide external
connectivity to the data center site. In
addition, as all traffic enters and exits
the data center through the border leaf
switches, they present the ideal location in
the network to connect network services
like firewalls, load-balancers, and edge
VPN routers.
The topology for interconnecting the
border switches depends on the number
of network services that need to be
attached, as well as the oversubscription
ratio at the border switches.

Figure 9: Edge services PoD.

Figure 9
shows a simple topology for border
switches, where the service endpoints
connect directly to the border switches.
Border switches in this simple topology
are referred to as “border leaf switches”
because the service endpoints connect to
them directly.
More scalable border switch topologies
are possible if a greater number of service
endpoints need to be connected. These
topologies include a leaf-spine topology
for the border switches with “border
spines” and “border leaves.” This white
paper demonstrates only the border leaf
variant for the border switch topologies,
but this is easily expanded to a leaf-
spine topology for the border switches.
The border switches with the edge racks
together form the edge services PoD.
Design Considerations for
Border Switches
The following section describes the
design considerations for border switches.
Oversubscription Ratios
The border leaf switches have uplink
connections to spines in the leaf-spine
topology and to super-spines in the
3-tier topology. They also have uplink
connections to the data center core/Wide-
Area Network (WAN) edge routers as
described in the next section. These data
center site topologies are discussed in
detail later in this paper.
The ratio of the aggregate bandwidth
of the uplinks connecting to the spines/
super-spines to the aggregate bandwidth
of the uplink connecting to the core/edge
routers determines the oversubscription
ratio for traffic exiting the data center site.
The north-south oversubscription ratios for the services connected to the border leaves are another consideration.
Because many of the services connected
to the border leaves may have public
interfaces facing external entities like
core/edge routers and internal interfaces
facing the internal network, the north-
south oversubscription for each of
these connections is an important
design consideration.
Data Center Core/WAN Edge Handoff
The uplinks to the data center core/WAN
edge routers from the border leaves
carry the traffic entering and exiting
the data center site. The data center
core/WAN edge handoff can be Layer
2 and/or Layer 3 in combination with
overlay protocols.
The handoff between the border leaves
and the data center core/WAN edge may
provide domain isolation for the control
and data plane protocols running in the
internal network and built using one-
tier, two-tier, or three-tier topologies.
This helps provide independent administrative, fault-isolation, and control plane domains for isolation, scale, and security between the different domains
between the data center core/WAN edge
and border leaves is explored in brief
elsewhere in this paper.
Data Center Core and
WAN Edge Routers
The border leaf switches connect to the
data center core/WAN edge devices in the
network to provide external connectivity
to the data center site.

Figure 10: Collapsed data center core and WAN edge routers connecting Internet and DCI fabric to the border leaf in the data center site.

Figure 10 shows
an example of the connectivity between
border leaves, a collapsed data center
core/WAN edge tier, and external
networks for Internet and DCI options.
The data center core routers might
provide the interconnection between data
center PoDs built as single-tier, leaf-spine,
or optimized 5-stage Clos deployments
within a data center site. For enterprises,
the core router might also provide
connections to the enterprise campus
networks through campus core routers.
The data center core might also connect
to WAN edge devices for WAN and
interconnect connections. Note that
border leaves connecting to the data
center core provide the Layer 2 or Layer 3
handoff, along with any overlay control and
data planes.
The WAN edge devices provide the
interfaces to the Internet and DCI
solutions. Specifically for DCI, these
devices function as the Provider Edge
(PE) routers, enabling connections to
other data center sites through WAN
technologies like Multiprotocol Label
Switching (MPLS) VPN, Virtual Private
LAN Services (VPLS), Provider Backbone
Bridges (PBB), Dense Wavelength
Division Multiplexing (DWDM), and so
forth. These DCI solutions are described
in a later section.
Building Data Center Sites
with Brocade VCS Fabric
Technology
Brocade VCS fabrics are Ethernet
fabrics built for modern data center
infrastructure needs. With Brocade VCS
Fabric technology, up to 48 Brocade
VDX switches can participate in a VCS
fabric. The data plane of the VCS fabric is
based on the Transparent Interconnection
of Lots of Links (TRILL) standard,
supported by Layer 2 routing protocols
that propagate topology information within
the fabrics. This ensures that there are no
loops in the fabrics, and there is no need
to run Spanning Tree Protocol (STP). Also,
none of the links are blocked. Brocade
VCS Fabric technology provides a
compelling solution for deploying a Layer
2 Clos topology.
Brocade VCS Fabric technology provides
these benefits:
••Single point of management: With all
the switches in a VCS fabric participating
in a logical chassis, the entire topology
can be managed as a single switch
chassis. This drastically reduces the
management complexity of
the solution.
••Distributed control plane: Control
plane and data plane state information
is shared across devices in the VCS
fabric, which enables fabric-wide
MAC address learning, multiswitch
port channels (vLAG), Distributed
Spanning Tree (DiST), and gateway
redundancy protocols like Virtual
Router Redundancy Protocol–Extended
(VRRP-E) and Fabric Virtual Gateway
(FVG), among others. These enable
the VCS fabric to function like a single
switch to interface with other entities in
the infrastructure.
••TRILL-based Ethernet fabric: Brocade
VCS Fabric technology, which is based
on the TRILL standard, ensures
that no links are blocked in the Layer 2
network. Because of the existence of a
Layer 2 routing protocol, STP is
not required.
••Multipathing from Layers 1 to 3:
Brocade VCS Fabric technology
provides efficiency and resiliency
through the use of multipathing from
Layers 1 to 3:
-- At Layer 1, Brocade trunking
(BTRUNK) enables frame-based
load balancing between a pair of
switches that are part of the VCS
fabric. This ensures that thick, or
“elephant” flows do not congest an
Inter-Switch Link (ISL).
-- Because of the existence of a Layer
2 routing protocol, Layer 2 ECMP
is performed between multiple next
hops. This is critical in a Clos topology,
where all the spines are ECMP next
hops for a leaf that sends traffic to an
endpoint connected to another leaf.
The same applies for ECMP traffic
from the spines that have the super-
spines as the next hops.
-- Layer 3 ECMP using Layer 3 routing
protocols ensures that traffic is load
balanced between Layer 3 next hops.
••Embedded automation: Brocade VCS
Fabric technology provides embedded
turnkey automation built into Brocade
Network OS. These automation features
enable zero-touch provisioning of new
switches into an existing fabric. Brocade
VDX switches also provide multiple
management methods, including the
Command Line Interface (CLI), Simple
Network Management Protocol (SNMP),
REST, and Network Configuration
Protocol (NETCONF) interfaces.
••Multitenancy at Layers 2 and 3: With
Brocade VCS Fabric technology,
multitenancy features at Layers 2 and 3
enable traffic isolation and segmentation
across the fabric. Brocade VCS Fabric
technology allows an extended range of
up to 8000 Layer 2 domains within the
fabric, while isolating overlapping IEEE
802.1q-based tenant networks
into separate Layer 2 domains.
Layer 3 multitenancy using Virtual
Routing and Forwarding (VRF)
protocols, multi-VRF routing protocols,
as well as BGP-EVPN, enables large-
scale Layer 3 multitenancy.
••Ecosystem integration and
virtualization features: Brocade VCS
Fabric technology integrates with
leading industry solutions and products
like OpenStack, VMware products like
vSphere, NSX, and vRealize, common
infrastructure programming tools like
Python, and Brocade tools like Brocade
Network Advisor. Brocade VCS Fabric
technology is virtualization-aware and
helps dramatically reduce administrative
tasks and enable seamless VM
migration with features like Automatic
Migration of Port Profiles (AMPP),
which automatically adjusts port profile
information as a VM moves from one
server to another.
••Advanced storage features: Brocade
VDX switches provide rich storage
protocols and features like Fibre
Channel over Ethernet (FCoE), Data
Center Bridging (DCB), Monitoring
and Alerting Policy Suite (MAPS), and
AutoNAS (Network Attached Storage),
among others, to enable advanced
storage networking.
The benefits and features listed simplify
Layer 2 Clos deployment by using
Brocade VDX switches and Brocade
VCS Fabric technology. The next section
describes data center site designs that
use Layer 2 Clos built with Brocade VCS
Fabric technology.
Data Center Site with
Leaf-Spine Topology
Figure 11 shows a data center site built
using a leaf-spine topology deployed
using Brocade VCS Fabric technology.
The data center PoD shown here was built using a VCS fabric, and the border leaves in the edge services PoD were built using a separate VCS fabric. The border leaves
are connected to the spine switches in
the data center PoD and also to the data
center core/WAN edge routers. These
links can be either Layer 2 or Layer 3
links, depending on the requirements of
the deployment and the handoff required
to the data center core/WAN edge routers.
There can be more than one edge
services PoD in the network, depending
on the service needs and the bandwidth
requirement for connecting to the data
center core/WAN edge routers.
As an alternative to the topology shown
in Figure 11, the border leaf switches in the
edge services PoD and the data center
PoD can be part of the same VCS fabric,
to extend the fabric benefits to the entire
data center site.
Scale
Table 1 provides
sample scale numbers for 10 GbE
ports with key combinations of
Brocade VDX platforms at the leaf and
spine Places in the Network (PINs) in a
Brocade VCS fabric.
Figure 11: Data center site built with a leaf-spine topology and Brocade VCS Fabric technology.
The following assumptions are made:
••Links between the leaves and the spines
are 40 GbE.
••The Brocade VDX 6740 Switch
platforms use 4 × 40 GbE uplinks. The
Brocade VDX 6740 platform family
includes the Brocade VDX 6740 Switch,
the Brocade VDX 6740T Switch, and
the Brocade VDX 6740T-1G Switch.
(The Brocade VDX 6740T-1G requires a
Capacity on Demand license to upgrade
to 10GBase-T ports.)
••The Brocade VDX 6940-144S
platforms use 12 × 40 GbE uplinks.
••The Brocade VDX 8770-4 Switch
uses 27 × 40 GbE line cards with
40 GbE interfaces.
Scaling the Data Center Site
with an Optimized 5-Stage
Folded Clos
If multiple VCS fabrics are needed at
a data center site, then the optimized
5-stage Clos topology is used to increase
scale by interconnecting the data center
PoDs built using leaf-spine topology
with Brocade VCS Fabric technology.
This deployment architecture is referred
to as a multifabric topology using VCS
fabrics. An example topology is shown in
Figure 12.
In a multifabric topology using VCS
fabrics, individual data center PoDs
resemble a leaf-spine topology deployed
using Brocade VCS Fabric technology.
Figure 12: Data center site built with an optimized 5-stage folded Clos topology and Brocade VCS Fabric technology.
Table 1: Scale numbers for a data center site with a leaf-spine topology implemented with Brocade VCS Fabric technology.

Leaf Switch | Spine Switch | Leaf Oversubscription Ratio | Leaf Count | Spine Count | VCS Fabric Size (Number of Switches) | 10 GbE Port Count
6740, 6740T, 6740T-1G | 6940-36Q | 3:1 | 36 | 4 | 40 | 1728
6740, 6740T, 6740T-1G | 8770-4 | 3:1 | 44 | 4 | 48 | 2112
6940-144S | 6940-36Q | 2:1 | 36 | 12 | 48 | 3456
6940-144S | 8770-4 | 2:1 | 36 | 12 | 48 | 3456

Note that several of these combinations are bounded by the 48-switch limit of a VCS fabric rather than by spine port density.
However, the new super-spine tier is used
to interconnect the spine switches in the
data center PoD. In addition, the border
leaf switches are also connected to the
super-spine switches. Note that the super-
spines do not participate in a VCS fabric,
and the links between the super-spines,
spine, and border leaves are Layer 3 links.
Figure 12 shows only one edge services
PoD, but there can be multiple such PoDs
depending on the edge service endpoint
requirements, the oversubscription for
traffic that is exchanged with the data
center core/WAN edge, and the related
handoff mechanisms.
Scale
Table 2 provides sample scale numbers
for 10 GbE ports with key combinations of
Brocade VDX platforms at the leaf, spine,
and super-spine PINs for an optimized
5-stage Clos built with Brocade VCS
fabrics. The following assumptions are
made:
••Links between the leaves and the spines
are 40 GbE. Links between the spines
and super-spines are also 40 GbE.
••The Brocade VDX 6740 platforms
use 4 × 40 GbE uplinks. The Brocade
VDX 6740 platform family includes
the Brocade VDX 6740, Brocade VDX
6740T, and Brocade VDX 6740T-1G.
(The Brocade VDX 6740T-1G requires a
Capacity on Demand license to upgrade
to 10GBase-T ports.) Four spines are
used for connecting the uplinks.
••The Brocade 6940-144S platforms use
12 × 40 GbE uplinks. Twelve spines are
used for connecting the uplinks.
••North-south oversubscription ratio
at the spines is 1:1. In other words, the
bandwidth of uplink ports is equal to the
bandwidth of downlink ports at spines.
A larger port scale can be realized with a higher oversubscription ratio at the spines. However, a 1:1 oversubscription ratio is used here and is also recommended.

Table 2: Scale numbers for a data center site built as a multifabric topology using Brocade VCS Fabric technology.

Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spines | Number of Data Center PoDs | 10 GbE Port Count
6740, 6740T, 6740T-1G | 6940-36Q | 6940-36Q | 3:1 | 18 | 4 | 18 | 9 | 7776
6940-144S | 6940-36Q | 6940-36Q | 2:1 | 18 | 12 | 18 | 3 | 5184
6740, 6740T, 6740T-1G | 8770-4 | 6940-36Q | 3:1 | 32 | 4 | 32 | 9 | 13824
6940-144S | 8770-4 | 6940-36Q | 2:1 | 32 | 12 | 32 | 3 | 9216
6740, 6740T, 6740T-1G | 6940-36Q | 8770-4 | 3:1 | 18 | 4 | 18 | 18 | 15552
6940-144S | 6940-36Q | 8770-4 | 2:1 | 18 | 12 | 18 | 6 | 10368
6740, 6740T, 6740T-1G | 8770-4 | 8770-4 | 3:1 | 32 | 4 | 32 | 18 | 27648
6940-144S | 8770-4 | 8770-4 | 2:1 | 32 | 12 | 32 | 6 | 18432
6740, 6740T, 6740T-1G | 6940-36Q | 8770-8 | 3:1 | 18 | 4 | 18 | 36 | 31104
6940-144S | 6940-36Q | 8770-8 | 2:1 | 18 | 12 | 18 | 12 | 20736
6740, 6740T, 6740T-1G | 8770-4 | 8770-8 | 3:1 | 32 | 4 | 32 | 36 | 55296
6940-144S | 8770-4 | 8770-8 | 2:1 | 32 | 12 | 32 | 12 | 36864
••One spine plane is used for the scale
calculations. This means that all spine
switches in each data center PoD
connect to all the super-spine switches
in the topology. This topology is
consistent with the optimized 5-stage
Clos topology.
••Brocade VDX 8770 platforms use 27 × 40 GbE line cards, of which 18 × 40 GbE ports per card are usable in performance mode, for connections between spines and super-spines.
The Brocade VDX 8770-4 supports
72 × 40 GbE ports in performance
mode. The Brocade VDX 8770-8
supports 144 × 40 GbE ports in
performance mode.
••32-way Layer 3 ECMP is utilized for
spine to super-spine connections with
a Brocade VDX 8770 at the spine. This
gives a maximum of 32 super-spines
for the multifabric topology using
Brocade VCS Fabric technology.
Note: For a larger port scale for the
multifabric topology using Brocade
VCS Fabric technology, multiple spine
planes are used. Multiple spine planes are
described in the section about scale for
Brocade IP fabrics.
Building Data Center Sites
with Brocade IP Fabric
The Brocade IP fabric provides a
Layer 3 Clos deployment architecture
for data center sites. With Brocade IP
fabric, all the links in the Clos topology
are Layer 3 links. The Brocade IP fabric includes the networking architecture, the protocols used to build the network, the turnkey automation features used to provision, manage, and monitor the networking infrastructure, and the hardware differentiation of the Brocade VDX switches. The following sections describe
these aspects of building data center sites
with Brocade IP fabrics.
Because the infrastructure is built on IP,
advantages like loop-free communication
using industry-standard routing
protocols, ECMP, very high solution
scale, and standards-based
interoperability are leveraged.
These are some of the key benefits of
deploying a data center site with Brocade
IP fabrics:
••Highly scalable infrastructure:
Because the Clos topology is built using
IP protocols, the scale of the
infrastructure is very high. These port
and rack scales are documented with
descriptions of the Brocade IP fabric
deployment topologies.
••Standards-based and interoperable
protocols: The Brocade IP fabric is built
using industry-standard protocols like
the Border Gateway Protocol (BGP) and
Open Shortest Path First (OSPF). These
protocols are well understood and
provide a solid foundation for a highly
scalable solution. In addition, industry-
standard overlay control and data plane
protocols like BGP-EVPN and Virtual
Extensible Local Area Network (VXLAN)
are used to extend Layer 2 domains
and extend tenancy domains by
enabling Layer 2 communications
and VM mobility.
••Active-active vLAG pairs:
By supporting vLAG pairs on leaf switches, dual-homing of networking endpoints is supported. This provides higher redundancy. Also, because the
links are active-active, vLAG pairs
provide higher throughput to the
endpoints. vLAG pairs are supported
for all 10 GbE, 40 GbE, and 100 GbE
interface speeds, and up to 32 links can
participate in a vLAG.
••Layer 2 extensions: In order to enable
Layer 2 domain extension across the
Layer 3 infrastructure, VXLAN protocol
is leveraged. The use of VXLAN
provides a very large number of Layer
2 domains to support large-scale
multitenancy over the infrastructure.
In addition, Brocade BGP-EVPN
network virtualization provides the
control plane for the VXLAN, providing
enhancements to the VXLAN standard
by reducing the Broadcast, Unknown
unicast, Multicast (BUM) traffic in the
network through mechanisms like MAC
address reachability information and
ARP suppression.
••Multitenancy at Layers 2 and 3:
Brocade IP fabric provides multitenancy
at Layers 2 and 3, enabling traffic
isolation and segmentation across the
fabric. Layer 2 multitenancy allows an
extended range of up to 8000 Layer
2 domains to exist at each ToR switch,
while isolating overlapping 802.1q tenant
networks into separate Layer 2 domains.
Layer 3 multitenancy using VRFs, multi-
VRF routing protocols, and BGP-EVPN
allows large-scale Layer 3 multitenancy.
Specifically, Brocade BGP-EVPN
Network Virtualization leverages
BGP-EVPN to provide a control
plane for MAC address learning and
VRF routing for tenant prefixes
and host routes, which reduces BUM
traffic and optimizes the traffic patterns
in the network.
••Support for unnumbered interfaces:
Using Brocade Network OS support
for IP unnumbered interfaces, only
one IP address per switch is required
to configure the routing protocol
peering. This significantly reduces the
planning and use of IP addresses and
simplifies operations.
••Turnkey automation: Brocade
automated provisioning dramatically
reduces the deployment time of network
devices and network virtualization.
Prepackaged, server-based automation
scripts provision Brocade IP fabric
devices for service with minimal effort.
••Programmable automation: Brocade server-based automation provides support for common industry automation tools such as Python, Ansible, Puppet, and YANG model-based REST and NETCONF APIs. (A brief, hypothetical provisioning sketch follows this list.) The prepackaged PyNOS scripting library and editable automation scripts execute predefined provisioning tasks, while allowing customization for addressing unique requirements to meet technical or business objectives when the enterprise is ready.
••Ecosystem integration: The Brocade
IP fabric integrates with leading industry
solutions and products like VMware
vSphere, NSX, and vRealize. Cloud
orchestration and control are provided
through OpenStack and OpenDaylight-
based Brocade SDN Controller support.
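To make the automation discussion concrete, the short Python sketch below renders per-leaf underlay snippets from a shared template, the kind of repeatable task these tools perform. The inventory, ASNs, addresses, and CLI-style template are hypothetical illustrations, not the actual PyNOS library or the switch configuration syntax.

```python
# Hypothetical template-driven provisioning sketch. Real deployments
# would push device-specific configuration through the CLI, REST, or
# NETCONF interfaces mentioned above; everything named here is invented.

LEAF_TEMPLATE = """\
hostname {name}
router bgp {asn}
 neighbor {spine_ip} remote-as {spine_asn}
"""

LEAVES = [  # hypothetical inventory
    {"name": "leaf1", "asn": 65001, "spine_ip": "10.0.0.1", "spine_asn": 65100},
    {"name": "leaf2", "asn": 65002, "spine_ip": "10.0.0.1", "spine_asn": 65100},
]

def render_configs(leaves):
    """Render one configuration snippet per leaf from the template."""
    return {leaf["name"]: LEAF_TEMPLATE.format(**leaf) for leaf in leaves}

for name, config in render_configs(LEAVES).items():
    print(f"--- {name} ---\n{config}")
```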
Data Center Site with
Leaf-Spine Topology
A data center PoD built with IP fabrics
supports dual-homing of network
endpoints using multiswitch port channel
interfaces formed between a pair of
switches participating in a vLAG. This pair
of leaf switches is called a vLAG pair.
(See Figure 13.)
The switches in a vLAG pair have a link
between them for control plane purposes,
to create and manage the multiswitch
port channel interfaces. These links also
carry switched traffic in case of downlink
failures. In most cases, these links are not configured to carry any routed traffic upstream; however, the vLAG pairs can peer using a routing protocol if upstream traffic needs to be carried over the link in cases of uplink failures on a vLAG
switch. Oversubscription of the vLAG
link is an important consideration for
failure scenarios.
Figure 14 shows
a data center site deployed using a
leaf-spine topology and IP fabric. Here
the network endpoints are illustrated as
single-homed, but dual homing is enabled
through vLAG pairs where required.
The links between the leaves, spines, and
border leaves are all Layer 3 links. The
border leaves are connected to the spine
switches in the data center PoD and also
to the data center core/WAN edge routers.
The uplinks from the border leaf to the
data center core/WAN edge can be either
Layer 2 or Layer 3, depending on the
requirements of the deployment and the
handoff required to the data center core/
WAN edge routers.
There can be more than one edge
services PoD in the network, depending
on service needs and the bandwidth
requirement for connecting to the data
center core/WAN edge routers.
Figure 13: An IP fabric data center PoD built with leaf-spine topology and a vLAG pair for dual-homed network endpoints.

Figure 14: Data center site built with leaf-spine topology and an IP fabric PoD.
Table 3: Scale numbers for a leaf-spine topology with Brocade IP fabrics in a data center site.

Leaf Switch | Spine Switch | Leaf Oversubscription Ratio | Leaf Count | Spine Count | Fabric Size (Number of Switches) | 10 GbE Port Count
6740, 6740T, 6740T-1G | 6940-36Q | 3:1 | 36 | 4 | 40 | 1728
6740, 6740T, 6740T-1G | 8770-4 | 3:1 | 72 | 4 | 76 | 3456
6740, 6740T, 6740T-1G | 8770-8 | 3:1 | 144 | 4 | 148 | 6912
6940-144S | 6940-36Q | 2:1 | 36 | 12 | 48 | 3456
6940-144S | 8770-4 | 2:1 | 72 | 12 | 84 | 6912
6940-144S | 8770-8 | 2:1 | 144 | 12 | 156 | 13824
Scale
Table 3 provides sample scale numbers
for 10 GbE ports with key combinations
of Brocade VDX platforms at the leaf and
spine PINs in a Brocade IP fabric.
The following assumptions are made:
••Links between the leaves and the spines
are 40 GbE.
••The Brocade VDX 6740 platforms
use 4 × 40 GbE uplinks. The Brocade
VDX 6740 platform family includes
the Brocade VDX 6740, Brocade VDX
6740T, and Brocade VDX 6740T-1G.
(The Brocade VDX 6740T-1G requires a
Capacity on Demand license to upgrade
to 10GBase-T ports.)
••The Brocade VDX 6940-144S
platforms use 12 × 40 GbE uplinks.
••The Brocade VDX 8770 platforms use 27 × 40 GbE line cards, of which 18 × 40 GbE ports per card are usable in performance mode, for connections
between leaves and spines. The
Brocade VDX 8770-4 supports
72 × 40 GbE ports in performance
mode. The Brocade VDX 8770-
8 supports 144 × 40 GbE ports in
performance mode.
Note: For a larger port scale in Brocade
IP fabrics in a 3-stage folded Clos, the
Brocade VDX 8770-4 or 8770-8 can be
used as a leaf switch.
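As a quick sanity check of the Table 3 arithmetic under the assumptions above (a 48 × 10 GbE access layout on the 6740-class leaves is assumed):

```python
# Table 3, 8770-8 spine row: a spine with 144 usable 40 GbE ports
# supports 144 leaves; each assumed leaf adds 48 x 10 GbE access ports.
leaves, access_ports_per_leaf = 144, 48
print(leaves * access_ports_per_leaf)  # -> 6912, matching Table 3
```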
Scaling the Data Center Site with an
Optimized 5-Stage Folded Clos
If a higher scale is required, then the
optimized 5-stage Clos topology is used
to interconnect the data center PoDs built
using Layer 3 leaf-spine topology. An
example topology is shown in Figure 15.
Figure 15 shows only one edge services
PoD, but there can be multiple such
PoDs, depending on the edge service
endpoint requirements, the amount of
oversubscription for traffic exchanged with
the data center core/WAN edge, and the
related handoff mechanisms.
Scale
Figure 16 shows a variation of the
optimized 5-stage Clos. This variation
includes multiple super-spine planes.
Each spine in a data center PoD connects
to a separate super-spine plane.
The number of super-spine planes is
equal to the number of spines in the data
center PoDs. The number of uplink ports
on the spine switch is equal to the number
of switches in a super-spine plane. Also,
the number of data center PoDs is equal
to the port density of the super-spine
switches. Introducing super-spine planes
to the optimized 5-stage Clos topology
Edge Racks
Super-Spine
Border
Leaf
WAN Edge
Internet DCI
10 GbE
10 GbE
10 GbE 10 GbE 10 GbE 10 GbE
DC PoD N
SPINE
LEAF
Compute and Infrastructure/Management Racks
Edge Services PoD
10 GbE 10 GbE 10 GbE 10 GbE
DC PoD 1
Spine
Leaf
Compute and Infrastructure/Management Racks
L3 Links
Figure 15: Data center site built with an optimized 5-stage Clos topology and IP fabric PoDs.
Figure 16: Optimized 5-stage Clos with multiple super-spine planes.
10 Gbe 10 Gbe 10 Gbe 10 Gbe
DC PoD N
Spine
Leaf
Compute and Infrastructure/Management Racks
Super-Spine
Plane 1
L3 Links
10 Gbe 10 Gbe 10 Gbe 10 Gbe
DC PoD 1
Spine
Leaf
Compute and Infrastructure/Management Racks
Super-Spine
Plane 2
Super-Spine
Plane 3
Super-Spine
Plane 4
19. 19
Table 4: Scale numbers for an optimized 5-Stage folded Clos topology with multiple super-spine planes built with Brocade IP fabric.
Leaf Switch Spine Switch
Super-Spine
Switch
Leaf Over-
subscription
Ratio
Leaf Count
per Data
Center PoD
Spine Count
per Data
Center PoD
Number
of
Super-
Spines
Number
of Super-
Spines in
Each Super-
Spine Plane
Number of
Data Center
PoDs
10 GbE
Port
Count
6740,
6740T,
6740T-1G
6940-36Q 6940-36Q 3:1 18 4 4 18 36 31104
6940-144S 6940-36Q 6940-36Q 2:1 18 12 12 18 36 62208
6740,
6740T,
6740T-1G
6940-36Q 8770-4 3:1 18 4 4 18 72 62208
6940-144S 6940-36Q 8770-4 2:1 18 12 12 18 72 124416
6740,
6740T,
6740T-1G
6940-36Q 8770-8 3:1 18 4 4 18 144 124416
6940-144S 6940-36Q 8770-8 2:1 18 12 12 18 144 248832
6740,
6740T,
6740T-1G
8770-4 8770-4 3:1 32 4 4 32 72 110592
6940-144S 8770-4 8770-4 2:1 32 12 12 32 72 221184
6740,
6740T,
6740T-1G
8770-4 8770-8 3:1 32 4 4 32 144 221184
6940-144S 8770-4 8770-8 2:1 32 12 12 32 144 442368
6740,
6740T,
6740T-1G
8770-8 8770-8 3:1 32 4 4 32 144 221184
6940-144S 8770-8 8770-8 2:1 32 12 12 32 144 442368
For the purposes of the port scale calculations of the Brocade IP fabric in this section, the optimized 5-stage Clos topology with multiple super-spine planes is considered.
Table 4 provides sample scale numbers
for 10 GbE ports with key combinations of
Brocade VDX platforms at the leaf, spine,
and super-spine PINs for an optimized
5-stage Clos with multiple super-spine
planes built with Brocade IP fabric. The
following assumptions are made:
••Links between the leaves and the spines
are 40 GbE. Links between spines and
super-spines are also 40 GbE.
••The Brocade VDX 6740 platforms use
4 × 40 GbE uplinks. The Brocade
VDX 6740 platform family includes
the Brocade VDX 6740, the
Brocade VDX 6740T, and the
Brocade VDX 6740T-1G.
(The Brocade VDX 6740T-1G requires a
Capacity on Demand license to upgrade
to 10GBase-T ports.) Four spines are
used for connecting the uplinks.
••The Brocade VDX 6940-144S
platforms use 12 × 40 GbE uplinks.
Twelve spines are used for connecting
the uplinks.
••The north-south oversubscription ratio
at the spines is 1:1. In other words, the
bandwidth of uplink ports is equal to
the bandwidth of downlink ports at
spines. The number of physical ports
utilized from spine towards super-spine
and spine towards leaf is equal to the
number of ECMP paths supported. A larger port scale can be realized with a higher oversubscription ratio or by using route import policies to stay within the 32-way ECMP scale at the spines. However, a 1:1 oversubscription ratio is used here and is also recommended.
••The Brocade VDX 8770 platforms use 27 × 40 GbE line cards (18 × 40 GbE ports active in performance mode) for connections between spines and super-spines. The Brocade VDX 8770-4 supports 72 × 40 GbE ports in performance mode, and the Brocade VDX 8770-8 supports 144 × 40 GbE ports in performance mode.
••32-way Layer 3 ECMP is utilized for
spine to super-spine connections when
a Brocade VDX 8770 is used at the
spine. This gives a maximum of
32 super-spines in each super-spine
plane for the optimized 5-stage Clos
built using Brocade IP fabric.
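To make the arithmetic behind these assumptions concrete, the short sketch below derives sample scale numbers of the kind shown in Table 4 and Table 6 from per-platform port counts. It is an illustration only, not part of any Brocade tooling; the leaf port counts (for example, 48 × 10 GbE on the VDX 6740) are assumptions inferred from the tables, and the 32-way ECMP cap that limits the VDX 8770-based rows is not modeled.

```python
# Illustrative sketch (not from the paper): derive optimized 5-stage Clos
# scale numbers from per-platform port counts. Assumed (inferred) counts:
# VDX 6740 leaf: 48 x 10 GbE + 4 x 40 GbE uplinks; 6940-36Q: 36 x 40 GbE.
# The 32-way ECMP cap that limits the VDX 8770-based rows is not modeled.

def clos_scale(leaf_10g, leaf_uplinks, spine_ports, superspine_ports, ns_oversub=1):
    """Return (planes, super-spines per plane, PoDs, leaves per PoD, 10 GbE ports)."""
    planes = leaf_uplinks                        # one super-spine plane per spine
    uplinks_per_spine = spine_ports // (ns_oversub + 1)
    leaves_per_pod = spine_ports - uplinks_per_spine
    per_plane = uplinks_per_spine                # one link to each super-spine
    pods = superspine_ports                      # one port per super-spine per PoD
    return planes, per_plane, pods, leaves_per_pod, pods * leaves_per_pod * leaf_10g

# Table 4, row 1: VDX 6740 leaf, 6940-36Q spine and super-spine, 1:1 at spine.
print(clos_scale(48, 4, 36, 36))      # (4, 18, 36, 18, 31104)
# Table 6, row 1: same platforms with 3:1 north-south oversubscription.
print(clos_scale(48, 4, 36, 36, 3))   # (4, 9, 36, 27, 46656)
```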
Even higher scale can be achieved by physically connecting all available ports on the switching platform and using BGP policies to enforce a maximum of 32-way ECMP. This provides a higher port scale for the topology while still ensuring that at most 32-way ECMP is used. Note that this arrangement provides nonblocking 1:1 north-south subscription at the spine in most scenarios. In Table 5 below, 72 ports are used as uplinks from each spine to the super-spine plane. With BGP policy enforcement, a maximum of 32 of the 72 uplinks are used as next hops for any given BGP-learned route. However, all uplink ports are used and load-balanced across the entire set of BGP-learned routes.
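The following minimal sketch illustrates the idea of bounding per-route ECMP fan-out while still spreading load across all 72 uplinks. It is a hypothetical model: a real deployment expresses this as BGP routing policy on the switches, and the hash-based subset selection here is purely illustrative.

```python
# Hypothetical model of BGP policy-enforced 32-way ECMP: each route uses at
# most 32 of the 72 uplinks, but different routes use different subsets, so
# all uplinks carry traffic across the full route table.
import hashlib

UPLINKS = [f"uplink-{i}" for i in range(72)]   # 72 spine-to-super-spine ports
MAX_ECMP = 32                                  # enforced next-hop limit

def next_hops(prefix: str) -> list:
    """Deterministically pick a prefix-dependent window of 32 uplinks."""
    seed = int(hashlib.sha256(prefix.encode()).hexdigest(), 16)
    start = seed % len(UPLINKS)
    return [UPLINKS[(start + i) % len(UPLINKS)] for i in range(MAX_ECMP)]

print(next_hops("10.1.0.0/16")[:3])   # a different 32-uplink window per prefix
print(next_hops("10.2.0.0/16")[:3])
```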
The calculations in Table 4 and Table 5 show networks with no oversubscription at the spine. Table 6 provides sample scale numbers for 10 GbE ports for a few key combinations of Brocade VDX platforms at the leaf, spine, and super-spine PINs for an optimized 5-stage Clos with multiple super-spine planes built with Brocade IP fabric; in this case, the north-south oversubscription ratio at the spine is also noted.
Table 5: Scale numbers for an optimized 5-stage folded Clos topology with multiple super-spine planes and BGP policy-enforced 32-way ECMP.

| Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spine Planes | Super-Spines in Each Plane | Number of Data Center PoDs | 10 GbE Port Count |
|---|---|---|---|---|---|---|---|---|---|
| 6740, 6740T, 6740T-1G | 8770-8 | 8770-8 | 3:1 | 72 | 4 | 4 | 72 | 144 | 497664 |
Table 6: Scale numbers for an optimized 5-stage folded Clos topology with multiple super-spine planes built with Brocade IP fabric and north-south oversubscription at the spine.

| Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | North-South Oversubscription at Spine | Number of Super-Spine Planes | Super-Spines in Each Plane | Number of Data Center PoDs | 10 GbE Port Count |
|---|---|---|---|---|---|---|---|---|---|---|
| 6740, 6740T, 6740T-1G | 6940-36Q | 6940-36Q | 3:1 | 27 | 4 | 3:1 | 4 | 9 | 36 | 46656 |
| 6940-144S | 6940-36Q | 6940-36Q | 2:1 | 27 | 12 | 3:1 | 12 | 9 | 36 | 93312 |
| 6740, 6740T, 6740T-1G | 6940-36Q | 8770-4 | 3:1 | 27 | 4 | 3:1 | 4 | 9 | 72 | 93312 |
| 6940-144S | 6940-36Q | 8770-4 | 2:1 | 27 | 12 | 3:1 | 12 | 9 | 72 | 186624 |
| 6740, 6740T, 6740T-1G | 6940-36Q | 8770-8 | 3:1 | 27 | 4 | 3:1 | 4 | 9 | 144 | 186624 |
| 6940-144S | 6940-36Q | 8770-8 | 2:1 | 27 | 12 | 3:1 | 12 | 9 | 144 | 373248 |
| 6740, 6740T, 6740T-1G | 8770-4 | 8770-4 | 3:1 | 54 | 4 | 3:1 | 4 | 18 | 72 | 186624 |
| 6940-144S | 8770-4 | 8770-4 | 2:1 | 54 | 12 | 3:1 | 12 | 18 | 72 | 373248 |
| 6740, 6740T, 6740T-1G | 8770-4 | 8770-8 | 3:1 | 54 | 4 | 3:1 | 4 | 18 | 144 | 373248 |
| 6940-144S | 8770-4 | 8770-8 | 2:1 | 54 | 12 | 3:1 | 12 | 18 | 144 | 746496 |
| 6740, 6740T, 6740T-1G | 8770-8 | 8770-8 | 3:1 | 96 | 4 | 3:1 | 4 | 32 | 144 | 663552 |
| 6940-144S | 8770-8 | 8770-8 | 2:1 | 96 | 12 | 3:1 | 12 | 32 | 144 | 1327104 |
Building Data Center Sites
with Layer 2 and Layer 3
Fabrics
A data center site can be built using a Clos topology that combines Layer 2 Brocade VCS fabrics and Layer 3 Brocade IP fabrics simultaneously. This topology is applicable when particular PoDs are better suited to a given application or use case. Figure 17 shows a deployment with both data center PoDs based on VCS fabrics and data center PoDs based on IP fabrics, interconnected in an optimized 5-stage Clos topology. In this topology, the links between the spines, super-spines, and border leaves are Layer 3. This provides a consistent interface between the data center PoDs and enables full communication between endpoints in any PoD.
Scaling a Data Center Site
with a Data Center Core
A very large data center site can use
multiple different deployment topologies.
Figure 18 on the following page shows a
data center site with multiple 5-stage Clos
deployments that are interconnected with
each other by using a data center core.
The role of the data center core is to
provide the interface between the
different Clos deployments. Note that
the border leaves or leaf switches from
each of the Clos deployments connect
into the data center core routers. The
handoff from the border leaves/leaves to
the data center core router can be Layer 2
and/or Layer 3, with overlay protocols like
VXLAN and BGP-EVPN, depending on
the requirements.
The number of Clos topologies that
can be connected to the data center
core depends on the port density and
throughput of the data center core
devices. Each deployment connecting
into the data center core can be a single-
tier, leaf-spine, or optimized 5-stage
Clos design deployed using an IP fabric
architecture or a multifabric topology
using VCS fabrics.
Also shown in Figure 18 on the next
page is a centralized edge services PoD
that provides network services for the
entire site. There can be one or more
of the edge services PoDs with the
border leaves in the edge services PoD,
providing the handoff to the data center
core. The WAN edge routers also connect
to the edge services PoDs and provide
connectivity to the external network.
Figure 17: Data center site built using VCS fabric and IP fabric PoDs.
Figure 18: Data center site built with optimized 5-stage Clos topologies interconnected with a data center core.
Control Plane and Hardware
Scale Considerations
The maximum size of the network
deployment depends on the scale of the
control plane protocols, as well as the
scale of hardware Application-Specific
Integrated Circuit (ASIC) tables.
The control plane for a VCS fabric
includes these:
••A Layer 2 routing protocol called Fabric
Shortest Path First (FSPF)
••VCS fabric messaging services for
protocol messaging and state exchange
••Ethernet Name Server (ENS) for MAC
address learning
••Protocols for VCS formation:
-- Brocade Link Discovery Protocol
(BLDP)
-- Join and Merge Protocol (JMP)
••State maintenance and distributed
protocols:
-- Distributed Spanning Tree Protocol
(dSTP)
The maximum scale of the VCS fabric
deployment is a function of the number
of nodes, topology of the nodes, link
reliability, distance between the nodes,
features deployed in the fabric, and
the scale of the deployed features. A
maximum of 48 nodes are supported in a
VCS fabric.
In a Brocade IP fabric, the control plane
is based on routing protocols like BGP
and OSPF. In addition, a control plane
is provided for formation of vLAG pairs.
In the case of virtualization with VXLAN
overlays, BGP-EVPN provides the
control plane. The maximum scale of the
topology depends on the scalability of
these protocols.
For both Brocade VCS fabrics and IP
fabrics, it is important to understand
the hardware table scale and the related
control plane scales. These tables include:
••MAC address table
••Host route tables/Address Resolution
Protocol/Neighbor Discovery (ARP/ND)
tables
••Longest Prefix Match (LPM) tables for
IP prefix matching
••Ternary Content Addressable Memory (TCAM) tables for packet matching
These tables are programmed into the
switching ASICs based on the information
learned through configuration, the data
plane, or the control plane protocols.
This also means that it is important
to consider the control plane scale for
carrying information for these tables when
determining the maximum size of the
network deployment.
Choosing an Architecture
for Your Data Center
Because of the ongoing and rapidly
evolving transition towards the cloud and
the need across IT to quickly improve
operational agility and efficiency, the
best choice is an architecture based on
Brocade data center fabrics. However, the
process of choosing an architecture that
best meets your needs today while leaving
you flexibility to change can be paralyzing.
Brocade recognizes how difficult it is
for customers to make long-term
technology and infrastructure
investments, knowing they will have to
live for years with those choices. For this
reason, Brocade provides solutions that
help you build cloud-optimized networks
with confidence, knowing that your
investments have value today—and will
continue to have value well into the future.
High-Level Comparison Table
Table 7 provides information about
which Brocade data center fabric best
meets your needs. The IP fabric columns
represent all deployment topologies for
IP fabric, including the leaf-spine and
optimized 5-stage Clos topologies.
Deployment Scale Considerations
The scalability of a solution is an
important consideration for deployment.
Depending on whether the topology is
a leaf-spine or optimized 5-stage Clos
topology, deployments based on Brocade
VCS Fabric technology and Brocade IP
fabrics scale differently. The port scales
for each of these deployments are
documented in previous sections of this
white paper.
In addition, the deployment scale also
depends on the control plane as well
as on the hardware tables of the platform.
Table 7: Data Center Fabric Support Comparison Table.

| Customer Requirement | VCS Fabric | Multifabric VCS with VXLAN | IP Fabric | IP Fabric with BGP-EVPN-Based VXLAN |
|---|---|---|---|---|
| Virtual LAN (VLAN) extension | Yes | Yes | — | Yes |
| VM mobility across racks | Yes | Yes | — | Yes |
| Embedded turnkey provisioning and automation | Yes | Yes, in each data center PoD | — | — |
| Embedded centralized fabric management | Yes | Yes, in each data center PoD | — | — |
| Data center PoDs optimized for Layer 2 scale-out | Yes | Yes | — | — |
| vLAG support | Yes, up to 8 devices | Yes, up to 8 devices | Yes, up to 2 devices | Yes, up to 2 devices |
| Gateway redundancy | Yes, VRRP/VRRP-E/FVG | Yes, VRRP/VRRP-E/FVG | Yes, VRRP-E | Yes, Static Anycast Gateway |
| Controller-based network virtualization (for example, VMware NSX) | Yes | Yes | Yes | Yes |
| DevOps tool-based automation | Yes | Yes | Yes | Yes |
| Multipathing and ECMP | Yes | Yes | Yes | Yes |
| Layer 3 scale-out between PoDs | — | Yes | Yes | Yes |
| Turnkey off-box provisioning and automation | Planned | — | Yes | Yes |
| Data center PoDs optimized for Layer 3 scale-out | — | — | Yes | Yes |
| Controller-less network virtualization (Brocade BGP-EVPN network virtualization) | — | Planned | — | Yes |
Table 8 provides an example of the scale
considerations for parameters in a leaf-
spine topology with Brocade VCS
fabric and IP fabric deployments. The
table illustrates how scale requirements
for the parameters vary between a
VCS fabric and an IP fabric for the
same environment.
The following assumptions are made:
••There are 20 compute racks in the leaf-
spine topology.
••4 spines and 20 leaves are deployed.
Physical servers are single-homed.
••The Layer 3 boundary is at the spine of
the VCS fabric deployment and at the
leaf in IP fabric deployment.
••Each peering between leaves and spines
uses a separate subnet.
••Brocade IP fabric with BGP-EVPN
extends all VLANs across all
20 racks.
••40 1 Rack Unit (RU) servers per rack
(a standard rack has 42 RUs).
••2 CPU sockets per physical
server × 1 Quad-core CPU per
socket = 8 CPU cores per
physical server.
••5 VMs per CPU core × 8 CPU cores
per physical server = 40 VMs per
physical server.
••There is a single virtual Network
Interface Card (vNIC) for each VM.
••There are 40 VLANs per rack.
Table 8: Scale Considerations for Brocade VCS Fabric and IP Fabric Deployments.

| Parameter | VCS Fabric: Leaf | VCS Fabric: Spine | IP Fabric: Leaf | IP Fabric: Spine | IP Fabric with BGP-EVPN-Based VXLAN: Leaf | IP Fabric with BGP-EVPN-Based VXLAN: Spine |
|---|---|---|---|---|---|---|
| MAC addresses | 40 VMs/server × 40 servers/rack × 20 racks = 32,000 MAC addresses | 40 VMs/server × 40 servers/rack × 20 racks = 32,000 MAC addresses | 40 VMs/server × 40 servers/rack = 1,600 MAC addresses | Small number of MAC addresses needed for peering | 40 VMs/server × 40 servers/rack × 20 racks = 32,000 MAC addresses | Small number of MAC addresses needed for peering |
| VLANs | 40 VLANs/rack × 20 racks = 800 VLANs | 40 VLANs/rack × 20 racks = 800 VLANs | 40 VLANs | No VLANs at spine | 40 VLANs/rack extended to all 20 racks = 800 VLANs | No VLANs at spine |
| ARP entries/host routes | None | 40 VMs/server × 40 servers/rack × 20 racks = 32,000 ARP entries | 40 VMs/server × 40 servers/rack = 1,600 ARP entries | Small number of ARP entries for peers | 40 VMs/server × 40 servers/rack × 20 racks + 20 VTEP loopback IP addresses = 32,020 host routes/ARP entries | Small number of ARP entries for peers |
| L3 routes (Longest Prefix Match) | None | Default gateway for 800 VLANs = 800 L3 routes | 40 default gateways + 40 remote subnets × 19 racks + 80 peering subnets = 880 L3 routes | 40 subnets × 20 racks + 80 peering subnets = 880 L3 routes | 80 peering subnets + 40 subnets × 20 racks = 880 L3 routes | Small number of L3 routes for peering |
| Layer 3 default gateways | None | 40 VLANs/rack × 20 racks = 800 default gateways | 40 VLANs/rack = 40 default gateways | None | 40 VLANs/rack × 20 racks = 800 default gateways | None |
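The arithmetic behind Table 8 can be reproduced directly from the stated assumptions, as in the short sketch below. It is illustrative only; the 80 peering subnets follow from the 4 spines × 20 leaves point-to-point links assumed above.

```python
# Illustrative sketch: reproduce Table 8's arithmetic from the assumptions.
RACKS = 20
SERVERS_PER_RACK = 40
VMS_PER_SERVER = 5 * 8            # 5 VMs per core x 8 cores per server = 40
VLANS_PER_RACK = 40
PEERING_SUBNETS = 4 * 20          # 4 spines x 20 leaves, one subnet per link

macs_site = VMS_PER_SERVER * SERVERS_PER_RACK * RACKS   # 32,000 (VCS, EVPN)
macs_rack = VMS_PER_SERVER * SERVERS_PER_RACK           # 1,600 (IP fabric leaf)

# IP fabric leaf LPM routes: local gateways + remote subnets + peering links
ip_leaf_routes = (VLANS_PER_RACK                        # 40 local gateways
                  + VLANS_PER_RACK * (RACKS - 1)        # 760 remote subnets
                  + PEERING_SUBNETS)                    # = 880 L3 routes

# BGP-EVPN leaf: a host route per VM plus one VTEP loopback per rack
evpn_host_routes = macs_site + RACKS                    # = 32,020

print(macs_site, macs_rack, ip_leaf_routes, evpn_host_routes)
```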
Fabric Architecture
Another way to determine which Brocade
data center fabric provides the best
solution for your needs is to compare the
architectures side-by-side.
Figure 19 provides a side-by-side
comparison of the two Brocade data
center fabric architectures. The blue text
shows how each Brocade data center
fabric is implemented. For example, a
VCS fabric is topology-agnostic and
uses TRILL as its transport mechanism,
whereas the topology for an IP fabric is a
Clos that uses IP for transport.
It is important to note that the same Brocade VDX switch platform, Brocade Network OS software, and licenses are used for either deployment. So, when you are making long-term infrastructure purchase decisions, you can be reassured that you need only one switching platform.
Recommendations
Of course, each organization’s choices are
based on its own unique requirements,
culture, and business and technical
objectives. Yet by and large, the scalability and seamless server mobility of a Layer 2 scale-out VCS fabric provide the ideal starting point for most enterprises and cloud providers. Like IP fabrics,
VCS fabrics provide open interfaces and
software extensibility, if you decide to
extend the already capable and proven
embedded automation of Brocade VCS
Fabric technology.
For organizations looking for a Layer 3 optimized scale-out approach, a Brocade IP fabric is the best architecture to deploy.
And if controller-less network virtualization
using Internet-proven technologies such
as BGP-EVPN is the goal, Brocade IP
fabric is the best underlay.
Brocade architectures also provide the flexibility of combining both of these deployment topologies in an optimized 5-stage Clos architecture, as illustrated in Figure 17. This provides the flexibility to choose a different deployment model per data center PoD.
Most importantly, if you find your
infrastructure technology investment
decisions challenging, you can be
confident that an investment in the
Brocade VDX switch platform will
continue to prove its value over time.
With the versatility of the Brocade VDX
platform and its support for both Brocade
data center fabric architectures, your
infrastructure needs will be fully met today
and into the future.
Network Virtualization
Options
Network virtualization is the process
of creating virtual, logical networks on
physical infrastructures. With network
virtualization, multiple physical networks
can be consolidated together to form a
logical network. Conversely, a physical
network can be segregated to form
multiple virtual networks.
Virtual networks are created through a combination of hardware and software elements spanning the networking, storage, and computing infrastructure. Network virtualization solutions leverage the benefits of software, such as agility and programmability, along with the performance acceleration and scale of application-specific hardware. Different network virtualization solutions leverage these benefits uniquely.
Figure 19: Data center fabric architecture comparison. (VCS fabric: topology-agnostic, TRILL transport, embedded provisioning, scale of 48 switches. IP fabric: Clos topology, IP transport, componentized provisioning, scale of 100s of switches.)
Network Functions Virtualization (NFV)
is also a network virtualization construct
where traditional networking hardware
appliances like routers, switches, and
firewalls are emulated in software. The
Brocade vRouters and Brocade vADC
are examples of NFV. However, the Brocade NFV product portfolio is not discussed further in this white paper.
Network virtualization offers several key benefits:
••Efficient use of infrastructure: Through network virtualization techniques like VLANs, traffic for multiple Layer 2 domains is carried over the same physical link. Technologies such as
IEEE 802.1q are used, eliminating the
need to carry different Layer 2 domains
over separate physical links. Advanced
virtualization technologies like TRILL,
which are used in Brocade VCS Fabric
technology, avoid the need to run STP
and avoid blocked interfaces as well,
ensuring efficient utilization of all links.
••Simplicity: Many network virtualization
solutions simplify traditional networking
deployments by substituting old
technologies with advanced protocols.
Ethernet fabrics with Brocade VCS
Fabric technology leveraging TRILL
provide a much simpler deployment
compared to traditional networks,
where multiple protocols are required
between the switches—for example,
protocols like STP and variants like Per-
VLAN STP (PVST), trunk interfaces with
IEEE 802.1q, LACP port channeling, and
so forth. Also, as infrastructure is used
more efficiently, less infrastructure must
be deployed, simplifying management
and reducing cost.
••Infrastructure consolidation: With
network virtualization, virtual networks
can span across disparate networking
infrastructures and work as a single
logical network. This capability is
leveraged to span a virtual network
domain across physical domains in a
data center environment. An example
of this is the use of Layer 2 extension
mechanisms between data center PoDs
to extend VLAN domains across them.
These use cases are discussed in a later
section of this paper.
Another example is the use of VRF
to extend the virtual routing domains
across the data center PoDs, creating
virtual routed networks that span
different data center PoDs.
••Multitenancy: With network virtualiza-
tion technologies, multiple virtual
Layer 2 and Layer 3 networks can be
created over the physical infrastructure,
and multitenancy is achieved through
traffic isolation. Examples of Layer 2
technologies for multitenancy include
VLAN, virtual fabrics, and VXLAN.
Examples of Layer 3 multitenancy
technologies include VRF, along with the
control plane routing protocols for the
VRF route exchange.
••Agility and automation: Network
virtualization combines software and
hardware elements to provide agility
in network configuration and
management. NFV allows networking
entities like vSwitches, vRouters,
vFirewalls, and vLoad Balancers to be
instantly spun up or down, depending
on the service requirements. Similarly,
Brocade switches provide a rich set
of APIs using REST and NETCONF,
enabling agility and automation
in deployment, monitoring, and
management of the infrastructure.
Brocade network virtualization solutions
are categorized as follows:
••Controller-less network virtualization:
Controller-less network virtualization
leverages the embedded virtualization
capabilities of Brocade Network OS
to realize the benefits of network
virtualization. The control plane for
virtualization solution is distributed
across the Brocade data center fabric.
The management of the infrastructure
is realized through turnkey automation
solutions, which are described in a later
section of this paper.
••Controller-based network virtualization:
Controller-based network virtualization
decouples the control plane for the
network from the data plane into a
centralized entity known as a controller.
The controller holds the network state
information of all the entities and
programs the data plane forwarding
tables in the infrastructure. Brocade
Network OS provides several
interfaces that communicate with
network controllers, including
OpenFlow, Open vSwitch Database
Management Protocol (OVSDB),
REST, and NETCONF. The network virtualization solution with VMware NSX is an example of controller-based network virtualization and is briefly described in this white paper.
Layer 2 Extension with VXLAN-
Based Network Virtualization
Virtual Extensible LAN (VXLAN) is an
overlay technology that provides
Layer 2 connectivity for workloads
residing across the data center network.
VXLAN creates a logical network overlay
on top of physical networks, extending
Layer 2 domains across Layer 3
boundaries. VXLAN provides decoupling
of the virtual topology provided by
the VXLAN tunnels from the physical
topology of the network. It leverages
Layer 3 benefits in the underlay, such as
load balancing on redundant links, which
leads to higher network utilization. In
addition, VXLAN provides a large number
of logical network segments, allowing for
large-scale multitenancy in the network.
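To make the encapsulation concrete, the following sketch builds the 8-byte VXLAN header defined in RFC 7348. It is illustrative only and not taken from this paper; the 24-bit VNI field is what provides the large number of logical segments mentioned above.

```python
# Illustrative sketch: the 8-byte VXLAN header defined in RFC 7348. The
# 24-bit VNI yields ~16 million logical segments, the basis for the
# large-scale multitenancy described above.
import struct

def vxlan_header(vni: int) -> bytes:
    """Flags byte (I bit set), 3 reserved bytes, 24-bit VNI, 1 reserved byte."""
    assert 0 <= vni < 2**24
    return struct.pack("!BBHI", 0x08, 0, 0, vni << 8)

# The header is carried in UDP (destination port 4789) over the Layer 3
# underlay, so the overlay tunnel is invisible to the routed spine.
print(vxlan_header(5000).hex())   # 0800000000138800
```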
The Brocade VDX platform provides
native support for the VXLAN protocol.
Layer 2 domain extension across Layer
3 boundaries is an important use case
in a data center environment where VM
mobility requires a consistent Layer 2
network environment between the source
and the destination.
Figure 20 illustrates a leaf-spine
deployment based on Brocade IP fabrics.
The Layer 3 boundary for an IP fabric is at
the leaf. The Layer 2 domains from a leaf
or a vLAG pair are extended across the
infrastructure using VXLAN between the
leaf switches.
VXLAN can be used to extend Layer
2 domains between leaf switches in an
optimized 5-stage Clos IP fabric topology,
as well.
In a VCS fabric, the Layer 2 domains are
extended by default within a deployment.
This is because Brocade VCS Fabric
technology uses the Layer 2 network
virtualization overlay technology of TRILL
to carry the standard VLANs, as well as
the extended virtual fabric VLANs, across
the fabric.
For a multifabric topology using VCS
fabrics, the Layer 3 boundary is at
the spine of a data center PoD that is
implemented with a VCS fabric. Virtual
Fabric Extension (VF Extension)
technology in Brocade VDX Series
switches provides Layer 2 extension
between data center PoDs for standard
VLANs, as well as virtual fabric VLANs.
Figure 21 on the following page shows
an example of a Virtual Fabric Extension
tunnel between data center PoDs.
Figure 20: VXLAN-based Layer 2 domain extension in a leaf-spine IP fabric.
In conclusion, Brocade VCS Fabric technology provides a TRILL-based implementation for extending Layer 2 within a VCS fabric, and the Brocade implementation of VXLAN provides extension mechanisms for Layer 2 over a Layer 3 infrastructure, so that Layer 2 multitenancy is realized across the entire infrastructure.
VRF-Based Layer 3 Virtualization
Virtual Routing and Forwarding (VRF) support in Brocade VDX switches provides traffic isolation at Layer 3.
Figure 22 illustrates an example of a leaf-
spine deployment with Brocade IP fabrics.
Here the Layer 3 boundary is at the leaf
switch. The VLANs are associated with a
VRF at the default gateway at the leaf. The
VRF instances are routed over the leaf-
spine Brocade VDX infrastructure using
multi-VRF internal BGP (iBGP), external
BGP (eBGP), or OSPF protocols.
The VRF instances can be handed over
from the border leaf switches to the data
center core/WAN edge to extend the
VRFs across sites.
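As a conceptual illustration of VRF-based isolation, the sketch below models per-VRF routing tables as independent lookup structures; the tenant and switch names are hypothetical. Overlapping tenant prefixes coexist because a lookup is confined to a single VRF table.

```python
# Conceptual sketch of VRF isolation: one independent routing table per VRF,
# so overlapping tenant prefixes coexist. Tenant/switch names are hypothetical.
from ipaddress import ip_address, ip_network

vrfs = {
    "tenant-red":  {ip_network("10.0.0.0/24"): "leaf-1"},
    "tenant-blue": {ip_network("10.0.0.0/24"): "leaf-7"},  # same prefix, isolated
}

def lookup(vrf, dst):
    """Longest-prefix match confined to a single VRF's table."""
    addr = ip_address(dst)
    matches = [net for net in vrfs[vrf] if addr in net]
    return vrfs[vrf][max(matches, key=lambda n: n.prefixlen)] if matches else None

print(lookup("tenant-red", "10.0.0.5"))    # leaf-1
print(lookup("tenant-blue", "10.0.0.5"))   # leaf-7: same IP, different tenant
```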
Figure 21: Virtual Fabric Extension-based Layer 2 domain extension in a multifabric topology using VCS fabrics.
Figure 22: Multi-VRF deployment in a leaf-spine IP fabric.
Similarly, Figure 23 illustrates VRFs and
VRF routing protocols in a multifabric
topology using VCS fabrics.
To realize Layer 2 and Layer 3
multitenancy across the data center site,
VXLAN-based extension mechanisms
can be used along with VRF routing. This
is illustrated in Figure 24.
The handoff between the border leaves
and the data center core/WAN edge
devices is a combination of Layer 2 for
extending the VLANs across sites and/or
Layer 3 for extending the VRF instances
across sites.
Brocade BGP-EVPN network virtualization provides a simpler, more efficient, resilient, and highly scalable alternative for network virtualization, as described in the next section.
Figure 23: Multi-VRF deployment in a multifabric topology using VCS fabrics.
Figure 24: Multi-VRF deployment with Layer 2 extension in an IP fabric deployment.
Brocade BGP-EVPN Network
Virtualization
Layer 2 extension mechanisms using VXLAN rely on "flood and learn." This approach is inefficient: it slows MAC address convergence and results in unnecessary flooding.
Also, in a data center environment
with VXLAN-based Layer 2 extension
mechanisms, a Layer 2 domain and an
associated subnet might exist across
multiple racks and even across all racks
in a data center site. With traditional
underlay routing mechanisms, routed
traffic destined to a VM or a host
belonging to the subnet follows an
inefficient path in the network, because
the network infrastructure is aware only
of the existence of the distributed Layer 3
subnet, but not aware of the exact location
of the hosts behind a leaf switch.
With Brocade BGP-EVPN network
virtualization, network virtualization is
achieved through creation of a VXLAN-
based overlay network. Brocade BGP-
EVPN network virtualization leverages
BGP-EVPN to provide a control plane
for the virtual overlay network. BGP-
EVPN enables control-plane learning for
end hosts behind remote VXLAN tunnel
endpoints (VTEPs). This learning includes
reachability for Layer 2 MAC addresses
and Layer 3 host routes.
With BGP-EVPN deployed in a data
center site, the leaf switches participate in
the BGP-EVPN control and data plane
operations. These are shown as BGP-
EVPN Instance (EVI) in Figure 25. The
spine switches participate only in the
BGP-EVPN control plane.
Figure 25 shows BGP-EVPN deployed with eBGP. Not all of the spine routers need to participate in the BGP-EVPN control plane; Figure 25 shows two spines participating in BGP-EVPN.
Figure 25: Brocade BGP-EVPN network virtualization in a leaf-spine topology with eBGP.
BGP-EVPN is also supported with iBGP. BGP-EVPN deployment with iBGP as the underlay protocol is shown in Figure 26 on the next page. As with the eBGP deployment, only two spines participate in BGP-EVPN route reflection.
BGP-EVPN Control Plane
Signaling
Figure 27 on the next page summarizes
the operations of BGP-EVPN.
The operational steps are summarized
as follows:
1. Leaf VTEP-1 learns the MAC address
and IP address of the connected
host through data plane inspection.
Host IP addresses are learned through
ARP learning.
2. Based on the learned information, the
BGP tables are populated with the
MAC-IP information.
3. Leaf VTEP-1 advertises the MAC-IP
route to the spine peers, along
with the Route Distinguisher (RD)
and Route Target (RT) that are
associated with the MAC-VRF for
the associated host. Leaf VTEP-1
also advertises the BGP next-hop
attributes as its VTEP address and a
VNI for Layer 2 extension.
4. The spine switch advertises the
L2VPN EVPN route to all the other
leaf switches, and Leaf VTEP-3 also
receives the BGP update.
5. When Leaf VTEP-3 receives the
BGP update, it uses the information
to populate its forwarding tables.
The host route is imported in the IP
VRF table, and the MAC address is
imported in the MAC address table,
with reachability as Leaf VTEP-1.
Figure 26: Brocade BGP-EVPN network virtualization in a leaf-spine topology with iBGP.
Figure 27: Summary of the BGP-EVPN control plane operations.
All data plane forwarding for switched or routed traffic between the leaves is over VXLAN. The spine switches see only VXLAN-encapsulated traffic between the leaves and are responsible for forwarding the Layer 3 packets.
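The following sketch reduces the five steps above to simple data structures, to show how a MAC-IP route learned at one VTEP ends up in another VTEP's forwarding tables. It is a conceptual model only; all addresses and names are hypothetical, and real BGP-EVPN route handling (RDs, RTs, route reflection) is far richer.

```python
# Conceptual model of the five steps above: a MAC-IP route learned at Leaf
# VTEP-1 is advertised and imported by Leaf VTEP-3.
from dataclasses import dataclass, field

@dataclass
class MacIpRoute:          # simplified EVPN MAC-IP route
    mac: str
    ip: str
    next_hop_vtep: str     # BGP next hop = advertising leaf's VTEP address
    l2_vni: int

@dataclass
class Leaf:
    vtep: str
    mac_table: dict = field(default_factory=dict)   # MAC -> remote VTEP
    ip_vrf: dict = field(default_factory=dict)      # host route -> remote VTEP

    def learn_local(self, mac, ip, vni):            # steps 1-3: learn + advertise
        return MacIpRoute(mac, ip, self.vtep, vni)

    def import_route(self, r):                      # step 5: populate tables
        self.mac_table[r.mac] = r.next_hop_vtep
        self.ip_vrf[r.ip] = r.next_hop_vtep

vtep1, vtep3 = Leaf("10.0.0.1"), Leaf("10.0.0.3")
route = vtep1.learn_local("00:11:22:33:44:55", "192.168.1.10", 5000)
vtep3.import_route(route)   # step 4: the spine relays the update unchanged
print(vtep3.mac_table, vtep3.ip_vrf)
```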
Brocade BGP-EVPN Network
Virtualization Key Features
and Benefits
Some key features and benefits
of Brocade BGP-EVPN network
virtualization are summarized as follows:
••Active-active vLAG pairs: vLAG pairs
for multiswitch port channel for dual
homing of network endpoints are
supported at the leaf. Both the switches
in the vLAG pair participate in the BGP-
EVPN operations and are capable of
actively forwarding traffic.
••Static anycast gateway: With static
anycast gateway technology, each leaf
is assigned the same default gateway
IP and MAC addresses for all the
connected subnets. This ensures that
local traffic is terminated and routed at
Layer 3 at the leaf. This also eliminates the suboptimal traffic paths found with centralized gateways. All leaves
are simultaneously active forwarders
for all default traffic for which they
are enabled. Also, because the static
anycast gateway does not rely on any
control plane protocol, it can scale to
large deployments.
••Efficient VXLAN routing: With the
existence of active-active vLAG pairs
and the static anycast gateway, all
traffic is routed and switched at the
leaf. Routed traffic from the network
endpoints is terminated in the leaf
and is then encapsulated in VXLAN
header to be sent to the remote site.
Similarly, traffic from the remote leaf
node is VXLAN-encapsulated and
needs to be decapsulated and routed
to the destination. This VXLAN routing
operation into and out of the tunnel
on the leaf switches is enabled in the
Brocade VDX 6740 and 6940 platform
ASICs. VXLAN routing performed in a single pass is more efficient than competing ASIC implementations.
••Data plane IP and MAC learning: With
IP host routes and MAC addresses
learned from the data plane and
advertised with BGP-EVPN, the leaf
switches are aware of the reachability
information for the hosts in the network.
Any traffic destined to the hosts takes
the most efficient route in the network.
••Layer 2 and Layer 3 multitenancy:
BGP-EVPN provides control plane
for VRF routing as well as for Layer 2
VXLAN extension. BGP-EVPN enables
a multitenant infrastructure and extends
it across the data center site to enable
traffic isolation between the Layer 2
and Layer 3 domains, while providing
efficient routing and switching between
the tenant endpoints.
••Dynamic tunnel discovery: With
BGP-EVPN, the remote VTEPs are
automatically discovered. The resulting
VXLAN tunnels are also automatically
created. This significantly reduces
Operational Expense (OpEx) and
eliminates errors in configuration.
••ARP/ND suppression: As the
BGP-EVPN EVI leaves discover
remote IP and MAC addresses, they
use this information to populate their
local ARP tables. Using these entries,
the leaf switches respond to any local
ARP queries. This eliminates the
need for flooding ARP requests in the
network infrastructure.
••Conversational ARP/ND learning: Conversational ARP/ND reduces the number of cached ARP/ND entries by programming only active flows into the forwarding plane. This helps to optimize the utilization of hardware resources. In many scenarios, the software requirement for ARP and ND entries exceeds the hardware capacity. Conversational ARP/ND limits storage-in-hardware to active ARP/ND entries; aged-out entries are deleted automatically. (A brief sketch of this behavior follows this list.)
••VM mobility support: If a VM moves
behind a leaf switch, with data plane
learning, the leaf switch discovers
the VM and learns its addressing
information. It advertises the reachability
to its peers, and when the peers
receive the updated information for the
reachability of the VM, they update their
forwarding tables accordingly. BGP-
EVPN-assisted VM mobility leads to
faster convergence in the network.
••Simpler deployment: With multi-VRF routing protocols, one routing protocol session is required per VRF. With BGP-EVPN, VRF routing and MAC address reachability information is propagated over the same BGP sessions as the underlay, with the addition of the L2VPN EVPN address family. This significantly reduces OpEx and eliminates errors in configuration.
••Open standards and interoperability: BGP-EVPN is based on an open standard protocol and is interoperable with implementations from other vendors. This allows a BGP-EVPN-based solution to fit seamlessly into a multivendor environment.
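As referenced in the conversational ARP/ND item above, the following sketch models a capacity-limited hardware ARP table where only active conversations stay programmed and idle entries age out. It is a conceptual illustration, not Brocade's implementation; the capacity and timer values are hypothetical.

```python
# Conceptual sketch of conversational ARP/ND: only entries used by active
# flows stay in the capacity-limited hardware table; idle entries age out.
import time

class ConversationalArpCache:
    def __init__(self, hw_capacity, max_idle_s):
        self.hw_capacity, self.max_idle_s = hw_capacity, max_idle_s
        self.entries = {}                          # ip -> (mac, last_used)

    def use(self, ip, mac):
        """Called for an active flow: program or refresh the entry."""
        self.expire()
        if ip not in self.entries and len(self.entries) >= self.hw_capacity:
            lru = min(self.entries, key=lambda k: self.entries[k][1])
            del self.entries[lru]                  # evict least recently used
        self.entries[ip] = (mac, time.monotonic())

    def expire(self):
        """Aged-out entries are deleted automatically."""
        now = time.monotonic()
        for ip in [k for k, (_, t) in self.entries.items()
                   if now - t > self.max_idle_s]:
            del self.entries[ip]

cache = ConversationalArpCache(hw_capacity=2, max_idle_s=300)
cache.use("192.168.1.10", "00:11:22:33:44:55")
cache.use("192.168.1.11", "00:11:22:33:44:66")
cache.use("192.168.1.12", "00:11:22:33:44:77")    # evicts the idle LRU entry
print(list(cache.entries))
```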
Brocade BGP-EVPN is also supported in
an optimized 5-stage Clos with Brocade
IP fabrics with both eBGP and iBGP.
Figure 28 illustrates the eBGP underlay
and overlay peering for the optimized
5-stage Clos.
In future releases, Brocade BGP-EVPN
network virtualization is planned with a
multifabric topology using VCS fabrics
between the spine and the super-spine.
Standards Conformance and
RFC Support for BGP-EVPN
Table 9 shows the standards conformance
and RFC support for
BGP-EVPN.
Network Virtualization with
VMware NSX
VMware NSX is a network virtualization platform that orchestrates the provisioning of logical overlay networks over physical networks.
Table 9: Standards conformance for the BGP-EVPN implementation.

| Applicable Standard | Reference URL | Description of Standard |
|---|---|---|
| RFC 7432: BGP MPLS-Based Ethernet VPN | http://tools.ietf.org/html/rfc7432 | The BGP-EVPN implementation is based on the IETF standard RFC 7432. |
| A Network Virtualization Overlay Solution Using EVPN | https://tools.ietf.org/html/draft-ietf-bess-dci-evpn-overlay-01 | Describes how EVPN can be used as a Network Virtualization Overlay (NVO) solution and explores the various tunnel encapsulation options over IP and their impact on the EVPN control plane and procedures. |
| Integrated Routing and Bridging in EVPN | https://tools.ietf.org/html/draft-ietf-bess-evpn-inter-subnet-forwarding-00 | Describes an extensible and flexible multihoming VPN solution for intrasubnet connectivity among hosts and VMs over an MPLS/IP network. |
Figure 28: Brocade BGP-EVPN network virtualization in an optimized 5-stage Clos topology.
VMware NSX-based
network virtualization leverages VXLAN
technology to create logical networks,
extending Layer 2 domains over
underlay networks. Brocade data center
architectures integrated with VMware
NSX provide a controller-based network
virtualization architecture for a data
center network.
VMware NSX provides several networking
functions in software. The functions are
summarized in Figure 29.
The NSX architecture has built-in
separation of data, control, and manage-
ment layers. The NSX components
that map to each layer and each layer’s
architectural properties are shown in
Figure 30.
VMware NSX Controller is a key part of
the NSX control plane. NSX Controller
is logically separated from all data plane
traffic. In addition to the controller, the NSX
Logical Router Control VM provides the
routing control plane to enable dynamic
routing between the NSX vSwitches and
the NSX Edge routers for north-south
traffic. The control plane elements of the
NSX environment store the control plane
states for the entire environment. The
control plane uses southbound Software
Defined Networking (SDN) protocols like
OpenFlow and OVSDB to program the
data plane components.
The NSX data plane exists in the
vSphere Distributed Switch (VDS) in the
ESXi hypervisor. The data plane in the
distributed switch performs functions
like logical switching, logical routing, and
firewalling. The data plane also exists
in the NSX Edge, which performs edge
functions like logical load balancing,
Layer 2/Layer 3 VPN services,
edge firewalling, and Dynamic Host
Configuration Protocol/Network Address
Translation (DHCP/NAT).
In addition, Brocade VDX switches also
participate in the data plane of the NSX-
based Software-Defined Data Center
(SDDC) network. As a hardware VTEP,
the Brocade VDX switches perform the
bridging between the physical and the
virtual domains. The gateway solution
connects Ethernet VLAN-based physical
devices with the VXLAN-based virtual
infrastructure, providing data center
operators a unified network operations
model for traditional, multitier, and
emerging applications.
Figure 29: Networking services offered by VMware NSX: switching, routing, firewalling, VPN, and load balancing.
Figure 30: Networking layers and VMware NSX components.
Brocade Data Center Fabrics
and VMware NSX in a Data Center Site
Brocade data center fabric architectures
provide the most robust, resilient, efficient,
and scalable physical networks for the
VMware SDDC. Brocade provides
choices for the underlay architecture and
deployment models.
The VMware SDDC can be deployed
using a leaf-spine topology based either
on Brocade VCS Fabric technology or
Brocade IP fabrics. If a higher scale is
required, an optimized 5-stage Clos
topology with Brocade IP fabrics or a
multifabric topology using VCS fabrics
provides an architecture that is scalable to
a very large number of servers.
Figure 31 illustrates VMware NSX
components deployed in a data center
PoD. For a VMware NSX deployment
within a data center PoD, the management
rack hosts the NSX software infrastructure
components like vCenter Server, NSX
Manager, and NSX Controller, as well
as cloud management platforms like
OpenStack or vRealize Automation.
The compute racks in a VMware NSX
environment host virtualized workloads.
The servers are virtualized using the
VMware ESXi hypervisor, which includes
the vSphere Distributed Switch (VDS). The
VDS hosts the NSX vSwitch functionality
of logical switching, distributed routing,
and firewalling. In addition, VXLAN
encapsulation and decapsulation is
performed at the NSX vSwitch.
Figure 32 shows the NSX components in
the edge services PoD. The edge racks
host the NSX Edge Services Gateway,
Figure 31: VMware NSX components in a data center PoD.
Figure 32: VMware NSX components in an edge services PoD.