This document provides an overview of Brocade's data center fabric architectures, including single-tier, leaf-spine, and optimized 5-stage folded Clos topologies. It describes the Brocade platforms that can be used to build these fabrics, such as the VDX 6740, VDX 6940, VDX 8770 and SLX 9850 switches. It also covers various design considerations for each topology, such as oversubscription ratios, scale, port speeds and licensing. The purpose is to help customers design high performance cloud networks that meet their requirements for throughput, scale, traffic isolation and application continuity.
This document provides an overview of Brocade's IP Fabric technology and network virtualization with BGP EVPN. It describes the key components and benefits of Brocade IP Fabric including leaf-spine layer 3 Clos topologies and optimized routing. It also summarizes Brocade's approach to network virtualization using VXLAN layer 2 extension with BGP EVPN control plane. Key concepts covered include VTEP, static anycast gateway, overlay gateway, ARP suppression, VLAN scoping, conversational learning, integrated routing and bridging, multitenancy, ingress replication, and vLAG pair.
This section describes an EVPN DCI deployment model that provides both L2 extension and inter-VLAN routing capabilities between data centers. It leverages BGP EVPN control plane learning between data centers to extend Layer 2 connectivity and uses Layer 3 VNIs to enable routing between VLANs across sites. This model is well-suited for interconnecting EVPN-based IP fabric data centers, as it extends the existing EVPN control plane and provides a unified multi-site solution. Considerations for VXLAN tunnel scale and VLAN reuse across sites are discussed.
The document provides an overview of network virtualization options including:
1) Layer 2 extension with VXLAN which creates logical overlays on physical networks.
2) VRF-based Layer 3 virtualization which provides traffic isolation through virtual routing instances.
3) Brocade BGP-EVPN network virtualization which uses BGP control plane signaling for virtual overlay networks.
This document provides an overview of Brocade Validated Design for a Brocade Virtual Cluster Switching (VCS) Fabric with IP Storage. It describes the benefits of using VCS Fabric for IP storage, the key terminology, technologies and components involved like Virtual Cluster Switching, IP Storage, and their common deployment models. It also lists the hardware and software validated in Brocade's testing of VCS Fabrics with IP Storage solutions.
The Brocade ICX 7250 is an Ethernet switch available in models with 24 or 48 fixed ports and 4 or 8 uplink/stacking ports. It supports 10/100/1000BASE-T ports, SFP+ uplink ports, and optional PoE. Management is provided via a console port and out-of-band management port.
This document provides configuration guidelines and examples for VLANs on Brocade FastIron switches. It describes the different types of VLANs including port-based, IP subnet-based, and protocol-based VLANs. It also covers VLAN configuration topics like assigning trunk ports, enabling spanning tree per VLAN, routing between VLANs using virtual routing interfaces, and more. The document contains many VLAN configuration examples to illustrate different scenarios.
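As a flavor of the configuration style such guides illustrate, a port-based VLAN with a virtual routing interface might be set up along these lines (a sketch only: the VLAN ID, port range, and addressing are invented, and exact command syntax varies across FastIron releases):

```
device(config)# vlan 10 name Engineering by port
device(config-vlan-10)# tagged ethernet 1/1/1 to 1/1/4
device(config-vlan-10)# spanning-tree
device(config-vlan-10)# router-interface ve 10
device(config-vlan-10)# exit
device(config)# interface ve 10
device(config-vif-10)# ip address 10.10.10.1 255.255.255.0
```

Routing between VLANs then takes place through the virtual (ve) interfaces rather than physical ports.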
The document is the SCCP Programmer's Manual for Dialogic's SCCP software implementation. It provides an overview of the SCCP module's operation and defines all messages that can be sent to or received from the module. The manual describes the module's configuration parameters and interfaces to MTP, user applications, and management. It also covers global title translation procedures, message segmentation/reassembly, and other SCCP functions.
This document provides instructions for setting up and using the A400E analog telephony card with DAHDI on Linux. It includes information on hardware setup such as power supply and module installation. It also covers software installation and configuration of the A400E card with Asterisk and DAHDI drivers. The document contains safety instructions, an overview of Asterisk and the A400E card features, and chapters on hardware setup, software installation and configuration, specifications, and pin assignments.
Eduardo Naval Rodríguez presented a seminar on information literacy, teaching skills for finding, evaluating, and using information effectively and responsibly. The seminar covered topics such as how to find and analyze reliable sources, cite them correctly, and build knowledge from the information obtained.
This document provides an overview of Keller Williams' performance and growth from 2008-2009. It highlights that Keller Williams was ranked #1 among major real estate franchises in several surveys. While overall home sales declined in 2009, Keller Williams increased its number of agents by 26% and closed units increased by 1% compared to a 7% decrease nationally. The document outlines Keller Williams' continued focus on business tools, education, and culture to support its agents and drive future growth.
Macanta Consulting provides business and IT consulting services to help organizations become more profitable and sustainable. Their eco-ITSM service uses IT service management best practices to assess the sustainability of IT processes and minimize costs, waste, and carbon emissions from IT. The document discusses how IT contributes significantly to carbon emissions and energy costs. It presents data on the carbon footprint of different IT devices and industry sectors. The eco-ITSM service aims to help organizations continually improve the sustainability of their IT services.
The document discusses Pak-Afghan relations and offers suggestions for improving them. It notes that both countries face security threats from militancy and terrorism. It suggests that Pakistan and Afghanistan establish joint border security, enhance economic cooperation, and resolve disputes over Taliban sanctuaries and Indian influence in a spirit of mutual understanding and respect for sovereignty. Former Afghan president Hamid Karzai also provided a two-point solution: jointly fighting terrorism and Pakistan accepting Afghanistan's sovereignty and non-interference in its foreign relations. Experts say the countries must cooperate regionally and normalize bilateral relations.
International sales of the Flussonic video streamer / Maxim Lapshin (Erlyvideo)
In this talk I want to share our (http://erlyvideo.ru/) experience with international sales of b2b software developed in Russia. Foreign revenue is 80% of our money, and our income is highly diversified: we are paid by people from almost 100 countries.
Talk outline:
1. What we sell and how we make money.
2. How we managed to enter the international market.
3. Why people buy our software.
4. How we sell and provide support.
5. Who our customers are and how we communicate with them.
6. How we accept payments and pay taxes (cards, PayPal, bank account, recurring payments).
7. Comparing a Russian OOO with a foreign company: legal and tax aspects.
8. Prospects and challenges.
The joys and pains of regression testing page layouts / Alexey Maleykov (HTM...
Together with ITMO University, we launched a course on the fundamentals of HTML and CSS. By the time registration opened, more than 12,000 students had already signed up. Our task was to build a system that automatically checks final projects against a pre-prepared reference layout. We chose regression testing as the core verification technique.
In each project we checked the markup, the grid, and the styling, not only of the whole page but of individual blocks. One of the main problems was finding those blocks: we knew nothing in advance about the students' markup, neither which tags they would use nor which classes and identifiers were involved. We had only a general idea of the structure.
In the talk I will explain what we started from when building this system, how we parsed and analyzed the projects, which tools and technologies we used and why, and which pitfalls and problems came up.
DC/OS – more than a PaaS, Nikita Borzykh (Express 42)
A talk about the near future of operating distributed systems.
In spring 2016, Mesosphere made its DC/OS (Datacenter Operating System) platform free and open source. DC/OS unifies and simplifies the delivery and operation of systems.
The platform's main features are:
– a shift from a host-centric to a resource-centric approach for all components of your project, achieved by representing servers as resources for applications (via Mesos and Marathon);
– tools for automatically recovering your project after a failure;
– a marketplace of applications. For example, you can deploy a MySQL, Elasticsearch, Kafka, or MongoDB cluster using ready-made deployment scripts. The deployment process is customizable: if needed, you can describe custom applications and adjust the existing scripts;
– an API for integration with your CI/CD, monitoring, and other systems.
The main components of DC/OS:
– Apache Mesos, an abstraction over the data center that presents servers (physical and virtual) as resources and allocates those resources based on application requirements;
– Marathon, a system for running applications (including Docker containers) across the cluster; its key feature is declarative description of your system. You can specify how many resources your application needs, the dependencies between applications, and the order in which to deploy them.
The talk has three parts:
– an intro to DC/OS and a comparison with the Kubernetes and CoreOS stacks;
– a look at the Mesos and Marathon components and how you can use them with Docker (and without!) today;
– the Express 42 experience: we built a CI/CD platform for applications using Mesos, Marathon, Docker, and Jenkins 2.0.
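The declarative description Marathon uses can be sketched as a JSON application definition like the following (illustrative values throughout: the app id, image tag, and resource figures are assumptions, not taken from the talk):

```json
{
  "id": "/web/frontend",
  "instances": 2,
  "cpus": 0.5,
  "mem": 256,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx:1.11",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 80, "hostPort": 0 }]
    }
  },
  "healthChecks": [{ "protocol": "HTTP", "path": "/" }]
}
```

POSTing a definition like this to Marathon's /v2/apps endpoint asks it to keep two instances running, rescheduling them on other nodes after a failure.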
Active learning methodologies involve teaching methods beyond traditional lectures, such as projects and problem solving, to engage students more in the learning process. Teaching through projects or problems are common examples of these methodologies, which teachers can use to make learning more interactive.
Angular 2 isn't so bad... Come to think of it, it's actually good / Alexey ...
Angular 2 is not as scary as it's made out to be.
The first reaction to it is quite negative. Round brackets, square brackets: what is this, and why? But what if I told you these brackets let you get rid of problems that React v15.x cannot solve?
Did you know that Angular 2 is closer to functional programming than Redux?
In this talk we will discuss:
1) What new things does Angular 2 give us?
2) Its architecture, and the value of those design decisions.
3) Reactive programming with Angular 2.
4) In what ways does Angular 2 outperform React and Redux?
5) How to migrate to Angular 2 and sleep soundly.
Vue.js and its twin brother Vue-server.js / Andrey Solodovnikov (NGS)
The modern web increasingly strives for dynamic, application-like sites.
Vue.js helps us quickly build a fast, dynamic interface for the N1.RU project.
However, like many modern libraries and frameworks, Vue.js cannot render on the server.
Yet having that capability is useful for several reasons, from SEO concerns to how gracefully a page loads.
To bring this capability to Vue.js, we created an add-on for it: Vue-server.js.
I will talk about what Vue.js can do, what is "under the hood" of our add-on, why we chose this path, and how it all works. I will also try to give a critical assessment of the work done.
The document discusses using microalgae to produce biodiesel as a renewable alternative fuel. Microalgae have advantages over other biodiesel feedstocks like seed oils in that they do not require arable land, can use brackish or saline water, and absorb more CO2. While open ponds are commonly used, they have issues with contamination, evaporation and land use. The aim is to use microalgae for high and cost-effective biodiesel production to address declining fossil fuels and global warming without competing with food supplies.
Get started with Socialfave, a complete SMM platform to manage your Tweets & likes (search, classify, analyze, share by topics, schedule) and your community (search, analyze, grow).
This document provides an overview and user guide for Oracle Process Manufacturing Cost Management. It describes how to set up and use standard, actual, and lot costing methods. The document also covers period-end cost processing, copying costs between periods and organizations, and available cost management reports.
This document provides an overview of Oracle Primavera P6 Enterprise Project Portfolio Management (EPPM). It discusses the main P6 applications, features for customizing the user interface, working with data, security, printing, email notifications, and grouping/sorting. It also provides links to documentation, training, support and lists new features in the latest version of P6 EPPM.
This document introduces Oracle WebCenter Suite, which provides tools and services to help developers build applications that simplify transactions for users. Key components of WebCenter Suite include the WebCenter Framework, which enhances the Java Server Faces environment and allows portlets and content to be integrated into applications. WebCenter Services provide capabilities for communication, content management, customization, and search that can be utilized by applications to provide additional context and functionality for users. The tutorial will teach how to use the WebCenter Framework to build applications that leverage these services.
This document is the user guide for Oracle Shipping Execution software. It contains 3 parts:
1) Intellectual property information stating that the software and documentation contain proprietary information and restrictions on use and distribution.
2) Information on reporting documentation errors and restrictions on reproduction of the programs.
3) A notice for US government customers stating the programs are commercial computer software/data and subject to licensing restrictions.
This document provides installation instructions for Oracle9i Database on Windows. It covers pre-installation planning, system requirements, installing Oracle components such as the database, client, and management tools, reviewing the contents of a starter database, and post-installation configuration tasks. Appendices describe individual Oracle components and provide additional installation guidance for features such as Oracle Real Application Clusters and transparent gateways to other databases.
The document describes Oracle's business intelligence solution, which provides a complete and integrated set of tools to support business intelligence. It includes tools for information consumers, report developers and analysts, database administrators, and application developers. The solution allows businesses to derive critical information from their large amounts of transactional data to help decision makers improve business performance and competitiveness.
This document provides an overview of Oracle Data Integrator (ODI) 11g Release 1 (11.1.1) and its developer's guide. It includes information about new features, documentation accessibility, related documents, and table of contents. The document is copyrighted by Oracle and describes restrictions on the use of ODI software and documentation. It also contains Oracle trademark information.
This document provides instructions for installing Oracle WebLogic Server 11g Release 1 (10.3.4). It describes downloading the installer and meeting prerequisites such as system requirements and administrator privileges, and explains how to choose installation directories and modes. Sections cover running the graphical, console, and silent-mode installers, including samples. The document also provides post-installation information.
Oracle Fusion Cloud Advanced Controls regulates activity in business applications through two components: Oracle Advanced Access Controls and Oracle Advanced Financial Controls. It includes models that establish risk logic and controls that adopt a model's logic to generate permanent incidents. Models return temporary results while controls return permanent incidents. Notifications and worklists inform users of tasks requiring attention, such as new incidents for controls where they are investigators. Records are secured by authorizing eligible users as owners, editors, or viewers.
Here are the key points about policies, assertions, expressions, and operators in Oracle Web Services Manager (Oracle WSM):
- A policy defines the capabilities and requirements of a web service, such as security, reliability, transactions, etc.
- A policy assertion is a basic unit that expresses an individual requirement, capability or property in a policy.
- A policy expression is the XML representation of a policy, consisting of policy assertions combined using policy operators.
- Common policy operators include:
- wsp:Policy - A policy consisting of a single assertion or a list of assertions combined using the AND logic.
- wsp:All - A list of assertions that must all evaluate to true (AND logic).
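Put together, a minimal policy expression using these operators might look like the following (a sketch: the two assertion elements are placeholders for illustration, not real Oracle WSM assertion names):

```xml
<wsp:Policy xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
  <wsp:All>
    <!-- both assertions must hold for the policy to be satisfied (AND logic) -->
    <SecurityAssertion/>
    <ReliabilityAssertion/>
  </wsp:All>
</wsp:Policy>
```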
Here are the key steps to set up Oracle Engineering:
1. Set profile options (required)
2. Enter employee information (required)
3. Define ECO types (optional but recommended)
4. Define ECO departments (optional)
5. Define ECO autonumbering sequences (optional)
6. Define ECO approval lists (optional but recommended)
7. Define material dispositions (optional)
8. Define ECO reasons (optional)
9. Define ECO priorities (optional)
10. Start the AutoImplement Manager (optional)
You'll also need to complete some prerequisite setup in Oracle Inventory and Bills of Material.
This document provides installation instructions for Oracle Application Server Forms and Reports Services 10g Release 2 for Windows. It discusses what's new in this release, an introduction to the available features, system requirements, port usage, and other topics to prepare for a successful installation. The document contains detailed information on installing, configuring, and deploying Forms and Reports Services.
Oracle® Application Server Forms and Reports Services Installation Guide
This document provides an overview and reference for Oracle Database PL/SQL. It discusses the main features of PL/SQL including block structure, variables, constants, control structures, subprograms, collections and records. The document is copyrighted by Oracle Corporation and is intended to help users understand and effectively use the PL/SQL language to develop applications for the Oracle database.
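As a flavor of the block structure described, a minimal anonymous PL/SQL block combining a variable declaration, a control structure, and output might look like this (run with server output enabled):

```sql
DECLARE
  v_total NUMBER := 0;                          -- variable with an initial value
BEGIN
  FOR i IN 1 .. 5 LOOP                          -- control structure
    v_total := v_total + i;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('Total: ' || v_total);   -- prints Total: 15
END;
/
```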
This document provides installation instructions for Oracle WebCenter Sites 11g Release 1 (11.1.1.8.0) and includes the following key points:
- It outlines the installation process and prerequisites for installing WebCenter Sites on Oracle WebLogic Server, Apache Tomcat Server, or IBM WebSphere Application Server.
- It provides instructions for configuring the application server with managed servers, clusters, data sources, and other required settings before installing WebCenter Sites.
- It describes how to integrate the application server with a supported web server like Oracle HTTP Server, Apache HTTP Server, or IIS.
- It contains reference information like paths, directories, and start/stop commands for the application servers
New features in Primavera Prime 15.2 include enhancements to document collaboration, risk analysis, schedule management, resources, cost management, reports, scope management, the platform, and Prime Mobile. Key updates involve adding uncertainty and risk response plans, drag and drop uploading of documents, project reporting cycles, and management of project resources, roles, budgets, and scope.
This document provides release notes for Oracle Fusion Middleware 11g Release 1 (11.1.1) for Microsoft Windows (32-Bit). It includes sections on installation and configuration issues and workarounds, upgrade considerations, administration topics, and documentation errata. Specific issues covered include requirements for upgrading components, problems with installation on specific operating systems, configuration changes required when expanding a cluster, and inaccuracies in other documentation.
Oracle Application Express requires an Oracle database release 9.2 or higher, a supported browser, an HTTP server, and sufficient disk space and memory resources. It also requires that Oracle XML DB, Oracle Text, and the PL/SQL Web Toolkit are installed and configured.
This document provides guidelines for building applications using Oracle Forms Developer and Oracle Reports Developer. It discusses managing application development through the software development lifecycle using Project Builder. Project Builder allows developers to associate modules with applications, create dependencies between modules, designate installation modules, and access source control. The document also covers managing projects and project documents during design, testing, and release phases. It concludes with instructions for deploying completed applications using Oracle Installer files.
Oracle database gateway 11g r2 installation and configuration guideFarrukh Muhammad
Initialization Parameters
Initialization Parameters for Oracle Database Gateway for Sybase ............................................. C-1
Initialization Parameters for Oracle Database Gateway for Informix .......................................... C-4
Initialization Parameters for Oracle Database Gateway for Teradata .......................................... C-7
Initialization Parameters for Oracle Database Gateway for SQL Server ................................... C-10
Initialization Parameters for Oracle Database Gateway for ODBC ........................................... C-13
Initialization Parameters for Oracle Database Gateway for DRDA ........................................... C-16
D Accessing Oracle Database Gateway
Using SQL*Plus....................................................................................................................................... D-1
Using OCI (Pro*C/C++, OCCI, or Pro*COBOL)............................................................................. D-2
Similar to brocade-dc-fabric-architectures-sdg (20)
Contents
Preface
  Document History
  About the Author
  Overview
  Purpose of This Document
  About Brocade
Data Center Networking Architectures
  Throughput and Traffic Patterns
  Scale Requirements of Cloud Networks
  Traffic Isolation, Segmentation, and Application Continuity
Data Center Networks: Building Blocks
  Brocade VDX and SLX Platforms
    VDX 6740
    VDX 6940
    VDX 8770
    SLX 9850
  Networking Endpoints
  Single-Tier Topology
    Design Considerations
    Oversubscription Ratios
    Port Density and Speeds for Uplinks and Downlinks
    Scale and Future Growth
    Ports on Demand Licensing
  Leaf-Spine Topology (Two Tiers)
    Design Considerations
    Oversubscription Ratios
    Leaf and Spine Scale
    Port Speeds for Uplinks and Downlinks
    Scale and Future Growth
    Ports on Demand Licensing
    Deployment Model
    Data Center Points of Delivery
  Optimized 5-Stage Folded Clos Topology (Three Tiers)
    Design Considerations
    Oversubscription Ratios
    Deployment Model
  Edge Services and Border Switches Topology
    Design Considerations
    Oversubscription Ratios
    Data Center Core/WAN Edge Handoff
    Data Center Core and WAN Edge Routers
Building Data Center Sites with Brocade VCS Fabric Technology
  Data Center Site with Leaf-Spine Topology
    Scale
  Scaling the Data Center Site with a Multi-Fabric Topology Using VCS Fabrics
Brocade Data Center Fabric Architectures
53-1004601-02 3
    Scale
Building Data Center Sites with Brocade IP Fabric
  Data Center Site with Leaf-Spine Topology
    Scale
  Scaling the Data Center Site with an Optimized 5-Stage Folded Clos
    Scale
Building Data Center Sites with Layer 2 and Layer 3 Fabrics
Scaling a Data Center Site with a Data Center Core
Control-Plane and Hardware-Scale Considerations
  Control-Plane Architectures
    Single-Tier Data Center Sites
    Brocade VCS Fabric
    Multi-Fabric Topology Using VCS Technology
    Brocade IP Fabric
  Routing Protocol Architecture for Brocade IP Fabric and Multi-Fabric Topology Using VCS Technology
    eBGP-based Brocade IP Fabric and Multi-Fabric Topology
    iBGP-based Brocade IP Fabric and Multi-Fabric Topology
Choosing an Architecture for Your Data Center
  High-Level Comparison Table
  Deployment Scale Considerations
  Fabric Architecture
  Recommendations
Preface
• Document History
• About the Author
• Overview
• Purpose of This Document
• About Brocade
Document History
Date                 Part Number     Description
February 9, 2016                     Initial release with DC fabric architectures, network virtualization, Data Center Interconnect, and automation content.
September 13, 2016   53-1004601-01   Initial release of solution design guide for DC fabric architectures.
October 06, 2016     53-1004601-02   Replaced the figures for the Brocade VDX 6940-36Q and the Brocade VDX 6940-144S.
About the Author
Anuj Dewangan is the lead Technical Marketing Engineer (TME) for Brocade's data center switching products. He holds a CCIE in
Routing and Switching and has several years of experience in the networking industry with roles in software development, solution
validation, technical marketing, and product management. At Brocade, his focus is creating reference architectures, working with
customers and account teams to address their challenges with data center networks, creating product and solution collateral, and helping
define products and solutions. He regularly speaks at industry events and has authored several white papers and solution design guides
on data center networking.
The author would like to acknowledge Jeni Lloyd and Patrick LaPorte for their in-depth review of this solution guide and for providing
valuable insight, edits, and feedback.
Overview
Based on the principles of the New IP, Brocade is building on Brocade® VDX® and Brocade® SLX® platforms by delivering cloud-optimized network and network virtualization architectures and new automation innovations to meet customer demand for
higher levels of scale, agility, and operational efficiency.
The scalable and highly automated Brocade data center fabric architectures described in this solution design guide make it easy for
infrastructure planners to architect, automate, and integrate with current and future data center technologies while they transition to their
own cloud-optimized data center on their own time and terms.
Purpose of This Document
This guide helps network architects, virtualization architects, and network engineers to make informed design, architecture, and
deployment decisions that best meet their technical and business objectives. Network architecture and deployment options for scaling
from tens to hundreds of thousands of servers are discussed in detail.
About Brocade
Brocade® (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where
applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity,
non-stop networking, application optimization, and investment protection.
Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and
cost while enabling virtualization and cloud computing to increase business agility.
To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support,
and professional services offerings (www.brocade.com).
Data Center Networking Architectures
• Throughput and Traffic Patterns
• Scale Requirements of Cloud Networks
• Traffic Isolation, Segmentation, and Application Continuity
Data center networking architectures have evolved with the changing requirements of the modern data center and cloud environments.
This evolution has been triggered by a combination of industry technology trends like server virtualization as well as the architectural
changes of the applications being deployed in the data centers. These technological and architectural changes are affecting the way
private and public cloud networks are designed. As these changes proliferate in the traditional data centers, the need to adopt modern
data center architectures has been growing.
Throughput and Traffic Patterns
Traditional data center network architectures were a derivative of the three-tier topology, prevalent in enterprise campus environments.
The tiers are defined as Access, Aggregation, and Core. Figure 1 shows an example of a data center network built using a traditional
three-tier topology.
FIGURE 1 Three-Tier Data Center Architecture
The three-tier topology was architected with the requirements of an enterprise campus in mind. In a campus network, the basic
requirement of the access layer is to provide connectivity to workstations. These workstations exchange traffic either with an enterprise
data center for business application access or with the Internet. As a result, most traffic in this network traverses in and out through the
tiers in the network. This traffic pattern is commonly referred to as north-south traffic.
The throughput requirements for traffic in a campus environment are lower than those of a data center network, where server
virtualization has increased the application density and subsequently the data throughput to and from the servers. In addition, cloud
applications are often multitiered and hosted at different endpoints connected to the network. The communication between these
application tiers is a major contributor to the overall traffic in a data center. The multitiered nature of the applications deployed in a data
center drives traffic patterns in a data center network to be more east-west than north-south. In fact, some of the very large data centers
hosting multitiered applications report that more than 90 percent of their overall traffic occurs between the application tiers.
Because of high throughput requirements and the east-west traffic patterns, the networking access layer that connects directly to the
servers exchanges a much higher proportion of traffic with the upper layers of the networking infrastructure, as compared to an enterprise
campus network.
These reasons have driven the data center network architecture evolution into scale-out architectures. Figure 2 illustrates a leaf-spine
topology, which is an example of a scale-out architecture. These scale-out architectures are built to maximize the throughput of traffic
exchange between the leaf layer and the spine layer.
FIGURE 2 Scale-Out Architecture: Ideal for East-West Traffic Patterns Common with Web-Based or Cloud-Based Application Designs
In a three-tier network, the aggregation layer is typically restricted to two devices, because technologies like Multi-Chassis Trunking
(MCT) allow exactly two devices to participate in the port channels facing the access-layer switches. The spine layer, by contrast, can
have multiple devices and hence provides a higher port density to connect to the leaf-layer switches. This allows more interfaces from
each leaf to connect into the spine layer, providing higher throughput from each leaf to the spine layer. The characteristics of a
leaf-spine topology are discussed in more detail in subsequent sections.
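The balance between a leaf's server-facing and spine-facing bandwidth is usually expressed as an oversubscription ratio, a design parameter this guide revisits for each topology. As a minimal sketch (the port counts below are hypothetical examples, not the specification of any particular VDX model), the ratio is total downlink bandwidth divided by total uplink bandwidth:

```python
def oversubscription_ratio(downlinks, downlink_gbps, uplinks, uplink_gbps):
    """Ratio of server-facing bandwidth to fabric-facing bandwidth at a leaf."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# Hypothetical leaf: 48 x 10 GbE server ports, 6 x 40 GbE uplinks to the spines.
ratio = oversubscription_ratio(48, 10, 6, 40)
print(f"{ratio}:1")  # 480 Gbps down / 240 Gbps up -> 2.0:1
```

A ratio of 1:1 is fully non-blocking; ratios of 2:1 or 3:1 are common trade-offs between cost and worst-case congestion.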
The traditional three-tier data center architecture is still prevalent in environments where traffic throughput requirements between the
networking layers can be satisfied by high-density platforms at the aggregation layer. For certain use cases like co-location data
centers, where customer traffic is restricted to racks or managed areas within the data center, a three-tier architecture may be more
suitable. Similarly, enterprises hosting nonvirtualized and single-tiered applications may find the three-tier data center architecture more
suitable.
Scale Requirements of Cloud Networks
Another trend in recent years has been the consolidation of disaggregated infrastructures into larger central locations. With the changing
economics and processes of application delivery, there has also been a shift of application workloads to public cloud provider networks.
Enterprises have looked to consolidate and host private cloud services. Meanwhile, software cloud services, as well as infrastructure and
platform service providers, have grown at a rapid pace. With this increasing shift of applications to the private and public cloud, the scale
of the network deployment has increased drastically. Advanced scale-out architectures allow networks to be deployed at many multiples
of the scale of a leaf-spine topology. An example of Brocade's advanced scale-out architecture is shown in Figure 3.
FIGURE 3 Example of Brocade's Advanced Scale-Out Architecture (Optimized 5-Stage Clos)
Brocade's advanced scale-out architectures allow data centers to be built at very high scales of ports and racks. Advanced scale-out
architectures using an optimized 5-stage Clos topology are described later in more detail.
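The scale step from a single leaf-spine fabric to an optimized 5-stage folded Clos can be sketched with simple port arithmetic. The counts below are assumptions chosen for illustration, not the port densities of any specific Brocade platform:

```python
# Hypothetical port counts, for illustration only.
SPINE_PORTS = 32       # leaf-facing ports per spine switch
LEAF_DOWNLINKS = 48    # server-facing ports per leaf switch

# 3-stage (leaf-spine): each spine port terminates one leaf,
# so the fabric tops out at SPINE_PORTS leaves.
leaf_spine_servers = SPINE_PORTS * LEAF_DOWNLINKS
print(leaf_spine_servers)    # 1536 server ports in one leaf-spine pod

# Optimized 5-stage folded Clos: the leaf-spine pod is replicated
# and the pods are interconnected through a super-spine tier.
PODS = 16
five_stage_servers = PODS * leaf_spine_servers
print(five_stage_servers)    # 24576 server ports across the site
```

The multiplier is the number of pods the super-spine tier can interconnect, which is why the 5-stage topology scales to many multiples of a single leaf-spine deployment.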
A consequence of server virtualization enabling physical servers to host several virtual machines (VMs) is that the scale requirements for
the control and data planes for networking parameters like MAC addresses, IP addresses, and Address Resolution Protocol (ARP) tables
have multiplied. These virtualized servers must also support much higher throughput than in a traditional enterprise environment, driving
the evolution of Ethernet standards to 10 Gigabit Ethernet (10 GbE), 25 GbE, 40 GbE, 50 GbE, and 100 GbE.
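The multiplying effect of virtualization on control-plane state is easy to see with back-of-the-envelope arithmetic. The densities below are illustrative assumptions, not measured figures:

```python
# Hypothetical deployment densities.
racks = 40
servers_per_rack = 40
vms_per_server = 20

# Bare-metal: roughly one MAC/ARP entry per physical host.
physical_hosts = racks * servers_per_rack
print(physical_hosts)   # 1600 endpoints

# Virtualized: every VM adds its own MAC and IP to the tables.
vm_endpoints = physical_hosts * vms_per_server
print(vm_endpoints)     # 32000 endpoints for the same footprint
```

A twenty-fold jump in MAC, IP, and ARP entries for the same physical footprint is what drives the hardware-scale considerations discussed later in this guide.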
Traffic Isolation, Segmentation, and Application Continuity
For multitenant cloud environments, providing traffic isolation between the network tenants is a priority. This isolation must be achieved at
all networking layers. In addition, many environments must support overlapping IP addresses and VLAN numbering for the tenants of
the network. Providing traffic segmentation through enforcement of security and traffic policies for each cloud tenant's application tiers is
a requirement as well.
In order to support application continuity and infrastructure high availability, it is commonly required that the underlying networking
infrastructure be extended within and across one or more data center sites. Extension of Layer 2 domains is a specific requirement in
many cases. Examples of this include virtual machine mobility across the infrastructure for high availability; resource load balancing and
fault tolerance needs; and creation of application-level clustering, which commonly relies on shared broadcast domains for clustering
operations like cluster node discovery and many-to-many communication. The need to extend tenant Layer 2 and Layer 3 domains
while still supporting a common infrastructure Layer 3 environment across the infrastructure and also across sites is creating new
challenges for network architects and administrators.
The remainder of this solution design guide describes data center networking architectures that meet the requirements identified above
for building cloud-optimized networks that address current and future needs for enterprises and service provider clouds. This guide
focuses on the design considerations and choices for building a data center site using Brocade platforms and technologies. Refer to the
Brocade Data Center Fabric Architectures for Network Virtualization Solution Design Guide for a discussion on multitenant
infrastructures and overlay networking that builds on the architectural concepts defined here.
Data Center Networks: Building Blocks
• Brocade VDX and SLX Platforms...............................................................................................................................................................11
• Networking Endpoints..................................................................................................................................................................................... 15
• Single-Tier Topology........................................................................................................................................................................................16
• Leaf-Spine Topology (Two Tiers)................................................................................................................................................................ 18
• Optimized 5-Stage Folded Clos Topology (Three Tiers)..................................................................................................................21
• Edge Services and Border Switches Topology.....................................................................................................................................23
This section discusses the building blocks that are used to build the network and network virtualization architectures for a data center site.
These building blocks consist of the various elements that fit into an overall data center site deployment. The goal is to build fairly
independent elements that can be assembled together, depending on the scale requirements of the networking infrastructure.
Brocade VDX and SLX Platforms
The first building block for the networking infrastructure is the Brocade networking platforms, which include Brocade VDX® switches and Brocade SLX® routers. This section provides a high-level summary of each of these two platform families.
Brocade VDX switches with IP fabrics and VCS fabrics provide automation, resiliency, and scalability. Industry-leading Brocade VDX
switches are the foundation for high-performance connectivity in data center fabric, storage, and IP network environments. Available in
fixed and modular forms, these highly reliable, scalable, and available switches are designed for a wide range of environments, enabling a
low Total Cost of Ownership (TCO) and fast Return on Investment (ROI).
VDX 6740
The Brocade VDX 6740 series of switches provides the advanced feature set that data centers require while delivering the high
performance and low latency that virtualized environments demand. Together with Brocade data center fabrics, these switches transform
data center networks to support the New IP by enabling cloud-based architectures that deliver new levels of scale, agility, and operational
efficiency. These highly automated, software-driven, and programmable data center fabric design solutions support a breadth of network
virtualization options and scale for data center environments ranging from tens to thousands of servers. Moreover, they make it easy for
organizations to architect, automate, and integrate current and future data center technologies while they transition to a cloud model that
addresses their needs, on their timetable and on their terms. The Brocade VDX 6740 Switch offers 48 10-Gigabit-Ethernet (GbE) Small
Form Factor Pluggable Plus (SFP+) ports and 4 40-GbE Quad SFP+ (QSFP+) ports in a 1U form factor. Each 40-GbE QSFP+ port can
be broken out into four independent 10-GbE SFP+ ports, providing an additional 16 10-GbE SFP+ ports, which can be licensed with
Ports on Demand (PoD).
FIGURE 4 VDX 6740
FIGURE 5 VDX 6740T
FIGURE 6 VDX 6740T-1G
VDX 6940
The Brocade VDX 6940-36Q is a fixed 40-Gigabit-Ethernet (GbE)-optimized switch in a 1U form factor. It offers 36 40-GbE QSFP+
ports and can be deployed as a spine or leaf switch. Each 40-GbE port can be broken out into four independent 10-GbE SFP+ ports,
providing a total of 144 10-GbE SFP+ ports. Deployed as a spine, it provides options to connect 40-GbE or 10-GbE uplinks from leaf
switches. By deploying this high-density, compact switch, data center administrators can reduce their TCO through savings on power,
space, and cooling. In a leaf deployment, 10-GbE and 40-GbE ports can be mixed, offering flexible design options to cost-effectively
support demanding data center and service provider environments. As with other Brocade VDX platforms, the Brocade VDX 6940-36Q
offers a Ports on Demand (PoD) licensing model. The Brocade VDX 6940-36Q is available with 24 ports or 36 ports. The 24-port
model offers a lower entry point for organizations that want to start small and grow their networks over time. By installing a software
license, organizations can upgrade their 24-port switch to the maximum 36-port switch. The Brocade VDX 6940-144S Switch is
10 GbE optimized with 40-GbE or 100-GbE uplinks in a 2U form factor. It offers 96 native 1/10-GbE SFP/SFP+ ports and 12
40-GbE QSFP+ ports, or 4 100-GbE QSFP28 ports.
FIGURE 7 VDX 6940-36Q
FIGURE 8 VDX 6940-144S
VDX 8770
The Brocade VDX 8770 switch is designed to scale and support complex environments with dense virtualization and dynamic traffic
patterns—where more automation is required for operational scalability. The 100-GbE-ready Brocade VDX 8770 dramatically increases
the scale that can be achieved in Brocade data center fabrics, with 10-GbE and 40-GbE wire-speed switching, numerous line card
options, and the ability to connect over 8,000 server ports in a single switching domain. Available in 4-slot and 8-slot versions, the
Brocade VDX 8770 is a highly scalable, low-latency modular switch that supports the most demanding data center networks.
FIGURE 9 VDX 8770-4
FIGURE 10 VDX 8770-8
SLX 9850
The Brocade® SLX™ 9850 Router is designed to deliver the cost-effective density, scale, and performance needed to address the
ongoing explosion of network bandwidth, devices, and services today and in the future. This flexible platform powered by Brocade
SLX-OS provides carrier-class advanced features leveraging proven Brocade routing technology that is used in the most demanding
data center, service provider, and enterprise networks today and is delivered on best-in-class forwarding hardware. The extensible
architecture of the Brocade SLX 9850 is designed for investment protection to readily support future needs for greater bandwidth, scale,
and forwarding capabilities.
Additionally, the Brocade SLX 9850 helps address the increasing agility and analytics needs of digital businesses with network
automation and network visibility innovation supported through the Brocade Workflow Composer™ and the Brocade SLX Insight Architecture™.
FIGURE 11 Brocade SLX-9850-4
FIGURE 12 Brocade SLX-9850-8
Networking Endpoints
The next building blocks are the networking endpoints to connect to the networking infrastructure. These endpoints include the compute
servers and storage devices, as well as network service appliances such as firewalls and load balancers.
FIGURE 13 Networking Endpoints and Racks
Figure 13 shows the different types of racks used in a data center infrastructure:
• Infrastructure and management racks—These racks host the management infrastructure, which includes any management
appliances or software used to manage the infrastructure. Examples of these are server virtualization management software like
VMware vCenter or Microsoft SCVMM, orchestration software like OpenStack or VMware vRealize Automation, network
controllers like the Brocade SDN Controller or VMware NSX, and network management and automation tools like Brocade
Network Advisor. Examples of devices hosted in infrastructure racks are physical or virtual IP storage appliances.
• Compute racks—Compute racks host the workloads for the data centers. These workloads can be physical servers, or they can
be virtualized servers when the workload is made up of virtual machines (VMs). The compute endpoints can be single-homed or multihomed to the network.
• Edge racks—Network services like perimeter firewalls, load balancers, and NAT devices connected to the network are
consolidated in edge racks. The role of the edge racks is to host the edge services, which can be physical appliances or virtual
machines.
These definitions of infrastructure/management, compute, and edge racks are used throughout this solution design guide.
Single-Tier Topology
The second building block is a single-tier network topology to connect endpoints to the network. Because of the existence of only one
tier, all endpoints connect to this tier of the network. An example of a single-tier topology is shown in Figure 14. The single-tier switches
are shown as a virtual Link Aggregation Group (vLAG) pair. However, the single-tier switches can also be part of a Multi-Chassis Trunking
(MCT) pair. The Brocade VDX supports vLAG pairs, whereas the Brocade SLX 9850 supports MCT.
The topology in Figure 14 shows the management/infrastructure, compute, and edge racks connected to a pair of switches participating
in multiswitch port channeling. This pair of switches is called a vLAG pair.
FIGURE 14 Single Networking Tier
The single-tier topology scales the least among all the topologies described in this guide, but it provides the best choice for smaller
deployments, as it reduces the Capital Expenditure (CapEx) costs for the network in terms of the size of the infrastructure deployed. It
also reduces the optics and cabling costs for the networking infrastructure.
Design Considerations
The design considerations for deploying a single-tier topology are summarized in this section.
Oversubscription Ratios
It is important for network architects to understand the expected traffic patterns in the network. To this end, the oversubscription ratios at the vLAG pair/MCT should be well understood and planned for.
The north-south oversubscription at the vLAG pair/MCT is described as the ratio of the aggregate bandwidth of all downlinks from the
vLAG pair/MCT that are connected to the endpoints to the aggregate bandwidth of all uplinks that are connected to the data center
core/WAN edge router (described in a later section). The north-south oversubscription dictates the proportion of traffic between the
endpoints versus the traffic entering and exiting the single-tier topology.
It is also important to understand the bandwidth requirements for the inter-rack traffic. This is especially true for all north-south
communication through the services hosted in the edge racks. All such traffic flows through the vLAG pair/MCT to the edge racks, and if
the traffic needs to exit, it flows back to the vLAG/MCT switches. Thus, the ratio of the aggregate bandwidth connecting the compute racks to the aggregate bandwidth connecting the edge racks is an important consideration.
Another consideration is the bandwidth of the link that interconnects the vLAG pair/MCT. With multihomed endpoints and no failures, this link should not be used for data-plane forwarding. However, if there are link failures in the network, this link may be used for data-plane forwarding. The bandwidth requirement for this link depends on the redundancy design for link failures. For example, a design to tolerate up to two 10-GbE link failures requires a 20-GbE interconnection between the Top of Rack/End of Row (ToR/EoR) switches.
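As a rough illustration of these ratios, the following Python sketch computes a north-south oversubscription ratio and sizes the pair interconnect for a hypothetical vLAG pair. The port counts and failure tolerance are assumptions chosen for the example, not platform recommendations.

```python
def oversubscription(downlink_gbps: float, uplink_gbps: float) -> float:
    """North-south oversubscription: aggregate downlink bandwidth
    divided by aggregate uplink bandwidth."""
    return downlink_gbps / uplink_gbps

# Assumed example: each switch has 48 x 10-GbE downlinks to endpoints
# and 4 x 40-GbE uplinks toward the data center core/WAN edge.
downlink_gbps = 48 * 10   # 480 Gbps toward endpoints
uplink_gbps = 4 * 40      # 160 Gbps toward the core/WAN edge
ratio = oversubscription(downlink_gbps, uplink_gbps)  # 3.0, i.e., 3:1

# Interconnect sizing per the example in the text: a design that
# tolerates up to two 10-GbE link failures provisions 2 x 10 GbE
# between the ToR/EoR pair.
tolerated_link_failures = 2
interconnect_gbps = tolerated_link_failures * 10      # 20 Gbps
```

The same arithmetic applies to any downlink/uplink mix; only the per-port speeds and counts change.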
Port Density and Speeds for Uplinks and Downlinks
In a single-tier topology, the uplink and downlink port density of the vLAG pair/MCT determines the number of endpoints that can be
connected to the network, as well as the north-south oversubscription ratios.
Another key consideration for single-tier topologies is the choice of port speeds for the uplink and downlink interfaces. Brocade VDX and
SLX Series platforms support 10-GbE, 40-GbE, and 100-GbE interfaces, which can be used for uplinks and downlinks (25-GbE
interfaces will be supported in the future with the Brocade SLX 9850). The choice of the platform for the vLAG pair/MCT depends on
the interface speed and density requirements.
Scale and Future Growth
A design consideration for single-tier topologies is the need to plan for more capacity in the existing infrastructure and more endpoints in
the future.
Adding more capacity between existing endpoints and vLAG switches can be done by adding new links between them. Any future
expansion in the number of endpoints connected to the single-tier topology should be accounted for during the network design, as this
requires additional ports in the vLAG switches.
Other key considerations are whether to connect the vLAG/MCT pair to external networks through data center core/WAN edge routers
and whether to add a networking tier for higher scale. These designs require additional ports at the ToR/EoR. Multitier designs are
described in a later section of this guide.
Ports on Demand Licensing
Ports on Demand licensing allows you to expand your capacity at your own pace, in that you can invest in a higher port density platform,
yet license only a subset of the available ports—the ports that you are using for current needs. This allows for an extensible and future-proof network architecture without the additional upfront cost for unused ports on the switches.
Leaf-Spine Topology (Two Tiers)
The two-tier leaf-spine topology has become the de facto standard for networking topologies when building medium- to large-scale
data center infrastructures. An example of leaf-spine topology is shown in Figure 15.
FIGURE 15 Leaf-Spine Topology
The leaf-spine topology is adapted from traditional Clos telecommunications networks. This topology is also known as the "3-stage folded Clos": the ingress and egress stages proposed in the original Clos architecture are folded together to form the leaf tier, while the middle stage forms the spine.
The role of the leaf is to provide connectivity to the endpoints in the network. These endpoints include compute servers and storage
devices, as well as other networking devices like routers and switches, load balancers, firewalls, or any other networking endpoint—
physical or virtual. As all endpoints connect only to the leafs, policy enforcement, including security, traffic path selection, Quality of Service (QoS) marking, traffic scheduling, policing, shaping, and traffic redirection, is implemented at the leafs. The Brocade VDX 6740 and 6940 switch families are used as leaf switches.
The role of the spine is to provide interconnectivity between the leafs. Network endpoints do not connect to the spines. As most policy
implementation is performed at the leafs, the major role of the spine is to participate in the control-plane and data-plane operations for
traffic forwarding between the leafs. Brocade VDX or SLX platform families are used as the spine switches depending on the scale and
feature requirements.
As a design principle, the following requirements apply to the leaf-spine topology:
• Each leaf connects to all spines in the network.
• The spines are not interconnected with each other.
• The leafs are not interconnected with each other for data-plane purposes. (The leafs may be interconnected for control-plane
operations such as forming a server-facing vLAG.)
The following are some of the key benefits of a leaf-spine topology:
• Because each leaf is connected to every spine, there are multiple redundant paths available for traffic between any pair of leafs.
Link failures cause other paths in the network to be used.
• Because of the existence of multiple paths, Equal-Cost Multipathing (ECMP) can be leveraged for flows traversing between
pairs of leafs. With ECMP, each leaf has equal-cost routes to reach destinations in other leafs, equal to the number of spines in
the network.
• The leaf-spine topology provides a basis for a scale-out architecture. New leafs can be added to the network without affecting
the provisioned east-west capacity for the existing infrastructure.
• New spines and new uplink ports on the leafs can be provisioned to increase the capacity of the leaf-spine fabric.
• The role of each tier in the network is well defined (as discussed previously), providing modularity in the networking functions
and reducing architectural and deployment complexities.
• The leaf-spine topology provides granular control over subscription ratios for traffic flowing within a rack, between racks, and
outside the leaf-spine topology.
Design Considerations
There are several design considerations for deploying a leaf-spine topology. This section summarizes the key considerations.
Oversubscription Ratios
It is important for network architects to understand the expected traffic patterns in the network. To this end, the oversubscription ratios at each layer should be well understood and planned for.
For a leaf switch, the ports connecting to the endpoints are defined as downlink ports, and the ports connecting to the spines are defined
as uplink ports. The north-south oversubscription ratio at the leafs is the ratio of the aggregate bandwidth for the downlink ports and the
aggregate bandwidth for the uplink ports.
For a spine switch in a leaf-spine topology, the east-west oversubscription ratio is defined per pair of leaf switches connecting to the
spine switch. For a given pair of leaf switches connecting to the spine switch, the east-west oversubscription ratio at the spine is the ratio
of the aggregate bandwidth of the uplinks of the first switch and the aggregate bandwidth of the uplinks of the second switch. In a
majority of deployments, this ratio is 1:1, making the east-west oversubscription ratio at the spine nonblocking. Exceptions to the
nonblocking east-west oversubscriptions should be well understood and depend on the traffic patterns of the endpoints that are
connected to the respective leafs.
The oversubscription ratios described here govern the ratio of the traffic bandwidth between endpoints connected to the same leaf switch
and the traffic bandwidth between endpoints connected to different leaf switches. For example, if the north-south oversubscription ratio is 3:1 at the leafs and 1:1 at the spines, then the available bandwidth between endpoints connected to the same leaf switch is three times the bandwidth available between endpoints connected to different leafs. From a network endpoint perspective, the network
oversubscriptions should be planned so that the endpoints connected to the network have the required bandwidth for communications.
Specifically, endpoints that are expected to use higher bandwidth should be localized to the same leaf switch (or the same leaf switch pair
when endpoints are multihomed).
The ratio of the aggregate bandwidth of all spine downlinks connected to the leafs and the aggregate bandwidth of all downlinks
connected to the border leafs (described in Edge Services and Border Switches Topology on page 23) defines the north-south
oversubscription at the spine. The north-south oversubscription dictates the traffic destined to the services that are connected to the
border leaf switches and that exit the data center site.
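The leaf and spine ratios described above reduce to simple arithmetic. The Python sketch below uses assumed port counts (a 48 x 10-GbE / 4 x 40-GbE leaf) purely for illustration; substitute the figures for your own design.

```python
def north_south_ratio(down_ports: int, down_gbps: int,
                      up_ports: int, up_gbps: int) -> float:
    """North-south oversubscription at a leaf: aggregate downlink
    bandwidth divided by aggregate uplink bandwidth."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# Assumed leaf: 48 x 10-GbE downlinks to endpoints, 4 x 40-GbE uplinks.
leaf_ns = north_south_ratio(48, 10, 4, 40)   # 3.0, i.e., a 3:1 ratio

# East-west at the spine is nonblocking (1:1) when each leaf in a pair
# presents equal aggregate uplink bandwidth to that spine.
leaf_a_uplink_gbps = 4 * 40
leaf_b_uplink_gbps = 4 * 40
ew_nonblocking = leaf_a_uplink_gbps == leaf_b_uplink_gbps   # True
```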
Leaf and Spine Scale
Because the endpoints in the network connect only to the leaf switches, the number of leaf switches in the network depends on the
number of interfaces required to connect all the endpoints. The port count requirement should also account for multihomed endpoints.
Because each leaf switch connects to all spines, the port density on the spine switch determines the maximum number of leaf switches
in the topology. A higher oversubscription ratio at the leafs reduces the leaf scale requirements, as well.
The number of spine switches in the network is governed by a combination of the throughput required between the leaf switches, the
number of redundant/ECMP paths between the leafs, and the port density in the spine switches. Higher throughput in the uplinks from
the leaf switches to the spine switches can be achieved by increasing the number of spine switches or bundling the uplinks together in
port-channel interfaces between the leafs and the spines.
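The scale relationships above can be sketched with a few multiplications. The example assumes a 36-port spine (in the spirit of a VDX 6940-36Q-class switch) and a 48-downlink leaf; all figures are illustrative assumptions.

```python
# Assumed platform figures, for illustration only.
spine_port_count = 36     # leaf-facing ports per spine switch
spine_count = 4           # each leaf has one uplink to every spine
downlinks_per_leaf = 48   # endpoint-facing 10-GbE ports per leaf

# Each spine port hosts one leaf uplink, so spine port density
# caps the number of leafs in the topology.
max_leafs = spine_port_count                      # 36
max_endpoints = max_leafs * downlinks_per_leaf    # 1,728 endpoint ports

# With one uplink per spine, each pair of leafs has one equal-cost
# (ECMP) path through every spine.
ecmp_paths = spine_count                          # 4
```

Reserving spine ports for border leafs or super-spine uplinks reduces `max_leafs` accordingly.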
Port Speeds for Uplinks and Downlinks
Another consideration for leaf-spine topologies is the choice of port speeds for the uplink and downlink interfaces. Brocade VDX
switches support 10-GbE, 40-GbE, and 100-GbE interfaces, which can be used for uplinks and downlinks. The choice of platform for
the leaf and spine depends on the interface speed and density requirements.
Scale and Future Growth
Another design consideration for leaf-spine topologies is the need to plan for more capacity in the existing infrastructure and to plan for
more endpoints in the future.
Adding more capacity between existing leaf and spine switches can be done by adding spine switches or adding new interfaces between
existing leaf and spine switches. In either case, the port density requirements for the leaf and the spine switches should be accounted for
during the network design process.
If new leaf switches need to be added to accommodate new endpoints in the network, ports at the spine switches are required to connect
the new leaf switches.
In addition, you must decide whether to connect the leaf-spine topology to external networks through border leaf switches or whether to
add an additional networking tier for higher scale. Such designs require additional ports at the spine. These designs are described in
another section of this guide.
Ports on Demand Licensing
Remember that Ports on Demand licensing allows you to expand your capacity at your own pace in that you can invest in a higher port
density platform, yet license only the ports on the Brocade VDX switch that you are using for current needs. This allows for an extensible
and future-proof network architecture without additional cost.
Deployment Model
The links between the leaf and spine can be either Layer 2 or Layer 3 links.
If the links between the leaf and spine are Layer 2 links, the deployment is known as a Layer 2 (L2) leaf-spine deployment or a Layer 2
Clos deployment. You can deploy Brocade VDX switches in a Layer 2 deployment by using Brocade VCS® Fabric technology. With
Brocade VCS Fabric technology, the switches in the leaf-spine topology cluster together and form a fabric that provides a single point for
management, a distributed control plane, embedded automation, and multipathing capabilities from Layer 1 to Layer 3. The benefits of
deploying a VCS fabric are described later in this design guide.
If the links between the leaf and spine are Layer 3 links, the deployment is known as a Layer 3 (L3) leaf-spine deployment or a Layer 3
Clos deployment. You can deploy Brocade VDX and SLX platforms in a Layer 3 deployment by using Brocade IP fabric technology.
Brocade VDX switches can be deployed in spine and leaf Places in the Network (PINs), whereas the Brocade SLX 9850 can be
deployed in the spine PIN. Brocade IP fabrics provide a highly scalable, programmable, standards-based, and interoperable networking
infrastructure. The benefits of Brocade IP fabrics are described later in this guide.
Data Center Points of Delivery
Figure 16 shows a building block for a data center site. This building block is called a data center point of delivery (PoD). The data center
PoD consists of the networking infrastructure in a leaf-spine topology along with the endpoints grouped together in management/infrastructure and compute racks. The idea of a PoD is to create a simple, repeatable, and scalable unit for building a data center site at
scale.
FIGURE 16 A Data Center PoD
Optimized 5-Stage Folded Clos Topology (Three Tiers)
Multiple leaf-spine topologies can be aggregated for higher scale in an optimized 5-stage folded Clos topology. This topology adds a
new tier to the network known as the super-spine. The role of the super-spine is to provide connectivity between the spine switches
across multiple data center PoDs. Figure 17 shows four super-spine switches connecting the spine switches across multiple data center
PoDs.
FIGURE 17 An Optimized 5-Stage Folded Clos with Data Center PoDs
The connection between the spines and the super-spines follows the Clos principles:
• Each spine connects to all super-spines in the network.
• Neither the spines nor the super-spines are interconnected with each other.
Similarly, all the benefits of a leaf-spine topology—namely, multiple redundant paths, ECMP, scale-out architecture, and control over
traffic patterns—are realized in the optimized 5-stage folded Clos topology as well.
With an optimized 5-stage Clos topology, a PoD is a simple and replicable unit. Each PoD can be managed independently, including
firmware versions and network configurations. This topology also allows the data center site capacity to scale up by adding new PoDs or
to scale down by removing existing PoDs, without affecting the existing infrastructure, providing elasticity in scale and isolation of failure
domains.
Brocade VDX switches are used for the leaf PIN, whereas depending on scale and features being deployed, either Brocade VDX or SLX
platforms can be deployed at the spine and super-spine PINs.
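To illustrate how PoD count scales with super-spine port density, the following sketch assumes 36-port super-spines and spines with four spines per PoD. Every figure here is an assumption for illustration, not a sizing recommendation.

```python
# Assumed figures, for illustration only.
superspine_port_count = 36   # spine-facing ports per super-spine
spines_per_pod = 4           # each spine uplinks once to every super-spine

# Each PoD consumes one super-spine port per spine, so super-spine
# port density caps the number of PoDs in the site.
max_pods = superspine_port_count // spines_per_pod    # 9

# Inside each PoD: a 36-port spine with 4 super-spine uplinks leaves
# 32 leaf-facing ports, and each leaf offers 48 endpoint ports.
leafs_per_pod = 36 - spines_per_pod                   # 32
endpoints_per_pod = leafs_per_pod * 48                # 1,536
total_endpoints = max_pods * endpoints_per_pod        # 13,824
```

The same arithmetic shows why adding or removing whole PoDs changes site capacity without touching the fabric inside the remaining PoDs.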
This topology also provides a basis for interoperation of different deployment models of Brocade VCS fabrics and IP fabrics. This is
described later in this guide.
Design Considerations
The design considerations of oversubscription ratios, port speeds and density, spine and super-spine scale, planning for future growth,
and Brocade Ports on Demand licensing, which were described for the leaf-spine topology, apply to the optimized 5-stage folded Clos
topology as well. Some key considerations are highlighted below.
Oversubscription Ratios
Because the spine switches now have uplinks connecting to the super-spine switches, the north-south oversubscription ratios for the
spine switches dictate the ratio of aggregate bandwidth of traffic switched east-west within a data center PoD to the aggregate bandwidth
of traffic exiting the data center PoD. This is a key consideration from the perspective of network infrastructure and services placement,
application tiers, and (in the case of service providers) tenant placement. In cases of north-south oversubscription at the spines,
endpoints should be placed to optimize traffic within a data center PoD.
At the super-spine switch, the east-west oversubscription defines the ratio of the bandwidth of the downlink connections for a pair of data
center PoDs. In most cases, this ratio is 1:1.
The ratio of the aggregate bandwidth of all super-spine downlinks connected to the spines and the aggregate bandwidth of all downlinks
connected to the border leafs (described in Edge Services and Border Switches Topology on page 23) defines the north-south
oversubscription at the super-spine. The north-south oversubscription dictates the traffic destined to the services connected to the
border leaf switches and exiting the data center site.
Deployment Model
The Layer 3 gateways for the endpoints connecting to the networking infrastructure can be at the leaf, at the spine, or at the super-spine.
With Brocade IP fabric architecture (described later in this guide), the Layer 3 gateways are present at the leaf layer. So the links between
the leafs, spines, and super-spines are Layer 3.
With Brocade multi-fabric topology using VCS fabric architecture (described later in this guide), there is a choice of the Layer 3 gateway
at the spine layer or at the super-spine layer. In either case, the links between the leafs and spines are Layer 2 links. If the Layer 3 gateway is at the spine layer, the links between the spine and super-spine are Layer 3; otherwise, those links are Layer 2 as well. These Layer 2 links are IEEE 802.1Q VLAN-based, optionally running over Link Aggregation Control Protocol (LACP) aggregated links. These architectures are described later in this guide.
Edge Services and Border Switches Topology
For two-tier and three-tier data center topologies, the role of the border switches in the network is to provide external connectivity to the
data center site. In addition, as all traffic enters and exits the data center through the border leaf switches, they present the ideal location in
the network to connect network services like firewalls, load balancers, and edge VPN routers.
The topology for interconnecting the border switches depends on the number of network services that need to be attached and the
oversubscription ratio at the border switches. Figure 18 shows a simple topology for border switches, where the service endpoints
connect directly to the border switches. Border switches in this simple topology are referred to as "border leaf switches" because the
service endpoints connect to them directly.
FIGURE 18 Edge Services PoD
If more services or higher bandwidth for exiting the data center site is needed, multiple sets of border leaf switches can be deployed. The
border switches and the edge racks together form the edge services PoD.
Brocade VDX switches are used for the border leaf PIN. The border leaf switches can also participate in a vLAG pair. This allows the edge
service appliances and servers to dual-home into the border leaf switches for redundancy and higher throughput.
Design Considerations
The following sections describe the design considerations for border switches.
Oversubscription Ratios
The border leaf switches have uplink connections to spines in the leaf-spine topology and to super-spines in the three-tier topology. They also have uplink connections to the data center core/WAN edge routers, as described in the next section.
The ratio of the aggregate bandwidth of the uplinks connecting to the spines/super-spines and the aggregate bandwidth of the uplinks connecting to the core/edge routers determines the oversubscription ratio for traffic exiting the data center site.
The north-south oversubscription ratios for the services connected to the border leafs are another consideration. Because many of the
services connected to the border leafs may have public interfaces that face external entities like core/edge routers and internal interfaces
that face the internal network, the north-south oversubscription for each of these connections is an important design consideration.
Data Center Core/WAN Edge Handoff
The uplinks to the data center core/WAN edge routers from the border leafs carry the traffic entering and exiting the data center site. The
data center core/WAN edge handoff can be Layer 2 and/or Layer 3 in combination with overlay protocols.
The handoff between the border leafs and the data center core/WAN edge may provide domain isolation for the control- and data-plane
protocols running in the internal network and built using one-tier, two-tier, or three-tier topologies. This helps in providing independent
administrative, fault-isolation, and control-plane domains for isolation, scale, and security between the different domains of a data center
site.
Data Center Core and WAN Edge Routers
The border leaf switches connect to the data center core/WAN edge devices in the network to provide external connectivity to the data
center site. Figure 19 shows an example of the connectivity between the vLAG/MCT pair from a single-tier topology, spine switches
from a two-tier topology, border leafs, a collapsed data center core/WAN edge tier, and external networks for Internet and data center
interconnection.
FIGURE 19 Collapsed Data Center Core and WAN Edge Routers Connecting Internet and DCI Fabric to the Border Leaf in the Data
Center Site
If more services or higher bandwidth for exiting the data center site is needed, multiple sets of border leaf switches can be deployed. The
border switches and the edge racks together form the edge services PoD.
Building Data Center Sites with Brocade VCS Fabric Technology
• Data Center Site with Leaf-Spine Topology
• Scaling the Data Center Site with a Multi-Fabric Topology Using VCS Fabrics
Brocade VCS fabrics are Ethernet fabrics built for modern data center infrastructure needs. With Brocade VCS Fabric technology, up to
48 Brocade VDX switches can participate in a VCS fabric. The data plane of the VCS fabric is based on the Transparent Interconnection
of Lots of Links (TRILL) standard, supported by Layer 2 routing protocols that propagate topology information within the fabrics. This
ensures that there are no loops in the fabrics, and there is no need to run Spanning Tree Protocol (STP). Also, none of the links are
blocked. Brocade VCS Fabric technology provides a compelling solution for deploying a Layer 2 Clos topology.
Brocade VCS Fabric technology provides the following benefits:
• TRILL-based Ethernet fabric—Brocade VCS Fabric technology, which is based on the TRILL standard, uses a Layer 2 routing
protocol within the fabric. This ensures that all links are always utilized within the VCS fabric, and there is no need for loop-
prevention protocols like Spanning Tree that block links and provide inefficient utilization of the networking infrastructure.
• Active-Active vLAG—VCS fabrics allow for active-active port channels between networking endpoints and multiple VDX
switches participating in a VCS fabric, enabling redundancy and increased throughput.
• Single point of management—With all switches in a VCS fabric participating in a logical chassis, the entire topology can be
managed as a single switch. This drastically reduces the configuration, validation, monitoring, and troubleshooting complexity of
the fabric.
• Distributed MAC address learning—With Brocade VCS Fabric technology, the MAC addresses that are learned at the edge
ports of the fabric are distributed to all nodes participating within the fabric. This means that the MAC address learning within
the fabric does not rely on flood-and-learn mechanisms, and flooding related to unknown unicast frames is avoided.
• Multipathing from Layer 1 to Layer 3—Brocade VCS Fabric technology provides efficiency and resiliency through the use of
multipathing from Layer 1 to Layer 3:
– At Layer 1, Brocade Trunking (BTRUNK) enables frame-based load balancing between a pair of switches that are part of
the VCS fabric. This provides near-identical link utilization for links participating in a BTRUNK. This ensures that thick (or
“elephant”) flows do not congest an inter-switch link (ISL).
– Because of the existence of a Layer 2 routing protocol, Layer 2 ECMP is performed between multiple next hops. This is
critical in a Clos topology, where all spines are ECMP next hops for a leaf that sends traffic to an endpoint connected to
another leaf.
– Layer 3 ECMP using Layer 3 routing protocols ensures that traffic is load-balanced between Layer 3 next hops.
• Distributed control plane—Control-plane and data-plane state information is shared across devices in the VCS fabric, which
enables fabric-wide MAC address learning, multiswitch port channels (vLAG), Distributed Spanning Tree (DiST), and gateway
redundancy protocols like Virtual Router Redundancy Protocol–Extended (VRRP-E) and Fabric Virtual Gateway (FVG), among
others. These enable the VCS fabric to function like a single switch to interface with other entities in the infrastructure—thus
appearing as a single control-plane entity to other devices in the network.
• Embedded automation—Brocade VCS Fabric technology provides embedded turnkey automation built into Brocade Network
OS. These automation features enable zero-touch provisioning of new switches into an existing fabric. Brocade VDX switches
also provide multiple management methods, including the command-line interface (CLI), Simple Network Management
Protocol (SNMP), REST, and Network Configuration Protocol (NETCONF) interfaces.
• Multitenancy at Layers 2 and 3—With Brocade VCS Fabric technology, multitenancy features at Layers 2 and 3 enable traffic
isolation and segmentation across the fabric. Brocade VCS Fabric technology allows an extended range of up to 8,000 Layer 2
domains within the fabric, while isolating overlapping IEEE-802.1Q-based tenant networks into separate Layer 2 domains.
Layer 3 multitenancy using Virtual Routing and Forwarding (VRF) instances, multi-VRF routing protocols, and BGP EVPN
enables large-scale Layer 3 multitenancy.
• Ecosystem integration and virtualization features—Brocade VCS Fabric technology integrates with leading industry solutions
and products like OpenStack; VMware products like vSphere, NSX, and vRealize; common infrastructure programming tools like
Python; and Brocade tools like Brocade Network Advisor. Brocade VCS Fabric technology is virtualization-aware and helps
dramatically reduce administrative tasks and enable seamless VM migration with features like Automatic Migration of Port
Profiles (AMPP), which automatically adjusts port-profile information as a VM moves from one server to another.
• Advanced storage features—Brocade VDX switches provide rich storage protocols and features like Fibre Channel over
Ethernet (FCoE), Data Center Bridging (DCB), Monitoring and Alerting Policy Suite (MAPS), and Auto-NAS (Network Attached
Storage), among others, to enable advanced storage networking.
The benefits and features listed simplify Layer 2 Clos deployment by using Brocade VDX switches and Brocade VCS Fabric technology.
The next section describes data center site designs that use Layer 2 Clos built with Brocade VCS Fabric technology.
Data Center Site with Leaf-Spine Topology
Figure 20 shows a data center site built using a leaf-spine topology deployed with Brocade VCS Fabric technology. In this topology, the
spines are connected directly to the data center core/WAN edge devices. The spine PIN in this topology is sometimes referred to as the
"border spine" because it performs both the spine function of switching east-west traffic and the border function of providing an interface
to the data center core/WAN edge.
FIGURE 20 Data Center Site Built with a Leaf-Spine Topology and Brocade VCS Fabric Technology with Border Spine Switches
Figure 21 shows a data center site built using a leaf-spine topology deployed using Brocade VCS Fabric technology. In this topology,
border leaf switches are added along with the edge services PoD for external connectivity and hosting edge services.
FIGURE 21 Data Center Site Built with a Leaf-Spine Topology and Brocade VCS Fabric Technology with Border Leaf Switches
The border leafs in the edge services PoD are built using a separate VCS fabric. The border leafs are connected to the spine switches in
the data center PoD and also to the data center core/WAN edge routers. These links can be either Layer 2 or Layer 3 links, depending on
the requirements of the deployment and the handoff required to the data center core/WAN edge routers. There can be more than one
edge services PoD in the network, depending on the service needs and the bandwidth requirement for connecting to the data center
core/WAN edge routers.
As an alternative to the topology shown in Figure 21, the border leaf switches in the edge services PoD and the data center PoD can be
part of the same VCS fabric, to extend the fabric benefits to the entire data center site. This model is shown in Brocade VCS Fabric on
page 51.
The data center PoDs shown in Figure 20 and Figure 21 are built using Brocade VCS Fabric technology. With Brocade VCS Fabric
technology, we recommend interconnecting the spines with each other (not shown in the figures) to ensure the best traffic path during
failure scenarios.
Scale
Table 1 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX platforms at the leaf and spine Places
in the Network (PINs) in a Brocade VCS fabric.
TABLE 1 Scale Numbers for a Data Center Site with a Leaf-Spine Topology Implemented with Brocade VCS Fabric Technology

| Leaf Switch | Spine Switch | Leaf Oversubscription Ratio | Leaf Count | Spine Count | VCS Fabric Size (Number of Switches) | 10-GbE Port Count |
|---|---|---|---|---|---|---|
| VDX 6740, 6740T, 6740T-1G | VDX 6940-36Q | 3:1 | 36 | 4 | 40 | 1,728 |
| VDX 6740, 6740T, 6740T-1G | VDX 8770-4 | 3:1 | 44 | 4 | 48 | 2,112 |
| VDX 6940-144S | VDX 6940-36Q | 2:1 | 36 | 12 | 48 | 3,456 |
| VDX 6940-144S | VDX 8770-4 | 2:1 | 36 | 12 | 48 | 3,456 |
The following assumptions are made:
• Links between the leafs and the spines are 40 GbE.
• The Brocade VDX 6740 Switch platforms use 4 × 40-GbE uplinks. The Brocade VDX 6740 platform family includes the
Brocade VDX 6740 Switch, the Brocade VDX 6740T Switch, and the Brocade VDX 6740T-1G Switch. (The Brocade VDX
6740T-1G requires a Capacity on Demand license to upgrade to 10GBASE-T ports.)
• The Brocade VDX 6940-144S platforms use 12 × 40-GbE uplinks.
• The Brocade VDX 8770-4 Switch uses 27 × 40-GbE line cards to provide the 40-GbE spine interfaces.
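The Table 1 port counts follow directly from these assumptions; a minimal sketch of the arithmetic (the 48 × 10-GbE access-port count per VDX 6740-family leaf is inferred from the 3:1 ratio and the 4 × 40-GbE uplinks, i.e., 3 × 160 Gbps = 480 Gbps of access bandwidth):

```python
# Reproducing the Table 1 scale arithmetic for a VCS fabric leaf-spine site.

def fabric_scale(leaf_count: int, spine_count: int, access_ports_per_leaf: int):
    """Return (fabric size in switches, total 10-GbE access port count)."""
    return leaf_count + spine_count, leaf_count * access_ports_per_leaf

# First row of Table 1: 36 VDX 6740-family leafs (48 access ports each)
# with 4 VDX 6940-36Q spines.
size, ports = fabric_scale(leaf_count=36, spine_count=4, access_ports_per_leaf=48)
# size == 40 switches, ports == 1,728 x 10-GbE
```

The second row works the same way: 44 leafs and 4 spines give a 48-switch fabric and 44 × 48 = 2,112 ports.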
Scaling the Data Center Site with a Multi-Fabric Topology Using VCS Fabrics
If multiple VCS fabrics are needed at a data center site, the optimized 5-stage Clos topology is used to increase scale by interconnecting
the data center PoDs built using leaf-spine topology with Brocade VCS Fabric technology. This deployment architecture is referred to as
a multi-fabric topology using VCS fabrics.
In a multi-fabric topology using VCS fabrics, individual data center PoDs resemble a leaf-spine topology deployed using Brocade VCS
Fabric technology. Note that we recommend that the spines be interconnected in a data center PoD built using Brocade VCS Fabric
technology.
A new super-spine tier is used to interconnect the spine switches in the data center PoDs. In addition, the border leaf switches are also
connected to the super-spine switches. There are two deployment options available to build a multi-fabric topology using VCS fabrics.
In the first deployment option, the links between the spine and super-spine are Layer 2. In order to achieve a loop-free environment and
avoid loop-prevention protocols between the spine and super-spine tiers, the super-spine devices participate in a VCS fabric as well. The
connections between the spine and the super-spines are bundled together in (dual-sided) vLAGs to create a loop-free topology. The
standard VLAN range of 1 to 4094 can be extended between the DC PoDs using IEEE 802.1Q tags over the dual-sided vLAGs. This is
illustrated in Figure 22.
FIGURE 22 Multi-Fabric Topology with VCS Technology—With L2 Links Between Spine and Super-Spine and DC Core/WAN Edge
Connected to Super-Spine
In this topology, the super-spines connect directly into the data center core/WAN edge, which provides external connectivity to the
network. Alternately, Figure 23 shows the border leafs connecting directly to the data center core/WAN edge. In this topology, if the
Layer 3 boundary is at the super-spine, the links between the super-spine and the border leafs carry Layer 3 traffic as well.
FIGURE 23 Multi-Fabric Topology with VCS Technology—With L2 Links Between Spine and Super-Spine and DC Core/WAN Edge
Connected to Border Leafs
In the second deployment option, the links between the spine and super-spine are Layer 3. In cases where the Layer 3 gateways for the
VLANs in the VCS fabrics are at the spine layer, this model provides routing between the data center PoDs. Because the links are
Layer 3, a loop-free topology is achieved. Here the Brocade SLX 9850 is an option for the super-spine PIN. This is illustrated
in Figure 24.
FIGURE 24 Multi-Fabric Topology with VCS Technology—With L3 Links Between Spine and Super-Spine
If Layer 2 extension is required between the DC PoDs, Virtual Fabric Extension (VF-Extension) technology can be used. With VF-
Extension, the spine switches (VDX 6740 and VDX 6940 only) can be configured as VXLAN Tunnel Endpoints (VTEPs). Subsequently,
the VXLAN protocol can be used to extend the Layer 2 VLANs as well as the virtual fabrics between the VCS fabrics of the DC PoDs.
This is described in more detail in the Brocade Data Center Fabric Architectures for Network Virtualization Solution Design Guide.
Figure 23 and Figure 24 show only one edge services PoD, but there can be multiple such PoDs depending on the edge service
endpoint requirements, the oversubscription for traffic that is exchanged with the data center core/WAN edge, and the related handoff
mechanisms.
Scale
Table 2 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf, spine,
and super-spine PINs for an optimized 5-stage Clos built with Brocade VCS fabrics. The following assumptions are made:
• Links between the leafs and the spines are 40 GbE. Links between the spines and super-spines are also 40 GbE.
• The Brocade VDX 6740 platforms use 4 × 40-GbE uplinks. The Brocade VDX 6740 platform family includes the Brocade
VDX 6740, Brocade VDX 6740T, and Brocade VDX 6740T-1G. (The Brocade VDX 6740T-1G requires a Capacity on
Demand license to upgrade to 10GBASE-T ports.) Four spines are used to connect the uplinks.
• The Brocade 6940-144S platforms use 12 × 40-GbE uplinks. Twelve spines are used to connect the uplinks.
• The north-south oversubscription ratio at the spines is 1:1. In other words, the bandwidth of uplink ports is equal to the
bandwidth of downlink ports at the spines. A larger port scale can be realized with a higher oversubscription ratio at the spines.
However, a 1:1 oversubscription ratio is used here and is also recommended.
• One spine plane is used for the scale calculations. This means that all spine switches in each data center PoD connect to all
super-spine switches in the topology. This topology is consistent with the optimized 5-stage Clos topology.
• Brocade VDX 8770 platforms use 27 × 40-GbE line cards in performance mode (uses 18 × 40-GbE per line card) for
connections between spines and super-spines. The Brocade VDX 8770-4 supports 72 × 40-GbE ports in performance mode.
The Brocade VDX 8770-8 supports 144 × 40-GbE ports in performance mode.
• The link between the spines and the super-spines is assumed to be Layer 3, and 32-way Layer 3 ECMP is utilized for spine to
super-spine connections. This gives a maximum of 32 super-spines for the multi-fabric topology using Brocade VCS Fabric
technology. Refer to the release notes for your platform to check the ECMP support scale.
NOTE
For a larger port scale for the multi-fabric topology using Brocade VCS Fabric technology, multiple spine planes are used.
Architectures with multiple spine planes are described later.
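Under the assumptions above, the Table 2 port counts reduce to a product of PoD count, leafs per PoD, and access ports per leaf. A sketch of that arithmetic (access-port counts per leaf are inferred from the uplink assumptions: 48 for the VDX 6740 family, 96 for the VDX 6940-144S):

```python
# Reproducing the Table 2 arithmetic for a multi-fabric (optimized 5-stage
# Clos) site built with VCS fabrics.

def multi_fabric_ports(pods: int, leafs_per_pod: int,
                       access_ports_per_leaf: int) -> int:
    """Total 10-GbE access ports across all data center PoDs."""
    return pods * leafs_per_pod * access_ports_per_leaf

# First row of Table 2: 9 PoDs, each with 18 VDX 6740-family leafs.
total = multi_fabric_ports(pods=9, leafs_per_pod=18, access_ports_per_leaf=48)
# total == 7,776
```

The same function reproduces the other rows, for example 3 PoDs of 18 VDX 6940-144S leafs (96 access ports each) giving 5,184 ports.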
TABLE 2 Sample Scale Numbers for a Data Center Site Built as a Multi-Fabric Topology Using Brocade VCS Fabric Technology

| Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per DC PoD | Spine Count per DC PoD | Number of Super-Spines | Number of DC PoDs | 10-GbE Port Count |
|---|---|---|---|---|---|---|---|---|
| VDX 6740/6740T/6740T-1G | VDX 6940-36Q | VDX 6940-36Q | 3:1 | 18 | 4 | 18 | 9 | 7,776 |
| VDX 6940-144S | VDX 6940-36Q | VDX 6940-36Q | 2:1 | 18 | 12 | 18 | 3 | 5,184 |
| VDX 6740/6740T/6740T-1G | VDX 8770-4 | VDX 6940-36Q | 3:1 | 32 | 4 | 32 | 9 | 13,824 |
| VDX 6940-144S | VDX 8770-4 | VDX 6940-36Q | 2:1 | 32 | 12 | 32 | 3 | 9,216 |
| VDX 6740/6740T/6740T-1G | VDX 6940-36Q | VDX 8770-4 | 3:1 | 18 | 4 | 18 | 18 | 15,552 |
| VDX 6940-144S | VDX 6940-36Q | VDX 8770-4 | 2:1 | 18 | 12 | 18 | 6 | 10,368 |
| VDX 6740/6740T/6740T-1G | VDX 8770-4 | VDX 8770-4 | 3:1 | 32 | 4 | 32 | 18 | 27,648 |
| VDX 6940-144S | VDX 8770-4 | VDX 8770-4 | 2:1 | 32 | 12 | 32 | 6 | 18,432 |
| VDX 6740/6740T/6740T-1G | VDX 6940-36Q | VDX 8770-8 | 3:1 | 18 | 4 | 18 | 36 | 31,104 |
| VDX 6940-144S | VDX 6940-36Q | VDX 8770-8 | 2:1 | 18 | 12 | 18 | 12 | 20,736 |
| VDX 6740/6740T/6740T-1G | VDX 8770-4 | VDX 8770-8 | 3:1 | 32 | 4 | 32 | 36 | 55,296 |
| VDX 6940-144S | VDX 8770-4 | VDX 8770-8 | 2:1 | 32 | 12 | 32 | 12 | 36,864 |
| VDX 6740/6740T/6740T-1G | VDX 6940-36Q | SLX 9850-4 | 3:1 | 18 | 4 | 18 | 60 | 51,840 |
| VDX 6940-144S | VDX 6940-36Q | SLX 9850-4 | 2:1 | 18 | 12 | 18 | 20 | 34,560 |
| VDX 6740/6740T/6740T-1G | VDX 8770-4 | SLX 9850-4 | 3:1 | 32 | 4 | 32 | 60 | 92,160 |
| VDX 6940-144S | VDX 8770-4 | SLX 9850-4 | 2:1 | 32 | 12 | 32 | 20 | 61,440 |
| VDX 6740/6740T/6740T-1G | VDX 6940-36Q | SLX 9850-8 | 3:1 | 18 | 4 | 18 | 120 | 103,680 |
| VDX 6940-144S | VDX 6940-36Q | SLX 9850-8 | 2:1 | 18 | 12 | 18 | 40 | 69,120 |
| VDX 6740/6740T/6740T-1G | VDX 8770-4 | SLX 9850-8 | 3:1 | 32 | 4 | 32 | 120 | 184,320 |
| VDX 6940-144S | VDX 8770-4 | SLX 9850-8 | 2:1 | 32 | 12 | 32 | 40 | 122,880 |
Building Data Center Sites with Brocade IP Fabric
• Data Center Site with Leaf-Spine Topology
• Scaling the Data Center Site with an Optimized 5-Stage Folded Clos
Brocade IP fabric provides a Layer 3 Clos deployment architecture for data center sites. With Brocade IP fabric, all links in the Clos
topology are Layer 3 links. The Brocade IP fabric includes the networking architecture, the protocols used to build the network, turnkey
automation features used to provision, validate, remediate, troubleshoot, and monitor the networking infrastructure, and the hardware
differentiation with Brocade VDX and SLX platforms. The following sections describe these aspects of building data center sites with
Brocade IP fabrics.
Because the infrastructure is built on IP, advantages like loop-free communication using industry-standard routing protocols, ECMP, very
high solution scale, and standards-based interoperability are leveraged.
The following are some of the key benefits of deploying a data center site with Brocade IP fabrics:
• Highly scalable infrastructure—Because the Clos topology is built using IP protocols, the scale of the infrastructure is very high.
These port and rack scales are documented with descriptions of the Brocade IP fabric deployment topologies.
• Standards-based and interoperable protocols—Brocade IP fabric is built using industry-standard protocols like the Border
Gateway Protocol (BGP) and Open Shortest Path First (OSPF). These protocols are well understood and provide a solid
foundation for a highly scalable solution. In addition, industry-standard overlay control- and data-plane protocols like BGP-
EVPN and Virtual Extensible Local Area Network (VXLAN) are used to extend the Layer 2 domain and extend tenancy domains
by enabling Layer 2 communications and VM mobility.
• Active-active vLAG pairs—By supporting vLAG pairs on leaf switches, dual-homing of the networking endpoints is supported.
This provides higher redundancy. Also, because the links are active-active, vLAG pairs provide higher throughput to the
endpoints. vLAG pairs are supported for all 10-GbE, 40-GbE, and 100-GbE interface speeds, and up to 32 links can
participate in a vLAG.
• Support for unnumbered interfaces—Using Brocade Network OS support for IP unnumbered interfaces available in Brocade
VDX switches, only one IP address per switch is required to configure the routing protocol peering. This significantly reduces the
planning and use of IP addresses, and it simplifies operations.
• Turnkey automation—Brocade automated provisioning dramatically reduces the deployment time of network devices and
network virtualization. Prepackaged, server-based automation scripts provision Brocade IP fabric devices for service with
minimal effort.
• Programmable automation—Brocade server-based automation provides support for common industry automation tools such
as Python, Ansible, and Puppet, as well as YANG model-based REST and NETCONF APIs. The prepackaged PyNOS scripting library
and editable automation scripts execute predefined provisioning tasks, while allowing customization for addressing unique
requirements to meet technical or business objectives when the organization is ready.
• Ecosystem integration—The Brocade IP fabric integrates with leading industry solutions and products like VMware vCenter,
NSX, and vRealize. Cloud orchestration and control are provided through OpenStack and OpenDaylight-based Brocade SDN
Controller support.
Data Center Site with Leaf-Spine Topology
A data center PoD built with IP fabrics supports dual-homing of network endpoints using multiswitch port channel interfaces formed
between a pair of Brocade VDX switches participating in a vLAG. This pair of leaf switches is called a vLAG pair (see Figure 25).
FIGURE 25 An IP Fabric Data Center PoD Built with Leaf-Spine Topology and vLAG Pairs for Dual-Homed Network Endpoint
The Brocade VDX switches in a vLAG pair have a link between them for control-plane purposes to create and manage the multiswitch
port-channel interfaces. When network virtualization with BGP EVPN is used, these links also carry switched traffic in case of downlink
failures or single-homed endpoints. Oversubscription of the inter-switch link (ISL) is an important consideration for these scenarios.
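The ISL consideration can be made concrete with a small calculation. This is an illustrative sketch only; the link counts and speeds are assumptions, not values from this guide.

```python
# Illustrative sketch of the vLAG-pair ISL sizing consideration. When a
# downlink fails (or for single-homed endpoints), traffic is switched across
# the ISL to the peer leaf, so the ISL must absorb that diverted load.

def isl_load_ratio(diverted_gbps: float, isl_gbps: float) -> float:
    """Ratio of traffic diverted over the ISL to ISL capacity (>1 = congested)."""
    return diverted_gbps / isl_gbps

# Example: 4 x 10-GbE of endpoint downlinks failing over to a 2 x 40-GbE ISL.
ratio = isl_load_ratio(4 * 10, 2 * 40)  # 0.5 -> the ISL has headroom
```

Keeping this ratio comfortably below 1 for plausible failure scenarios is the design goal.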
Figure 26 shows a data center site deployed using a leaf-spine topology and an edge services PoD. Here the network endpoints are
illustrated as single-homed, but dual homing is enabled through vLAG pairs where required.
FIGURE 26 Data Center Site Built with Leaf-Spine Topology and an Edge Services PoD
The links between the leafs, spines, and border leafs are all Layer 3 links. The border leafs are connected to the spine switches in the data
center PoD and also to the data center core/WAN edge routers. The uplinks from the border leaf to the data center core/WAN edge can
be either Layer 2 or Layer 3, depending on the requirements of the deployment and the handoff required to the data center core/WAN
edge routers.
There can be more than one edge services PoD in the network, depending on service needs and the bandwidth requirement for
connecting to the data center core/WAN edge routers.
Scale
Table 3 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf and
spine PINs in a Brocade IP fabric with 40-GbE links between leafs and spines.
The following assumptions are made:
• Links between the leafs and the spines are 40 GbE.
• The Brocade VDX 6740 platforms use 4 × 40-GbE uplinks. The Brocade VDX 6740 platform family includes the Brocade
VDX 6740, Brocade VDX 6740T, and Brocade VDX 6740T-1G. (The Brocade VDX 6740T-1G requires a Capacity on
Demand license to upgrade to 10GBASE-T ports.)
• The Brocade VDX 6940-144S platforms use 12 × 40-GbE uplinks.
• The Brocade VDX 8770 platforms use 27 × 40-GbE line cards in performance mode (18 × 40-GbE per line card) for
connections between leafs and spines. The Brocade VDX 8770-4 supports 72 × 40-GbE ports in performance mode. The
Brocade VDX 8770-8 supports 144 × 40-GbE ports in performance mode.
NOTE
For a larger port scale in Brocade IP fabrics in a 3-stage folded Clos, the Brocade VDX 8770-4 or 8770-8 can be used as a
leaf switch.
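These assumptions also determine the Table 3 figures: with a 1:1 build-out, the leaf count equals the spine's 40-GbE port count (each leaf consumes one port on every spine). A sketch, assuming 48 × 10-GbE access ports per VDX 6740-family leaf as inferred above:

```python
# Reproducing the Table 3 arithmetic for an IP fabric leaf-spine site.

def leaf_spine_scale(spine_ports: int, spine_count: int,
                     access_ports_per_leaf: int):
    """Return (leaf count, fabric size in switches, total 10-GbE port count)."""
    leafs = spine_ports  # each leaf takes one port per spine
    return leafs, leafs + spine_count, leafs * access_ports_per_leaf

# VDX 8770-4 spine row: 72 x 40-GbE ports in performance mode, 4 spines.
leafs, size, ports = leaf_spine_scale(spine_ports=72, spine_count=4,
                                      access_ports_per_leaf=48)
# leafs == 72, size == 76, ports == 3,456
```

The VDX 8770-8 row follows identically from its 144 ports: 144 leafs, 148 switches, 6,912 ports.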
TABLE 3 Scale Numbers for a Leaf-Spine Topology with Brocade IP Fabrics in a Data Center Site with 40-GbE Links Between Leafs and Spines

| Leaf Switch | Spine Switch | Leaf Oversubscription Ratio | Leaf Count | Spine Count | IP Fabric Size (Number of Switches) | 10-GbE Port Count |
|---|---|---|---|---|---|---|
| VDX 6740/6740T/6740T-1G | VDX 6940-36Q | 3:1 | 36 | 4 | 40 | 1,728 |
| VDX 6740/6740T/6740T-1G | VDX 8770-4 | 3:1 | 72 | 4 | 76 | 3,456 |
| VDX 6740/6740T/6740T-1G | VDX 8770-8 | 3:1 | 144 | 4 | 148 | 6,912 |
| VDX 6740/6740T/6740T-1G | SLX 9850-4 | 3:1 | 240 | 4 | 244 | 11,520 |
| VDX 6740/6740T/6740T-1G | SLX 9850-8 | 3:1 | 480 | 4 | 484 | 23,040 |
| VDX 6940-144S | VDX 6940-36Q | 2:1 | 36 | 12 | 48 | 3,456 |
| VDX 6940-144S | VDX 8770-4 | 2:1 | 72 | 12 | 84 | 6,912 |
| VDX 6940-144S | VDX 8770-8 | 2:1 | 144 | 12 | 156 | 13,824 |
| VDX 6940-144S | SLX 9850-4 | 2:1 | 240 | 12 | 252 | 23,040 |
| VDX 6940-144S | SLX 9850-8 | 2:1 | 480 | 12 | 492 | 46,080 |
Table 4 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf and
spine PINs in a Brocade IP fabric with 100-GbE links between leafs and spines.
The following assumptions are made:
• Links between the leafs and the spines are 100 GbE.
• The Brocade VDX 6940-144S platforms use 4 × 100-GbE uplinks.
TABLE 4 Scale Numbers for a Leaf-Spine Topology with Brocade IP Fabrics in a Data Center Site with 100-GbE Links Between Leafs and Spines

| Leaf Switch | Spine Switch | Leaf Oversubscription Ratio | Leaf Count | Spine Count | IP Fabric Size (Number of Switches) | 10-GbE Port Count |
|---|---|---|---|---|---|---|
| VDX 6940-144S | VDX 8770-4 | 2.4:1 | 24 | 12 | 36 | 2,304 |
| VDX 6940-144S | VDX 8770-8 | 2.4:1 | 48 | 12 | 60 | 4,608 |
| VDX 6940-144S | SLX 9850-4 | 2.4:1 | 144 | 12 | 156 | 13,824 |
| VDX 6940-144S | SLX 9850-8 | 2.4:1 | 288 | 12 | 300 | 27,648 |
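The 2.4:1 leaf oversubscription in Table 4 follows from the uplink assumption above, together with the assumption that the VDX 6940-144S provides 96 × 10-GbE access ports (as inferred from the 40-GbE-uplink tables):

```python
# Where the 2.4:1 ratio in Table 4 comes from: a sketch assuming 96 x 10-GbE
# access ports per VDX 6940-144S leaf and 4 x 100-GbE uplinks per leaf.

downlink_gbps = 96 * 10   # 960 Gbps of access bandwidth per leaf
uplink_gbps = 4 * 100     # 400 Gbps toward the spines
ratio = downlink_gbps / uplink_gbps  # 2.4 -> a 2.4:1 oversubscription ratio
```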
Scaling the Data Center Site with an Optimized 5-Stage Folded Clos
If a higher scale is required, the optimized 5-stage folded Clos topology is used to interconnect the data center PoDs built using a
Layer 3 leaf-spine topology. An example topology is shown in Figure 27.
FIGURE 27 Data Center Site Built with an Optimized 5-Stage Folded Clos Topology and IP Fabric PoDs
Figure 27 shows only one edge services PoD, but there can be multiple such PoDs, depending on the edge service endpoint
requirements, the amount of oversubscription for traffic exchanged with the data center core/WAN edge, and the related handoff
mechanisms.
Scale
Figure 28 shows a variation of the optimized 5-stage Clos. This variation includes multiple super-spine planes. Each spine in a data
center PoD connects to a separate super-spine plane.
FIGURE 28 Optimized 5-Stage Clos with Multiple Super-Spine Planes
The number of super-spine planes is equal to the number of spines in the data center PoDs. The number of uplink ports on the spine
switch is equal to the number of switches in a super-spine plane. Also, the number of data center PoDs is equal to the port density of the
super-spine switches. Introducing super-spine planes to the optimized 5-stage Clos topology greatly increases the number of data
center PoDs that can be supported. For the purposes of port scale calculations of the Brocade IP fabric in this section, the optimized
5-stage Clos with multiple super-spine plane topology is considered.
Table 5 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf, spine,
and super-spine PINs for an optimized 5-stage Clos with multiple super-spine planes built with Brocade IP fabric with 40-GbE
interfaces between leafs, spines, and super-spines. The following assumptions are made:
• Links between the leafs and the spines are 40 GbE. Links between spines and super-spines are also 40 GbE.
• The Brocade VDX 6740 platforms use 4 × 40-GbE uplinks. The Brocade VDX 6740 platform family includes the Brocade
VDX 6740, the Brocade VDX 6740T, and the Brocade VDX 6740T-1G. (The Brocade VDX 6740T-1G requires a Capacity
on Demand license to upgrade to 10GBASE-T ports.) Four spines are used for connecting the uplinks.
• The Brocade VDX 6940-144S platforms use 12 × 40-GbE uplinks. Twelve spines are used for connecting the uplinks.
• The north-south oversubscription ratio at the spines is 1:1. In other words, the bandwidth of the uplink ports is equal to the
bandwidth of the downlink ports at the spines. The number of physical ports utilized from the spine toward the super-spines is equal
to the number of ECMP paths supported. A 1:1 oversubscription ratio is used here and is also recommended.
• The Brocade VDX 8770 platforms use 27 × 40-GbE line cards in performance mode (18 × 40 GbE per line card) for connections between
spines and super-spines. The Brocade VDX 8770-4 supports 72 × 40-GbE ports in performance mode. The Brocade VDX
8770-8 supports 144 × 40-GbE ports in performance mode.
• 32-way Layer 3 ECMP is utilized for spine-to-super-spine connections. This gives a maximum of 32 super-spines for the
Brocade IP fabric. Refer to the platform release notes to check the ECMP support scale.
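The multi-plane relationships described above (planes = spines per PoD, super-spines per plane = spine uplink count, PoD count = super-spine port density) make the Table 5 port counts a simple product. A sketch, with per-leaf access-port counts inferred from the uplink assumptions:

```python
# Reproducing the Table 5 arithmetic for an optimized 5-stage Clos with
# multiple super-spine planes. The number of data center PoDs equals the
# 40-GbE port density of the super-spine switch.

def multi_plane_ports(superspine_ports: int, leafs_per_pod: int,
                      access_ports_per_leaf: int) -> int:
    """Total 10-GbE ports: PoD count (= super-spine port density) x per-PoD ports."""
    pods = superspine_ports
    return pods * leafs_per_pod * access_ports_per_leaf

# First row of Table 5: VDX 6940-36Q super-spines (36 x 40-GbE ports),
# 18 VDX 6740-family leafs per PoD with 48 access ports each.
total = multi_plane_ports(superspine_ports=36, leafs_per_pod=18,
                          access_ports_per_leaf=48)
# total == 31,104
```

Swapping in a denser super-spine scales the site linearly; for example, a 72-port super-spine doubles the PoD count and the total to 62,208 ports.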
TABLE 5 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes Built with Brocade IP Fabric with 40 GbE Between Leaf, Spine, and Super-Spine

| Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spine Planes | Number of Super-Spines in Each Plane | Number of Data Center PoDs | 10-GbE Port Count |
|---|---|---|---|---|---|---|---|---|---|
| VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | VDX 6940-36Q | 3:1 | 18 | 4 | 4 | 18 | 36 | 31,104 |
| VDX 6940-144S | VDX 6940-36Q | VDX 6940-36Q | 2:1 | 18 | 12 | 12 | 18 | 36 | 62,208 |
| VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | VDX 8770-4 | 3:1 | 18 | 4 | 4 | 18 | 72 | 62,208 |
| VDX 6940-144S | VDX 6940-36Q | VDX 8770-4 | 2:1 | 18 | 12 | 12 | 18 | 72 | 124,416 |
| VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | VDX 8770-8 | 3:1 | 18 | 4 | 4 | 18 | 144 | 124,416 |
| VDX 6940-144S | VDX 6940-36Q | VDX 8770-8 | 2:1 | 18 | 12 | 12 | 18 | 144 | 248,832 |
| VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | SLX 9850-4 | 3:1 | 18 | 4 | 4 | 18 | 240 | 207,360 |
| VDX 6940-144S | VDX 6940-36Q | SLX 9850-4 | 2:1 | 18 | 12 | 12 | 18 | 240 | 414,720 |
| VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | SLX 9850-8 | 3:1 | 18 | 4 | 4 | 18 | 480 | 414,720 |
| VDX 6940-144S | VDX 6940-36Q | SLX 9850-8 | 2:1 | 18 | 12 | 12 | 18 | 480 | 829,440 |
| VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | VDX 8770-4 | 3:1 | 32 | 4 | 4 | 32 | 72 | 110,592 |
| VDX 6940-144S | VDX 8770-4 | VDX 8770-4 | 2:1 | 32 | 12 | 12 | 32 | 72 | 221,184 |
| VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | VDX 8770-8 | 3:1 | 32 | 4 | 4 | 32 | 144 | 221,184 |
| VDX 6940-144S | VDX 8770-4 | VDX 8770-8 | 2:1 | 32 | 12 | 12 | 32 | 144 | 442,368 |
| VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-8 | VDX 8770-8 | 3:1 | 32 | 4 | 4 | 32 | 144 | 221,184 |
| VDX 6940-144S | VDX 8770-8 | VDX 8770-8 | 2:1 | 32 | 12 | 12 | 32 | 144 | 442,368 |
| VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-4 | SLX 9850-4 | 3:1 | 32 | 4 | 4 | 32 | 240 | 368,640 |
| VDX 6940-144S | SLX 9850-4 | SLX 9850-4 | 2:1 | 32 | 12 | 12 | 32 | 240 | 737,280 |
TABLE 5 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes Built with Brocade IP Fabric with 40 GbE Between Leaf, Spine, and Super-Spine (continued)

| Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spine Planes | Number of Super-Spines in Each Plane | Number of Data Center PoDs | 10-GbE Port Count |
|---|---|---|---|---|---|---|---|---|---|
| VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-4 | SLX 9850-8 | 3:1 | 32 | 4 | 4 | 32 | 480 | 737,280 |
| VDX 6940-144S | SLX 9850-4 | SLX 9850-8 | 2:1 | 32 | 12 | 12 | 32 | 480 | 1,474,560 |
Table 6 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf, spine, and super-spine PINs for an optimized 5-stage Clos with multiple super-spine planes, built with Brocade IP fabric and 100-GbE interfaces between the leafs, spines, and super-spines. The following assumptions are made:
• Links between the leafs and the spines are 100 GbE. Links between spines and super-spines are also 100 GbE.
• The Brocade VDX 6940-144S platforms use 4 × 100-GbE uplinks. Four spines are used for connecting the uplinks.
• The north-south oversubscription ratio at the spines is 1:1; in other words, the uplink bandwidth equals the downlink bandwidth at each spine. The number of physical ports used from the spine toward the super-spine equals the number of ECMP paths supported. A 1:1 oversubscription ratio is used here and is also recommended.
• 32-way Layer 3 ECMP is utilized for spine-to-super-spine connections. This gives a maximum of 32 super-spines for the
Brocade IP fabric. Refer to the platform release notes to check the ECMP support scale.
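The 2.4:1 leaf ratio shown in Table 6 follows from the same arithmetic. A short sketch, assuming the VDX 6940-144S's 96 × 10-GbE downlinks together with the 4 × 100-GbE uplinks stated above:

```python
# Why the VDX 6940-144S leaf shows a 2.4:1 ratio with 100-GbE uplinks.
down = 96 * 10   # 960 Gbps toward servers (96 x 10-GbE downlinks, assumed)
up = 4 * 100     # 400 Gbps toward the spines (4 x 100-GbE uplinks)
print(f"{down / up}:1")  # -> 2.4:1, as listed in Table 6

# First row of Table 6: 12 leafs per PoD x 24 PoDs x 96 ports per leaf
print(12 * 24 * 96)  # -> 27648 (27,648 x 10-GbE ports)
```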
TABLE 6 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes Built with Brocade IP Fabric with 100 GbE Between Leaf, Spine, and Super-Spine

| Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spine Planes | Number of Super-Spines in Each Plane | Number of Data Center PoDs | 10-GbE Port Count |
|---|---|---|---|---|---|---|---|---|---|
| VDX 6940-144S | VDX 8770-4 | VDX 8770-4 | 2.4:1 | 12 | 4 | 4 | 12 | 24 | 27,648 |
| VDX 6940-144S | VDX 8770-4 | VDX 8770-8 | 2.4:1 | 12 | 4 | 4 | 12 | 48 | 55,296 |
| VDX 6940-144S | VDX 8770-8 | VDX 8770-8 | 2.4:1 | 24 | 4 | 4 | 24 | 48 | 110,592 |
| VDX 6940-144S | SLX 9850-4 | SLX 9850-4 | 2.4:1 | 32 | 4 | 4 | 32 | 144 | 442,368 |
| VDX 6940-144S | SLX 9850-4 | SLX 9850-8 | 2.4:1 | 32 | 4 | 4 | 32 | 288 | 884,736 |
Even higher scale can be achieved by physically connecting all available ports on the switching platform and using BGP policies to enforce the maximum ECMP scale supported by the platform. This provides a higher port scale for the topology while still ensuring that the maximum ECMP scale is used. Note that this arrangement provides a nonblocking 1:1 north-south oversubscription at the spine in most scenarios.
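On Brocade platforms the ECMP width is typically capped under the BGP address family. A minimal illustrative sketch; the AS number is an assumption, and exact command syntax and supported maximums vary by platform and release, so consult the platform configuration guide:

```
! Illustrative only: cap BGP multipath at the platform's ECMP limit
router bgp
 local-as 65000          ! assumed AS number
 address-family ipv4 unicast
  maximum-paths 32       ! enforce 32-way ECMP toward the super-spines
```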
In Table 7, the scale for a 5-stage folded Clos with 40-GbE interfaces between leaf, spine, and super-spine is shown assuming that BGP
policies are used to enforce the ECMP maximum scale.
TABLE 7 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes, BGP Policy-Enforced ECMP Maximum, and 40 GbE Between Leafs, Spines, and Super-Spines

| Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spine Planes | Number of Super-Spines in Each Plane | Number of Data Center PoDs | 10-GbE Port Count |
|---|---|---|---|---|---|---|---|---|---|
| VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-8 | VDX 8770-8 | 3:1 | 72 | 4 | 4 | 72 | 144 | 497,664 |
| VDX 6940-144S | VDX 8770-8 | VDX 8770-8 | 2:1 | 72 | 4 | 4 | 72 | 144 | 995,328 |
| VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-4 | SLX 9850-4 | 3:1 | 120 | 4 | 4 | 120 | 240 | 1,382,400 |
| VDX 6940-144S | SLX 9850-4 | SLX 9850-4 | 2:1 | 120 | 12 | 12 | 120 | 240 | 2,764,800 |
| VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-4 | SLX 9850-8 | 3:1 | 120 | 4 | 4 | 120 | 480 | 2,764,800 |
| VDX 6940-144S | SLX 9850-4 | SLX 9850-8 | 2:1 | 120 | 12 | 12 | 120 | 480 | 5,529,600 |
| VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-8 | SLX 9850-8 | 3:1 | 240 | 4 | 4 | 240 | 480 | 5,529,600 |
| VDX 6940-144S | SLX 9850-8 | SLX 9850-8 | 2:1 | 240 | 12 | 12 | 240 | 480 | 11,059,200 |
In Table 8, the scale for a 5-stage folded Clos with 100-GbE interfaces between leaf, spine, and super-spine is shown, assuming that BGP policies are used to enforce the ECMP maximum scale.
TABLE 8 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes, BGP Policy-Enforced ECMP Maximum, and 100 GbE Between Leafs, Spines, and Super-Spines

| Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spine Planes | Number of Super-Spines in Each Plane | Number of Data Center PoDs | 10-GbE Port Count |
|---|---|---|---|---|---|---|---|---|---|
| VDX 6940-144S | SLX 9850-4 | SLX 9850-4 | 2.4:1 | 72 | 4 | 4 | 72 | 144 | 995,328 |
| VDX 6940-144S | SLX 9850-4 | SLX 9850-8 | 2.4:1 | 72 | 4 | 4 | 72 | 288 | 1,990,656 |
| VDX 6940-144S | SLX 9850-8 | SLX 9850-8 | 2.4:1 | 144 | 4 | 4 | 144 | 288 | 3,981,312 |