This document provides best practices for configuring VMware vSphere 5.0 virtual switches connected to HP Networking switches. It discusses various link aggregation protocols supported by HP switches like LACP, port trunking, and Intelligent Resilient Framework (IRF). It recommends configuring the virtual switch to load balance traffic across physical network adapters using IP hash for optimal performance. Administrators should also configure HP switches to support the chosen link aggregation and failover methods.
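As a concrete sketch of the recommendation above (names and port numbers are examples, not taken from the paper): on an ESXi 5.x host the teaming policy can be switched to IP hash with esxcli, and because the vSphere 5.0 standard vSwitch does not negotiate LACP, the matching HP switch ports must form a static (non-LACP) trunk.

```shell
# Example only: vSwitch name and switch ports are assumptions for illustration.
# On the ESXi 5.x host, set the vSwitch teaming policy to route based on IP hash:
esxcli network vswitch standard policy failover set \
  --vswitch-name=vSwitch0 --load-balancing=iphash

# On an HP ProVision/ProCurve switch, the corresponding uplinks are grouped
# into a static port trunk (the "trunk" keyword, not "lacp"), since the
# vSphere 5.0 standard vSwitch does not run LACP:
#   trunk 21-22 trk1 trunk
```

Consult the deployment sections later in this paper for the fully worked configurations.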
Technical white paper

Best practices when deploying VMware vSphere 5.0 connected to HP Networking switches
Table of contents

Executive summary
Overview
  VMware ESXi 5
  HP Networking
Link aggregation
  Link Aggregation Control Protocol (LACP)
  Port Trunk/Bridge Aggregations
  Meshing
  Distributed Trunking
  Intelligent Resilient Framework (IRF)
Networking in VMware ESXi 5
  Load balancing
  Failover configuration
  Intelligent Resilient Framework (IRF) and VMware
Deployment best practices
  Dual switch configuration
  Dual switch configuration using Intelligent Resilient Framework (IRF)
Best practices for configuring HP Networking switches for use with VMware ESXi 5 hosts
  Configuring the ESXi vSwitch
  Configuring the HP Networking switches
Appendix A: Glossary of terms
For more information
Executive summary
This white paper describes best practices for configuring VMware vSphere 5.0 virtual switches that are directly connected to HP Networking switches. By leveraging HP Networking with VMware vSphere, HP delivers the foundation for the data center of the future, today, by providing a unified, virtualization-optimized infrastructure.
HP Networking solutions enable the following:
• Breakthrough cost reductions by converging and consolidating server, storage, and network connectivity onto a common fabric with a flatter topology and fewer switches than the competition.
• Predictable performance and low latency for bandwidth-intensive server-to-server communications.
• Improved business agility, faster time to service, and higher resource utilization by dynamically scaling capacity and provisioning connections to meet virtualized application demands by leveraging technologies such as HP Intelligent Resilient Framework (IRF):
  – VMware’s vMotion can complete 40% faster using IRF than standard technologies such as Rapid Spanning Tree Protocol (STP)1
  – IRF virtually doubles network bandwidth compared with STP and Virtual Router Redundancy Protocol (VRRP), with much higher throughput rates regardless of frame size
  – IRF converges around failed links, line cards, and systems vastly faster than existing redundancy mechanisms such as STP
  – STP can take 30 seconds or more to recover after a line card failure; IRF can recover from the same event in as little as 2.5 milliseconds
• Removal of costly, time-consuming, and error-prone change management processes by
  – Utilizing IRF to allow multiple devices to be managed using a single configuration file from a single, easy-to-manage virtual switch operating across network layers
  – Utilizing HP’s Intelligent Management Center (IMC) to manage, monitor, and control access to either a few or thousands of switches in multiple locations from a single pane of glass
• Modular, scalable, industry standards-based platforms and multi-site, multi-vendor management tools to connect and manage thousands of physical and virtual resources
This document also includes explanations of different types of link aggregation protocols that HP leverages in its
networking products to help meet the network resiliency needs of your network and business applications.
Target audience: Network and system administrators configuring the network of their rack-mounted servers using VMware ESXi 5 hosts and HP Networking switches.
This white paper describes testing performed in August 2012.
Overview
VMware ESXi 5
With the release of vSphere 5.0, VMware has once again set new industry standards with its improvements to existing
features such as increased I/O performance and 32-way virtual SMP, and with brand new services such as Profile-Driven
Storage, Storage Distributed Resource Scheduler (SDRS), and vSphere Auto Deploy. New VM virtual hardware provides
additional graphic and USB 3.0 support. Most importantly, with support for up to 32 CPUs and 1TB of RAM per VM, your
virtual machines can now grow four times larger than in any previous release to run even the largest applications.
This is the first release of vSphere to rely entirely on the thinner ESXi 5.0 hypervisor architecture as its host platform.
The ESX hypervisor used in vSphere 4.1 is no longer included in vSphere; however, the vSphere 5.0 management
platform (vCenter Server 5.0) still supports ESX/ESXi 4.x and ESX/ESXi 3.5 hosts.
Use Profile-Driven Storage to identify the right storage resource to use for any given VM based on its service level. With
SDRS you can aggregate storage into pools, greatly simplifying scale management and ensuring optimum VM load
balancing while avoiding storage bottlenecks. With Auto Deploy, the new deployment model for vSphere hosts running
1 Network Test: FlexNetwork Architecture Delivers Higher Speed, Lower Downtime With HP IRF Technology, http://www3.networktest.com/hpirf/hpirf1.pdf
the ESXi hypervisor, you can now install new vSphere hosts in minutes and update them more efficiently than ever
before.
For more information about the new features in VMware vSphere 5.0, go to:
vmware.com/support/vsphere5/doc/vsphere-esx-vcenter-server-50-new-features.html
For complete documentation on VMware vSphere 5.0, go to: http://pubs.vmware.com/vsphere-50/index.jsp
HP Networking
HP is changing the rules of networking with a full portfolio of high-performance, standards-based products, solutions,
and services. These offerings are secure, energy-efficient, cost-saving, and developed specifically to simplify the
complexities of networking for all customers, from the largest enterprise to the smallest emerging business.
By investing in HP’s best-in-class networking technology, customers can build high-performance, secure, and efficiently
managed network environments that are flexible, interoperable, and highly cost-effective. When integrated as part of
the HP Converged Infrastructure and supported by HP and partner service capabilities, HP Networking solutions deliver
application and business services across the extended enterprise to meet critical business demand.
The HP FlexNetwork architecture
The HP FlexNetwork architecture is a key component of the HP Converged Infrastructure. Enterprises can align their
networks with their changing business needs by segmenting their networks into four interrelated modular building
blocks that comprise the HP FlexNetwork architecture: HP FlexFabric, HP FlexCampus, HP FlexBranch, and HP
FlexManagement. FlexFabric converges and secures the data center network with compute and storage. FlexCampus
converges wired and wireless networks to deliver media-enhanced, secure, and identity-based access. FlexBranch
converges network functionality and services to simplify the branch office. FlexManagement converges network
management and orchestration.
The HP FlexNetwork architecture is designed to allow IT professionals to manage these different network segments
through a single pane-of-glass management application, the HP IMC. Because the FlexNetwork architecture is based on
open standards, enterprises are free to choose the best-in-class solution for their businesses. The architecture is also well suited to the industry's shift toward cloud networking. Enterprises deploying private
clouds must implement flatter, simpler data center networks to support the bandwidth-intensive, delay-sensitive
server-to-server virtual machine traffic flows, and workload mobility that are associated with cloud computing. They
must also be able to administer and secure virtual resources, and orchestrate on-demand services. The HP FlexNetwork
architecture helps enterprises to securely deploy and centrally orchestrate video, cloud, and mobility-enhanced
architectures that scale from the data center to the network edge.
Link aggregation
Link aggregation is the general term to describe various methods of combining (aggregating) multiple network
connections in parallel to increase throughput beyond what a single connection could sustain, and to provide
redundancy in case one of the links fails. There are several forms of link aggregation used by HP products that we will
highlight here.
Link Aggregation Control Protocol (LACP)
Link Aggregation Control Protocol (LACP) is an open industry IEEE standard (IEEE 802.3ad) that provides a method to
control the bundling of several physical ports together to form a single logical channel. A LACP-enabled port sends Link Aggregation Control Protocol Data Unit (LACPDU) frames across its link in order to detect a device on the other end of the link that also has LACP enabled. Once the other end receives these frames, it sends its own frames along the same links, enabling the two units to detect the multiple links between themselves and combine them into a single logical link. LACP can be configured in one of two modes: active or passive. In active mode, a port always sends frames along the configured links. In passive mode, it acts as "speak when spoken to" and can therefore be used as a way of controlling accidental loops (as long as the other device is in active mode).
LACP is most commonly used to connect a user device (a server or workstation) with multiple links to a switch in order to form a single logical channel. Forming a single logical channel on the server requires configuring NIC teaming, also known as bonding. For a Microsoft® Windows® host, which does not support NIC teaming natively, HP provides a teaming tool for HP-branded NICs that can be downloaded from the HP Support and Drivers page (hp.com/go/support) for your server and/or as part of the HP ProLiant Support Pack. Linux can configure NIC teaming natively. HP-UX requires the installation of additional software packages included on the HP-UX media.
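As an illustration, LACP can be enabled on a Comware-based switch by creating a bridge aggregation in dynamic mode and adding member ports. This is a sketch only; the interface names and group number are assumptions and will vary with your switch model and configuration:

```
<Comware-Switch> system-view
[Comware-Switch] interface Bridge-Aggregation 20
[Comware-Switch-Bridge-Aggregation20] link-aggregation mode dynamic
[Comware-Switch-Bridge-Aggregation20] quit
[Comware-Switch] interface Ten-GigabitEthernet 1/0/10
[Comware-Switch-Ten-GigabitEthernet1/0/10] port link-aggregation group 20
```

Repeat the last two commands for each member port. Note that VMware ESXi 5 standard vSwitches do not negotiate LACP, so this dynamic mode applies to LACP-capable devices; the static aggregation used with ESXi is covered later in this paper.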
Port Trunk/Bridge Aggregations
Port trunking (ProCurve OS) and Bridge Aggregation (Comware OS) allow you to assign multiple similar links (the number depends on the type and model of the switch) to one logical link that functions as a single, higher-speed link, providing dramatically increased bandwidth to other switches and routing switches. This capability applies to connections between backbone devices as well as to connections in other network areas where traffic bottlenecks exist. A trunk/bridge aggregation configuration is most commonly used when aggregating ports between network switches/routers, as well as to other network devices, such as ESX/ESXi, that do not support the 802.3ad LACP protocol.
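As an illustration on the ProCurve OS side, a static (non-protocol) port trunk can be created with the `trunk` command; the port numbers and trunk name here are assumptions:

```
ProCurve-Switch(config)# trunk 1-2 trk1 trunk
ProCurve-Switch(config)# show trunks
```

The final `trunk` keyword selects a static trunk rather than LACP; substituting `lacp` would create an 802.3ad dynamic aggregation instead.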
Meshing
Switch meshing technology, available as part of the ProCurve OS, allows multiple switches to be redundantly linked
together to form a meshing domain. Switch meshing eliminates network loops by detecting redundant links and
identifying the best path for traffic. When the meshing domain is established, the switches in that domain use the
meshing protocol to gather information about the available paths and to determine the best path between switches. To
select the best path, the meshed switches use the following criteria:
Outbound queue depth, or the current outbound load factor, for any given outbound port in a possible path
Port speed, based on factors such as 10 Mbps, 100 Mbps, 1000 Mbps (or 1 Gbps), 10 Gbps, full-duplex, or half-duplex
Inbound queue depth for any destination switch in a possible path
Increased packet drops, indicating an overloaded port or switch
For more information on Switch Meshing, you can view the white paper at
hp.com/rnd/pdfs/Switch_Meshing_Paper_Tech_Brief.pdf
Distributed Trunking
Distributed Trunking (DT) is a link aggregation technique in which two or more links across two switches are aggregated together to form a trunk. The IEEE 802.3ad standard limits aggregation to links within a single switch/device. To overcome this limitation, HP developed a new proprietary protocol called Distributed Trunking Interconnect Protocol (DTIP) to support link aggregation for links spanning two switches. DT provides node-level Layer 2 resiliency in an L2 network when one of the switches fails. The downstream device (for example, a server or a switch) perceives the aggregated links as coming from a single upstream device. This makes interoperability possible with third-party devices that support IEEE 802.3ad.
Users can configure Distributed Trunking using one of the following:
• Manual trunks (without LACP)
• LACP trunks
Distributed Trunking is available on HP Networking 8200, 6600, 6200, 5400, and 3500 series switches today. All
Distributed Trunking switches that are aggregated together to form a trunk must run the same software version.
For more information on Distributed Trunking, you can view the white paper at
http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA3-4841ENW
Intelligent Resilient Framework (IRF)
IRF technology extends network control over multiple active switches. Management of a group of IRF-enabled switches is consolidated around a single management IP address, which vastly simplifies network configuration and operations. You can combine as many as nine HP 58x0 series switches to create an ultra-resilient virtual switching fabric comprising hundreds or even thousands of 1-GbE or 10-GbE switch ports.
One IRF member operates as the primary system switch, maintaining the control plane and updating forwarding and
routing tables for the other devices. If the primary switch fails, IRF instantly selects a new primary, preventing service
interruption and helping to deliver network, application, and business continuity for business-critical applications.
Within the IRF domain, network control protocols operate as a cohesive whole to streamline processing, improve
performance, and simplify network operation. Routing protocols calculate routes based on the single logical domain
rather than the multiple switches it represents. Moreover, edge or aggregation switches that are dual homed to IRF-
enabled core or data center switches “see” the associated switches as a single entity, thus enabling true active/active
architecture, eliminating the need for slow-converging, active/passive technologies such as the Spanning Tree Protocol (STP). Operators have fewer layers to worry about, as well as fewer devices, interfaces, links, and protocols to configure
and manage.
For more information on IRF, you can view two white papers at http://h17007.www1.hp.com/docs/reports/irf.pdf and
http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c02648772/c02648772.pdf
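As a rough sketch, forming a two-member IRF fabric on a pair of Comware switches involves assigning member IDs and priorities and binding physical ports to IRF ports. The commands below are illustrative only; the port numbers and priority are assumptions, and you should consult your switch's IRF configuration guide for the exact procedure on your model:

```
<Switch-1> system-view
[Switch-1] irf member 1 priority 32
[Switch-1] interface Ten-GigabitEthernet 1/0/25
[Switch-1-Ten-GigabitEthernet1/0/25] shutdown
[Switch-1-Ten-GigabitEthernet1/0/25] quit
[Switch-1] irf-port 1/1
[Switch-1-irf-port1/1] port group interface Ten-GigabitEthernet 1/0/25
[Switch-1-irf-port1/1] quit
[Switch-1] interface Ten-GigabitEthernet 1/0/25
[Switch-1-Ten-GigabitEthernet1/0/25] undo shutdown
[Switch-1-Ten-GigabitEthernet1/0/25] quit
[Switch-1] irf-port-configuration active
```

A similar configuration (with member ID 2 and a lower priority) is applied on the second switch before the IRF links are cabled together.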
Networking in VMware ESXi 5
This guide is not intended to cover performance best practices for guests. For that information, see the white paper published by VMware called "Performance Best Practices for VMware vSphere 5.0", which can be downloaded from vmware.com/pdf/Perf_Best_Practices_vSphere5.0.pdf. Review that document to optimize your VMware guests' networking based on the hardware you are using.
With VMware ESXi 5, it is possible to connect a single virtual or distributed switch to multiple physical Ethernet adapters.
A virtual or distributed switch can share the load of traffic between physical and virtual networks among some or all of
its members and provide passive failover in the event of a hardware failure or a network outage. The rest of this section
describes what policies can be set at the port group level for your virtual and distributed switches.
Note
All physical switch ports in the same team must be in the same Layer 2
broadcast domain.
Load balancing
Load balancing allows you to spread network traffic from virtual machines on a virtual switch across two or more
physical Ethernet adapters, giving higher throughput than a single physical adapter could provide. When you set NIC
teaming policies, you have the following options for load balancing:
Route based on the originating virtual switch port ID
Route based on the originating virtual switch port ID functions by choosing an uplink based on the virtual port where the
traffic entered the virtual switch. This is the default configuration and the one most commonly deployed since this does
not require physical switch configuration in terms of link aggregation.
When using this setting, traffic from a given virtual Ethernet adapter is consistently sent to the same physical adapter
unless there is a failover to another adapter in the NIC team. Replies are received on the same physical adapter as the
physical switch learns the port association.
A given virtual machine cannot use more than one physical Ethernet adapter at any given time unless it has multiple virtual adapters, and this setting provides an even distribution of virtual Ethernet adapters across the physical adapters. A downside to this method is that once a VM's virtual adapter is paired with a busy physical Ethernet adapter, its traffic will never move to another, completely idle Ethernet adapter in the same virtual switch; it will continue to use the overloaded adapter.
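Conceptually, this placement policy is a static mapping from virtual port to uplink that ignores load. The Python sketch below is our own illustration of that behavior, not VMware's actual implementation; the uplink names are assumptions:

```python
def uplink_for_port(vswitch_port_id: int, uplinks: list) -> str:
    """Statically map a virtual switch port ID to one uplink (round-robin)."""
    return uplinks[vswitch_port_id % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
# Each virtual port always lands on the same uplink, no matter how busy
# that physical adapter is or how idle the other one sits.
placement = {port: uplink_for_port(port, uplinks) for port in range(4)}
```

The mapping never consults utilization, which is why a VM can stay pinned to a congested adapter.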
Route based on source MAC hash
Route based on source MAC hash functions exactly like route based on the originating virtual switch port ID, except that it chooses an uplink based on a hash of the source Ethernet MAC address. This method places slightly more load on the host and does not guarantee that virtual machines using multiple virtual adapters on the same virtual or distributed switch will use separate uplinks.
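A simplified illustration of the same idea keyed to the source MAC address (again our own sketch, not VMware's exact hash function): the uplink choice is stable per MAC address, but still blind to load:

```python
def uplink_for_mac(src_mac: str, uplinks: list) -> str:
    """Choose an uplink from a simple hash of the source MAC address."""
    octets = [int(part, 16) for part in src_mac.split(":")]
    return uplinks[sum(octets) % len(uplinks)]

# Two virtual adapters on the same VM may still hash to the same uplink.
```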
Route based on IP hash
Route based on IP hash functions by choosing an uplink based on a hash of the source and destination IP addresses of each packet. (For non-IP packets, whatever is at those offsets is used to compute the hash.) In contrast to route based on the originating virtual switch port ID and route based on source MAC hash, traffic from a VM is not limited to one physical Ethernet adapter on the port group, but can leverage all Ethernet adapters for both inbound and outbound communications as per the 802.3ad link aggregation standard. This allows greater network resources for VMs, since they can now leverage the bandwidth of two or more Ethernet adapters on a host, and the evenness of traffic distribution depends on the number of TCP/IP sessions to unique destinations, not on the number of VMs per Ethernet adapter in the port group.
All adapters in the NIC team must be attached to the same physical switch or an appropriate set of IRF or DT switches
and configured to use 802.3ad link-aggregation standard in static mode, not LACP. All adapters must be active. For ease
of management it is recommended that all port groups within a virtual switch inherit the settings on the virtual switch.
Although this method is the only one that distributes traffic evenly, it does come at a small cost to the VMware ESXi
host. Each packet that exits the virtual switch must be inspected by the VMkernel in order to make routing decisions.
Therefore, this inspection process uses CPU time to calculate which physical NIC it will use.
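The per-flow behavior can be sketched as XOR-ing the source and destination addresses and taking the result modulo the number of uplinks. This Python sketch is a simplification of ESXi's actual computation, for illustration only; the uplink names are assumptions:

```python
import ipaddress

def uplink_for_flow(src_ip: str, dst_ip: str, uplinks: list) -> str:
    """Choose an uplink from a hash of source and destination IP addresses."""
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return uplinks[h % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
# A single VM talking to two different destinations can use both uplinks
# at once, which is why this mode can exceed one adapter's bandwidth.
```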
Failover configuration
When configuring a port group and load balancing, it is also necessary to configure the proper network failover detection
method to use for failover detection when an Ethernet adapter in your port group is lost.
Link Status only
Link Status only relies solely on the link status provided by the network adapter. This detects failures, such as cable
pulls and physical switch power failures, but it cannot detect configuration errors, such as a physical switch port being
blocked by spanning tree, misconfigured VLANs, or upstream link failures on the other side of a physical switch.
Beacon Probing
Beacon Probing addresses many of the issues that were highlighted with the Link Status method. Beacon Probing sends
out and listens for beacon probes, Ethernet broadcast frames sent by physical adapters to detect upstream network
connection failures, on all physical Ethernet adapters in the team. It uses this information, in addition to link status, to
determine link failure. Beacon probing can be useful for detecting failures in the switch connected to the ESXi hosts where the failure does not cause a link-down event for the host. A good example of a detectable failure is the loss of upstream connectivity on the switch to which an ESXi host is connected. With Beacon Probing, this can be detected, and that Ethernet adapter will be deactivated in the port group. Beacon Probing is not supported when using route based on IP hash and requires a minimum of three Ethernet adapters in your virtual switch.
Failback
By default, NIC teaming applies a fail-back policy. That is, if a physical Ethernet adapter that had failed comes back
online, the adapter is returned to active duty immediately, displacing the standby adapter, if configured, that took over
its slot. If the primary physical adapter is experiencing intermittent failures, this setting can lead to frequent changes to
the adapter in use and affect network connectivity to the VMs using that adapter.
You can prevent the automatic fail-back by setting Failback in the vSwitch to No. With this setting, a failed adapter is left
inactive even after recovery until another currently active adapter fails, or the Administrator puts it back into operation.
Using the Failover Order policy setting, it is possible to specify how to distribute the workload across the physical Ethernet adapters on the host. In scenarios where IRF or DT cannot be leveraged, it is best practice to connect a second group of adapters to a separate switch in order to tolerate a switch failure.
Using the Notify Switches option enables your VMware ESXi hosts to communicate with the physical switch in the event
of a failover. If enabled, in the event of an Ethernet adapter failure, a notification is sent out over the network to update
the lookup tables on physical switches. In almost all cases, this is desirable for the lowest latency when a failover
occurs.
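On an ESXi 5 host, the same teaming and failover policies can also be applied from the command line with esxcli. This is a sketch; the vSwitch name is an assumption, and you should verify the option names against your ESXi build:

```shell
# Set IP-hash load balancing, link-status failure detection,
# switch notification on, and automatic failback off for vSwitch0.
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 \
    --load-balancing=iphash \
    --failure-detection=link \
    --notify-switches=true \
    --failback=false
```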
Intelligent Resilient Framework (IRF) and VMware
HP tasked Network Test to assess the performance of Intelligent Resilient Framework (IRF), using VMware as part of the
workload for testing. Network Test and HP engineers constructed a large-scale test bed to compare vMotion
performance using IRF and Rapid Spanning Tree Protocol (RSTP). With both mechanisms, the goal was to measure the
time needed for vMotion migration of 128 virtual machines, each with 8 GB of RAM, running Microsoft SQL Server on
Windows Server 2008.
On multiple large-scale test beds, IRF clearly outperformed existing redundancy mechanisms such as the Spanning Tree
Protocol (STP) and the Virtual Routing Redundancy Protocol (VRRP).
Among the key findings of these tests:
• Using VMware’s vMotion facility, average virtual machine migration time was around 43 seconds on a network running IRF, compared with around 70 seconds with rapid STP.
• IRF virtually doubled network bandwidth compared with STP and VRRP, with much higher throughput rates regardless of frame size.
• IRF converged around failed links, line cards, and systems vastly faster than existing redundancy mechanisms such as STP.
• In the most extreme failover case, STP took 31 seconds to recover after a line card failure; IRF recovered from the same event in 2.5 milliseconds.
• IRF converges around failed network components far faster than HP’s 50-millisecond claim.
This document can be viewed at the following URL: http://www3.networktest.com/hpirf/hpirf1.pdf
Deployment best practices
There are many options when configuring VMware ESXi vSwitches connected to HP switches. Below we highlight two scenarios: one using a non-IRF configuration, and one using an Intelligent Resilient Framework (IRF) or Distributed Trunking configuration where active/active connections are used for the ESXi vSwitch.
Dual switch configuration
In the configuration in Figure 1, of the four connections available, two go to one switch and two go to the other. It is recommended, even if only using two connections, to use two separate network cards in order to add another layer of resiliency in the case of a card failure.
Figure 1. Standard network diagram
Route based on the originating virtual switch port ID is used for load balancing in this scenario, which means that no link aggregation configuration is needed on the two switches. Although all four ports are used, traffic to and from a given virtual Ethernet adapter is consistently sent to the same physical adapter. This means that if there is contention on the associated physical adapter, the VM's virtual Ethernet adapter will continue to send and receive traffic through that adapter, even if another physical adapter has little or no utilization.
You cannot create a link aggregation across two separate switches without technologies such as Intelligent Resilient Framework (IRF) or Distributed Trunking (DT), which are discussed further in scenario two. You also cannot create two link aggregation groups (Figure 2), one going to each switch. VMware does not support multiple trunks on the same vSwitch, and your ESXi host and guests may encounter MAC address flapping, where a system alternates between two MAC addresses, which will cause serious issues on your network.
Figure 2. Example of an incorrect and unsupported configuration
Additionally, it is not recommended to create a scenario such as Figure 3. Although it will work, it is not practical since
your network traffic is not guaranteed to go over the 802.3ad link.
Figure 3. Example of an incorrect and unsupported configuration
Dual switch configuration using Intelligent Resilient Framework (IRF)
The best and recommended configuration for performance and resiliency is to leverage HP’s Intelligent Resilient
Framework (IRF) in its data center switches powered by Comware (Figure 4). This will not only tolerate a switch failure,
but also allow a virtual Ethernet adapter in a VM to leverage the bandwidth from all four physical Ethernet adapters in a
vSwitch. With the use of IRF, you can create a link aggregation across two separate switches since IRF enables the two
switches to act as one.
Figure 4. Example diagram using Intelligent Resilient Framework (IRF)
Best practices for configuring HP Networking switches for
use with VMware ESXi 5 hosts
Configuring the ESXi vSwitch
1. Launch the vSphere Client and connect to your vCenter Server instance or your ESXi host.
2. Select your ESXi 5 host, and go to the Configuration tab. Select Networking under the Hardware section of the
Configuration window (Figure 5).
3. Select Properties on the first vSwitch connected to the HP Network Switch you will be configuring (Figure 5).
Figure 5. VMware vSphere Client vSwitch Configuration Window
4. In the vSwitch Properties window, select the Network Adapters tab. From this window, you can see the current
network adapters in your vSwitch. If you need to add a network adapter, select the Add... button (Figure 6).
Figure 6. VMware vSphere vSwitch Properties Window
5. Select the additional network adapter(s) from the list that you would like to have in the vSwitch (Figure 7). Then
select the Next button.
Figure 7. Add Adapter Wizard
6. Ensure that the network adapter(s) you added are in the Active Adapters section (Figure 8) and then select Next. If any are not in the Active Adapters section, use the Move Up button on the right side to move them into that group.
Figure 8. Add Adapter Wizard – All adapters in Active Adapters
7. Select the Finish button on the Adapter Summary screen.
8. Select the Ports tab, select vSwitch (Figure 9) in the Properties window, and then select the Edit… button.
Figure 9. vSwitch Properties Window
9. Select the NIC Teaming tab in the vSwitch Properties window and set Load Balancing to Route based on IP hash
(Figure 10). Also, ensure that Network Failover Detection is set to Link status only. Then select the OK button.
Figure 10. vSwitch Properties Window – Configuring NIC Teaming Properties
10. If your other configurations are not set up to inherit their properties from the vSwitch configuration, which is the
default, repeat steps 8 and 9 for each configuration. You can easily see if you are not inheriting the properties by
selecting the configuration and looking at the Failover and Load Balancing section. If Load Balancing is not set to
Route based on IP hash and/or Network Failover Detection is not set to Link status only, then you need to edit
the configuration appropriately.
11. Select Close on the vSwitch Properties window. Repeat for each of your vSwitches that you are configuring in Active/Active mode going to an HP Networking switch.
Configuring the HP Networking switches
Once you have completed configuring your vSwitch(es), you need to configure the HP Networking switch that the system connects to. Configuring a switch running the Comware or the ProCurve operating system is highlighted in the following section. The end goal is to configure the port mode to trunk on the HP switch to accomplish static link aggregation with ESX/ESXi. Trunk mode on HP switch ports is the only supported aggregation method compatible with the ESX/ESXi IP hash NIC teaming mode.
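The step-by-step example that follows uses Comware. On a ProCurve OS switch, the equivalent end state is a static trunk containing the ESXi-facing ports, with the required VLANs tagged on the trunk. The sketch below assumes ports 1-2 and VLAN 85:

```
ProCurve-Switch(config)# trunk 1-2 trk1 trunk
ProCurve-Switch(config)# vlan 85
ProCurve-Switch(vlan-85)# tagged trk1
ProCurve-Switch(vlan-85)# exit
```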
Note
Port Numbers will be formatted differently depending on the model and how the
switch is configured. For example, a switch configured to use Intelligent Resilient
Framework (IRF) will also include a chassis number as part of the port number.
Comware
In this example, we will be making a two port link aggregation group in an IRF configuration. You can see that this is a
two switch IRF configuration by observing the port number scheme of chassis/slot/port. A scheme of 1/0/8 means
chassis 1, or switch 1, slot 0 port 8. A scheme of 2/0/8 means chassis 2, or switch 2, slot 0 port 8.
Link aggregation groups can be larger and the number of ports depends on the model of the switch. For example, an HP
12500 series switch can have a 12 port link aggregation group.
1. Log into the network device via the console and enter system-view. The following steps can also be accomplished
using the Web UI available on most switches, but that will not be covered in this guide.
<Comware-Switch>system-view
2. Create the Bridge Aggregation interface to contain the uplinks from your server. In this example we will be creating
the interface of Bridge Aggregation 11. Your numbering may vary depending on the current configuration on the
switch you are using.
[Comware-Switch] interface Bridge-Aggregation 11
3. Give your new interface a description in order to help you identify it easier:
[Comware-Switch-Bridge-Aggregation11] description ESXi-Server-1-vSwitch0
4. Return to the main menu:
[Comware-Switch-Bridge-Aggregation11] quit
5. Enter the first interface that you will be aggregating:
[Comware-Switch] interface Ten-GigabitEthernet 1/0/8
6. Enable the interface. If it is already enabled, it will tell you that the interface is not shutdown.
[Comware-Switch-Ten-GigabitEthernet1/0/8] undo shutdown
7. Put the port in the link aggregation group:
[Comware-Switch-Ten-GigabitEthernet1/0/8] port link-aggregation group 11
8. Return back to the main menu and repeat steps 5-7 for all your interfaces going into the link aggregation group.
9. Return to the Bridge Aggregation for the final configuration:
[Comware-Switch] interface Bridge-Aggregation 11
Note
If you get an error similar to “Error: Failed to configure on interface…” during any
of the following steps, you will need to run the following command on the
interface that has the error and then re-run steps 5-7.
[Comware-Switch] interface Ten-GigabitEthernet 1/0/8
[Comware-Switch-Ten-GigabitEthernet1/0/8] default
This command will restore the default settings. Continue? [Y/N]: Y
If the default command is not available:
[Comware-Switch-Ten-GigabitEthernet1/0/8] port link-type access
10. Change the port type to a trunk:
[Comware-Switch-Bridge-Aggregation11] port link-type trunk
11. Enable the interface:
[Comware-Switch-Bridge-Aggregation11] undo shutdown
12. Set the Port Default VLAN ID (PVID) of the connection. The PVID is the VLAN ID the switch will assign to all untagged
frames (packets) received on each port. Another term for this would be your untagged or native VLAN. By default, it
is set to 1, but you will want to change it if your network is using another VLAN ID for your untagged traffic.
[Comware-Switch-Bridge-Aggregation11] port trunk pvid vlan 1
13. If you configured your vSwitch to pass multiple VLAN tags, you can configure your bridge aggregation link at this
time by running the following command. Repeat for all the VLANs you need to pass through that connection.
[Comware-Switch-Bridge-Aggregation11] port trunk permit vlan 85
Please wait... Done.
Configuring Ten-GigabitEthernet1/0/8... Done.
Configuring Ten-GigabitEthernet2/0/9... Done.
14. If you set your PVID to something other than the default of 1, you will want to remove VLAN 1 from the trunk and
repeat step 13 for your PVID VLAN. If you do not want to pass your PVID VLAN through your virtual switch, omit the
second command below.
[Comware-Switch-Bridge-Aggregation11] undo port trunk permit vlan 1
Please wait... Done.
Configuring Ten-GigabitEthernet1/0/8... Done.
Configuring Ten-GigabitEthernet2/0/9... Done.
[Comware-Switch-Bridge-Aggregation11] port trunk permit vlan 2
Please wait... Done.
Configuring Ten-GigabitEthernet1/0/8... Done.
Configuring Ten-GigabitEthernet2/0/9... Done.
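The PVID and permitted-VLAN behavior described in steps 12-14 can be sketched in a few lines of Python. This is an illustrative model, not switch code; the VLAN numbers are the ones used in this guide's examples.

```python
# Illustrative model of how a trunk port with a PVID classifies
# incoming frames: untagged frames are assigned the PVID, and any
# frame whose VLAN is not permitted on the trunk is dropped.

def classify_frame(vlan_tag, pvid, permitted):
    """Return the VLAN a frame is assigned to, or None if dropped.

    vlan_tag  -- 802.1Q tag on the frame, or None if untagged
    pvid      -- the port's default VLAN ID for untagged frames
    permitted -- set of VLAN IDs allowed through the trunk
    """
    vlan = pvid if vlan_tag is None else vlan_tag
    return vlan if vlan in permitted else None

# Trunk configured as in the example: PVID 2, permitting VLANs 2, 85, 134
pvid, permitted = 2, {2, 85, 134}

print(classify_frame(None, pvid, permitted))  # untagged -> assigned VLAN 2
print(classify_frame(85, pvid, permitted))    # tagged 85 -> forwarded on 85
print(classify_frame(99, pvid, permitted))    # tagged 99 -> dropped (None)
```

This is why step 14 matters: if the PVID VLAN is not also in the permitted list, every untagged frame arriving on the trunk is silently dropped.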
15. Now display your new Bridge Aggregation interface to ensure everything is set up correctly. Verify that your PVID is
correct, and that the VLANs you defined are both passing and permitted. In this example, we are not passing the
untagged traffic (PVID 1); only packets tagged with VLAN ID 85 or 134 are allowed. Also verify that your interfaces
are up and running at the correct speed; two 10Gbps links give you 20Gbps of aggregated bandwidth.
[Comware-Switch] display interface Bridge-Aggregation 11
Bridge-Aggregation11 current state: UP
IP Packet Frame Type: PKTFMT_ETHNT_2, Hardware Address: 000f-e207-f2e0
Description: ESXi-Server-1-vSwitch0
20Gbps-speed mode, full-duplex mode
Link speed type is autonegotiation, link duplex type is autonegotiation
PVID: 1
Port link-type: trunk
VLAN passing : 85, 134
VLAN permitted: 85, 134
Trunk port encapsulation: IEEE 802.1q
… Output truncated…
16. Now check that the trunk formed correctly. If either connection shows something other than “S” (Selected) for its
status, try the troubleshooting steps below; if none of them work, reset all the ports back to default and delete and
recreate the bridge aggregation. Ensure that:
a. You configured the interfaces correctly.
b. You enabled (undo shutdown) the port on the switch.
c. The VLANs being passed/permitted match those of the group.
d. The port is connected to the switch on the interface you specified and is connected and enabled on
the server:
[Comware-Switch] display link-aggregation verbose Bridge-Aggregation 11
Loadsharing Type: Shar -- Loadsharing, NonS -- Non-Loadsharing
Port Status: S -- Selected, U -- Unselected
Flags: A -- LACP_Activity, B -- LACP_Timeout, C -- Aggregation,
D -- Synchronization, E -- Collecting, F -- Distributing,
G -- Defaulted, H -- Expired
Aggregation Interface: Bridge-Aggregation11
Aggregation Mode: Static
Loadsharing Type: Shar
Port Status Oper-Key
--------------------------------------------------------------------------------
XGE1/0/8 S 2
XGE1/0/9 S 2
17. Save the configuration. Repeat these steps for each vSwitch you have configured. Once completed, exit the switch,
and you are done.
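On the ESXi side, the vSwitch must be configured to match the link aggregation group. A minimal sketch using the ESXi 5.x esxcli namespace is below; the names vSwitch0, vmnic1, and "VM Network" are assumptions for this example. The iphash load-balancing policy is only appropriate when all of the vSwitch's active uplinks terminate in a single static link aggregation group, as configured above.

```
# Assumed names: vSwitch0, uplink vmnic1, port group "VM Network"
esxcli network vswitch standard uplink add -v vSwitch0 -u vmnic1
esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash
esxcli network vswitch standard portgroup set -p "VM Network" --vlan-id 85
```

The port group VLAN ID must be one of the VLANs permitted on the bridge aggregation (85 or 134 in this example), or traffic from that port group will be dropped at the switch.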
ProCurve
In this example, we will create a two-port link aggregation group. Unlike the IRF configuration above, where everything
was done from a single switch and no special setup was needed on the IRF connection, a setup leveraging Distributed
Trunking requires additional steps on the other switch in the DT pair. It is also required that you use dt-trunk and not
dt-lacp in the configuration.
Link aggregation groups can be larger; the maximum number of ports depends on the switch model. For example, an HP
8200 series switch supports an eight-port link aggregation group.
1. Log into the network device via the console, enter the CLI interface if at the menu view, and enter configuration
mode. The following steps may also be accomplished using the Web UI available on most switches, but that will not
be covered in this guide.
ProCurve-Switch# configure terminal
2. Create the link aggregation (trunk):
a. Not using Distributed Trunking:
ProCurve-Switch(config)# trunk 9,10 trk11 trunk
b. Using Distributed Trunking:
ProCurve-Switch(config)# trunk 9,10 trk11 dt-trunk
Note
If you are adding more than two ports and they are contiguous (for example,
ports 9-12), you can use a dash to define the range. An example is below.
ProCurve-Switch(config)# trunk 9-12 trk11 trunk
3. Ensure the Trunk was created:
ProCurve-Switch(config)# show trunks
 Load Balancing
  Port | Name            Type      | Group Type
  ---- + --------------- --------- + ----- ------
  9    |                 SFP+SR    | Trk11 Trunk
  10   |                 SFP+SR    | Trk11 Trunk
4. Enable the ports you put in the trunk group:
ProCurve-Switch(config)# interface 9 enable
ProCurve-Switch(config)# interface 10 enable
5. If you configured your vSwitch to pass multiple VLAN tags, you can configure your trunk connection at this point by
running the following commands. Repeat for all the VLANs you need to pass through that connection.
ProCurve-Switch(config)# vlan 85
ProCurve-Switch(vlan-85)# tagged trk11
ProCurve-Switch(vlan-85)# quit
ProCurve-Switch(config)# show vlan 85
Status and Counters - VLAN Information - VLAN 85
VLAN ID : 85
Name : ESXi VLAN 1
Status : Port-based
Voice : No
Jumbo : Yes
  Port Information Mode     Unknown VLAN Status
  ---------------- -------- ------------ ----------
  17               Untagged Learn        Up
  18               Untagged Learn        Up
  19               Untagged Learn        Up
  20               Untagged Learn        Up
  Trk1             Tagged   Learn        Up
  Trk11            Tagged   Learn        Up
6. Set the Port Default VLAN ID (PVID) of the connection. The PVID is the VLAN ID the switch will assign to all untagged
frames (packets) received on each port. In the ProCurve OS, this is the untagged VLAN. By default, the port is
untagged in VLAN 1, but you will want to change this if your network uses another VLAN ID for your untagged
traffic. If you do not want to pass your untagged network through your virtual switch, omit adding the port to the
untagged network in this example (VLAN 2).
ProCurve-Switch(config)# vlan 1
ProCurve-Switch(vlan-1)# no untagged trk11
ProCurve-Switch(vlan-1)# quit
ProCurve-Switch(config)# vlan 2
ProCurve-Switch(vlan-2)# untagged trk11
ProCurve-Switch(vlan-2)# quit
7. Save the configuration:
ProCurve-Switch(vlan-2)# write memory
8. You now need to repeat the same steps on your other switch if you are using Distributed Trunking.
9. Repeat these steps for each vSwitch you have configured. Once completed, exit the switch, and you are done.
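Both switch families above aggregate the links statically; the actual distribution of traffic across the physical NICs comes from the vSwitch's route-based-on-IP-hash policy. The sketch below is a simplified illustration of the idea, not VMware's exact algorithm: the source and destination IPv4 addresses are XORed and the result is taken modulo the number of uplinks, so a given IP pair always maps to the same physical NIC.

```python
# Simplified model (an assumption for illustration, not VMware's exact
# hash) of route-based-on-IP-hash uplink selection on a vSwitch.
import ipaddress

def select_uplink(src_ip, dst_ip, uplinks):
    """Pick an uplink for a flow based on its source/destination IP pair."""
    h = int(ipaddress.IPv4Address(src_ip)) ^ int(ipaddress.IPv4Address(dst_ip))
    return uplinks[h % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]  # the two 10GbE ports in this guide's LAG

# Different destination IPs can land on different NICs, which is why a
# single VM can use more than one uplink's worth of bandwidth with IP hash.
print(select_uplink("10.0.85.10", "10.0.85.20", uplinks))  # vmnic0
print(select_uplink("10.0.85.10", "10.0.85.21", uplinks))  # vmnic1
```

This also shows the limitation of IP hash: a single flow between one IP pair never exceeds the bandwidth of one physical link, no matter how many links are in the aggregation group.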
Appendix A: Glossary of terms
Table 1. Glossary
Term Definition
Bridge Aggregation Comware OS terminology for Port Aggregation.
Distributed Trunking (DT)
A link aggregation technique in which two or more links across two switches are aggregated
together to form a single trunk.
IEEE 802.3ad
An industry standard protocol that allows multiple links/ports to run in parallel, providing a
virtual single link/port. The protocol provides greater bandwidth, load balancing, and
redundancy.
Intelligent Resilient Framework (IRF)
Technology in certain HP Networking switches that allows similar devices to be connected
together to create a single virtualized, distributed device. This virtualization technology
enables the cooperation, unified management, and non-stop maintenance of multiple
devices.
LACP Link Aggregation Control Protocol (see IEEE 802.3ad)
Port Aggregation
Combining ports to provide one or more of the following benefits: greater bandwidth, load
balancing, and redundancy.
Port Bonding
A term typically used in the UNIX®/Linux world that is synonymous to NIC teaming in the
Windows world.
Port Trunking ProCurve OS terminology for Port Aggregation.
Spanning Tree Protocol (STP)
Spanning Tree Protocol (STP) is standardized as IEEE 802.1D and ensures a loop-free
topology for any bridged Ethernet local area network by preventing bridge loops and the
broadcast traffic that results from them.
Trunking
Combining ports to provide one or more of the following benefits: greater bandwidth, load
balancing, and redundancy.
Virtual Switch
Virtual switches allow virtual machines on the same ESX/ESXi host to communicate with
each other using the same protocols that would be used over physical switches, without
the need for additional networking hardware. When configured with one or more physical
adapters on the host, they also provide connectivity to systems outside the ESX/ESXi host.
vSphere Distributed Switch
vSphere's Distributed Switch spans many vSphere hosts and aggregates networking to a
centralized cluster level. The Distributed Switch abstracts configuration of individual virtual
switches and enables centralized provisioning, administration and monitoring through
VMware vCenter Server.