Alexis Dacquay is a CCIE with over 10 years of experience in the networking industry. He has designed, deployed, and supported large corporate LAN/WAN networks, and for the last four years has specialised in high-performance data centre networking to meet the needs of cloud providers, web 2.0, big data, HPC, HFT, and any other enterprise for which a high-performing network is critical to the business. Originally from Bretagne; privately, a huge fan of Polish cuisine.
Topic of Presentation: Handling high-bandwidth-consumption applications in a modern DC design
Language: English
Abstract: The modern data centre requires proper handling of high-bandwidth-consuming applications such as big data or IP storage. To achieve this, next-generation Ethernet speeds of 25, 50 and 100 Gbps are being pursued. We will show _why_ these new Ethernet speeds are vital from a technology standpoint and _how_ networking hardware can cope with these new requirements. We will share Ethernet switch design considerations, with the biggest emphasis on the importance of big buffers and how they accommodate bursty traffic. Throughout the presentation we will additionally elaborate on the evolution of a variety of modern applications, and how properly designed hardware, software, and the data centre itself can handle them.
In this deck from the 2019 OpenFabrics Workshop in Austin, Ariel Almog from Mellanox presents: To HDR and Beyond.
"Recently, deployment of 50 Gbps per lane (HDR) speeds has started, and 100 Gbps per lane (NDR), a future technology, is around the corner. These technologies expose various new physical interfaces for copper and optical cabling, and new transceiver types such as SFP-DD. Supporting these speeds also toughens the task of achieving a low BER (Bit Error Rate) through FEC (Forward Error Correction) algorithms. The high bandwidth might cause the NIC PCIe interface to become a bottleneck, as PCIe Gen3 can handle up to a single 100 Gbps interface over 16 lanes and PCIe Gen4 up to a single 200 Gbps interface over 16 lanes. In addition, since the host might have dual CPU sockets, Socket Direct technology provides direct PCIe access to both CPU sockets, eliminating the need for network traffic to cross the inter-processor bus and allowing better utilization of PCIe, thus optimizing overall system performance."
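The PCIe arithmetic behind that bottleneck claim can be checked with a quick back-of-the-envelope sketch. This considers only lane rate and line encoding, ignoring PCIe protocol overhead such as TLP headers, so the real usable bandwidth is somewhat lower:

```python
# Approximate one-direction PCIe bandwidth from transfer rate, lane
# count, and line encoding (protocol overhead such as TLP headers is
# deliberately ignored in this rough estimate).

def pcie_gbps(gt_per_s, lanes, enc_num, enc_den):
    """Effective bandwidth in Gbps for one direction of a PCIe link."""
    return gt_per_s * lanes * enc_num / enc_den

gen3 = pcie_gbps(8.0, 16, 128, 130)   # Gen3: 8 GT/s, 128b/130b encoding
gen4 = pcie_gbps(16.0, 16, 128, 130)  # Gen4: 16 GT/s, 128b/130b encoding

print(f"PCIe Gen3 x16 ~ {gen3:.0f} Gbps")  # ~126 Gbps
print(f"PCIe Gen4 x16 ~ {gen4:.0f} Gbps")  # ~252 Gbps
```

At roughly 126 Gbps, a Gen3 x16 slot barely fits a single 100 Gbps port and cannot feed a 200 Gbps port; Gen4 doubles the transfer rate and provides the headroom for 200 Gbps.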
Watch the video: https://wp.me/p3RLHQ-k0B
Learn more: http://mellanox.com
and
https://www.openfabrics.org/2019-workshop-agenda-and-abstracts/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Where Does Networking Fit In? To gain the full benefits of cloud computing and virtualization and achieve a business-agile IT infrastructure, organizations need a reliable, high-performance data center networking infrastructure with built-in investment protection. Several technology inflection points are coming together that are fundamentally changing the way networks are architected, deployed, and operated in both the public and the private cloud. From performance to scale, and from virtualization support and automation to simplified orchestration, the requirements are rapidly changing and driving new approaches to building data center networks.
With Extreme Networks, IT can manage more with less. Automated intelligence and analytics for compliance, forensics, and traffic patterns translate into reduced help desk calls. Businesses can predict costs and return on investment, and increase employee productivity by securely onboarding BYOD, increasing both customer and employee satisfaction. A constant risk to the network, and ultimately the hospital, is unapproved applications and rogue devices that may appear on the network and either permit unauthorized access or interfere with other devices. A means to monitor all devices and applications that operate across the network is vital. Just as important are the audit and reporting capabilities necessary to report on who, what, where, when, and how patient data is accessed.
What is SDN? What software-defined networking really means has evolved dramatically and now includes automation and virtualization. Hardware is still a critical component in data center networking equipment, but the influence of switch software shouldn’t be overlooked. When everyone began to get excited about SDN a few years ago, we thought of it as only one thing: the separation of network control from network data packet handling. Traditional networks had already started down this path, with the addition of controller cards to manage line cards in scalable chassis-based switches, and with various data center fabric technologies. SDN took the idea to its logical end, removing the need for the controller and the packet handlers to be on the same backplane or even from the same vendor.
Cost. Reducing costs in the data center and contributing to corporate profitability is an increasingly important trend in today’s economic climate. For example, energy costs for the data center are increasing at 12% a year. Moreover, increased application requirements such as 100% availability necessitate additional hardware and services to manage storage and performance, thus raising total cost of ownership.
ASCI Terascale Simulation Requirements and Deployments - Glenn K. Lockwood
Presented at the Oak Ridge Interconnects Workshop in 1999. A fun historical perspective on where the HPC industry in 1999 thought we would be going as we headed into the petascale era.
An experience is a personal and emotional event we remember. Every experience is established based upon pre-determined expectations we conceive and create in our minds. It’s personal, and therefore, remains a moving and evolving target in every scenario. When our experience concludes and the moment has passed, the outcome remains in our memory. Think about what makes you happy when connecting with your own device, and then think about what makes you really upset when things are hard, complicated, and slow. If the user has a bad experience in any one of these areas (simple, fast, and smart), they are likely to leave, share their negative experience, and potentially never return. Users might forget facts or details about their computing environment, but they find it difficult to forget the feeling behind a bad network experience. When something goes wrong with the network or an application, do you always get the blame?
Next Generation Ethernet
Next Generation Ethernet is a platform that should deliver all of the previous functional requirements under one hood. I have grouped the generations this way because Cisco has a different purpose-built product line for each of the four waves of technology. In contrast, Extreme offers a single platform on which a customer can build their network. Extreme does not require different switches to address different convergence requirements, which would be cost-prohibitive and complicated for most customers. Simply put, to disrupt the Cisco market, Extreme must deliver more with less.
The IEEE is pushing Ethernet to unimaginable speeds, with the 40/100 Gigabit Ethernet standard expected to be ratified in 2010 and Terabit Ethernet on the drawing board for 2015. Here's a timeline showing key milestones in the growth of Ethernet. Standards-compliant products are expected to ship in the second half of next year, not long after the expected June 2010 ratification of the 802.3ba standard.
Complexity - Complex systems are a special type of chaotic system. They display a very interesting type of emergent behavior called, logically enough, complex adaptive behavior. But we are getting ahead of ourselves. There’s a need to back up a bit and describe a fundamental behavior that occurs at the granular level and leads to complex adaptive behavior: self-organization. Complex adaptive behavior is the name given to this forming, falling apart, reforming, falling apart… behavior. Specifically, it is defined as many agents working in parallel to accomplish a goal. It is conflict-ridden, very fluid, and very positive. The hallmark of emergent, complex adaptive behavior is that it brings about a change from the starting point that is different not just in degree but in kind. In biology a good example of this is the emergence of consciousness. Another example is the Manhattan Project and the development of the atomic bomb. Below is a checklist that helps facilitate a qualitative assessment of the level of complexity. It is in everyday language to facilitate use by a broad range of stakeholders and team members. In other words, it stays away from jargon, which can be the kiss of death when requesting information from people.
The Checklist
Not sure how the project will get done; Many stakeholders, teams and sub-teams;
Too many vendors; New vendors;
New client; Team members are geographically dispersed;
End-users are geographically dispersed; Many organizations;
Many cultures (professional, organizational, sociological);
Many languages (professional, organizational, sociological);
High risk;
Lack of quality best characterized by lack of acceptance criteria;
Lack of clear requirements and too many tasks;
Arbitrary budget or end date;
Inadequate resources;
Leading-edge technology;
New, unproven application of existing technology;
High degree of interconnectedness (professional, technological, political, sociological).
Webinar NETGEAR - Smart 10 Gigabit switch solutions & how to eliminate bottle... - Netgear Italia
An introduction to the ProSafe Smart 10 Gigabit switch solutions, a look at network bottlenecks, and basic guidance on how to correctly size the entire network.
Bottleneck - Oversubscription - Line Rate - Wire Speed
If the number of spine switches is merely doubled, the effect of a single switch failure is halved. With 8 spine switches, a single switch failure causes only a 12.5% reduction in available bandwidth. So, in modern data centers, people build networks with anywhere from 4 to 32 spine switches. With a leaf-spine network, every server on the network is exactly the same distance away from all other servers: three port hops, to be precise. The benefit of this architecture is that you can just add more spines and leaves as you expand the cluster, and you don't have to do any recabling. You will also get more predictable latency between the nodes.
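The bandwidth-loss arithmetic scales simply with spine count. A small sketch, assuming traffic is spread evenly across all spines by ECMP:

```python
# Fraction of fabric bandwidth lost when a single spine switch fails,
# assuming ECMP balances traffic evenly across all spines.

def loss_on_one_failure(num_spines):
    return 1.0 / num_spines

for spines in (4, 8, 16, 32):
    print(f"{spines:>2} spines: one failure removes "
          f"{loss_on_one_failure(spines):.1%} of fabric bandwidth")
```

With 4 spines a single failure removes 25% of the capacity; with 8 spines, one eighth, i.e. about 12.5%; with 32 spines, only about 3%.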
As a trend, disaggregation seems to be most useful for very large companies like Facebook and Google, or for cloud providers. The technology does not necessarily have significant implications for small or medium-sized businesses. Historically, however, technology has a way of trickling down from the pioneering phase, when it exists only within large companies with tremendous resources, to becoming more standardized across the board.
IBM 40Gb Ethernet - A Competitive Alternative to InfiniBand - Angel Villar Garea
SOURCE URL: http://www.chelsio.com/wp-content/uploads/2013/11/40Gb-Ethernet-A-Competitive-Alternative-to-InfiniBand.pdf
IBM’s RackSwitch G8316 40Gb Ethernet switch, together with Chelsio Communications’ Unified Wire line of 40Gb Ethernet adapters, is an attractive plug-and-play alternative to InfiniBand FDR that provides equivalent application performance levels and closes the gap that has so far separated the raw capabilities of these two fabrics.
Business Model Concepts for Dynamically Provisioned Optical Networks - Tal Lavian, Ph.D.
Business Continuity/Disaster Recovery:
Remote file storage/back-up
Recovery after equipment or path failure
Alternate site operations after disaster
Storage and Data on Demand:
Rapid expansion of NAS capacity
Archival storage and retrievals
Logistical networking – pre-fetch and cache
Financial Community and Transaction GRIDs:
Distributed computation and storage
Shared very high bandwidth network
Pay-for-use utility computing
An alternative to the core/aggregation/access-layer network topology has emerged, known as leaf-spine. In a leaf-spine architecture, a series of leaf switches forms the access layer. These switches are fully meshed to a series of spine switches; this design is also known as a distributed core. You can think of the spine switches as the core, but instead of being a large, chassis-based switching platform, the spine is composed of many high-throughput Layer 3 switches with high port density. The mesh ensures that access-layer switches are no more than one hop away from one another, minimizing latency and the likelihood of bottlenecks between access-layer switches. When networking vendors speak of an Ethernet fabric, this is generally the sort of topology they have in mind.
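The full-mesh wiring rule is easy to quantify. The sketch below uses hypothetical fabric sizes (16 leaves, 100 Gbps uplinks), chosen only to illustrate how link count and per-leaf uplink capacity scale as spines are added:

```python
# Leaf-spine full mesh: every leaf has exactly one uplink to every spine.
# The 16-leaf / 100G figures are illustrative assumptions, not a spec.

def total_uplinks(leaves, spines):
    """Number of leaf-to-spine links in a full mesh."""
    return leaves * spines

def leaf_uplink_capacity_gbps(spines, link_speed_gbps=100):
    """Aggregate uplink capacity of a single leaf switch."""
    return spines * link_speed_gbps

for spines in (4, 8):
    print(f"{spines} spines: {total_uplinks(16, spines)} fabric links, "
          f"{leaf_uplink_capacity_gbps(spines)} Gbps of uplink per leaf")
```

Each added spine contributes one new uplink per leaf, growing fabric capacity without touching the existing leaf-to-spine cabling.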
Haven’t we spent the last few decades disaggregating datacenter architecture? And if so, what does disaggregation mean now? Is it something different? Strictly speaking, to “disaggregate” means to divide a whole into its component parts.
We demonstrated how, at Criteo, we introduced on our Mesos clusters:
* network isolation between our containers
* a network bandwidth custom resource patching all our frameworks (marathon and aurora).
This talk was presented at MesosCon18 in SF.
Next Generation Optical Networking: Software-Defined Optical Networking - ADVA
Check out Stephan Rettenberger’s presentation from the Next Generation Optical Networking Conference in Monaco. It's all about Software Defined Optical Networking.
High-performance 32G Fibre Channel Module on MDS 9700 Directors - Tony Antony
To better serve new application requirements, Cisco is introducing a new high-performance, analytics-ready 32G Fibre Channel module for MDS 9700 Directors and a new 32G Host Bus Adapter for UCS C-Series. End-to-end 32G FC support across Cisco DC platforms sets a new standard for storage networking, providing customers with choice. Along with this announcement, Cisco is also announcing NVMe over Fabrics support on the MDS 9000 Series, enabling customers to take advantage of the performance and low-latency benefits offered by the new technology to scale efficiently in post-flash environments.
Introducing the Future of Data Center Interconnect Networks - ADVA
Our ADVA FSP 3000 CloudConnect™ is the future of Data Center Interconnect (DCI) networks. It’s a highly scalable, energy-efficient and truly open platform. With our DCI technology, there are no more limits, no more restrictions. A new era of possibilities has arrived.
PLNOG14: Convergence, Performance, Speed in the Data Center - Kazimierz Jantas - PROIDEA
Kazimierz Jantas - Zycko
Language: Polish
Contemporary trends in information technology pose new challenges. The unification of infrastructure and “data services” in IT, together with consolidation, virtualization, and the growth of cloud computing services, requires new ideas and innovative solutions that make it possible to achieve the intended goals and keep modern ICT environments functioning correctly. Mellanox
Register today for the next edition of PLNOG: krakow.plnog.pl
Slawomir Janukowicz, Juniper Networks
Juniper Day, Praha, 13.5.2015
If SlideShare does not display the presentation correctly, you can download it in .ppsx or .pdf format (by clicking the button in the bottom bar of the slides).
Contact information for network security system consulting:
Lac Viet Computing Corporation (Công ty Cổ phần Tin học Lạc Việt)
Hotline: (+84.8) 38.444.929
Email: info@lacviet.com.vn
Website: http://www.lacviet.vn/
Check out this presentation on SAN and FICON long-distance connectivity from ADVA Optical Networking's Uli Schlegel and David Lytle of Brocade. This dynamic duo presented at SHARE 2014 in Pittsburgh.
Designing LoRaWAN for dense IoT deployments webinar - Actility
As more and more IoT devices are being added to the network in increasingly massive deployments, it is important to design IoT networks from the beginning to meet the scalability requirements of the future.
In this webinar, Actility’s Olivier Hersent and Rohit Gupta welcome special guest Bill Versteeg of JumpStartIoT.com to reveal various solutions, based on learnings from Actility’s deployments, that can be used to design LoRaWANs for scalability. They will also explore how densification leads to lower power consumption by end devices, resulting in a dramatic reduction in TCO for the end customer. Last but not least, you will discover how operators, whether mobile or fixed, can leverage their assets to deploy low-cost LoRaWAN picocells. Discover:
Why adaptive data rate is key to LoRaWAN scaling
How combining macro and picocells delivers coverage AND capacity
The dramatic impact of network densification on capacity and device TCO
Why micro-cellular networks are the future of LoRaWAN
How to deploy coverage for a real-world water metering application
Overview of the upcoming 802.11ac standard and what to expect from wave 1 and wave 2 products.
Customer expectations vs. the features that will actually be available in "wave 1" and "wave 2" products, to help avoid unnecessary frustration...
This 7-second Brain Wave Ritual Attracts Money To You.!nirahealhty
Discover the power of a simple 7-second brain wave ritual that can attract wealth and abundance into your life. By tapping into specific brain frequencies, this technique helps you manifest financial success effortlessly. Ready to transform your financial future? Try this powerful ritual and start attracting money today!
Multi-cluster Kubernetes Networking- Patterns, Projects and GuidelinesSanjeev Rampal
Talk presented at Kubernetes Community Day, New York, May 2024.
Technical summary of Multi-Cluster Kubernetes Networking architectures with focus on 4 key topics.
1) Key patterns for Multi-cluster architectures
2) Architectural comparison of several OSS/ CNCF projects to address these patterns
3) Evolution trends for the APIs of these projects
4) Some design recommendations & guidelines for adopting/ deploying these solutions.
# Internet Security: Safeguarding Your Digital World
In the contemporary digital age, the internet is a cornerstone of our daily lives. It connects us to vast amounts of information, provides platforms for communication, enables commerce, and offers endless entertainment. However, with these conveniences come significant security challenges. Internet security is essential to protect our digital identities, sensitive data, and overall online experience. This comprehensive guide explores the multifaceted world of internet security, providing insights into its importance, common threats, and effective strategies to safeguard your digital world.
## Understanding Internet Security
Internet security encompasses the measures and protocols used to protect information, devices, and networks from unauthorized access, attacks, and damage. It involves a wide range of practices designed to safeguard data confidentiality, integrity, and availability. Effective internet security is crucial for individuals, businesses, and governments alike, as cyber threats continue to evolve in complexity and scale.
### Key Components of Internet Security
1. **Confidentiality**: Ensuring that information is accessible only to those authorized to access it.
2. **Integrity**: Protecting information from being altered or tampered with by unauthorized parties.
3. **Availability**: Ensuring that authorized users have reliable access to information and resources when needed.
## Common Internet Security Threats
Cyber threats are numerous and constantly evolving. Understanding these threats is the first step in protecting against them. Some of the most common internet security threats include:
### Malware
Malware, or malicious software, is designed to harm, exploit, or otherwise compromise a device, network, or service. Common types of malware include:
- **Viruses**: Programs that attach themselves to legitimate software and replicate, spreading to other programs and files.
- **Worms**: Standalone malware that replicates itself to spread to other computers.
- **Trojan Horses**: Malicious software disguised as legitimate software.
- **Ransomware**: Malware that encrypts a user's files and demands a ransom for the decryption key.
- **Spyware**: Software that secretly monitors and collects user information.
### Phishing
Phishing is a social engineering attack that aims to steal sensitive information such as usernames, passwords, and credit card details. Attackers often masquerade as trusted entities in email or other communication channels, tricking victims into providing their information.
### Man-in-the-Middle (MitM) Attacks
MitM attacks occur when an attacker intercepts and potentially alters communication between two parties without their knowledge. This can lead to the unauthorized acquisition of sensitive information.
### Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) Attacks
Bridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptxBrad Spiegel Macon GA
Brad Spiegel Macon GA’s journey exemplifies the profound impact that one individual can have on their community. Through his unwavering dedication to digital inclusion, he’s not only bridging the gap in Macon but also setting an example for others to follow.
1.Wireless Communication System_Wireless communication is a broad term that i...JeyaPerumal1
Wireless communication involves the transmission of information over a distance without the help of wires, cables or any other forms of electrical conductors.
Wireless communication is a broad term that incorporates all procedures and forms of connecting and communicating between two or more devices using a wireless signal through wireless communication technologies and devices.
Features of Wireless Communication
The evolution of wireless technology has brought many advancements with its effective features.
The transmitted distance can be anywhere between a few meters (for example, a television's remote control) and thousands of kilometers (for example, radio communication).
Wireless communication can be used for cellular telephony, wireless access to the internet, wireless home networking, and so on.
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024APNIC
Ellisha Heppner, Grant Management Lead, presented an update on APNIC Foundation to the PNG DNS Forum held from 6 to 10 May, 2024 in Port Moresby, Papua New Guinea.
3. Drivers for bandwidth increase
• Application clustering
• High-density non-blocking scale
• ECMP to provide scale and fault tolerance
• IP storage / Big Data and Hadoop
• 2-tier active/active with low oversubscription ratio
• Dual-homed or single-homed servers
• Distributed traffic, mesh, anything-anywhere
• Fan-in, fan-out
• Virtualized cloud – scale
• VXLAN with Equal-Cost Multipathing
4. Storage – Arista European customer case
[Diagram: 6PB of GPFS storage connected over 8 x 40G Ethernet to a high-density 10/40/100G Ethernet fabric serving 1000+ compute nodes, workstations, replication, and users]
• 39.5 Gb/s utilization per 40G Ethernet link, on all 8 simultaneously (= 316 Gb/s aggregate)
• 30 GB/s GPFS aggregate throughput, with some disk drawers still unpopulated
• Low latency, large buffers: highest performance with no tuning on the network
5. Non-stressful traffic
[Diagram: packet segments from Eth1–Eth3 queued toward Eth4 over time, staying below the buffer limit]
[Chart: utilisation (%) vs. time (ms); series: current throughput, average throughput, buffer usage]
6. Buffering visibility with LANZ (trigger-based)
• Offers visibility of μbursts
• Shows the impact of congestion on latency and drops
• Trigger-based: guaranteed visibility (vs. polling)
• Configurable high/low thresholds

Arista 7150S#show queue-monitor length drops
Report generated at 2013-01-16 20:48:09
Time               Interface  TX Drops
-----------------------------------------------------------------
0:02:32.18999 ago  Et46       32755054
0:02:35.29710 ago  Et46       53552534
0:02:40.29720 ago  Et46       53552633

[Diagram: packets buffering on the Eth8 queue due to a temporary μburst from Eth1 and Eth2; the EOS LANZ agent raises a congestion event when the high threshold is crossed and clears it below the low threshold]
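The CLI report above is line-oriented, so it lends itself to simple scripted post-processing. As an illustrative sketch (the field layout is assumed from the sample output above, not from any official LANZ export format), the drop records can be parsed like this:

```python
import re

# Assumed layout, taken from the sample above: "<age> ago <interface> <tx-drops>"
LINE_RE = re.compile(r"^(?P<age>[\d:.]+)\s+ago\s+(?P<intf>\S+)\s+(?P<drops>\d+)$")

def parse_lanz_drops(text):
    """Return a list of (age, interface, tx_drops) tuples from the CLI output."""
    records = []
    for line in text.splitlines():
        m = LINE_RE.match(line.strip())
        if m:
            records.append((m.group("age"), m.group("intf"), int(m.group("drops"))))
    return records

sample = """
0:02:32.18999 ago Et46 32755054
0:02:35.29710 ago Et46 53552534
"""
print(parse_lanz_drops(sample))
```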
7. What causes congestion?
Buffer starvation → TCP collapse
• Oversubscribed networks with bursts > available bandwidth
• Multiple nodes trying to read/write to one node (e.g. storage)
• Lack of buffers means drops, which result in lower goodput

[Diagram: TCP incast – storage servers 1–4 each send a data block of the SRU (Server Request Unit) through one switch toward a single client]
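The incast pattern in the diagram can be captured with a deliberately simple back-of-envelope model: if every server's SRU reply arrives at the client-facing egress queue at effectively the same instant, anything beyond the buffer is lost. The server count, SRU size, and buffer sizes below are illustrative assumptions, not from the deck:

```python
def incast_drop_bytes(n_servers, sru_bytes, buffer_bytes):
    """Worst-case incast: all SRU replies hit the egress queue at once;
    whatever exceeds the available buffer is dropped. Toy model only."""
    return max(0, n_servers * sru_bytes - buffer_bytes)

# 64 storage servers, each replying with a 256 KB Server Request Unit:
shallow = incast_drop_bytes(64, 256 * 1024, 4 * 2**20)    # 4 MB buffer
deep    = incast_drop_bytes(64, 256 * 1024, 256 * 2**20)  # 256 MB buffer
print(shallow, deep)  # the shallow buffer drops ~12 MB, the deep one nothing
```

Each dropped SRU fragment triggers a retransmission and a congestion-window backoff, which is how buffer starvation cascades into the TCP collapse named above.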
8. Bursty traffic on a shallow buffer
[Diagram: packet segments from Eth1–Eth3 queued toward Eth4 over time, overflowing the buffer limit]
[Chart: utilisation (%) vs. time (ms); series: current throughput, average throughput, buffer usage]
9. Why are deep buffers required?
[Chart: bandwidth vs. time at 100% utilization – packet loss triggers backoff and slow start with the window increasing again, stretching response times to 3–5 seconds]
A screen paint time greater than 3 seconds will cause you to lose 43% of your customers (Akamai report on page response time).
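Why does a single loss episode cost whole seconds? A toy slow-start model (illustrative assumptions: 1500 B segments, a 1 ms RTT, a 10G path, classic window doubling) shows how many round trips the congestion window needs just to regrow; stack a retransmission timeout (commonly 200 ms to 1 s) on top of each loss and multi-second response times follow:

```python
import math

def slowstart_rtts(target_segments, start_segments=1):
    """RTTs for the congestion window to regrow from start to target,
    doubling once per RTT (classic slow start). Toy model."""
    return math.ceil(math.log2(target_segments / start_segments))

# Window needed to fill a 10G path with 1 ms RTT and 1500 B segments:
bdp_segments = int(10e9 / 8 * 1e-3 / 1500)   # ~833 segments
print(slowstart_rtts(bdp_segments))          # ~10 RTTs of doubling per recovery
```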
10. Deep buffers matter! … Fairness
• 20-node test with 10 flows per node (200 flows)
• Two tests: 4MB buffer vs. 256MB buffer
Results:
• Complete fairness with large buffers
• Small buffers caused erratic, inconsistent flow transmission rates
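Fairness across many flows can be quantified with Jain's fairness index, which is 1.0 when every flow gets an equal share and tends toward 1/n when one flow dominates. The rate values below are made-up illustrations of the two outcomes described above, not the test's actual measurements:

```python
def jain_fairness(rates):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    1.0 = perfectly fair; approaches 1/n as one flow takes everything."""
    n = len(rates)
    return sum(rates) ** 2 / (n * sum(r * r for r in rates))

fair   = jain_fairness([50.0] * 200)                 # equal shares, 200 flows
skewed = jain_fairness([500.0] * 10 + [5.0] * 190)   # a few flows dominate
print(round(fair, 3), round(skewed, 3))
```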
11. Bursty traffic on a deep buffer
[Diagram: packet segments from Eth1–Eth3 queued toward Eth4 over time, absorbed below the larger buffer limit]
[Chart: utilisation (%) vs. time (ms); series: current throughput, average throughput, buffer usage]
13. Deep buffers matter! … Hadoop test
[Test setup: 16 hosts x 10G on each side; 4x10G uplinks ⇒ 4:1 oversubscription, 3x10G uplinks ⇒ 5.33:1; roughly 1k TCP slow starts/sec]
[Chart: packets dropped per TeraGen run for three configurations – 1MB buffer at 4:1 and 1MB buffer at 5.33:1 both drop heavily (scale up to 1,000,000), while the 48MB buffer at 5.33:1 drops zero]
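The oversubscription ratios above follow directly from the test topology: total server-facing bandwidth divided by total uplink bandwidth. A minimal sketch of the arithmetic (the 16 x 10G host count is from the slide; the helper name is mine):

```python
from fractions import Fraction

def oversubscription(server_gbps_total, uplink_gbps_total):
    """Leaf oversubscription ratio: server-facing bandwidth / uplink bandwidth."""
    return Fraction(server_gbps_total, uplink_gbps_total)

# 16 hosts x 10G behind the leaf, as in the TeraGen setup:
print(oversubscription(16 * 10, 4 * 10))          # 4x10G uplinks -> 4   (4:1)
print(float(oversubscription(16 * 10, 3 * 10)))   # 3x10G uplinks -> ~5.33:1
```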
14. Buffer impact on high performance
[Chart: goodput vs. oversubscription for deep vs. shallow buffers]
• Use cases:
  • Optimizing multi-speed transitions: 40G→10G, 100G→10/40G, 10G→1G
  • Improving uplink contention in mixed-speed networks
  • High density in core/spine (many-to-one, incast, fan-in)
15. Buffer utilization per port – high-performance networking
[Chart: buffer consumed (MB) per port, by percentile – the Arista 7500E provides 125MB per 10G port, vs. a Trident+ ASIC with 9MB shared across 64 ports]
16. How much buffer memory do you need?
NS3 network simulations match real-world data showing the TCP incast issue: large numbers of TCP flows create microburst congestion.

Real-world data – customer buffer utilization observations:
Customer                                                Max buffer used per port
HPC storage cluster – medium                            33 MB
Animation storage filer (NFS)                           6.2 MB
Software vendor engineering build servers (Perforce)    14.9 MB
Online shopping Hadoop, 2K servers – Big Data           52.3 MB
Educational enterprise data center (virtualization)     52.4 MB
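For context, the observed maxima can be compared against two common buffer-sizing rules of thumb (these rules are background knowledge, not from this deck): the classic bandwidth-delay product C×RTT, and the small-buffer result C×RTT/√n for many desynchronized long flows. The fact that measured peaks (52+ MB) dwarf both figures for a 10G port illustrates why incast bursts, not steady-state flows, dominate buffer demand:

```python
import math

def bdp_bytes(link_bps, rtt_s):
    """Classic rule of thumb: buffer = bandwidth x round-trip time."""
    return link_bps / 8 * rtt_s

def small_buffer_bytes(link_bps, rtt_s, n_flows):
    """Small-buffer rule: C*RTT/sqrt(n), assuming many desynchronized flows."""
    return bdp_bytes(link_bps, rtt_s) / math.sqrt(n_flows)

# 10G port, an assumed 1 ms data-centre RTT, 2000 concurrent flows:
print(bdp_bytes(10e9, 1e-3))                 # 1.25 MB
print(small_buffer_bytes(10e9, 1e-3, 2000))  # ~28 KB
```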
18. How to catch microbursts? LANZ – trigger-based vs. polling
• Microbursts occur over very short periods – micro- or even nanoseconds – and are undetectable using standard polling methods.
• LANZ on the 7150 is event-driven, offering real-time visibility of microbursts.
[Diagram: bursts occurring between two SNMP polls (1/sec) go unseen – average utilization based on 1-second polling: 0%]
At 10Gbps, 1 second = ~30 million packets!
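The dilution effect is easy to show numerically. A short line-rate burst averaged over a whole polling interval rounds to roughly zero utilization, which is exactly why counter polling hides microbursts (the 500 μs burst length below is an illustrative assumption):

```python
def polled_utilisation(burst_bps, burst_s, link_bps, poll_s=1.0):
    """Average utilisation a counter-based poll reports over one interval
    containing a single short burst. Toy model."""
    return burst_bps * burst_s / (link_bps * poll_s)

# A 500 microsecond line-rate microburst on a 10G link, polled once per second:
print(polled_utilisation(10e9, 500e-6, 10e9))  # 0.0005 -> reported as ~0%
```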
20. Cloud data center 100G requirements
• Increasing port-density choice and transceiver distance will accelerate 100G adoption
• Customers will only deploy 100G Ethernet in volume once it is cost-competitive, i.e. 100GbE priced at or below 10 x 10GbE
21. 100G deployment in the data center
• 100G any-scale pods: pods mixing 10G and 100G
• 100G rack to DC spine: high-performance storage; mix 1km and 10km reaches; broadest choice of 100G ports; highest density for DC spines; mix and match 40G and 100G
• 100G at the PoP: interconnecting data centers and POPs over 10km long distances; smaller-footprint option for a small DC or PoP; IEEE LR4 and SR10 optical interconnect to metro, core and edge routers; mix and match SM and MM
• 100G to the ToR: up to 400m reaches; scale-built leaf and spine mixing 10/40/100G; server/storage expansion; investment protection through the 10G-to-40G server transition
22. The Foundation for Virtualized Clouds
Arista 7500E Series
• Architected to operate at massive network scale
• Designed and Optimized for Virtualization and Cloud
• Energy Efficient
• 1,152 10GbE / 288 40GbE / 96 100GbE
• 30 Tbps
Highest Density 10/40/100GbE Switch
23. Pay-as-you-grow 100G deployment flexibility
• 7500E-12CM-LC: cost-effective MXP with integrated triple-speed SR10 optics (10/40/100G)
• 7500E-6C2-LC: flexible short- and long-reach CFP2 – LR4 (100GbE over 10km) and SR10 (300m)
• 7500E-12CQ-LC: high-density QSFP-100G with a broad choice of 10/40/100G QSFP optics
Dense 100/40/10G • Deep buffers • Feature parity • Investment protection
24. Wire-speed 10/40/100G with deep buffers
7280SE fixed-configuration switches
• 900 million packets per second
• 1.44 terabits per second
• Less than 4 μs latency
• Ultra-deep 9GB packet buffers
• VOQ architecture for lossless forwarding
• Wire-speed L2 and L3 forwarding
• 40G and 100G uplinks for HPC and CDN
• Leaf and spine 40/100G ECMP and MLAG
• Integrated SSD for local traffic analysis
• Reversible airflow and AC/DC power options
25. Flexible optics: 100G CFP2 & QSFP
CFP2 – broad MM and SM choice
• Hot-pluggable transceiver for 100GbE
• Full support for IEEE 100G standards – SR, CR, LR, ER
• Interoperable with IEEE-compliant 100G optics
• Half the size of CFP, allowing higher density
• Lower power consumption than CFP, reducing concerns about optic cooling
QSFP100 – highest density, lowest cost
• Smallest form-factor transceiver for 40/100GbE
• Support for IEEE 100G standards – SR, CR, LR
• Interoperable with IEEE-compliant 40G and 100G optics
• Power-efficient at only 3.5W/port
• Low power and size allow for high 100G density
26. 7500E-12CQ – use case: long-distance single-mode direct data center interconnect
[Diagram: data centers and a small DC/PoP interconnected over 10km links]
• Up to 10km reach over single-mode fiber
• Connects to optical transport and core routers
• IEEE standards for multi-vendor interoperability
• Broad range of pluggable CFP2 optics
• Lowest-cost solution for cross-site 100GbE
7280SE-68 – small data center/PoP interconnect
• QSFP100 drives up to 10km distance
• Provides up to 2x100G bandwidth
• 1RU form factor ensures minimal space and very low power requirements
28. 25G and 50G Ethernet Consortium
• Founded by Arista, Broadcom, Google, Mellanox & Microsoft
• 25gEthernet.org consortium website
• An open specification for the new speeds
• Consortium open to everyone in the industry
29. Cloud applications that drive bandwidth
25G:
• Compute/Big Data that needs the lowest cost per Gbps
• Servers can push more than 10Gbps, but operators are not willing to pay a premium
• Needs the same port density as 10G
50G:
• IP storage
• 2x25G is the most cost-effective option
• Higher port density than 40G, so a single leaf switch is sufficient
• Easier to scale on NICs too
Arista is leading the industry here: 25G and 50G support is needed in silicon, with products expected in the next 18 to 36 months on both switches and NICs.
30. Why is another speed needed?
[Diagram: lane structures – 1G and 10G use a single lane; 40G = 4x10G lanes; 100G = 4x25G lanes]
• 1G and 10G use single lanes (1 pair)
• 40G and 100G use parallel lanes (4 pairs)
• 40G and 100G ports need more SerDes, consume more power and reduce port density
• The cloud needs to hit the sweet spot of lowest price per gigabit vs. optimal performance
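The lane arithmetic behind these points can be laid out in a few lines (the 25G and 50G entries reflect the consortium speeds discussed later in this deck; the table itself is a sketch, not an exhaustive PMD list):

```python
# Speed -> (total Gb/s, SerDes lanes), following the slide's lane structures.
SPEEDS = {"1G": (1, 1), "10G": (10, 1), "25G": (25, 1),
          "40G": (40, 4), "50G": (50, 2), "100G": (100, 4)}

for name, (gbps, lanes) in SPEEDS.items():
    # Per-lane rate is what drives SerDes count, power and port density.
    print(f"{name}: {lanes} lane(s) at {gbps / lanes:g} Gb/s each")
```

The table makes the density argument concrete: a 25G port needs exactly one SerDes lane, like 10G, whereas 40G and 100G ports each consume four.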
31. 25G and 50G Ethernet
[Diagram: 25G uses a single lane; 50G = 2x25G lanes]
• 25G is a single-lane specification, just like 10G
• Leverages IEEE 802.3 Ethernet framing
• Offers 2.5x the speed at a cost structure closer to 10G
• Same port density and connectors as 10G SFP+
• 50G is dual-lane
• Offers 1.25x the speed of 40G
• Cost structure is closer to 2x that of 10G
• 2x the port density of 40G using splitter cables from QSFP
32. The sweet spots
[Chart: price per Gbps across 1G, 10G, 25G, 40G, 50G and 100G, with 25G and 50G positioned as the sweet spots between cost and performance]