MetroX™ provides long-haul InfiniBand solutions to connect data centers over distances of 1km to 80km. This allows organizations to build mega clouds and data centers across multiple physical sites while maintaining high-speed, low-latency connectivity. MetroX™ offers advantages for disaster recovery and business continuity by enabling active-active data centers with 40Gb/s bandwidth and simple management. It is a cost-effective solution with a small footprint that is well-suited to the needs of high performance computing and large mega data centers.
The document discusses Mellanox's interconnect solutions for high performance computing and cloud environments. It notes that computing needs are growing exponentially, driving the need for faster interconnect technologies. Mellanox provides InfiniBand and Ethernet solutions that deliver the highest performance and return on investment, maintaining a generation lead over competitors. Mellanox's open platforms allow for flexibility and freedom of choice in software and management.
Mellanox is a leading provider of high-performance interconnect solutions for server and storage applications. It has over 1,300 employees worldwide and reported record revenue in 2012 of $500.8 million, up 93% year-over-year. Mellanox offers a comprehensive portfolio of InfiniBand and Ethernet adapters, switches, cables, and software to connect servers, storage and switches. It has a unique ability to design and manufacture its own interconnect components to ensure high performance, quality and reliability.
Mellanox provides end-to-end interconnect solutions including ICs, adapter cards, switches, software, and cables to connect data centers, metro areas, and WANs. Mellanox's Connect-IB adapters deliver the highest performance and ROI for clustering, with the world's first 100Gb/s adapter card and a 4x increase in messages per second over competitors. Connect-IB also provides the highest application performance, with up to 200% faster performance than competitors for WIEN2k and 54% faster performance for WRF simulations. Mellanox's GPUDirect RDMA technology further accelerates GPU-GPU communication with 69% lower latency and 3x higher throughput between nodes.
1. Open Ethernet is an alternative to closed network solutions that allows users to choose their switch hardware, operating system, and software stack.
2. It enables freedom of choice, lower costs through scale, and a diversity of solutions through open source development.
3. Mellanox is leading the generation of Open Ethernet through its SwitchX silicon and open Ethernet switch portfolio that supports open operating systems and software defined networking.
Ahead of the NFV Curve with Truly Scale-out Network Function Cloudification (Mellanox Technologies)
Presented at OpenStack Summit Vancouver by Chloe Jian Ma, Senior Director, Cloud Market Development (@chloe_ma)
Colin Tregenza Dancer, Director of Architecture
Mellanox's Chief Technology Officer Michael Kagan presented on Mellanox's technological advantage and roadmap. He discussed how the volume of data is growing exponentially and will reach 20 zettabytes by 2020. Mellanox is addressing this growth through innovations in high-speed interconnects like InfiniBand that use RDMA to provide high bandwidth and low latency connectivity for data centers and cloud computing. Mellanox has also achieved a strong track record of executing on its product roadmap over the past 15 years to deliver successive generations of InfiniBand and Ethernet adapters, switches, and software.
Mellanox InfiniBand interconnect solutions provide the highest performance and efficiency for HPC systems. The document discusses Mellanox's dominance in connecting the world's top supercomputers:
- Mellanox InfiniBand connects over 50% of systems on the TOP500 list, including the top 17 most efficient systems.
- It connects 33 of the 66 petascale-performance systems, including the fastest and most powerful clusters.
- Mellanox is the interconnect leader across the TOP100, TOP200, TOP300 and TOP400 lists.
Mellanox has a worldwide presence with sales offices across North America, Europe, Asia, and other regions. It employs a push/pull sales strategy, working with OEMs, distributors, solution providers, and directly with end users in markets like HPC, government, finance, and cloud. Key growth drivers include increased adoption of high-speed InfiniBand in hyperscale and HPC, new storage solutions and appliances, and opportunities in big data, virtualized environments, and government infrastructure investment. Case studies provide examples of Mellanox solutions for an OpenStack cloud, an Asian webscale provider, and a European scientific compute facility.
Mellanox demos presented at Interop Tokyo 2014 by Kazusa Tomonaga, Sr. System Engineer at Mellanox. The event was held June 11-13, 2014, at Makuhari Messe in Chiba Prefecture, Japan.
This document discusses Mellanox's ConnectX-3 Pro networking adapter, which provides hardware offloading for overlay networks like VXLAN and NVGRE. This dramatically lowers CPU overhead, allowing more virtual machines to be supported per server. Test results show that with hardware offloading, VXLAN throughput is higher and CPU utilization is lower compared to software-only implementations. Offloading also reduces capital and operating expenses through increased server utilization and efficiency.
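To make the offload concrete, the sketch below packs and unpacks the 8-byte VXLAN header defined in RFC 7348, which is the encapsulation an offload-capable NIC parses in hardware. This is an illustration of the wire format only, not Mellanox driver code.

```python
import struct

VXLAN_PORT = 4789            # IANA-assigned UDP port for VXLAN
VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the VNI field is valid

def pack_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348) for a 24-bit VNI."""
    if not 0 <= vni < (1 << 24):
        raise ValueError("VNI must fit in 24 bits")
    # Byte 0: flags; bytes 1-3 reserved; bytes 4-6: VNI; byte 7 reserved.
    return struct.pack("!B3xI", VXLAN_FLAG_VNI_VALID, vni << 8)

def unpack_vni(header: bytes) -> int:
    """Recover the VNI, as the NIC does when steering tunneled traffic."""
    flags, vni_and_rsvd = struct.unpack("!B3xI", header)
    if not flags & VXLAN_FLAG_VNI_VALID:
        raise ValueError("VNI-valid flag not set")
    return vni_and_rsvd >> 8

hdr = pack_vxlan_header(5001)
assert len(hdr) == 8 and unpack_vni(hdr) == 5001
```

Without hardware offload, the host CPU must perform this parsing (and the associated checksum and segmentation work) for every tunneled packet, which is the overhead the ConnectX-3 Pro removes.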
Deploying HPC Cluster with Mellanox InfiniBand Interconnect Solutions Mellanox Technologies
This document provides guidance on deploying a high-performance computing (HPC) cluster with Mellanox InfiniBand interconnect solutions. It covers designing a fat-tree topology, performance calculations, communication library and quality of service support, subnet manager configuration, installation, verification and testing procedures. Best practices for cabling, labeling, and stress testing the cluster are also outlined.
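The fat-tree sizing in such a deployment is simple arithmetic; the sketch below assumes a two-level non-blocking topology built from 36-port switches (the SwitchX radix mentioned elsewhere in this listing). It is a back-of-the-envelope aid, not the document's own sizing tool.

```python
import math

def two_level_fat_tree(radix: int, nodes: int):
    """Size a non-blocking two-level fat-tree from fixed-radix switches.

    Each leaf dedicates half its ports to hosts and half to spines,
    so the maximum cluster size is radix**2 / 2 hosts.
    """
    half = radix // 2
    if nodes > radix * half:
        raise ValueError(f"{nodes} hosts exceed the {radix * half}-host maximum")
    leaves = math.ceil(nodes / half)          # 18 hosts per leaf at radix 36
    spines = math.ceil(leaves * half / radix)  # uplinks spread across spines
    return leaves, spines

# A 648-node cluster is the full two-level maximum for 36-port switches:
leaves, spines = two_level_fat_tree(36, 648)
assert (leaves, spines) == (36, 18)
```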
Eyal Waldman, President and CEO of Mellanox Technologies, presented at the 2013 Mellanox Analyst Day. The presentation covered the exponential growth of data and need for the fastest interconnects to handle this growth. Mellanox offers InfiniBand and Ethernet interconnect solutions that provide the lowest latency, highest throughput, and best return on investment. Mellanox is well positioned for continued growth and leadership as data needs increase and they plan to stay ahead of competitors by being the first to market with the next generations of higher speed interconnect technologies.
Mellanox provides high-performance interconnect solutions for HPC systems. According to benchmarks, Mellanox InfiniBand delivers higher performance than other interconnects using half the number of cores. Mellanox InfiniBand also provides the highest system efficiency and is used to connect half of the world's petascale systems. Mellanox offers end-to-end solutions including adapters, switches, software, and management tools to optimize performance and efficiency for HPC workloads.
We will also discuss optimizations for MPI collective communications, which are frequently used for process synchronization, and show how their performance is critical for scalable, high-performance applications.
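One classic collective optimization is recursive doubling, which completes an allreduce in log2(P) communication steps instead of P-1. The simulation below is our own sketch of the algorithm in plain Python, not MPI code from the presentation.

```python
def allreduce_recursive_doubling(values):
    """Simulate an MPI-style allreduce (sum) via recursive doubling.

    Each of the P ranks exchanges partial sums with a partner at
    distance 1, 2, 4, ... so every rank holds the total after
    log2(P) steps.
    """
    p = len(values)
    assert p & (p - 1) == 0, "simulation assumes a power-of-two rank count"
    bufs = list(values)
    steps = 0
    dist = 1
    while dist < p:
        # Rank r and rank r^dist swap buffers and both keep the sum.
        bufs = [bufs[r] + bufs[r ^ dist] for r in range(p)]
        dist *= 2
        steps += 1
    return bufs, steps

bufs, steps = allreduce_recursive_doubling([1, 2, 3, 4, 5, 6, 7, 8])
assert bufs == [36] * 8 and steps == 3   # all ranks converge in log2(8) steps
```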
Presented by Eran Bello at the "NFV & SDN Summit" held March 2014 in Paris, France
Ideal for Cloud Data Centers, Data Processing Platforms and Network Functions Virtualization
Leading SerDes Technology: High Bandwidth – Advanced Process
10/40/56Gb VPI with PCIe 3.0 Interface
10/40/56Gb High Bandwidth Switch: 36 ports of 10/40/56Gb or 64 ports of 10Gb
RDMA/RoCE technology: Ultra Low Latency Data Transfer
Software Defined Networking: SDN Switch and Control End to End Solution
Cloud Management: OpenStack integration
Paving the way to 100Gb/s Interconnect
End to End Network Interconnect for Compute/Processing and Switching
Software Defined Networking
High Bandwidth, Low Latency and Lower TCO: $/Port/Gb
Mellanox has successfully deployed 56Gb/s interconnect solutions since 2011 and is the leading provider of high-volume 56Gb/s solutions for computing, storage, and switching applications. Mellanox's 100Gb/s technology is based on proven and deployed 56Gb/s technologies, and the company is developing mainstream VCSEL-based and silicon photonics-based 100Gb/s solutions using its expertise in SerDes, switch/NIC chips, silicon photonics, and packaging. Mellanox aims to provide the highest performance, lowest power and cost, and highest quality and reliability products through a strategy of using best-in-breed silicon physical technologies, conservative innovation, and focusing on delivering the best return on investment for its customers.
Announcing the Mellanox ConnectX-5 100G InfiniBand Adapter (inside-BigData.com)
In this deck, Mellanox announces the ConnectX-5 adapter for high performance communications.
“The new ConnectX-5 100G adapter further enables high performance, data analytics, deep learning, storage, Web 2.0 and more applications to perform data-related algorithms on the network to achieve the highest system performance and utilization,” said Gilad Shainer, vice president, marketing at Mellanox Technologies. “Today, scalable compute and storage systems suffer from data bottlenecks that limit research, product development, and constrain application services. ConnectX-5 will help unleash business potential with faster, more effective, real-time data processing and analytics. With its smart offloading, ConnectX-5 will enable dramatic increases in CPU, GPU and FPGA performance that will enhance effectiveness and maximize the return on data centers’ investment.”
Learn more: http://mellanox.com
Watch the video presentation: http://wp.me/p3RLHQ-fll
Mellanox provides high-performance networking solutions that enable Web 2.0 customers to improve operational efficiency and reduce costs. Their solutions eliminate the complexity of traditional infrastructure and enable high-performance storage and scalable interconnects. Mellanox solutions improve data movement and accelerate applications. Mellanox is expanding its product portfolio and customer base internationally to capture growth in new vertical markets like financial services, telecommunications, and the data center industry.
Mellanox is a leading provider of high-performance interconnect solutions including InfiniBand and Ethernet technologies. It has over 1,200 employees worldwide and reported record revenue in 2012 of $500.8 million, up 93% year-over-year. Mellanox's interconnect solutions reduce application wait times for data and increase ROI on data center infrastructure.
The European Advanced Networking Test Center (EANTC) conducted an evaluation of the performance and functionality of the ADVA FSP 150 ProVMe and confirmed its unique capabilities. The independent tests found that the ADVA Optical Networking edge NFV device succeeded in minimizing latency and that its hardware-assisted support functions, such as synchronization and service assurance, can be activated without requiring compute resources. This removes negative impact on revenue-generating VNFs and enhances performance.
The document provides an overview of InfiniBand essentials that every HPC expert must know. It discusses InfiniBand principles like fabric components, architecture, and discovery stages. It also covers protocol layers, Mellanox products, and implementations. The document is meant to educate professionals on InfiniBand fundamentals through topics like switches, adapters, cables, fabric management, and more.
This document discusses the transformation of the telecommunications industry towards digital technologies and software-defined networks. It specifically focuses on software-defined wide area networks (SD-WAN) and how SD-WAN is driving the adoption of network functions virtualization infrastructure (NFVi) and universal customer premises equipment (uCPE). The document provides an overview of SD-WAN and uCPE deployment options, reference architectures, and Intel's product portfolio for enabling virtualized network functions on uCPE devices.
Our Ensemble Simulator precisely mirrors production networks for risk-free testing, training and development. Hosted on a self-provided server or in the cloud, this powerful new software tool enables multiple users to work independently in a virtual sandbox to evaluate configurations, verify APIs and simulate what-if scenarios. With Ensemble Simulator, optical network operators can significantly reduce the cost and unpredictability of network upgrades, accelerate innovation adoption and improve quality of experience for end users.
New Breed of Carrier Chooses ADVA Ensemble for Intel-Powered NFV Solutions (ADVA)
In a market where big data centers are considered the best way to serve customers, DartPoints is showing that the micro data center can be enormously successful. Read how NFV-enabled services based on ADVA Optical Networking’s Intel-powered FSP 150vSE and Ensemble Orchestrator software are a key driver for this success.
Erez Cohen & Aviram Bar Haim, Mellanox - Enhancing Your OpenStack Cloud With Advanced Network and Storage Interconnect Technologies, OpenStack Israel 2015
The document discusses how Mellanox storage solutions can maximize data center return on investment through faster database performance, increased virtual machine density per server, and lower total cost of ownership. Mellanox's high-speed interconnect technologies like InfiniBand and RDMA can provide over 10x higher storage performance compared to traditional Ethernet and Fibre Channel solutions.
RoCEv2 is an extension of the original RoCE specification announced in 2010 that brought the benefits of Remote Direct Memory Access (RDMA) I/O architecture to Ethernet-based networks. RoCEv2 addresses the needs of today’s evolving enterprise data centers by enabling routing across Layer 3 networks. Extending RoCE to allow Layer 3 routing provides better traffic isolation and enables hyperscale data center deployments.
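The mechanism that makes RoCEv2 routable is its layering: the InfiniBand transport packet rides inside ordinary UDP/IP, using the IANA-assigned UDP destination port 4791, so any Layer 3 router can forward it. The sketch below builds just the 8-byte UDP header of that encapsulation; it is a wire-format illustration, not part of the specification text.

```python
import struct

ROCEV2_UDP_PORT = 4791  # IANA-assigned UDP destination port for RoCEv2

def rocev2_udp_header(src_port: int, payload_len: int) -> bytes:
    """Build the 8-byte UDP header that carries a RoCEv2 packet.

    The checksum field is left 0 here (permitted over IPv4) to keep
    the sketch short; real stacks may compute it.
    """
    length = 8 + payload_len  # UDP length covers header + payload
    return struct.pack("!HHHH", src_port, ROCEV2_UDP_PORT, length, 0)

hdr = rocev2_udp_header(0xC000, payload_len=60)
_, dst, length, _ = struct.unpack("!HHHH", hdr)
assert dst == 4791 and length == 68
```

Because routers see only standard UDP/IP, operators can apply existing ECMP, QoS, and traffic-isolation machinery to RDMA flows, which is what enables the hyperscale deployments mentioned above.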
Watch the video presentation: http://insidehpc.com/2014/09/slidecast-ibta-releases-updated-specification-rocev2/
The document discusses using RDMA (Remote Direct Memory Access) efficiently for key-value services. It summarizes background on key-value stores and RDMA. The presentation then explores using one-sided and two-sided RDMA operations for writes versus reads in key-value systems. Experimental results show that optimizing for writes using inline, unreliable, and unsignaled RDMA writes can outperform read-based approaches. While this approach works well, limitations include its assumption of an asymmetric system model and lack of generality. The presentation concludes by discussing lessons learned about challenging assumptions and the need to experiment and optimize for common cases.
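The read-versus-write trade-off can be captured in a toy round-trip cost model; the sketch below is our own simplification for intuition, not the paper's measurements or system.

```python
def get_cost_read_based(index_cached: bool) -> int:
    """One-sided READ design: the client fetches the remote index,
    then the value it points to. Returns network round trips per GET."""
    # One READ to walk the remote hash index (skippable if cached),
    # plus one READ to fetch the value.
    return (0 if index_cached else 1) + 1

def get_cost_write_based() -> int:
    """Write-based design: the request is shipped in a single inline,
    unsignaled WRITE; the server does the lookup locally and replies.
    One round trip regardless of index structure."""
    return 1

assert get_cost_read_based(index_cached=False) == 2
assert get_cost_read_based(index_cached=True) == 1
assert get_cost_write_based() == 1
```

This is why optimized writes can beat reads for key-value GETs, at the cost of the asymmetric system model (a CPU must poll at the server) that the presentation lists as a limitation.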
This document summarizes a research project on GPUrdma, which enables direct RDMA communication from GPU kernels without CPU intervention. GPUrdma provides a 5 microsecond latency for GPU-to-GPU communication and up to 50 Gbps bandwidth. It implements a direct data path and control path from the GPU to the InfiniBand HCA. Evaluation shows GPUrdma outperforms CPU-based RDMA by 4.5x for small messages. The document also discusses using GPUrdma to enable the GPI2 framework for partitioned global address space programming across GPUs.
Paper on RDMA enabled Cluster FileSystem at Intel Developer Forumsomenathb
The document summarizes Veritas Cluster File System (CFS) with Remote Direct Memory Access (RDMA). CFS provides a scalable, shared file system across cluster nodes. RDMA capabilities from InfiniBand Architecture can improve CFS performance by reducing CPU usage and latency through zero-copy data transfers and remote direct memory access. Key CFS components like the Group Lock Manager benefit from RDMA to enhance coherency and recovery. The Common RDMA Transport Access Layer abstracts RDMA calls to enable CFS to support different transports.
The document discusses Remote Direct Memory Access (RDMA) over IP as a way to avoid data copying and reduce host processing overhead for high-speed data transfers. It proposes an architecture with two layers - Direct Data Placement (DDP) and RDMA control - running over IP transports. RDMA over IP aims to make network I/O "free" by allowing the network adapter to directly place data into application buffers without involving the host CPU. This could improve throughput and allow more machines to be supported for high-bandwidth data center applications. Open issues that still need to be addressed include security, interaction with TCP, atomic operations, and impact on network behaviors.
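The essence of Direct Data Placement is that incoming data lands straight in the application's preregistered buffer, with no intermediate kernel copy. The Python analogy below is only a sketch of that idea, using a `memoryview` as a stand-in for a registered memory region.

```python
# The application "registers" a receive buffer up front...
app_buffer = bytearray(64)
region = memoryview(app_buffer)

def direct_data_place(payload: bytes, offset: int) -> None:
    """Write the payload straight into the registered buffer at a
    sender-specified offset, as DDP lets the adapter do, instead of
    landing it in a kernel buffer and copying it out afterwards."""
    region[offset:offset + len(payload)] = payload

direct_data_place(b"hello", offset=8)
assert bytes(app_buffer[8:13]) == b"hello"
```

Eliminating that copy is what makes network I/O nearly "free" for the host CPU: the adapter, not the kernel, decides where each byte lands.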
This document summarizes a presentation about FlashGrid, an alternative to Oracle Exadata that aims to achieve similar performance levels using commodity hardware. It discusses the key components of FlashGrid including the Linux kernel, networking protocols like Infiniband and NVMe, and hardware. Benchmarks show FlashGrid achieving comparable IOPS and throughput to Exadata on a single server. While Exadata has proprietary advantages, FlashGrid offers excellent raw performance at lower cost and with simpler maintenance through the use of standard technologies.
This document discusses persistent memory and the Linux software stack. It begins by covering the evolution of non-volatile memory from battery backed RAM to emerging technologies like PCM and memristors. It then outlines the persistent memory Linux software stack, including the kernel subsystem and NVDIMM architecture. Finally, it discusses using and emulating persistent memory on Linux, including kernel configuration, hardware options, and libraries for programming with persistent memory.
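The programming model the stack exposes can be emulated today with a file-backed mapping: loads and stores go directly to the mapped region, and an explicit flush stands in for draining CPU caches to persistence. This is an emulation sketch, not the kernel DAX API or the libpmem library.

```python
import mmap
import os
import tempfile

# Emulate a byte-addressable persistent region with a file-backed
# mapping, the same model the Linux DAX/NVDIMM stack gives applications.
path = os.path.join(tempfile.mkdtemp(), "pmem.img")
with open(path, "wb") as f:
    f.truncate(4096)  # one page of "persistent memory"

with open(path, "r+b") as f:
    pmem = mmap.mmap(f.fileno(), 4096)
    pmem[0:5] = b"state"   # store: plain memory write, no read()/write()
    pmem.flush()           # analogous to flushing CPU caches to media
    pmem.close()

with open(path, "rb") as f:  # the data survives the mapping's lifetime
    assert f.read(5) == b"state"
```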
This document discusses I/O virtualization using InfiniBand and 40 Gigabit Ethernet. It covers technologies like RDMA, VPI, IPoIB, iSER and SRP that improve performance, CPU utilization, bandwidth and latency. Mellanox provides software solutions for RDMA storage and networking that integrate with VMware vSphere. Performance results show significant improvements in VM migration times, IOPS and bandwidth when using Mellanox's 40GbE adapters compared to 10GbE.
Presentation from OpenStack Summit Tokyo
Online video link is below.
https://www.openstack.org/summit/tokyo-2015/videos/presentation/approaching-open-source-hyper-converged-openstack-using-40gbit-ethernet-network
The document compares the performance of Ceph storage cluster using TCP and RDMA (XIO) as the transport mechanisms. It finds that XIO provides around 30-50% higher IOPS and bandwidth compared to TCP with the same hardware setup. However, TCP performance is improving and catching up to XIO as the number of OSDs increases. While XIO provides better CPU utilization, it requires over 2x more memory usage than TCP. Scaling out to multiple nodes shows TCP scaling better than XIO. XIO performance is also unstable and connection startup times are longer compared to TCP.
Mellanox presentation for Agile Conference, June 2015 (Chai Forsher)
Mellanox has transitioned to an Agile development methodology over the past two years. They started by prioritizing factors to improve development efficiency, then piloted Agile with select teams. Now they have over 40 Scrum teams and a structured process including monthly sprints, daily stand-ups, and tools like Redmine. Current challenges include building cross-functional global teams, coordinating releases, roadmap planning, and expanding Agile adoption across the organization.
The document discusses new advancements in high-performance computing (HPC) interconnect technology from Mellanox Technologies. It outlines how Mellanox's FDR InfiniBand has become the most commonly used interconnect solution for HPC, connecting more of the world's fastest supercomputers. It also presents Mellanox's roadmap and new products that support higher speeds and capabilities to pave the way for exascale computing through solutions like Connect-IB and optimizations for GPUs and accelerators.
Advanced Networking: The Critical Path for HPC, Cloud, Machine Learning and More (inside-BigData.com)
This document discusses how hardware acceleration can improve the performance of modern data centers and machine learning workloads. It covers several key points:
1) Software-defined networking allows for flexibility but suffers from performance issues without hardware offloading. Hardware acceleration is needed to gain efficiency.
2) Technologies like SR-IOV, overlay networking, and RDMA can provide direct access and high-speed networking to virtual machines and accelerate workloads. Hardware offloads from NICs improve performance.
3) Frameworks like DPDK and ASAP2 can further accelerate workloads by offloading processing to the NIC and bypassing the CPU. This improves performance without additional CPU resources.
This document discusses Mellanox's CloudX solution for OpenStack clouds. It highlights how exponential data growth is driving the need for faster interconnects. Mellanox's CloudX architecture uses off-the-shelf components with Mellanox interconnects and provides performance advantages for storage, overlay networks, and virtualization. Mellanox also offers comprehensive integration with OpenStack through plugins to optimize networking and storage performance.
The document discusses a TechTalk webinar on hyperconverged infrastructure from Cisco Thailand that includes a live demo. It provides definitions and explanations of key concepts like hyperconvergence, software defined storage, and hyperconverged architectures. The webinar highlights benefits like agility, efficiency, simplicity and scalability and discusses how hyperconvergence is shifting the market towards server-based ecosystems.
Ceph Day Chicago - Deploying flash storage for Ceph without compromising performance (Ceph Community)
This document summarizes a presentation about deploying flash storage for Ceph. It discusses how flash storage provides much higher performance than hard disk drives, but the network must be able to keep up with this performance to avoid becoming a bottleneck. It recommends using a high-performance separate cluster network for Ceph in order to realize the full benefits of flash storage. The presentation also discusses how RDMA can further optimize Ceph performance on flash by enabling more efficient networking.
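A quick back-of-the-envelope check makes the bottleneck argument concrete. The drive and link speeds below are assumed round numbers for illustration, not figures from the presentation:

```python
def network_is_bottleneck(n_drives, drive_mb_s, link_gbit_s):
    """Return True if aggregate flash bandwidth exceeds the link capacity.

    Rough estimate: ignores protocol overhead and replication traffic.
    """
    aggregate_mb_s = n_drives * drive_mb_s
    link_mb_s = link_gbit_s * 1000 / 8  # Gbit/s -> MB/s (decimal units)
    return aggregate_mb_s > link_mb_s

# Four SSDs at ~500 MB/s each already outrun a 10GbE link (~1250 MB/s),
# while a 40GbE link (~5000 MB/s) still has headroom:
print(network_is_bottleneck(4, 500, 10))  # True
print(network_is_bottleneck(4, 500, 40))  # False
```

With even a handful of flash devices per node, the arithmetic favors the faster cluster network the presentation recommends.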
[OpenStack Days Korea 2016] Track 1 - Mellanox CloudX - Acceleration for Cloud... (OpenStack Korea Community)
1) Mellanox's CloudX platform enhances cloud performance through technologies like its Spectrum switch, ConnectX-4 adapters, and software solutions.
2) These solutions provide high-speed networking, efficient virtual networking through overlay acceleration, and data transfer technologies like RDMA.
3) CloudX reference architectures allow building efficient, high-performance, and scalable IaaS clouds using Mellanox interconnect solutions and off-the-shelf components.
Delivering Virtualization 2.0 with a scalable, easy-to-install, easy-to-manage solution. Save money and improve performance by combining server and storage virtualization.
Storage is one of the three main pillars of any data center, along with compute and networking.
OpenStack provides flexibility and automation for storage provisioning, whether one uses iSCSI integrated with Cinder or Ceph for block and object storage.
But what about performance? How can one enjoy storage flexibility without compromising on the state-of-the-art, low-latency, high-throughput storage that today's applications require?
In this session, we will present three storage solutions for OpenStack and show how they can be accelerated natively in OpenStack with Remote Direct Memory Access (RDMA) technology.
Join us to learn how RDMA boosts storage performance in the cloud.
The number of internet-connected devices is growing exponentially, enabling an increasing number of edge applications in environments such as smart cities, retail, and industry 4.0. These intelligent solutions often require processing large amounts of data, running models to enable image recognition, predictive analytics, autonomous systems, and more. Increasing system workloads and data processing capacity at the edge is essential to minimize latency, improve responsiveness, and reduce network traffic back to data centers. Purpose-built systems such as Supermicro’s short-depth, multi-node SuperEdge, powered by 3rd Gen Intel® Xeon® Scalable processors, increase compute and I/O density at the edge and enable businesses to further accelerate innovation.
Join this webinar to discover new insights in edge-to-cloud infrastructures and learn how Supermicro SuperEdge multi-node solutions leverage data center scale, performance, and efficiency for 5G, IoT, and Edge applications.
Virtualization is an increasingly critical part of data center computing. Selecting a server that excels at virtualization makes good business sense. Two Lenovo ThinkServer RD630 servers, paired with Dot Hill AssuredSAN Pro5720 tiered storage, ran 10 VMmark tiles for a total of 80 running VMs and achieved a score of 11.17@10 tiles, placing it in the top 8 percent of the 32-core server configurations. This makes the Lenovo ThinkServer RD630 an excellent choice for any enterprise that uses virtualization.
Kyle Turner from Brocade gave a presentation on simplifying virtual infrastructures with Ethernet fabrics and IP storage. The presentation discussed developing data center trends driving technology alignment, benefits of Ethernet fabrics, optimizations for network attached storage, and virtual machine integration. It provided an overview of Brocade's Virtual Cluster Switching solution and how it aligns with scale-out NAS strategies from vendors like EMC Isilon to provide high availability, storage optimization, and ease of use.
This document discusses Mellanox Technologies' journey to helping customers transition their products and services to the cloud. It outlines Mellanox's value propositions for accelerating cloud applications through benchmark testing, integrating with open source cloud environments, and making deployment easier with solutions like CloudX OpenCloud Architecture. The document also highlights examples of Mellanox's work with customers like Microsoft and the National Computational Research Cloud in Australia.
The document provides an introduction to NVMe over Fabrics, including:
- What NVMe over Fabrics is and its advantages like end-to-end NVMe semantics and low latency remote storage.
- How NVMe is being expanded to support message-based operations over various fabrics like RDMA, Fibre Channel, and Ethernet.
- Examples of how NVMe over Fabrics is being implemented in data center architectures and storage solutions.
Ceph Day London 2014 - Ceph Over High-Performance Networks (Ceph Community)
Mellanox provides high performance networking solutions for Ceph storage clusters. They discussed how Ceph relies on high performance networks for scalability and availability. Mellanox offers end-to-end 40/56GbE and InfiniBand solutions with full CPU offloading. They presented examples of how customers deploy Ceph with Mellanox's 40GbE interconnects across cluster, client, and public networks. Mellanox also discussed ongoing work to integrate RDMA support into Ceph to further improve performance.
In this session you will learn about architecting your private cloud infrastructure for speed and agility using Citrix cloud solutions, including:
Considerations for cloud infrastructure deployment
How Citrix diamond-validated partner SSI used Citrix cloud solutions to enhance business for their customers
A cloud product demo highlighting speed and agility of infrastructure deployment
Ceph Day New York 2014: Ceph over High Performance Networks (Ceph Community)
VMware Professionals - Security, Multitenancy and Flexibility (Paulo Freitas)
This document provides information about virtualization capabilities and features of Hyper-V 2012 and VMware vSphere 5.1. It discusses network virtualization, live migration capabilities like simultaneous migrations and storage migrations. Hyper-V 2012 supports many advanced features out of the box, while some VMware features require additional licenses or components. The document also provides configuration examples and diagrams to illustrate network virtualization and live migration workflows between Hyper-V hosts.
This hands on workshop for OpenContrail will be led by Sreelakshmi Sarva & Aniket Daptari.
This is a labs session so we will have hard RSVP limits. Please RSVP only if you are confident that you will be able to attend.
About Sreelakshmi Sarva
Sree is currently working as part of the solution engineering team within Juniper's Contrail group. She is responsible for delivering and managing SDN solutions and partnerships relating to Contrail. She has been with Juniper for the last 13 years, working on various routing, switching, network programmability, and virtualization platforms. Prior to Juniper, she worked at Nortel Networks in the Systems Engineering group. Sree received her Master's in Computer Science from the University of Texas at Dallas and her Bachelor's in Computer Science in India.
About Aniket Daptari
Aniket is currently working as part of Juniper Networks' Contrail Cloud Solutions team. He is responsible for delivering SDN solutions and technology partnerships related to Contrail. He has been with Juniper for the last 3 years, working on various network programmability and virtualization platforms. Prior to Juniper, he worked at Cisco Systems in the Internet Systems Business Unit (Catalyst 6500). Aniket received his Master's in Computer Science from the University of Southern California and a graduate certificate in Management Science and Engineering from Stanford University.
Course Abstract
This session will be the first of a series of OpenContrail hands-on tutorials for developers who want to get deep into OpenContrail code.
This “Basic OpenContrail Programming” Hands-on Session will focus on making developers proficient in writing and contributing code for our OpenContrail Project.
The session will cover the following areas:
1) Contrail Overview
· Use Cases
· Architecture recap
2) Contrail Hands on
· Demo + hands-on: configuration, VNs, VMs, network policies, etc.
· DevStack introduction
6WINDGate™ - Enabling Cloud RAN Virtualization (6WIND)
Traditional mobile networks are based on stand-alone Base Transceiver Stations (BTS), each covering a radio area. BTS coverage areas overlap to provide wide coverage to mobile users, and the stations are connected to the mobile core network through a backhaul network. Cloud Radio Access Network (C-RAN) is a new architecture for mobile access networks that relies on simple radio front-ends connected to a pool of remote network resources. By leveraging cloud infrastructures, CAPEX and OPEX are lowered substantially.
The document provides an overview of InfiniBand, including its key components and advantages over traditional network protocols. It describes InfiniBand as a high-performance switched fabric interconnect that uses Remote Direct Memory Access to transfer data between nodes with low latency and CPU overhead. The primary elements of an InfiniBand network are outlined as Host Channel Adapters, switches, subnet managers, and optional gateways and routers.
This document discusses how Mellanox networks enable high performance Ceph storage clusters. It notes that Ceph performance and scalability are dictated by the backend cluster network performance. It provides examples of customers deploying Ceph with Mellanox 40GbE and 10GbE interconnects, and highlights how these networks allow building scalable, high performing storage solutions. Specifically, it shows how 40GbE cluster networks and 40GbE client networks provide much higher throughput and IOPS compared to 10GbE. The document concludes by mentioning how RDMA offloads can free CPU for application processing, and how the Accelio library enables high performance RDMA for Ceph.
Management software is a critical component in today’s clusters. As clusters become larger, more complex and business critical, they require a proper end-to-end means to monitor, provision and control them. Traditionally, cluster administrators have had to manage the server and network sides separately without visibility into network performance and health. This results in manual, time consuming root cause analysis of events, and relatively long duration till resolution.
The CMU-UFM Connector combines HP’s Insight CMU server information with Mellanox’s Unified Fabric Manager™ (UFM™) fabric information. This enables the cluster administrator to view, in one location, the server and network information which greatly reduces operational efforts and duration till resolution.
The CMU-UFM Connector is an add-on software package installed on the HP-CMU management node.
Unified cluster and fabric topology view. One pane of glass to monitor both server and fabric performance parameters. Fabric alerts propagate from UFM to HP Insight CMU, and UFM fabric health reports can be launched from HP Insight CMU.
This issue of Print 'n Fly previews the upcoming SC13 conference in Denver, Colorado, which marks the 25th anniversary of the SC series. It includes an interview with Bill Gropp, the SC13 Conference Chair, who discusses some of the new elements at this year's conference including the HPC Impact Showcase and Emerging Technologies program. In addition, the issue provides overviews of several technical sessions and social events taking place during SC13.
Storage, cloud computing, web 2.0 technologies, and big data are driving significant growth opportunities for Mellanox. Mellanox's InfiniBand and Ethernet interconnect solutions provide significantly higher performance compared to alternatives across HPC, database, cloud, and web 2.0 workloads. Mellanox is uniquely positioned to capitalize on the shift to more efficient cloud infrastructures and the convergence of storage, compute, and networking fabrics in the data center. Rapid changes in the market are creating opportunities for Mellanox to deliver faster, more scalable, and virtualized interconnect solutions to support continued exponential data growth.
Mellanox Technologies presented a financial overview, highlighting:
- Exponential data growth driving demand for their interconnect solutions.
- Acquisitions expanded their total addressable market, team, and geography.
- Historical annual revenue grew at a 30% compound annual growth rate over 5 years, though growth slowed in 2013.
- The presentation outlined financial metrics including revenue breakdown by product and data rate, headcount trends, cash flow, and long-term financial targets.
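The stated growth rate is easy to sanity-check. The starting revenue of 100 below is a made-up illustrative figure; only the 30% compound annual growth rate comes from the summary above:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

start = 100.0
end = start * 1.30 ** 5   # five years at 30% growth is roughly a 3.7x increase
print(f"{cagr(start, end, 5):.0%}")  # 30%
```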
This document discusses how Mellanox technologies can accelerate big data solutions using RDMA. It summarizes that Mellanox provides end-to-end interconnect solutions including adapters, switches, and cables. It also discusses three key areas for acceleration: data analytics, storage, and distributed storage. The document presents the Unstructured Data Accelerator plugin which can double MapReduce performance using RDMA for efficient data shuffling. It also discusses using RDMA and SSDs to unlock higher throughput in HDFS and overcome bandwidth limitations of 1GbE and 10GbE networks.
Mellanox provides end-to-end interconnect solutions using InfiniBand and Ethernet technologies to enable highly scalable and fault tolerant database systems with extreme performance. Their solutions offer the lowest total cost of ownership through a unified interconnect fabric and optimized hardware and software. Case studies demonstrate that Mellanox solutions can reduce database recovery times by 97% and accelerate queries by up to 100 times.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Building Production Ready Search Pipelines with Spark and Milvus (Zilliz)
Spark is a widely used ETL tool for processing, indexing, and ingesting data into the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to the Milvus vector database for search serving.
Full-RAG: A modern architecture for hyper-personalization (Zilliz)
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Generating privacy-protected synthetic data using Secludy and Milvus (Zilliz)
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Ocean Lotus threat actors project by John Sitima, 2024 (SitimaJohn)
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We will explain how to solve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary expenses, for example when a person document is used instead of a mail-in for shared mailboxes. We will show you such cases and their solutions. And of course we will explain the new licensing model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It will give you the tools and the know-how to keep track of everything. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
AI-Powered Food Delivery Transforming App Development in Saudi Arabia (Techgropse Pvt. Ltd.)
In this blog post, we'll delve into the intersection of AI and app development in Saudi Arabia, focusing on the food delivery sector. We'll explore how AI is revolutionizing the way Saudi consumers order food, how restaurants manage their operations, and how delivery partners navigate the bustling streets of cities like Riyadh, Jeddah, and Dammam. Through real-world case studies, we'll showcase how leading Saudi food delivery apps are leveraging AI to redefine convenience, personalization, and efficiency.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
What do a Lego brick and the XZ backdoor have in common? (Speck&Tech)
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the case of the XZ backdoor share much more than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several events, migrations, and training activities related to LibreOffice. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko she cultivates her curiosity about astronomy (which is where her nickname, deneb_alpha, comes from).
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence (IndexBug)
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides from Nordic Testing Days, 6 June 2024.
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize our carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.