The document discusses Hewlett-Packard's latest server hardware and management solutions. It introduces the HP ProLiant Gen8 servers which feature improved performance, capacity, and energy efficiency compared to previous generations. New capabilities of the HP iLO 4 management engine and HP SUM software allow for easier remote administration and firmware updates. The document also highlights enhanced storage features such as predictive failure detection and automated data mirroring in HP's Smart Array controllers.
The document discusses plans to establish an institutional high performance computing (HPC) facility at North-West University. It outlines the technical goals of building a Beowulf cluster to link existing departmental clusters and integrate with national and international computational grids. It also discusses management principles for the new HPC facility to ensure sustainability, efficiency, reliability, availability and high performance.
Using a Field Programmable Gate Array to Accelerate Application Performance
Odinot Stanislas
Intel is taking a close interest in FPGAs, and in particular in the potential they offer when ISVs and developers have very specific needs in genomics, image processing, database processing, and even in the cloud. In this document you will learn more about our strategy, and about a research program launched by Intel and Altera involving Xeon E5 processors equipped with... FPGAs.
Author(s):
P. K. Gupta, Director of Cloud Platform Technology, Intel Corporation
In this deck from the Argonne Training Program on Extreme-Scale Computing 2019, Howard Pritchard from LANL and Simon Hammond from Sandia present: NNSA Explorations: ARM for Supercomputing.
"The Arm-based Astra system at Sandia will be used by the National Nuclear Security Administration (NNSA) to run advanced modeling and simulation workloads for addressing areas such as national security, energy and science.
"By introducing Arm processors with the HPE Apollo 70, a purpose-built HPC architecture, we are bringing powerful elements, like optimal memory performance and greater density, to supercomputers that existing technologies in the market cannot match,” said Mike Vildibill, vice president, Advanced Technologies Group, HPE. “Sandia National Laboratories has been an active partner in leveraging our Arm-based platform since its early design, and featuring it in the deployment of the world’s largest Arm-based supercomputer, is a strategic investment for the DOE and the industry as a whole as we race toward achieving exascale computing.”
Watch the video: https://wp.me/p3RLHQ-l29
Learn more: https://insidehpc.com/2018/06/arm-goes-big-hpe-builds-petaflop-supercomputer-sandia/
and
https://extremecomputingtraining.anl.gov/agenda-2019/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
CFD acceleration with FPGA (byteLAKE's presentation from PPAM 2019)
byteLAKE
byteLAKE's presentation from the PPAM 2019 conference.
Abstract:
The goal of this work is to adapt four CFD kernels to the Xilinx ALVEO U250 FPGA: the first-order step of the non-linear iterative upwind advection MPDATA scheme (non-oscillatory forward-in-time), the divergence part of the matrix-free linear operator formulation in the iterative Krylov scheme, the tridiagonal Thomas algorithm for vertical matrix inversion inside the preconditioner for the iterative solver, and the computation of the pseudovelocity for the second pass of the upwind algorithm in MPDATA. All the kernels use a 3-dimensional compute domain consisting of 7 to 11 arrays. Since all the kernels are memory-bound, our main challenge is to achieve the highest possible utilization of global memory bandwidth. Our adaptation reduces the execution time by up to 4x.
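Of the four kernels, the tridiagonal Thomas algorithm is the most self-contained. A minimal scalar sketch in plain Python (illustrative only, not byteLAKE's FPGA implementation) looks like this:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system via the Thomas algorithm.

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Returns the solution vector x.
    """
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward sweep: eliminate the sub-diagonal.
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The sequential dependency in both sweeps is exactly what makes this kernel memory-bound and non-trivial to pipeline on an FPGA.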
Find out more at: www.byteLAKE.com/en/CFD
Footnote:
This presentation covers the non-AI version of byteLAKE's CFD kernels, highly optimized for the Alveo FPGA. Based on this research project and many others in the CFD space, we decided to shift the course of CFD Suite product development and leverage AI to accelerate computations and enable new possibilities. Instead of adapting CFD solvers to accelerators, we use AI and work on a cross-platform solution. More on the latest: www.byteLAKE.com/en/CFDSuite.
-
Update for 2020: byteLAKE is currently developing CFD Suite, a collection of AI (Artificial Intelligence) models to accelerate and enable new features for CFD simulations. It is a cross-platform solution (not only for FPGAs). More: www.byteLAKE.com/en/CFDSuite.
This document discusses OpenCAPI acceleration using the OpenCAPI Acceleration Framework (oc-accel). It provides an overview of the oc-accel components and workflow, benchmarks the OC-Accel bandwidth and latency, and provides examples of how to fully utilize OC-Accel capabilities to accelerate functions on an FPGA. The document also outlines the OC-Accel development process and previews upcoming features like support for ODMA to port existing PCIe accelerators to OpenCAPI.
In this deck from the UK HPC Conference, Gunter Roeth from NVIDIA presents: Hardware & Software Platforms for HPC, AI and ML.
"Data is driving the transformation of industries around the world and a new generation of AI applications are effectively becoming programs that write software, powered by data, vs by computer programmers. Today, NVIDIA’s tensor core GPU sits at the core of most AI, ML and HPC applications, and NVIDIA software surrounds every level of such a modern application, from CUDA and libraries like cuDNN and NCCL embedded in every deep learning framework and optimized and delivered via the NVIDIA GPU Cloud to reference architectures designed to streamline the deployment of large scale infrastructures."
Watch the video: https://wp.me/p3RLHQ-l2Y
Learn more: http://nvidia.com
and
http://hpcadvisorycouncil.com/events/2019/uk-conference/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
SCFE 2020 OpenCAPI presentation as part of OpenPOWER Tutorial
Ganesan Narayanasamy
This document introduces hardware acceleration using FPGAs with OpenCAPI. It discusses how classic FPGA acceleration has issues like slow CPU-managed memory access and lack of data coherency. OpenCAPI allows FPGAs to directly access host memory, providing faster memory access and data coherency. It also introduces the OC-Accel framework that allows programming FPGAs using C/C++ instead of HDL languages, addressing issues like long development times. Example applications demonstrated significant performance improvements using this approach over CPU-only or classic FPGA acceleration methods.
OpenPOWER Acceleration of HPCC Systems
HPCC Systems
JT Kellington (IBM) and Allan Cantle (Nallatech) present at the 2015 HPCC Systems Engineering Summit Community Day on porting HPCC Systems to the POWER8-based ppc64el architecture.
The IBM Power System AC922 is a high-performance server designed for supercomputing and AI workloads. It features IBM's POWER9 CPUs, NVIDIA Tesla V100 GPUs connected via NVLink 2.0, and a high-speed Mellanox interconnect. The AC922 delivers high memory bandwidth, GPU computing power, and optimized hardware and software for workloads like deep learning. Several of the world's most powerful supercomputers, including Summit and Sierra, use large numbers of AC922 nodes to achieve exascale-level performance for scientific research.
TitanIC presented, "ODSA Use Case - SmartNIC," at the ODSA Workshop. The charter of the ODSA (Open Domain-Specific Architecture) Workgroup is to define an open specification that enables building Domain Specific Accelerator silicon from best-of-breed industry components made available as chiplet dies, which can be integrated like Lego blocks on an organic substrate packaging layer. The resulting multi-chip module (MCM) silicon can be produced at significantly lower development and manufacturing costs, and will deliver much-needed performance-per-watt and performance-per-dollar efficiencies in networking, security, machine learning, and other applications. The ODSA Workgroup also intends to deliver implementations of the specification as board-level prototypes, RTL code, and libraries.
The document describes Oracle's new SPARC T4 servers, which provide up to 5x better single-threaded performance than previous SPARC servers. The SPARC T4 servers are optimized for Oracle software like the Oracle Database and WebLogic Suite. They include integrated security features like encryption without performance penalties. The document provides an overview of the SPARC T4 processor architecture and performance advantages, and describes how the new servers are optimized solutions for running Oracle applications.
A Dataflow Processing Chip for Training Deep Neural Networks
inside-BigData.com
In this deck from the Hot Chips conference, Chris Nicol from Wave Computing presents: A Dataflow Processing Chip for Training Deep Neural Networks.
Watch the video: https://wp.me/p3RLHQ-k6W
Learn more: https://wavecomp.ai/
and
http://www.hotchips.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The document provides an overview of the PCI Express system architecture. It discusses the architectural perspective of PCI Express including how it maintains backwards compatibility with PCI/PCI-X while improving performance through serial point-to-point connectivity and packet-based transactions. It also covers the PCI Express transaction model and types, including memory, I/O, configuration and message transactions, as well as posted and non-posted transaction types.
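The posted vs. non-posted split described above can be summarized in a short sketch. The classification itself follows the PCI Express transaction model (memory writes and messages are posted; reads and all I/O and configuration transactions require a completion), while the helper name is hypothetical:

```python
# Posted requests are fire-and-forget: no completion TLP is returned.
POSTED = {"MemWr", "Msg"}
# Non-posted requests require a completion TLP from the completer.
NON_POSTED = {"MemRd", "IORd", "IOWr", "CfgRd", "CfgWr"}

def needs_completion(tlp_type: str) -> bool:
    """Return True if the requester must wait for a completion TLP."""
    if tlp_type in POSTED:
        return False
    if tlp_type in NON_POSTED:
        return True
    raise ValueError(f"unknown TLP type: {tlp_type}")
```

This distinction matters for performance: posted writes can stream back-to-back, while non-posted reads tie up a tag until the completion arrives.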
This document discusses how HPC infrastructure is being transformed with AI. It summarizes that cognitive systems use distributed deep learning across HPC clusters to speed up training times. It also outlines IBM's hardware portfolio expansion for AI training, inference, and storage capabilities. The document discusses software stacks for AI like Watson Machine Learning Community Edition that use containers and universal base images to simplify deployment.
NXP presented, "ODSA Workshop: Development Effort Summary," at the ODSA Workshop. The charter of the ODSA (Open Domain-Specific Architecture) Workgroup is to define an open specification that enables building Domain Specific Accelerator silicon from best-of-breed industry components made available as chiplet dies, which can be integrated like Lego blocks on an organic substrate packaging layer. The resulting multi-chip module (MCM) silicon can be produced at significantly lower development and manufacturing costs, and will deliver much-needed performance-per-watt and performance-per-dollar efficiencies in networking, security, machine learning, and other applications. The ODSA Workgroup also intends to deliver implementations of the specification as board-level prototypes, RTL code, and libraries.
This document lists over 80 world records set by AMD EPYC 7002 series processors across various computing workloads and benchmarks. These records include the highest performance and efficiency in big data analytics, cloud computing, virtualization, enterprise applications, high performance computing, and more. All records were verified as of April 14, 2020 and additional details can be found at AMD.com/worldrecords.
"OpenHPC is a collaborative, community effort that initiated from a desire to aggregate a number of common ingredients required to deploy and manage High Performance Computing (HPC) Linux clusters including provisioning tools, resource management, I/O clients, development tools, and a variety of scientific libraries. Packages provided by OpenHPC have been pre-built with HPC integration in mind with a goal to provide re-usable building blocks for the HPC community. Over time, the community also plans to identify and develop abstraction interfaces between key components to further enhance modularity and interchangeability. The community includes representation from a variety of sources including software vendors, equipment manufacturers, research institutions, supercomputing sites, and others."
Watch the video: http://wp.me/p3RLHQ-gKz
Learn more: http://openhpc.community/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The document discusses HP Moonshot, a new server architecture from Hewlett-Packard. It is described as the world's first software-defined server, which uses dense cartridge-based servers in a single chassis for increased efficiency. The Moonshot system is said to provide up to 77% lower costs, 80% less space, 97% less complexity, and 89% less energy compared to traditional servers. It allows for flexible deployment of workloads on optimized cartridges. HP Cloud OS software is also introduced to simplify cloud service delivery on the Moonshot platform.
New Generation of IBM Power Systems Delivering value with Red Hat Enterprise ...
Filipe Miranda
New Generation of IBM Power Systems Delivering value with Red Hat Enterprise Linux - Learn about the new IBM POWER8 architecture, Red Hat Enterprise Linux 7 for Power Systems, and EnterpriseDB guidance on migrating from Oracle to PostgreSQL.
UPDATED!
This document provides an introduction to high-performance computing (HPC) including definitions, applications, hardware, and software. It defines HPC as utilizing parallel processing through computer clusters and supercomputers to solve complex modeling problems. The document then describes typical HPC cluster hardware such as computing nodes, a head node, switches, storage, and a KVM. It also outlines cluster management software, job scheduling, and parallel programming tools like MPI that allow programs to run simultaneously on multiple processors. An example HPC cluster at SIU called Maxwell is presented with its technical specifications and a tutorial on logging into and running simple MPI programs on the system.
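To illustrate how job scheduling and MPI fit together on such a cluster, a minimal batch script might look like the sketch below. This assumes a Slurm scheduler; the partition name, node counts, and program name are all hypothetical:

```shell
#!/bin/bash
#SBATCH --job-name=mpi-hello      # job name shown in the queue
#SBATCH --partition=compute       # hypothetical partition name
#SBATCH --nodes=2                 # number of compute nodes
#SBATCH --ntasks-per-node=4      # MPI ranks per node
#SBATCH --time=00:05:00           # wall-clock limit

# Launch 8 MPI ranks (2 nodes x 4 tasks) of a hypothetical MPI program.
srun ./mpi_hello
```

The scheduler allocates the nodes, and the MPI launcher starts one copy of the program per rank so they run simultaneously, as described above.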
The IBM POWER10 processor represents the 10th generation of the POWER family of enterprise computing engines. Its performance is a result of both powerful processing cores and high-bandwidth intra- and inter-chip interconnect. POWER10 systems can be configured with up to 16 processor chips and 1920 simultaneous threads of execution. Cross-system memory sharing, through the new Memory Inception technology, and 2 Petabytes of addressing space support an expansive memory system. The POWER10 processing core has been significantly enhanced over its POWER9 predecessor, including a doubling of vector units and the addition of an all-new matrix math engine. Throughput gains from POWER9 to POWER10 average 30% at the core level and three-fold at the socket level. Those gains can reach ten- or twenty-fold at the socket level for matrix-intensive computations.
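As a sanity check on the figures quoted above, the 1920-thread count is consistent with 16 chips, each running 15 active cores in SMT8 mode (the per-chip core count is an assumption here, not stated in the text):

```python
# Arithmetic sketch: how 16 POWER10 chips reach 1920 hardware threads.
chips = 16
cores_per_chip = 15   # assumed number of active cores per chip
threads_per_core = 8  # SMT8 mode
total_threads = chips * cores_per_chip * threads_per_core
print(total_threads)  # matches the 1920 simultaneous threads quoted above
```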
AMD is introducing two new x86 CPU cores called "Bulldozer" and "Bobcat". Bulldozer is aimed at mainstream client and server markets, featuring improved performance and scalability. It uses a shared/dedicated resource approach to increase performance per watt. Bobcat is optimized for low power markets like cloud clients. It features a smaller, more efficient design to deliver high performance while minimizing power consumption and die size.
This document discusses techniques for improving CPU access to data from IO devices, including Direct Cache Access (DCA), PCIe Transaction Layer Processing Hints (TPH), and Data Direct I/O (DDIO). DCA allows the CPU to access cache data from IO devices to avoid memory access, but requires driver intervention. PCIe TPH and DDIO aim to prefetch or retain IO write data in the cache without driver involvement. DDIO is specific to Intel platforms and accelerates local socket performance, while DCA does not differentiate sockets. The document provides details on how each technique functions and references further reading.
AMD and the new “Zen” High Performance x86 Core at Hot Chips 28
AMD
The document summarizes a presentation about AMD's new "Zen" x86 CPU core architecture. The Zen architecture provides a 40% increase in instructions per clock compared to previous cores through improvements in the core engine, caches, floating point capabilities, and the addition of simultaneous multithreading. The Zen core was designed from the ground up to optimize performance and power efficiency across applications from notebooks to supercomputers.
PCIe Gen 3.0 Presentation @ 4th FPGA Camp
FPGA Central
PCIe Gen3 presentation by PLDA at 4th FPGA Camp in Santa Clara, CA. For more details visit http://www.fpgacentral.com/fpgacamp or http://www.fpgacentral.com
Macromolecular crystallography is an experimental technique for exploring the 3D atomic structure of proteins, used by academics for research in biology and by pharmaceutical companies in rational drug design. While development of the technique has until now been limited by the performance of scientific instruments, computing performance has recently become a key limitation. In my presentation I will describe the computing challenge of handling the 18 GB/s data stream coming from the new X-ray detector. I will show PSI's experiences in applying conventional hardware to the task and why this attempt failed. I will then present how the IC922 server with OpenCAPI-enabled FPGA boards allowed us to build a sustainable and scalable solution for high-speed data acquisition. Finally, I will give a perspective on how advances in hardware development will enable better science by users of the Swiss Light Source.
In this deck from the HPC User Forum in Tucson, Jeff Stuecheli from IBM presents: POWER9 for AI & HPC.
"Built from the ground-up for data intensive workloads, POWER9 is the only processor with state-of-the-art I/O subsystem technology, including next generation NVIDIA NVLink, PCIe Gen4, and OpenCAPI."
Watch the video: https://wp.me/p3RLHQ-isJ
Learn more: https://www.ibm.com/it-infrastructure/power/power9
and
http://hpcuserforum.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
HP ProLiant Gen8 e-Series and p-Series | Convergence of Performance and Value
LoranWyman
From the core to the cloud, servers need to transform data center expectations and economics while delivering maximum performance, reliability, and simplified day-to-day management. This presentation focuses on the HP ProLiant 300 p-Series and 300 e-Series family of servers, with over 150 customer-driven innovations and built-in intelligence, built to meet the needs of any size of business.
The document provides information on the HPE ProLiant DL20 Gen10 Server, including:
- It is a 1U rack server powered by Intel Xeon E, Pentium, and Core i3 processors, offering flexibility and value.
- Standard features include Intel C242 chipset, up to 64GB memory, 1Gb Ethernet ports, and various storage options.
- It comes in various pre-configured models for entry, performance, and solution workloads.
The IBM Power System AC922 is a high-performance server designed for supercomputing and AI workloads. It features IBM's POWER9 CPUs, NVIDIA Tesla V100 GPUs connected via NVLink 2.0, and a high-speed Mellanox interconnect. The AC922 delivers high memory bandwidth, GPU computing power, and optimized hardware and software for workloads like deep learning. Several of the world's most powerful supercomputers, including Summit and Sierra, use large numbers of AC922 nodes to achieve exascale-level performance for scientific research.
TitanIC presented, "ODSA Use Case - SmartNIC," at the ODSA Workshop. The charter of the ODSA (Open Domain Specification Architecture) Workgroup is to define an open specification that enables building of Domain Specific Accelerator silicon using best-of-breed components from the industry made available as chiplet dies that can be integrated together as Lego blocks on an organic substrate packaging layer. The resulting multi-chip module (MCM) silicon can be produced at significantly lower development and manufacturing costs, and will deliver much needed performance per watt and performance per dollar efficiencies in networking, security, machine learning and other applications. The ODSA Workgroup also intends to deliver implementations of the specification as board-level prototypes, RTL code and libraries.
The document describes Oracle's new SPARC T4 servers, which provide up to 5x better single-threaded performance than previous SPARC servers. The SPARC T4 servers are optimized for Oracle software like the Oracle Database and WebLogic Suite. They include integrated security features like encryption without performance penalties. The document provides an overview of the SPARC T4 processor architecture and performance advantages, and describes how the new servers are optimized solutions for running Oracle applications.
A Dataflow Processing Chip for Training Deep Neural Networksinside-BigData.com
In this deck from the Hot Chips conference, Chris Nicol from Wave Computing presents: A Dataflow Processing Chip for Training Deep Neural Networks.
Watch the video: https://wp.me/p3RLHQ-k6W
Learn more: https://wavecomp.ai/
and
http://www.hotchips.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The document provides an overview of the PCI Express system architecture. It discusses the architectural perspective of PCI Express including how it maintains backwards compatibility with PCI/PCI-X while improving performance through serial point-to-point connectivity and packet-based transactions. It also covers the PCI Express transaction model and types, including memory, I/O, configuration and message transactions, as well as posted and non-posted transaction types.
This document discusses how HPC infrastructure is being transformed with AI. It summarizes that cognitive systems use distributed deep learning across HPC clusters to speed up training times. It also outlines IBM's hardware portfolio expansion for AI training, inference, and storage capabilities. The document discusses software stacks for AI like Watson Machine Learning Community Edition that use containers and universal base images to simplify deployment.
NXP presented, "ODSA Workshop: Development Effort Summary," at the ODSA Workshop. The charter of the ODSA (Open Domain Specification Architecture) Workgroup is to define an open specification that enables building of Domain Specific Accelerator silicon using best-of-breed components from the industry made available as chiplet dies that can be integrated together as Lego blocks on an organic substrate packaging layer. The resulting multi-chip module (MCM) silicon can be produced at significantly lower development and manufacturing costs, and will deliver much needed performance per watt and performance per dollar efficiencies in networking, security, machine learning and other applications. The ODSA Workgroup also intends to deliver implementations of the specification as board-level prototypes, RTL code and libraries.
This document lists over 80 world records set by AMD EPYC 7002 series processors across various computing workloads and benchmarks. These records include the highest performance and efficiency in big data analytics, cloud computing, virtualization, enterprise applications, high performance computing, and more. All records were verified as of April 14, 2020 and additional details can be found at AMD.com/worldrecords.
"OpenHPC is a collaborative, community effort that initiated from a desire to aggregate a number of common ingredients required to deploy and manage High Performance Computing (HPC) Linux clusters including provisioning tools, resource management, I/O clients, development tools, and a variety of scientific libraries. Packages provided by OpenHPC have been pre-built with HPC integration in mind with a goal to provide re-usable building blocks for the HPC community. Over time, the community also plans to identify and develop abstraction interfaces between key components to further enhance modularity and interchangeability. The community includes representation from a variety of sources including software vendors, equipment manufacturers, research institutions, supercomputing sites, and others."
Watch the video: http://wp.me/p3RLHQ-gKz
Learn more: http://openhpc.community/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The document discusses HP Moonshot, a new server architecture from Hewlett-Packard. It is described as the world's first software-defined server, which uses dense cartridge-based servers in a single chassis for increased efficiency. The Moonshot system is said to provide up to 77% lower costs, 80% less space, 97% less complexity, and 89% less energy compared to traditional servers. It allows for flexible deployment of workloads on optimized cartridges. HP Cloud OS software is also introduced to simplify cloud service delivery on the Moonshot platform.
New Generation of IBM Power Systems Delivering value with Red Hat Enterprise ... (Filipe Miranda)
New Generation of IBM Power Systems Delivering value with Red Hat Enterprise Linux - Learn about the new IBM Power8 architecture, about Red Hat Enterprise Linux 7 for Power Systems and additional information on EnterpriseDB on how to migrate from Oracle to PostgreSQL.
UPDATED!
This document provides an introduction to high-performance computing (HPC) including definitions, applications, hardware, and software. It defines HPC as utilizing parallel processing through computer clusters and supercomputers to solve complex modeling problems. The document then describes typical HPC cluster hardware such as computing nodes, a head node, switches, storage, and a KVM. It also outlines cluster management software, job scheduling, and parallel programming tools like MPI that allow programs to run simultaneously on multiple processors. An example HPC cluster at SIU called Maxwell is presented with its technical specifications and a tutorial on logging into and running simple MPI programs on the system.
The IBM POWER10 processor represents the 10th generation of the POWER family of enterprise computing engines. Its performance is a result of both powerful processing cores and high-bandwidth intra- and inter-chip interconnect. POWER10 systems can be configured with up to 16 processor chips and 1920 simultaneous threads of execution. Cross-system memory sharing, through the new Memory Inception technology, and 2 Petabytes of addressing space support an expansive memory system. The POWER10 processing core has been significantly enhanced over its POWER9 predecessor, including a doubling of vector units and the addition of an all-new matrix math engine. Throughput gains from POWER9 to POWER10 average 30% at the core level and three-fold at the socket level. Those gains can reach ten- or twenty-fold at the socket level for matrix-intensive computations.
AMD is introducing two new x86 CPU cores called "Bulldozer" and "Bobcat". Bulldozer is aimed at mainstream client and server markets, featuring improved performance and scalability. It uses a shared/dedicated resource approach to increase performance per watt. Bobcat is optimized for low power markets like cloud clients. It features a smaller, more efficient design to deliver high performance while minimizing power consumption and die size.
This document discusses techniques for improving CPU access to data from IO devices, including Direct Cache Access (DCA), PCIe Transaction Layer Processing Hints (TPH), and Data Direct I/O (DDIO). DCA allows the CPU to access cache data from IO devices to avoid memory access, but requires driver intervention. PCIe TPH and DDIO aim to prefetch or retain IO write data in the cache without driver involvement. DDIO is specific to Intel platforms and accelerates local socket performance, while DCA does not differentiate sockets. The document provides details on how each technique functions and references further reading.
AMD and the new “Zen” High Performance x86 Core at Hot Chips 28 (AMD)
The document summarizes a presentation about AMD's new "Zen" x86 CPU core architecture. The Zen architecture provides a 40% increase in instructions per clock compared to previous cores through improvements in the core engine, caches, floating point capabilities, and the addition of simultaneous multithreading. The Zen core was designed from the ground up to optimize performance and power efficiency across applications from notebooks to supercomputers.
PCIe Gen 3.0 Presentation @ 4th FPGA Camp (FPGA Central)
PCIe Gen3 presentation by PLDA at 4th FPGA Camp in Santa Clara, CA. For more details visit http://www.fpgacentral.com/fpgacamp or http://www.fpgacentral.com
Macromolecular crystallography is an experimental technique for exploring the 3D atomic structure of proteins, used by academics for research in biology and by pharmaceutical companies in rational drug design. While development of the technique has so far been limited by the performance of scientific instruments, computing performance has recently become a key limitation. In my presentation I will present the computing challenge of handling the 18 GB/s data stream coming from the new X-ray detector. I will show PSI's experience applying conventional hardware to the task and why this attempt failed. I will then present how the IC 922 server with OpenCAPI-enabled FPGA boards allowed us to build a sustainable and scalable solution for high-speed data acquisition. Finally, I will give a perspective on how advances in hardware development will enable better science by users of the Swiss Light Source.
In this deck from the HPC User Forum in Tucson, Jeff Stuecheli from IBM presents: POWER9 for AI & HPC.
"Built from the ground-up for data intensive workloads, POWER9 is the only processor with state-of-the-art I/O subsystem technology, including next generation NVIDIA NVLink, PCIe Gen4, and OpenCAPI."
Watch the video: https://wp.me/p3RLHQ-isJ
Learn more: https://www.ibm.com/it-infrastructure/power/power9
and
http://hpcuserforum.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
HP ProLiant Gen8 e-Series and p-Series | Convergence of Performance and Value (LoranWyman)
From the core to the cloud, servers need to transform data center expectations and economics while delivering maximum performance, reliability and simplified day to day management. Focused on the HP ProLiant 300 p-Series and 300 e-Series family of servers, with over 150 customer-driven innovations and built-in intelligence, and built to meet the needs of any sized business.
The document provides information on the HPE ProLiant DL20 Gen10 Server, including:
- It is a 1U rack server powered by Intel Xeon E, Pentium, and Core i3 processors, offering flexibility and value.
- Standard features include Intel C242 chipset, up to 64GB memory, 1Gb Ethernet ports, and various storage options.
- It comes in various pre-configured models for entry, performance, and solution workloads.
The document discusses HP's new mission-critical converged infrastructure solutions featuring Intel Itanium processors. Key announcements and enhancements include: new Integrity server blades providing up to 3x performance and 2x cores; the Integrity Superdome 2 with up to 256 cores and reduced TCO; and the Integrity rx2800 i4 server with increased performance and efficiency. HP estimates the new solutions can deliver over 30% savings in total IT costs over three years.
The document discusses the HP ProLiant DL580 Gen8 server. It provides details on the server's breakthrough 4S performance and scalability as well as its leading x86 availability and reliability. The document also summarizes the server's compelling efficiencies and enhanced capabilities over previous generations such as increased processor and memory performance, bandwidth, and storage capacities.
HPE ProLiant DL180 Generation9 (Gen9) (Sourav Dash)
The HPE ProLiant DL180 Gen9 is a 2U server designed for SMBs and enterprises needing balanced compute and storage capabilities. It offers expandability through up to two processors, sixteen DIMM slots, three PCIe slots, and sixteen hard drive bays. The server provides reliability, manageability, and flexibility through features like redundant power supply and fan options, iLO management, and various processor, memory, and storage configurations.
With the HPE ProLiant DL325 Gen10 server, Hewlett Packard Enterprise is extending the world's most secure industry-standard server product families. This secure and versatile single-socket (1P) 1U AMD EPYC™ based platform offers an exceptional balance of processor, memory and I/O for virtualization and data-intensive workloads. With up to 32 cores, up to 16 DIMMs, 2 TB memory capacity and support for up to 10 NVMe drives, this server delivers 2P performance with 1P economics. This datasheet includes features, port descriptions, a configuration guide and specifications for this series.
The document provides details about the new HPE ProLiant Gen11 servers featuring 4th Gen Intel Xeon Scalable processors. It summarizes the key features of four specific Gen11 server models - the HPE ProLiant DL320 Gen11 1U server optimized for edge computing; the HPE ProLiant DL360 Gen11 1U density optimized server; the HPE ProLiant DL380 Gen11 2U standard server; and the new HPE ProLiant DL380a Gen11 2U accelerator optimized server. For each model, it highlights the drive bays, memory, GPU support, storage controllers, and targeted workloads.
Inter connect2016 yps-2749_02232016_aspresented (Bruce Semple)
Turbo LAMP is a collaboration between IBM, Canonical, Zend, MariaDB, and Mellanox to optimize the LAMP stack (Linux, Apache, MySQL, PHP) for performance on IBM Power Systems. The partners worked to modernize and optimize the open source LAMP platform for IBM's POWER8 architecture. This provides faster and more efficient support for popular applications built on LAMP stacks, such as Magento, Drupal, SugarCRM, and WordPress. It also enables faster ROI by allowing clients and managed service providers to support more users and generate more revenue using fewer system resources.
DAOS - Scale-Out Software-Defined Storage for HPC/Big Data/AI Convergence (inside-BigData.com)
In this deck, Johann Lombardi from Intel presents: DAOS - Scale-Out Software-Defined Storage for HPC/Big Data/AI Convergence.
"Intel has been building an entirely open source software ecosystem for data-centric computing, fully optimized for Intel® architecture and non-volatile memory (NVM) technologies, including Intel Optane DC persistent memory and Intel Optane DC SSDs. Distributed Asynchronous Object Storage (DAOS) is the foundation of the Intel exascale storage stack. DAOS is an open source software-defined scale-out object store that provides high bandwidth, low latency, and high I/O operations per second (IOPS) storage containers to HPC applications. It enables next-generation data-centric workflows that combine simulation, data analytics, and AI."
Unlike traditional storage stacks that were primarily designed for rotating media, DAOS is architected from the ground up to make use of new NVM technologies, and it is extremely lightweight because it operates end-to-end in user space with full operating system bypass. DAOS offers a shift away from an I/O model designed for block-based, high-latency storage to one that inherently supports fine- grained data access and unlocks the performance of next- generation storage technologies.
Watch the video: https://youtu.be/wnGBW31yhLM
Learn more: https://www.intel.com/content/www/us/en/high-performance-computing/daos-high-performance-storage-brief.html
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Ibm symp14 referent_marcus alexander mac dougall_ibm x6 und flex system (IBM Switzerland)
This document discusses IBM's Flex System and PureSystem families of integrated infrastructure solutions. It provides an overview of the PureSystem portfolio and highlights how Flex System and PureFlex solutions deliver application platforms, infrastructure services, and data platforms in an integrated and optimized manner. It also describes key integration and design aspects of these solutions including virtualization, networking, storage, and systems management capabilities. The document includes details about IBM Flex System such as the Enterprise Chassis, its networking and storage expansion options, and supported compute nodes. It promotes the new IBM Flex System X6 as providing faster, more agile and resilient platforms optimized for analytics, virtualization, databases and other enterprise workloads.
This document discusses the benefits of using Linux on IBM Power systems servers. It claims that Power systems can reduce costs through higher performance, consolidation, and open source software like KVM and OpenStack. It seeks to dispel myths that Power systems are expensive, that virtualization is different, and that the architecture is closed. It provides examples of using Power systems with Linux to gain performance advantages for applications like SAP and databases through higher core counts, memory and bandwidth compared to x86 servers.
This document provides an overview of HP ProLiant servers, including the ML350 G5 tower server, ML370 G5 2-processor server, ML570 G4 4-processor server, DL380 G5 1U rack server, DL360 G5 1U rack server, and DL580 G4 4-processor rack server. It highlights the key features and specifications of each model, such as processor, memory, storage, expansion slots, and management capabilities. The document also reviews HP's Smart Array storage controllers and their positioning within the ProLiant server lineup.
The document discusses IBM's POWER8 technology, which features up to 12 cores per socket, 8 threads per core, larger caches, improved memory bandwidth and latency, integrated I/O subsystem and PCIe controller, and fine-grained power management. It provides details on IBM Power Systems such as the S814 and S824 servers that use POWER8, including their specifications, performance improvements over previous generations, and storage options.
Hear first about the new innovative technology coming from HP to help SMB IT professionals like you. See how this new technology can drive innovation in your own business, while delivering breakthrough energy, space and cost savings.
Speaker Bio: Julian has 30 years’ experience at HP, working with HP’s channel partners and customers across various hardware product sets, ranging from the initial Digital PCs through PDPs, MicroServers and VAXes to HP ProLiants, where he has been working for the past 10 years, overseeing Generation 1 through to the current Generation 8 models.
Ceph Day Beijing - Storage Modernization with Intel and Ceph (Danielle Womboldt)
The document discusses trends in data growth and storage technologies that are driving the need for storage modernization. It outlines Intel's role in advancing the storage industry through open source technologies and standards. A significant portion of the document focuses on Intel's work optimizing Ceph for Intel platforms, including profiling and benchmarking Ceph performance on Intel SSDs, 3D XPoint, and Optane drives.
Ceph Day Beijing - Storage Modernization with Intel & Ceph (Ceph Community)
The document discusses trends in data growth and storage technologies that are driving the need for storage modernization. It outlines Intel's role in advancing the storage industry through open source technologies and standards. Specifically, it focuses on Intel's work optimizing Ceph for Intel platforms, including performance profiling, enabling Intel optimized solutions, and end customer proofs-of-concept using Ceph with Intel SSDs, Optane, and platforms.
The PowerEdge R730 is a versatile 2U rack server that supports a wide range of demanding workloads. It features the latest Intel Xeon processors with up to 22 cores, support for up to 3TB of memory, and various storage and I/O expansion options. Dell's OpenManage systems management tools help to simplify and automate server lifecycle management tasks, improving efficiency. The R730 is suitable for applications such as databases, virtualization, and high performance computing.
I hosted a webcast with Sr. VP and GM of HP Storage David Scott. David and I talked about flash-optimized storage and the software defined data center. You can find the audio for the webcast at http://hpstorage.me/ASTB-podcasts - they are number 146 and 147.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by Rik Marselis and me at the 30.5.2024 DASA Connect conference. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI (Vladimir Iglovikov, Ph.D.)
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! (SOFTTECHHUB)
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
UiPath Test Automation using UiPath Test Suite series, part 5 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
2. 2 Footer Goes Here
Agenda
• Intelligent Provisioning
• HP SUM
• Active Health
• DL380 Gen8 / DL360 Gen8
• Gen8 Smart Storage
• Gen8 ROM
• iLO 4
• Insight Control
• Intelligent Power Discovery
• Location Discovery
Maintenance Page
– Includes the same configuration utilities usually available with SmartStart
• ACU/ADU, Diagnostics, ERASE, iLO Configuration
– New utilities include:
• AHS download
• Firmware Update
• Quick Configs
Service Pack for ProLiant overview
• FW DVD + PSP + HP SUM = SPP
• Full and subset SPP ISOs
− HP Service Pack for ProLiant
− HP Service Pack for ProLiant - BladeSystem Red Hat Enterprise Linux Pack
− HP Service Pack for ProLiant - BladeSystem SUSE Linux Enterprise Server Pack
− HP Service Pack for ProLiant - BladeSystem Microsoft Windows Server Pack
− HP Service Pack for ProLiant - ProLiant ML/DL/SL Red Hat Enterprise Linux Pack
− HP Service Pack for ProLiant - ProLiant ML/DL/SL SUSE Linux Enterprise Server Pack
− HP Service Pack for ProLiant - ProLiant ML/DL/SL Microsoft Windows Server Pack
• Supports ALL ProLiant servers*
• Predictable cadence – Aligns with HP ProLiant Server releases
• More operational stability
− Firmware and Software components are tested together as a solution stack
• No forced updates
− SPP support is now 1 year from when it is released
− Customers can update with hot fixes as needed
• Minimizes unplanned downtime – customer can determine when to perform updates and reboot
* See Server Support Guide for supported servers
(Diagram: drivers and firmware go through a joint testing process)
HP SUM 5.0.0 new features
• Improved GUI
• Multiple repositories
• Reports generated on demand
• Finding associated targets
• Dependency checking between targets
• Scheduling of updates and reboots
• Ability to reanalyze a target without leaving HP SUM
• Online VMware ESXi 5.0 firmware support
• Online Fibre Channel HBA firmware support
• CloudSystem Matrix recipe lockdown
• Integration of Integrity BL8x0c i2 and RX2880 servers
• Superdome 2 firmware updates
• OS auditing integration
• Update of Linux servers from Windows workstations
• Enclosure-level dependency management
• Ability to self-update
• Reduction in ports needed to perform updates
• Support for 6Gb SAS BL switches
• Support for legacy PSP and Firmware DVDs
• Support for SPPs
• Global force and reboot options
• Better testing and quality standards
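The dependency-checking features listed above amount to ordering updates so that prerequisites are applied first. A toy version of such a check is sketched below; all names are illustrative and this does not model HP SUM's actual interfaces.

```python
def order_updates(components):
    """Topologically order firmware/driver components so dependencies go first.

    `components` maps a component name to the list of components it depends on.
    Toy illustration of dependency checking between update targets; HP SUM's
    real logic and APIs are not modeled here.
    """
    ordered, done, in_progress = [], set(), set()

    def visit(name):
        if name in done:
            return
        if name in in_progress:
            raise ValueError(f"dependency cycle at {name}")
        in_progress.add(name)
        for dep in components.get(name, []):
            visit(dep)          # prerequisites are scheduled first
        in_progress.discard(name)
        done.add(name)
        ordered.append(name)

    for name in components:
        visit(name)
    return ordered
```

For example, with hypothetical components where a NIC driver depends on NIC firmware, which depends on iLO firmware, `order_updates` schedules the iLO firmware first.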
ISS Platform Firmware
Active Health System Detail
Improved resiliency to manage capacity growth
Increased automation and integration
HP SmartDrive
Carrier indicators: Activity Ring (spinner), Online, Fault, Locate backlight, Do Not Remove
– Self-describing icons
– Intuitive animation
– Increased drive information
SmartSSD Wear Gauge in ACU
Informational alerts:
• 5% life remaining
• 2% life remaining
• 56 days of life remaining
Smart Trip alert:
• 0% life remaining
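The alert thresholds above can be sketched as a simple lookup. This is a conceptual sketch only: the function name and the exact triggering logic are illustrative, not the ACU's actual implementation.

```python
def wear_gauge_alerts(percent_life_remaining, days_remaining=None):
    """Return the alerts a Smart Array utility might raise for an SSD.

    Conceptual sketch of the SmartSSD Wear Gauge thresholds listed above;
    the real ACU logic is not public, so names and details are illustrative.
    """
    alerts = []
    if percent_life_remaining <= 0:
        # Smart Trip: the drive has exhausted its rated write endurance.
        alerts.append("SMART TRIP: 0% life remaining - replace drive")
    elif percent_life_remaining <= 2:
        alerts.append("INFO: 2% life remaining")
    elif percent_life_remaining <= 5:
        alerts.append("INFO: 5% life remaining")
    if days_remaining is not None and days_remaining <= 56 and percent_life_remaining > 0:
        # Time-based estimate derived from the recent write rate.
        alerts.append("INFO: ~56 days of life remaining")
    return alerts
```

Usage: `wear_gauge_alerts(4)` raises only the 5%-remaining informational alert, while `wear_gauge_alerts(0)` raises the Smart Trip alert.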
Predictive Spare Activation
Automatically copies data to the spare disk on a predictive failure
(Diagram: RAID 5 logical drive on Drives 1–4 plus a spare; Drive 3 signals a predictive failure)
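The idea can be sketched in a few lines: on a predictive-failure alert the controller copies the suspect drive's data straight to the spare while the drive is still readable, instead of rebuilding from parity after a hard failure. Class and method names below are illustrative, not controller firmware.

```python
class Raid5Array:
    """Toy model of predictive spare activation (all names illustrative)."""

    def __init__(self, drives, spare):
        self.drives = {name: f"data-of-{name}" for name in drives}
        self.spare = spare
        self.log = []

    def on_predictive_failure(self, drive):
        # Drive is still readable: copy its contents directly to the spare.
        self.drives[self.spare] = self.drives.pop(drive)
        self.log.append(f"copied {drive} -> {self.spare} (no parity rebuild)")

    def on_hard_failure(self, drive):
        # Drive is gone: data must be reconstructed from parity (slower,
        # and the array runs degraded during the rebuild).
        self.drives.pop(drive)
        self.drives[self.spare] = "rebuilt-from-parity"
        self.log.append(f"rebuilt {drive} onto {self.spare} from parity")
```

The design point is that a direct copy avoids the degraded-mode window that a parity rebuild would open.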
Advanced Data Mirroring (SAAP 2.0)
RAID 1 (ADM) and RAID 10 (ADM):
• Highest fault tolerance available
• Mirrors data to 3 drives
• Array must contain a multiple of 3 drives
(Diagram: RAID 10 (ADM) stripes DATA 1/DATA 2 across three-way mirrors; RAID 1 (ADM) keeps DATA 1 on three drives)
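The fault-tolerance claim can be illustrated with a toy three-way mirror: every block lives on three drives, so reads still succeed after any two drive failures. This is a conceptual sketch, not Smart Array firmware.

```python
class TripleMirror:
    """Toy RAID 1 (ADM) model: each block is written to 3 drives."""

    COPIES = 3

    def __init__(self):
        self.drives = [dict() for _ in range(self.COPIES)]
        self.failed = set()

    def write(self, block_id, data):
        # Same data lands on all three drives of the mirror set.
        for drive in self.drives:
            drive[block_id] = data

    def fail_drive(self, index):
        self.failed.add(index)

    def read(self, block_id):
        # Any surviving copy will do; tolerates up to 2 failed drives.
        for index, drive in enumerate(self.drives):
            if index not in self.failed and block_id in drive:
                return drive[block_id]
        raise IOError("all three mirrors lost")
```

A two-way RAID 1 mirror loses data after two failures in the same set; the three-way (ADM) layout survives them, which is the "highest fault tolerance" point above.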
Move Logical Drives (SAAP 2.0)
Copy logical drives to unassigned disks – move logical drives from (examples):
• SATA to SAS disks
• 7K RPM to 15K RPM disks
• An old JBOD to a new JBOD
• 3G SAS to 6G SAS drives
(Diagram: a RAID 5 logical drive on Drives 1–3 being copied to unassigned disks)
Online Splitting of RAID 1+0 Mirror (SAAP 2.0)
Splits a RAID 1+0 mirror into 2 RAID 0 arrays (examples):
• Installing new software
• Cloning drives for new server deployments
The 2nd RAID 0 remains offline as a backup copy; the two RAID 0 arrays can later be recombined into a RAID 1+0.
(Diagram: an online RAID 1+0 on Drives 1–4 is split into an online RAID 0 on Drives 1–2 and an offline RAID 0 on Drives 3–4)
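The split/recombine cycle above can be sketched as operations on the two mirror halves: splitting yields an online RAID 0 and an untouched offline backup, and recombining resynchronizes the offline half from the online one. Function names are illustrative, not a controller API.

```python
def split_mirror(raid10):
    """Split a toy RAID 1+0 (dict of block id -> data) into two halves.

    Both halves start identical, mimicking the mirrored pairs; purely an
    illustration of the SAAP 2.0 feature described above.
    """
    online = dict(raid10)          # keeps serving I/O (e.g. a software install)
    offline_backup = dict(raid10)  # untouched fallback copy
    return online, offline_backup

def recombine(online, offline_backup):
    """Recombine: the offline half is resynchronized from the online half."""
    offline_backup.clear()
    offline_backup.update(online)  # the mirror is consistent again
    return online
```

The value of the pattern is that changes made after the split (a risky upgrade, say) never touch the offline half, so it remains a clean rollback point until the arrays are recombined.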
Do Not Remove Indicator
Goal: reduce logical drive failures due to user error
On = ejecting the drive results in a logical drive failure
Off = icon is hidden from view
Smart Array generation compare

Shipping Smart Array                        Gen8 Smart Array
Performance – 60,000 IOPS                   Performance improvement – 200,000 IOPS*
Cache size support up to 1GB FBWC           Cache size support up to 2GB FBWC
PCIe 2.0 (5 GT/s)                           PCIe 3.0 (8 GT/s)
Up to 108 drives                            Up to 227 drives
Failed drive spare activation               Predictive Spare Activation / failed drive spare activation
ACU – cache R/W ratio 25% increments        ACU – cache R/W ratio 5% granular increments
–                                           Fast Parity Initialization
SSD Wear Gauge                              SSD Wear Gauge
>3TB support (512n mode)                    >3TB support (512n mode)
–                                           Stripe size up to 1MB
Early Video Progress Indicators
In previous generations, initial video could take more than a minute to display, depending on the DIMM configuration.
Video is now displayed within a second of pressing the power button, providing:
• The perception of much faster boot times
• Better indication of where we are in the boot process if a system hangs
• Support for early fault detection messaging
Example display of Gen8 ProLiant System ROM Early Video Progress Indicators
– Indicates boot progress – would have been a “black screen” in previous generations
– Status code useful for service events
Mismatched DIMMs installed
Example: “Mismatched DIMMs installed” error
32. 32 Footer Goes Here
Digital signature
Gen8 System ROM is digitally signed using HP's Corporate Signing Service.
This signature:
• Is verified before the flash process occurs
• Prevents malicious efforts to corrupt the System ROM
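The "verify before flash" gate can be sketched as below. Note this is not HP's actual signing scheme: the real ROM uses an asymmetric corporate signature, while this stand-in uses an HMAC so the example stays self-contained, and all names are illustrative.

```python
# Simplified stand-in for signed-ROM verification: the image is only flashed
# if its signature checks out, so a corrupted or tampered image is rejected.
import hashlib
import hmac

SIGNING_KEY = b"corporate-signing-key"  # hypothetical shared key for this sketch

def sign_rom(image: bytes) -> bytes:
    """Produce a signature over the ROM image (signer side)."""
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def flash_rom(image: bytes, signature: bytes) -> str:
    """Verify the signature before flashing; reject mismatches."""
    expected = hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return "rejected: signature mismatch"
    return "flashed"

rom = b"\x55\xaaGen8 System ROM image"
sig = sign_rom(rom)
```

A flipped byte anywhere in the image changes the digest, so the flash attempt fails before any write occurs.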
33.
HP ROM Configuration Utility (HPRCU)
HP ROM Configuration Utility (HPRCU) option available for mass deployment of configuration settings with Insight Control
• Although CONREP is still supported, HPRCU provides improved support for mass deploying configuration settings because it is abstracted from new or modified options supported by the System ROM
• The biggest challenge with CONREP is the requirement to keep the CONREP.XML file in sync with the System ROM whenever a new revision of the System ROM modifies or supports new configuration options
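The CONREP drawback described above can be illustrated with a small check: the option-definition XML must enumerate every setting the ROM exposes, so a new ROM revision can silently leave it out of sync. The XML shape and option names below are invented for illustration, not the real CONREP.XML schema.

```python
# Detect ROM configuration options that a stale definition file doesn't cover.
import xml.etree.ElementTree as ET

CONREP_XML = """
<Conrep>
  <Section name="IMD_ServerName"/>
  <Section name="Boot_Mode"/>
</Conrep>
"""

def missing_options(conrep_xml: str, rom_options: set) -> set:
    """Return ROM options that the definition file does not know about."""
    root = ET.fromstring(conrep_xml)
    known = {s.get("name") for s in root.iter("Section")}
    return rom_options - known

# A new ROM revision exposes an option the old XML has never heard of:
rom_v2_options = {"IMD_ServerName", "Boot_Mode", "Power_Profile"}
stale = missing_options(CONREP_XML, rom_v2_options)
```

HPRCU avoids this maintenance burden because the utility discovers the supported options from the System ROM itself rather than from a separately shipped file.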
35.
Generational comparison

              iLO 2                    iLO 3                      iLO Management Engine (iLO 4)
CPU           66 MHz NEC V850          400 MHz ARM 926            400 MHz ARM 926
Memory        4MB Flash,               8MB Flash,                 16MB Flash,
              16MB SDRAM               128MB DDR3                 256MB DDR3
                                       (56MB after ECC & video)   (112MB after ECC & video)
Network       10/100 Mbps              10/100 Mbps                1 Gbps
USB           1.1                      2.0                        2.0
Video         ATI ES1000 (external)    Matrox G200 (embedded)     Matrox G200 (embedded)
              1280 x 1024 (32 bpp)     1280 x 1024 (32 bpp)       1280 x 1024 (32 bpp)
              1600 x 1200 (16 bpp)     1920 x 1200 (16 bpp)       1920 x 1200 (16 bpp)
DVR Color     12 bpp (4096 colors)     15 bpp (32,768 colors)     15 bpp (32,768 colors)
NAND Storage  —                        —                          4GB
36.
HP iLO – what's new?
The preferred IT Administrator in every ProLiant server
Building on the legacy of HP iLO innovations, HP iLO 4 adds:
– Immediate access from anywhere, anytime with the new iLO Mobile App
– Dedicated 1Gb network port
– Homogeneous server management experience across the datacenter
37.
HP Agentless Management
All core management out-of-band for increased security and stability
– Robust hardware monitoring and alerting capability without the complexity of OS-based agents
– SNMP agents and alerts now running from the HP iLO architecture; no impact on system performance
– OS config info and more coverage with Agentless Management Service
– Your choice! OS-based agents are still supported
38.
Register from HP iLO to Insight Remote Support
HP Confidential
– Seamless connection to HP Insight Online for an anytime, anywhere view of remotely monitored devices
– Easy sign-up and activation process
– No OS-installed agent
– Automated support case creation
– 24x7 phone-home capabilities with embedded Remote Support
40.
• High reporting resolution: 0.01A / 0.1V per core and managed ext. bar outlet monitoring, thresholds
• Voltage accuracy within 1% (200V–240V)
• Current accuracy within:
  • 1% above 1A
  • 3% from 400mA to 1A
  • 5% from 100mA to 400mA
  • 10% below 100mA
• VA, Watts and PF accuracy within:
• Data acquisition:
  • 0.5 second V, A, VA, Watts, PF data collection
  • Matches IPM daily data collection (5 min. interval collection in 24-hour buffer, NTP time synchronized)
• 177MHz ARM9 embedded Web, SNMP, XML (RIBCL)
• Add optional managed-outlet-controlled ext. bars to the core
• Remote outlet control and individual outlet power measurement
• Can be mixed with standard ext. bars on the same core
HP Intelligent PDU
Best-in-class power monitoring with embedded remote management
[Figure: PDU core with 6 or 12 C19 outlets, display, and Power/Alarm/Reset controls; Intelligent Extension Bar with 5 C13 monitoring and control outlets in 5U; Standard Ext. Bar]
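The tiered current-accuracy spec above (1% above 1A, 3% for 400mA–1A, 5% for 100mA–400mA, 10% below 100mA) maps naturally to a small lookup. This is our own illustrative helper, not PDU firmware; tier boundary handling at exactly 1A/400mA/100mA is an assumption.

```python
# Map a current reading to the worst-case accuracy tier stated in the slide.

def current_accuracy_pct(amps: float) -> float:
    """Return the stated worst-case measurement accuracy (%) for a reading."""
    if amps >= 1.0:
        return 1.0
    if amps >= 0.4:
        return 3.0
    if amps >= 0.1:
        return 5.0
    return 10.0

def reading_bounds(amps: float):
    """Worst-case low/high bounds implied by the accuracy tier."""
    tol = current_accuracy_pct(amps) / 100.0
    return amps * (1 - tol), amps * (1 + tol)
```

For example, a 2A outlet reading falls in the 1% tier, so the true current lies between roughly 1.98A and 2.02A in the worst case.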
41.
IPD Connectors
Backward compatible with existing C13/C14/C19/C20.
[Figure: C20 inline connector, C13–C14 cable, C14 power supply inlet, dual C19 PDU core outlets, C14 inline connector, C13 inline connector]
42.
Managed Ext. Bars Cable Management
Power cords:
• With cable management arms: left side 4.5’, right side 6’
• Without cable management arms: 2.5’ or 10’
43.
High Efficiency Power Supplies
– Common slot architecture for flexibility
– 94% Platinum level high efficiency
– Right-sized for the application: 460W, 750W, and 1200W
– Bright blue Power Line Communications connectors for use with HP Intelligent PDU
[Figure: PLC C-14 connectors; PLC jumper cable]
44.
Location Discovery Rack View
– Hover over a server to view server information
– Click an image to launch an iLO session
– Click an image to view power connection information
45.
Control
[Figure: Control screen showing outlet status and control, power cycle switch, UID LED switch, redundancy status, redundant outlets on Feed A and Feed B, and device names]
– Disable unused outlets (<10mA load)
– Redundancy management for automatic discovery and assisted discovery
– Manual entry devices: up to 20 outlets per device
46.
Location Discovery
47.
Where this is going
HP Data Center Smart Grid
Power Discovery, Location Discovery (1-wire), Thermal Discovery: a sea of sensors everywhere
• 3D datacenter visualization
• Datacenter-wide power capping
• Thermal and power aware application placement
• Automated problem mitigation
48.
Rack Interface Bus
[Figure: server ear with ear contacts and ear latch]
49.
Prototype: 42U rack strip, 7U rack module, and servers installed in rack module
50.
Prototype: server ears and rack contacts
51.
Location Discovery Rack View
– Hover over a server to view server information
– Click an image to launch an iLO session
– Click an image to view power connection information
Converged infrastructure is the optimal architecture of virtual pools of servers, storage, and networking to run your applications: virtualized pools of servers, storage, networking, and I/O give you the ability to match performance, throughput, and capacity to your applications, with resiliency built into the hardware, software, and operating environment. It is orchestrated so you can organize your workloads, optimized by design on industry standards, and modular to provide size, scale, and performance as needed. It rapidly deploys new resources and increases availability. Converged infrastructure works both in your installed base and for new greenfield opportunities; the architecture can be built by you onsite or built by HP onsite. Converged infrastructure improves application quality, performance, security, and availability:
• 31% reduction in outages due to production defects (Quality Center)
• 10% reduction in time to market for applications and business processes (Quality Center)
• 28% reduction in number and duration of performance degradations (Performance Center)
• 41% reduction in time for problem management due to application defects (Quality Center)
See backup slide for Intel performance projections for the E5-2600 series.
Notes supporting customer benefits (numbers footnoted above in slide), based on Intel published NDA benchmarks for Sandy Bridge X5690 vs. E5-2690 processors.
(Intel note: Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.)
• Supported with 32GB DDR3 LR-DIMM kits x 24 DIMMs = 768GB max memory for the DL380p Gen8 (on G7, the 32GB DIMM max was 12 DIMMs = 384GB memory)
• Additional internal storage capacities over G7 (total LFF storage capacity = 24TB SAS, 24TB SATA, 3.2TB SSD)
• Supporting SSD/SAS/SATA in SFF/LFF
• More I/O bandwidth to the processor resulting in lower latency (Gen8 = 40 lanes/processor, G7 = 24 lanes/processor)
• HP Gen8 Smart Arrays bring significant enhancement, starting with a 2x increase in performance (final results awaited)
• "5x faster to diagnose root cause": current gen = ~420 pieces of unique data available via RPS reporting, and on average it takes 24 minutes on the phone with a support tech to get the RPS report run. With Gen8 = ~1086 pieces of unique data, and with Active Health System it takes only 4 minutes. We can report more than twice the data in a fifth of the time. (Source: Scott Harsany)
Big architecture change from HP iLO 2 to HP iLO 3. We brought to our customers:
• 800% performance increase in Remote Console
• 360% performance increase in Virtual Media
• Dedicated (1 Gbps gigabit) or shared network port
Now, let's take a look at each individual functional area of the HP iLO Management Engine, starting with what's new with HP iLO:
• With HP iLO 4, customers will have a homogeneous server management experience across their datacenter with HP ProLiant Gen8 servers
• With HP iLO 4, customers will be able to turn on, manage, and turn off their servers "anytime, anywhere" from their smartphone (currently supporting iOS and Android operating systems)
• In addition to Japanese, customers can now choose Simplified Chinese as their language of choice in the HP iLO GUI
Script:
Let's move to MONITORING. With the HP iLO Management Engine in every HP ProLiant Gen8 server, the base hardware monitoring and alerting capability is built into the system (running on the HP iLO processor, not dependent on the OS) and starts working the moment a power cord and an Ethernet cable are connected to the server. It doesn't impact system performance, and it supports a complete separation of system management and data processing, not just on the LAN connections but also within the system itself.
For customers who want to enrich hardware management with OS information and alerting, we DO provide an optional Agentless Management Service: a small helper application that IS loaded into the operating system and routes OS management information and alerting to the iLO Management Engine, so that this information is also processed by iLO and routed over the management network. Many of our customers have been asking for this for a long time, and today is the day that HP makes it happen.
Value prop: increased security and stability, even when systems have not yet been powered on; detailed information that speeds time to issue diagnosis and resolution.
Script:
Finally, let's look at SUPPORT. The new HP Insight Remote Support function builds on the Insight Remote Support application, which can run either on a stand-alone central system or as a plug-in to HP Systems Insight Manager. Key parts of that technology are now part of the iLO Management Engine and are ready to start working for you after a few keystrokes on a single activation screen.
FYI: HP Insight Remote Support Software 7.0 automatically provides secure remote support for your HP ProLiant Gen8 servers, 24x7, so you can spend less time solving problems and more time focused on your business. Your systems can be remotely monitored for hardware failures using secure technology that has been proven at thousands of companies around the world; in many cases, you can avoid problems before they occur. The software allows you to view support and service status on your local HP Systems Insight Manager (HP SIM) screen, or on HP Insight Online, a web-based customized capability within the HP Support Center.
HP Insight Remote Support 7.0 is the ideal solution for environments ranging from small to mid-size sites with little or no IT staff up to IT environments with as many as 500 endpoint devices. It is easier to install via an installation advisor, works automatically with the integrated HP Systems Insight Manager (HP SIM) console on a physical or virtual host server, and is included with your HP warranty, HP Care Pack Service, or HP contractual support agreement. HP Insight RS v7 helps you do more with less.
The HP Intelligent Power Discovery solution includes:
• The Intelligent Power Distribution Unit (PDU) core. The Intelligent PDU core has six monitored 16-amp C19 outlets; each outlet represents one load segment. Intelligent PDU models are available in several capacities ranging from 24 amps to 48 amps. Models include North America and Japan single-phase and three-phase delta, and international three-phase wye configurations. The core can be mounted in the 1U rack space with breakers or outlets facing out, with up to 2 cores per U mounted in the front and back of the rack, or it can be mounted in the 0U space, leaving all the rack space available for IT equipment.
• The Intelligent Extension Bar. The Intelligent Extension Bar has five monitored 10-amp C13 outlets. A UID indicator is located on each outlet and on the Extension Bar to easily trace device power connections.
• The Status Display Module. A Status Display Module mounts on the door of the rack and connects to each Intelligent PDU using a standard DB9 male-to-female straight-through cable. The status display shows the current load for each PDU core load segment as well as the current for each outlet of an Intelligent Extension Bar, and the UID indicators on the Intelligent Extension Bars light when each load segment and outlet is viewed. It has a DB9 connector for serial setup and configuration. Pressing the Load Segment button on the display module changes the load segment number, shows its current value, and lights the Intelligent Extension Bar's UID indicator; the Intelligent Extension Bar outlet current values are shown in the Outlet position, and pressing the Outlet button changes the outlet number, shows its current value, and lights its UID indicator. The display is modular so it can be mounted on the door of the rack for easy viewing. There is an alarm LED on both the face and the back of the display so that it can be seen even when the rack doors are closed: an operator can see a problem in a row of racks without opening the doors, and if there is an alarm, can open the door and quickly find which outlet or core load segment caused it.
• The Standard Extension Bar. The Standard Extension Bar has five 10-amp unmonitored C13 outlets and a load segment UID indicator.
The HP Intelligent PDU has best-in-class power monitoring. It collects data twice per second and is accurate within 1% at 1 amp and above, within 3% from 400mA to 1A, within 5% from 100mA to 400mA, and within 10% below 100mA. It can measure accurately within 1 watt from 1 watt and above.
The blue connectors signify the data communication capability of the equipment for intelligent power discovery. All core and Extension Bar connectors are fully compatible with standard IEC C13, C14, C19, and C20 connectors for power delivery.
Unlike bulky monolithic power strips the core and Extension Bar design maintains clearance outside the RETMA rails to access hot swap components such as power supplies. Purchase and install only the number of Extension Bars needed for the number of C13 outlets required in a rack instead of buying unused outlets.
Power jumper cords with embedded RS-232 power line communications. The jumper cords with blue Power Line Communication connectors are fully compatible with standard IEC C13 and C14 connectors. HP PLC-compliant Common Slot Power Supplies: HP common-slot 94% efficient power supplies are rated at 460W, 750W, and 1200W. These power supplies have a blue Power Line Communication connector that is fully compatible with standard IEC C13 connectors for power delivery. The power supply labels are marked with the letters PLC.
Select the Control option in the Menu to view the redundancy status of each device. The Control screen displays the device names and the outlets on each Feed A and Feed B load segment to which the devices are connected. The Redundancy Status column displays proper redundant connections with a green icon and improper or non-redundant connections with a red icon. Non-redundant connections include a single connection to one PDU or two connections to the same PDU. The Power Control column buttons display and control the power status—on or off—of individual or multiple redundantly controlled outlets. The Power Cycle Control column includes a button that allows power to be turned off and then on after 30 seconds simultaneously for all the outlets of each device. Cycling redundant outlets simultaneously prevents boot errors and power supply errors when the device is re-booted. The button works similarly for manually entered devices, such as a third-party device, for which the power connections are properly configured. The UID Control column contains a button that allows a UID LED to be turned on for an individual Extension Bar, for an individual outlet, or multiple redundant outlets.
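The simultaneous power-cycle behavior described above can be sketched as follows. This is a hedged illustration of the logic, not the PDU firmware API: all of a device's redundant outlets switch off together, then back on together after a delay (30 seconds in the text), so the device never reboots on one feed only.

```python
# Cycle every redundant outlet of a device as a group: off, wait, then on.
import time

def power_cycle(outlets, delay_s=30, sleep=time.sleep):
    """Turn every outlet off, wait, then turn every outlet back on together."""
    events = []
    for outlet in outlets:
        events.append(("off", outlet))   # both feeds drop at the same time
    sleep(delay_s)                       # device fully powers down
    for outlet in outlets:
        events.append(("on", outlet))    # both feeds return together
    return events

# Device fed redundantly from Feed A outlet 3 and Feed B outlet 3
# (sleep stubbed out so the example runs instantly):
log = power_cycle(["A3", "B3"], delay_s=30, sleep=lambda s: None)
```

Cycling one feed at a time would leave the device briefly on a single supply, which is exactly the boot and power-supply error scenario the grouped cycle avoids.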
Now that we've done a quick overview of what you'll first see in the BL460 Gen8, let's jump into what's new for this generation of blade. The majority of the updates break down into five main areas: processors, memory, networking, storage, and management.
For processors, the BL460 Gen8 will offer the Intel Xeon E5-2600 family, with up to 8 cores. New to this generation of the BL460 is the ability to support processors up to 130 watts, a significant improvement over the 460 G7. One other processor update is support for the HP Smart Socket Guide technology.
For memory, the Gen8 version of the 460 now supports 16 DIMM slots instead of the 12 slots in the G7 version. In addition to more DIMM slots, we now support DIMMs at up to 1600 MHz. Another memory enhancement is support for the new HP SmartMemory technology, which delivers improved manageability and authentication capabilities.
The third area of updates is storage. For the 460 Gen8, the HP Smart Array P220i comes standard in all models, with 512MB of flash-backed write cache. The 460 Gen8 still supports two small-form-factor hot-plug hard drives, and those drives have an improved drive carrier and the new LEDs mentioned earlier. These new drives will be consistent across all Gen8 servers.
The fourth area is I/O. Customers have a choice of FlexLOMs for their two embedded I/O ports. We've listened to customers who requested more choices and flexibility in this area: whether they choose to deploy an Ethernet solution, a Flex-10 offering, or a FlexFabric solution, they can choose when they configure their blade server. For pre-configured BTO SKUs, we will stick with an Emulex FlexFabric solution.
Another I/O improvement is that the 460 Gen8 now has two x16 PCIe Gen3 mezzanine slots.
From a management perspective, HP has taken a step up with the new HP iLO 4 management engine and HP Service Pack for ProLiant. As part of the new iLO 4, you get Intelligent Provisioning, agentless management, and the HP Active Health System. There is one key item to call out on the management side: for the best possible HP BladeSystem and Virtual Connect experience, we require that the customer update the Onboard Administrator to version 3.50 or later and Virtual Connect to version 3.51 or later before inserting the ProLiant BL460c Gen8 server blade.
Next slide please.
The next area is the FlexLOM. In the BL460 G7, an integrated dual-port FlexFabric 10Gb adapter came standard on every unit. We heard the feedback that customers want to choose the networking solution that makes the most sense for them, and we now provide that choice. On any of the customized BL460 Gen8 blades, you can now select the network technology, speed, and OEM partner you prefer. This makes your blade deployments more "change ready", and you can update the technology as networking demands dictate. For any pre-configured BTO unit, we are staying with an Emulex FlexFabric solution.
Next slide please.
HP introduced a new Virtual Connect interconnect module and CNAs to enable VC FlexFabric on ProLiant server blades. The new interconnect is the HP Virtual Connect FlexFabric 10Gb/24-port Module. Remember, the CNA, or converged network adapter, replaces both the Ethernet NIC and the Fibre Channel HBA: it combines Ethernet frames with encapsulated Fibre Channel storage frames into one Ethernet stream of traffic. So separate NICs and HBAs are no longer required, just fewer CNAs.
HP CNAs will be available in two forms: an embedded CNA and a mezzanine card. As mentioned previously, HP expects to once again lead the industry in providing integrated network adapters, in this case CNAs, just as we did when we introduced G6 server blades with embedded dual-port 10Gb Flex-10 NICs. With next-generation HP server blades and embedded FlexFabric CNAs, you can literally buy less and connect more. And when used with Virtual Connect, you simply insert the blade, and the connection profile for that bay does all of the work to connect the blade using FCoE or iSCSI.
In addition, HP will introduce a mezzanine CNA card, the HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter, for use on currently shipping ProLiant BL G6 server blades. The FlexFabric CNA can also be used to add more FlexFabric connections on future-generation blades with embedded CNAs. Both the embedded and mezzanine CNAs provide dual 10Gb ports with up to four Ethernet, FCoE, and iSCSI connections.
The HP Virtual Connect FlexFabric Module and embedded FlexFabric CNAs extend HP's leadership in driving complexity and cost out of your data center infrastructure. It is a logical extension of the industry-leading HP Flex-10 technology that customers are now quickly adopting in production networks to consolidate their Ethernet interconnect infrastructure.
With the embedded FlexFabric CNA on next-generation server blades and the VC FlexFabric Module, HP will once again show its dedication to no-compromise solutions; in this case, it's convergence without compromise.
NOTE: The embedded CNA and CNA mezzanine cards will be based on Emulex technology. HP expects to introduce QLogic mezzanine cards in 2010, but customers should be advised that the initial CNAs will be based on Emulex.
NOTE: CNA mezzanine cards will not be supported on ProLiant G1 or G5 server blades, with the exception of the ProLiant BL680c G5 server blade.
KEY POINT: Introduce the HP Moonshot platform, the world's first software-defined server.
What you are looking at here is a huge leap forward in server design. For the last eighteen years, servers have been designed in what we call a general-purpose manner: we take the next release of server CPU on roughly an eighteen-month cadence, put it in our general-purpose servers, and run shrink-wrapped general-purpose applications. That's the way the industry has been operating. The HP Moonshot platform instead supports a range of solution cartridges that are tailored and optimized for specific workloads, and since these workloads scale out, this gives us the ability to provide significant power, space, and cost savings.
The HP Moonshot 1500 Chassis supports shared components, including power, cooling, management, and fabric, for 45 individually serviceable hot-plug solution cartridges.
Typical server design takes eighteen months from beginning to end. We knew we had to reduce that for our customers to be successful in this market, which led to the invention of software-defined solution cartridges. Application-specific cartridges can be designed at the rate of new technology, addressing the explosion of new types of specialized applications. And we don't achieve that alone: using the HP Pathfinder Innovation Network, we bring together leading technology partners delivering what they do best on the Moonshot platform. For you, this means access to the latest technology and solutions at a groundbreaking time-to-market cadence measured in months rather than years.