The document describes various high performance computing resources available for life science research through the e-BioGrid initiative including:
- The Life Science Grid which connects clusters across Dutch universities totaling over 3700 cores and distributed storage.
- A High Performance Cloud with 632 cores and 400 TB of storage for running virtual machines.
- The Huygens National Supercomputer with 3,456 cores and 15.75 TB of memory for large simulations.
- The LISA cluster with 4,480 cores and 12 TB of memory, popular for PBS batch jobs (a minimal submission sketch follows this list). Hadoop resources are planned to be added soon.
Researchers should apply for access through the NWO/NCF IRIS system and contact e-BioGrid.
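As a point of reference for the PBS-based clusters above, here is a minimal job-submission sketch in Python; the resource requests, script name, and executable are hypothetical placeholders that would need to match the site's queue policy.

```python
import subprocess
import textwrap

# A minimal PBS job script; the resource requests, job name, and
# executable are hypothetical placeholders.
script = textwrap.dedent("""\
    #!/bin/bash
    #PBS -l nodes=1:ppn=16
    #PBS -l walltime=01:00:00
    #PBS -N demo_job
    cd "$PBS_O_WORKDIR"
    ./my_analysis
""")

with open("demo.pbs", "w") as handle:
    handle.write(script)

# qsub is the standard PBS submission command; it prints the job ID.
subprocess.run(["qsub", "demo.pbs"], check=True)
```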
This document discusses architectural considerations for running big data workloads on OpenStack at Comcast. It provides an overview of Comcast's use of OpenStack, describes several big data use cases and application profiles at Comcast, and makes recommendations for using disaggregated or hyper-converged storage approaches for different applications like Kafka and HDFS. It also covers testing strategies, operational considerations, and choices around implementing HDFS and S3 object storage.
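The HDFS-versus-object-storage question in the summary above can be made concrete on the S3 side; the following is a minimal boto3 sketch with a hypothetical endpoint and bucket name, and the same calls work against most S3-compatible stores.

```python
import boto3

# Hypothetical endpoint and bucket; any S3-compatible object store works.
s3 = boto3.client("s3", endpoint_url="http://objectstore.example:9000")

s3.create_bucket(Bucket="telemetry")
s3.put_object(
    Bucket="telemetry",
    Key="events/2016-01-01.json",
    Body=b'{"event": "boot", "host": "node-1"}',
)

obj = s3.get_object(Bucket="telemetry", Key="events/2016-01-01.json")
print(obj["Body"].read())
```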
Architecture best practices and mistakes to avoid (Elasticsearch)
Grow with confidence. From deploying a small development node for application search to managing a large deployment of hundreds of nodes, our Elastic experts will tell you everything you need to know.
1) The Croatian National Grid Infrastructure (CRO NGI) equipment is old and chronically underfunded, with middleware that is obsolete and lacks maintenance.
2) The HR-ZOO project will upgrade Croatia's e-infrastructure by 2021 with funding from the European Regional Development Fund, providing advanced computing resources like 2 PFLOPs of HPC and 20,000 CPU cores for HTC across 6 data centers in 4 cities.
3) Plans are to build the HTC capacity to 20,000 CPU cores across 6 sites, provide 4 PB of storage, and hire additional e-scientists to support academic and research users and integrate services with EOSC-hub.
C* Summit EU 2013: Ontorion: Scalable Information Management (DataStax Academy)
Cognitum is an IT services company from Poland that specializes in data management. They have created Ontorion, a scalable semantic data platform that uses NoSQL databases like Cassandra for storing and querying large ontology repositories. Ontorion provides benefits like easier and faster searching, data modeling, high performance, reliability, and lower maintenance costs for managing semantic data at scale.
GlusterFS is an open-source clustered file system that aggregates disk resources from multiple servers into a single global namespace. It scales to several petabytes and thousands of clients. GlusterFS clusters storage over RDMA or TCP/IP, managing data through a unified namespace. Its stackable userspace design delivers high performance for diverse workloads.
This document discusses challenges and potential solutions for preserving 125 databases for access in the year 2080. It describes the Landesarchiv Baden-Württemberg archive and its oldest database from 1961 containing punched cards. The challenge presented is how to preserve the 125 databases from diverse origins for future use after 2080 while incurring no costs until then and keeping the contents private. Potential solutions discussed include exporting data to CSV or XML formats, taking disk/Docker images, or using a web crawler. The document analyzes the costs of each solution over time and invites others to join the discussion and workshop.
This document provides an overview of Circos, a software package for visualizing data in circular form. It discusses Circos' installation process, file distribution, and generation of Circos plots. Circos allows researchers and data analysts to represent data at different levels in a circular layout and visualize network flows.
ClusterVision specialises in high performance computing (HPC) solutions. We design, build, manage, and support supercomputers.
High performance computing accelerates scientific discovery. It is with this in mind that ClusterVision provides state-of-the-art, fully tailored HPC clusters to researchers and innovators all over Europe. With in-house software development and a large team of technical specialists (60% technical staff), we work to not just participate in HPC, but to advance the technology behind it.
By providing customised solutions accompanied by a comprehensive set of services and training, we ensure that our customers can take maximum advantage of HPC technology in order to expand knowledge and advance their respective fields of study.
Enabling Insight to Support World-Class Supercomputing (Stefan Ceballos, Oak Ridge National Laboratory) (confluent)
The Oak Ridge Leadership Facility (OLCF) in the National Center for Computational Sciences (NCCS) division at Oak Ridge National Laboratory (ORNL) houses world-class high-performance computing (HPC) resources and has a history of operating top-ranked supercomputers on the TOP500 list, including the world's current fastest, Summit, an IBM AC922 machine with a peak of 200 petaFLOPS. With the exascale era rapidly approaching, the need for a robust and scalable big data platform for operations data is more important than ever. In the past when a new HPC resource was added to the facility, pipelines from data sources spanned multiple data sinks which oftentimes resulted in data silos, slow operational data onboarding, and non-scalable data pipelines for batch processing. Using Apache Kafka as the message bus of the division's new big data platform has allowed for easier decoupling of scalable data pipelines, faster data onboarding, and stream processing with the goal to continuously improve insight into the HPC resources and their supporting systems. This talk will focus on the NCCS division's transition to Apache Kafka over the past few years to enhance the OLCF's current capabilities and prepare for Frontier, OLCF's future exascale system; including the development and deployment of a full big data platform in a Kubernetes environment from both a technical and cultural shift perspective. This talk will also cover the mission of the OLCF, the operational data insights related to high-performance computing that the organization strives for, and several use-cases that exist in production today.
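The decoupling pattern at the heart of that talk, producers publishing to topics with consumers subscribing independently, can be sketched with the confluent-kafka Python client; the broker address, topic, and payload below are hypothetical.

```python
from confluent_kafka import Producer, Consumer

BROKER = "kafka.example:9092"  # hypothetical broker address
TOPIC = "hpc-operations"       # hypothetical topic for operations data

# Producer side: a data source publishes without knowing who consumes.
producer = Producer({"bootstrap.servers": BROKER})
producer.produce(TOPIC, key="node-42", value='{"temp_c": 61.5}')
producer.flush()

# Consumer side: any number of pipelines subscribe independently.
consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "ops-dashboard",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])
msg = consumer.poll(timeout=10.0)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```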
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash University (OpenStack)
Audience Level
Intermediate
Synopsis
M3 is the latest generation system of the MASSIVE project, an HPC facility specializing in characterization science (imaging and visualization). Using OpenStack as the compute provisioning layer, M3 is a hybrid HPC/cloud system, custom-integrated by Monash’s R@CMon Research Cloud team. Built to support Monash University’s next-gen high-throughput instrument processing requirements, M3 is half GPU-accelerated and half CPU-only.
We’ll discuss the design and tech used to build this innovative platform as well as detailing approaches and challenges to building GPU-enabled and HPC clouds. We’ll also discuss some of the software and processing pipelines that this system supports and highlight the importance of tuning for these workloads.
Speaker Bio
Blair Bethwaite: Blair has worked in distributed computing at Monash University for 10 years, with OpenStack for half of that. Having served as team lead, architect, administrator, user, researcher, and occasional hacker, Blair’s unique perspective as a science power-user, developer, and system architect has helped guide the evolution of the research computing engine central to Monash’s 21st Century Microscope.
Lance Wilson: Lance is a mechanical engineer, who has been making tools to break things for the last 20 years. His career has moved through a number of engineering subdisciplines from manufacturing to bioengineering. Now he supports the national characterisation research community in Melbourne, Australia using OpenStack to create HPC systems solving problems too large for your laptop.
The SKA Project - The World's Largest Streaming Data Processor (inside-BigData.com)
In this presentation from the 2014 HPC Advisory Council Europe Conference, Paul Calleja from University of Cambridge presents: The SKA Project - The World's Largest Streaming Data Processor.
"The Square Kilometre Array Design Studies is an international effort to investigate and develop technologies which will enable us to build an enormous radio astronomy telescope with a million square meters of collecting area."
Watch the video presentation: http://wp.me/p3RLHQ-cot
This document discusses Oracle's hardware strategy and engineered systems. It highlights Oracle's engineered systems like Exadata, Exalogic, and SPARC SuperCluster which provide extreme performance, efficiency and lower costs compared to traditional systems. It also summarizes new Oracle SPARC server offerings like the SPARC T5-4, T5-8, and M5-32 and their suitability for mission critical Oracle databases and applications.
e-Infrastructure available for research, using the right tool for the right job (David Wallom)
This document provides an overview of e-infrastructure resources available for research. It describes what e-infrastructure is and its main components like data storage, software, hardware, networks, security, and people. It discusses different types of computational resources including supercomputers, parallel programming, high performance computing, distributed and shared memory models, and GPU computing. It also outlines institutional, regional, national, and international e-infrastructure resources in the UK like advanced computing centers, EPSRC regional centers, HECToR/ARCHER, and PRACE. Finally, it briefly discusses high throughput computing and examples of applications of e-infrastructure like virus analysis, fusion reactor modeling, and Alzheimer's disease research.
EPCC is a supercomputing centre at the University of Edinburgh that has been self-funded for over 28 years. It has over 110 staff and £5 million in annual turnover. EPCC supports multi-disciplinary research through access to its high performance computing facilities, training courses, and collaborative projects. It houses various supercomputing systems totaling over 150,000 CPU cores for researchers to use. EPCC also works with over 1000 companies through technology transfer and industrial collaborations in areas like simulation, data processing, and cloud computing. One example is its partnership with Rolls-Royce on a £15 million virtual gas turbine engine simulation project.
This document discusses enabling FPGA computing on hybrid platforms. It provides examples of Eurotech's HPC projects that utilize FPGAs, including APENet, JANUS-SSUE, and QPACE. APENet used FPGAs for a 3D-torus interconnect for clusters. JANUS-SSUE was a fully reconfigurable supercomputer. QPACE used FPGAs for a 3D-torus interconnect for lattice quantum chromodynamics computations on Cell processors. The document then introduces Aurora, Eurotech's solution for enabling FPGA-based acceleration in hybrid HPC systems through the Tigon node card architecture. It describes key features of Aurora including performance density, energy efficiency, and programmability.
The document provides an introduction to the National Supercomputing Centre (NSCC) high performance computing cluster. It describes the 1 petaflop system consisting of 1300 nodes and 13 petabytes of storage. The system uses PBS Pro for job scheduling and includes compilers, libraries, developer tools, and applications for engineering, science, and industry users from organizations such as A*STAR, NUS, and NTU.
IBM and ASTRON 64-Bit Microserver Prototype Prepares for Big Bang's Big Data... (IBM Research)
The document summarizes the IBM/ASTRON DOME 64-bit μServer Demonstrator project. It discusses the motivation to create a highly dense 64-bit microserver module for applications like radio astronomy and business analytics. The project aims to integrate an entire server node onto a single microchip, excluding memory and power components. It provides status updates on the development of the compute node boards using PowerPC and ARM processors, as well as the cooling and packaging design to integrate over 1,000 cores into a dense 2U chassis.
The document provides information about available HPC resources at CSUC. It summarizes the hardware facilities which include the Canigó and Pirineus II clusters totaling 3,888 cores and 391 TFlops of computing power. It describes the working environment including the Slurm workload manager, storage units, and development tools available. It also outlines how users can access the services through RES projects, pricing, and the EuroCC Spain testbed for national HPC competence.
Ozden Akinci is a Chief Engineer with over 15 years of experience in IT management, data center management, and HPC systems management. He has extensive experience administrating large HPC systems, managing IT teams and projects, and developing policies and procedures for effective IT infrastructure and services. His technical skills and qualifications include expertise in Linux/Unix systems administration, cluster computing, storage solutions, and high-performance applications.
SURFsara provides an HPC cloud service using OpenNebula 3.X to offer flexible computing resources for scientists. The service runs on 30 large compute nodes and 10 light nodes with a total of 800 TB of shared storage. Users can create their own isolated virtual environments, and the service has been applied in fields such as biology, genetics, and the social sciences. The implementation provides benefits like live migration, security, and ease of use compared to traditional HPC resources.
Marta de Mesa and Jesus Gironda, from Telvent, present the possibilities of applying Big Data beyond the private sector, for example in forecasting and planning internal resources at universities.
This presentation took place at TSIUC'14, held at the Universitat Autònoma de Barcelona on 2 December 2014, under the title "Reptes en Big Data a la universitat i la Recerca" (Challenges in Big Data at the university and in research).
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ... (inside-BigData.com)
In this deck from the Stanford HPC Conference, Nick Nystrom and Paola Buitrago provide an update from the Pittsburgh Supercomputing Center.
Nick Nystrom is Chief Scientist at the Pittsburgh Supercomputing Center (PSC). Nick is architect and PI for Bridges, PSC's flagship system that successfully pioneered the convergence of HPC, AI, and Big Data. He is also PI for the NIH Human Biomolecular Atlas Program’s HIVE Infrastructure Component and co-PI for projects that bring emerging AI technologies to research (Open Compass), apply machine learning to biomedical data for breast and lung cancer (Big Data for Better Health), and identify causal relationships in biomedical big data (the Center for Causal Discovery, an NIH Big Data to Knowledge Center of Excellence). His current research interests include hardware and software architecture, applications of machine learning to multimodal data (particularly for the life sciences) and to enhance simulation, and graph analytics.
Watch the video: https://youtu.be/LWEU1L1o7yY
Learn more: https://www.psc.edu/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big Data (Hitoshi Sato)
National Institute of Advanced Industrial Science and Technology (AIST) in Japan is focusing on bridging innovative technological seeds to commercialization. It currently lacks cutting-edge computing infrastructure dedicated to AI and big data that is openly available. The proposed AI Bridging Cloud Infrastructure (ABCI) project aims to provide a large-scale open AI infrastructure to accelerate joint academic-industry R&D for AI in Japan. ABCI will feature 1088 compute nodes with 4352 NVIDIA Tesla V100 GPUs providing 0.550 exaflops of AI performance, connected by an InfiniBand network and utilizing liquid cooling technologies. It will provide an open platform for AI research, applications, services, and infrastructure design through industry and academic collaboration.
The document provides a case study of the Kajaani Datacenter project by CSC. It summarizes that CSC built a new 1.4 MW modular datacenter and 1 MW high performance computing datacenter in Kajaani, Finland to address growing capacity needs, rising costs, and sustainability goals. The project leveraged the site's access to abundant and reliable renewable energy sources. Construction involved renovating a paper warehouse and installing prefabricated modular datacenter units along with a purpose-built liquid cooling system for high-density computing. Lessons learned emphasized the importance of thorough planning and integration across technical systems.
The document discusses CSC's high performance computing capabilities and developments from 2012-2014. It summarizes that CSC upgraded its Sisu and Taito systems in 2014 with new Intel Haswell CPUs, increasing cores by 50% and reducing energy usage. The upgrades boosted Sisu's performance to 1700 TFlops, making it the 37th most powerful system in the world. CSC now provides a total of 2.54 PFlops of computing power and is the most powerful academic computing facility in the Nordic countries.
High Memory Bandwidth Demo @ One Intel Station (Intel IT Center)
Revolutionizing System Memory Bandwidth
The document discusses the growing need for memory bandwidth in applications such as HPC, 8K video, networking, and radar. It notes that current discrete solutions cannot meet the bandwidth needs of next-generation applications. The document then introduces Intel's Stratix 10 MX DRAM System-in-Package (SiP) which integrates DRAM with an Intel Stratix 10 FPGA. This solution provides up to 512 GB/s of peak memory bandwidth, addressing the bandwidth challenge and making it widely applicable in fields like HPC, military, and communications.
Disrupt Hackers With Robust User Authentication (Intel IT Center)
This document discusses the growing concern of security breaches and the need for robust user authentication. It argues that hardware-based security can better protect against hacking by securing identity, data, and threat prevention in hardware below the software layer. The document presents Intel's solution, Intel Authenticate, as a hardware-based, IT policy-managed multi-factor authentication approach that protects authentication factors, credentials, and policies in hardware to provide comprehensive identity and access protection.
Strengthen Your Enterprise Arsenal Against Cyber Attacks With Hardware-Enhanc... (Intel IT Center)
Jim Gordon of Intel discusses how data has become the most valuable asset for companies across industries. He notes that while investment in areas like healthcare, manufacturing, and cybersecurity often yield positive returns, returns on investments in cybersecurity remain negative due to rising costs of cybercrime. However, hardware-enhanced security solutions from Intel can help change this result by providing more effective protection for devices, networks, user identities and data.
Harness Digital Disruption to Create 2022’s Workplace Today (Intel IT Center)
The document discusses trends in the modern workplace including increased remote work, mobility, and collaboration facilitated by technology. It highlights how 64% of millennials work somewhere other than their primary job site and 80-90% of the US workforce would like to work remotely at least part-time. It also notes that the PC remains the heart of organizations, with 95% of respondents saying it would be their preferred device. The document advocates that businesses harness digital disruption to create today's workplace by focusing on remote workers, idea spaces, smart offices, changing work styles, and security for a mobile world.
Don't Rely on Software Alone. Protect Endpoints with Hardware-Enhanced Security. (Intel IT Center)
Learn how security solutions built into Intel® Core™ vPro™ processors address top threat vectors. Our comprehensive approach to hardware-enhanced security starts with identity protection with Intel® Authenticate delivering customizable multi factor authentication options, and supports remote remediation with Intel® Active Management Technology.
Achieve Unconstrained Collaboration in a Digital World (Intel IT Center)
Technology is at the center of every digitally-savvy workplace, yet organizations are constrained with bridging current tools to more modern solutions. This session from Gartner Digital Workplace Summit will cover a new way to facilitate employee collaboration that is easy, engaging and gives IT an uncompromised security and management experience.
Intel® Xeon® Scalable Processors Enabled Applications Marketing Guide (Intel IT Center)
The Future-Ready Data Center platform is here. Whether you navigate in the High Performance Computing, Enterprise, Cloud, or Communications spheres, you will find an Intel® Xeon® processor that is ready to power your data center now and well into the future. An innovative approach to platform design in the Intel® Xeon® Scalable processor platform unlocks the power of scalable performance for today’s data centers—from the smallest workloads to your most mission-critical applications. Powerful convergence and capabilities across compute, storage, memory, network and security deliver unprecedented scale and highly optimized performance across a broad range of workloads—from high performance computing (HPC) and network functions virtualization, to advanced analytics and artificial intelligence (AI). Many examples here show how our software partner ecosystem has optimized their applications and/or taken advantage of inherent platform enhancements to deliver dramatic performance gains, that can translate into tangible business benefits.
#NABshow: National Association of Broadcasters 2017 Super Session Presentatio... (Intel IT Center)
At NAB, this session covered how technology will transform the way content is created and distributed and accelerate the rate of innovation in the industry. Intel, a revolutionary leader in technology and in transforming industries since 1968, works with other industry partners to enable the transition to new paradigms, infrastructures and technologies.
Join Jim Blakley, General Manager of Intel's Visual Cloud Division, and guests including Dave Ward (Chief Technology Officer, Cisco), AR Rahman (two-time Academy and Grammy Award winner), and Dave Andersen (School of Computer Science, Carnegie Mellon University) to learn more about how this revolution will make amazing visual cloud experiences possible for every person on Earth.
Making the digital workplace a reality requires a modern and strategic approach to identity protection. You will discover ways to build an IAM program that moves you from defense to offense. This presentation will offer practical guidance on how a hardware-based multi-factor authentication strategy is the future for identity protection.
Three Steps to Making a Digital Workplace a Reality (Intel IT Center)
The workplace is undergoing a dramatic evolution. Work styles are more mobile, changing the way we collaborate and share information while a more mobile workforce means a greater need to thwart cyber-attacks. You'll learn about Intel's three-part approach to help IT leaders sustainably embrace mobility and increase your security posture.
Three Steps to Making The Digital Workplace a Reality - by Intel’s Chad Const... (Intel IT Center)
The workplace is undergoing a dramatic evolution. Workstyles are more mobile, changing the way we collaborate and share information while a more mobile workforce means a greater need to thwart cyber-attacks. In this presentation, you'll learn about Intel's three-part approach to help IT leaders sustainably embrace mobility and increase your security posture.
Intel® Xeon® Processor E7-8800/4800 v4 EAMG 2.0 (Intel IT Center)
This set of Intel® Xeon® processor E7-8800/4800 v4 family proof points spans several key business segments. The Intel® Xeon® processor E7-8800/4800 v4 product family delivers the horsepower for real-time, high-capacity data analysis that can help businesses derive rapid actionable insights to deliver innovative new services and customer experiences. With high performance, industry’s largest memory, robust reliability, and hardware-enhanced security features, the E7-8800/4800 v4 is optimal for scale-up platforms, delivering rapid in-memory computing for today’s most demanding real-time data and transaction-intensive workloads.
Intel® Xeon® Processor E5-2600 v4 Enterprise Database Applications Showcase (Intel IT Center)
The Intel Xeon processor E5-2600 v4 product family delivers the high performance, increased memory, and I/O bandwidth required for all forms of enterprise databases, is ideal for next-generation application workloads, and is the powerhouse for software-defined infrastructure (SDI) environments where automation and orchestration capabilities are foundational. See how database solutions deployed on the Intel® Xeon® processor E5 v4 product family can deliver increased performance and throughput, as demonstrated by key software partners.
Intel® Xeon® Processor E5-2600 v4 Core Business Applications Showcase (Intel IT Center)
Designed for architecting next-generation, software-defined data centers, the Intel® Xeon® processor E5-2600 v4 product family is supercharged for efficiency, performance, and agile services delivery across cloud-native and traditional applications. Intel® Intelligent Power Technology automatically regulates power consumption to combine industry-leading energy efficiency with intelligent performance that adapts to your workloads.
Intel® Xeon® Processor E5-2600 v4 Financial Security Applications Showcase (Intel IT Center)
The Intel® Xeon® processor E5-2600 v4 product family delivers efficient resource utilization, service tiering, and optimal quality of service (QoS) levels for financial applications by processing faster transactions and delivering exceptional uptime and availability and reduced latency, providing a high-performing, highly scalable system for your most demanding workloads. Enhanced cryptographic speed with two new instructions for Intel® AES-NI for improved security, and the Intel® SSD Data Center Family for NVMe represents optimized management for the future software-defined data centers with industry standard software and drivers.
Intel® Xeon® Processor E5-2600 v4 Telco Cloud Digital Applications Showcase (Intel IT Center)
Cloud and telecommunication companies can deliver better end-user experiences while improving cost models across their data centers with the Intel® Xeon® processor E5-2600 v4 product family. See how innovative technologies can deliver high throughput, low latency, and more agile delivery of network services to the software-defined data center. It also offers unparalleled versatility across diverse workloads such as 4K video processing, editing, encoding, and decoding, where improved bandwidth and reduced latency provide noticeable performance improvements.
Intel® Xeon® Processor E5-2600 v4 Tech Computing Applications Showcase (Intel IT Center)
Where breakthrough performance is expected, the Intel® Xeon® processor E5-2600 v4 product family, a key ingredient of the Intel® Scalable System Framework and the software-defined data center, is designed to deliver better performance and performance per watt than ever before. The combination of Intel Xeon processors, Intel® Omni-Path Architecture, Intel Solutions for Lustre* software, and storage technologies improves bandwidth and reduces latency, providing a high-performing, highly scalable system for your most demanding workloads.
Taking AI to the Next Level in Manufacturing (ssuserfac0301)
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
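One lightweight guard-rail when prompting a model to emit markup, whatever tooling the presentation itself uses, is a well-formedness check on the output; a minimal sketch with Python's standard library follows, with an invented sample string standing in for model output.

```python
import xml.etree.ElementTree as ET

# Hypothetical model output; in practice this comes back from a prompt.
llm_output = "<article><title>Intro</title><para>Some text.</para></article>"

try:
    root = ET.fromstring(llm_output)  # raises ParseError if not well-formed
    print("well-formed; root element:", root.tag)
except ET.ParseError as err:
    print("model produced invalid XML:", err)
```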
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
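To give a flavor of programming rather than prompting, a minimal DSPy sketch might look like the following; it assumes a recent DSPy release and an OpenAI-compatible model, so treat the exact calls as illustrative rather than definitive.

```python
import dspy

# Assumes a recent DSPy release and an OpenAI-compatible key/endpoint.
lm = dspy.LM("openai/gpt-4o-mini")
dspy.configure(lm=lm)

# A signature declares typed inputs/outputs; DSPy owns the prompt text,
# so its optimizers can rewrite it without touching application code.
qa = dspy.ChainOfThought("question -> answer")
result = qa(question="Why program language models instead of prompting them?")
print(result.answer)
```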
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Monitoring and Managing Anomaly Detection on OpenShift (Tosin Akinosho)
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system (a minimal metrics sketch follows this list).
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
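Following up on topic 8 in the list above: application metrics are commonly exposed from Python with the prometheus_client library; the metric names, threshold, and port in this sketch are hypothetical, and the random score stands in for real model inference.

```python
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

# Hypothetical metric names for an anomaly-detection service.
ANOMALIES = Counter("anomalies_detected_total",
                    "Total anomalies flagged by the model")
LATEST_SCORE = Gauge("latest_anomaly_score", "Most recent anomaly score")

start_http_server(8000)  # serves /metrics for Prometheus to scrape

while True:
    score = random.random()  # stand-in for a real model inference
    LATEST_SCORE.set(score)
    if score > 0.95:
        ANOMALIES.inc()
    time.sleep(1.0)
```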
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence (IndexBug)
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Driving Business Innovation: Latest Generative AI Advancements & Success Story (Safe Software)
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Salesforce Integration for Bonterra Impact Management (fka Social Solutions Apricot) (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
OpenID AuthZEN Interop Read Out - Authorization (David Brossard)
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
What do a Lego brick and the XZ backdoor have in common? (Speck&Tech)
ABSTRACT: At first glance, what a Lego brick and the XZ backdoor have in common might seem to be that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the case of the XZ backdoor share much more than that.
Join the presentation to immerse yourself in a story of interoperability, standards, and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: An advocate for free software and for standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations, and training courses. She previously worked on LibreOffice migrations and training for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko she cultivates her curiosity about astronomy (which is where her nickname deneb_alpha comes from).
Building Production Ready Search Pipelines with Spark and Milvus (Zilliz)
Spark is a widely used ETL tool for processing, indexing, and ingesting data into the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data and extract vector representations, and how to push the vectors to the Milvus vector database for search serving.
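A minimal pymilvus sketch of the ingest-then-search half of that pipeline follows; the local Milvus URI, collection name, and random vectors are placeholders, and in the talk's setting the vectors would come out of a Spark job rather than a random generator.

```python
import random

from pymilvus import MilvusClient

# Hypothetical local Milvus instance and collection.
client = MilvusClient(uri="http://localhost:19530")
client.create_collection(collection_name="docs", dimension=8)

# Placeholder vectors; a Spark ETL job would produce these at scale.
rows = [{"id": i, "vector": [random.random() for _ in range(8)]}
        for i in range(100)]
client.insert(collection_name="docs", data=rows)

# Serve a nearest-neighbour query against the ingested vectors.
hits = client.search(collection_name="docs",
                     data=[rows[0]["vector"]], limit=3)
print(hits)
```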
Introduction of Cybersecurity with OSS at Code Europe 2024 (Hiroshi SHIBATA)
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
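The convert-then-run pattern described here is often realized with an interchange format such as ONNX; the sketch below uses onnxruntime with a hypothetical model file, input name, and shape, all of which would depend on the converted model.

```python
import numpy as np
import onnxruntime as ort

# Hypothetical converted model; input name and shape must match it.
session = ort.InferenceSession("model.onnx",
                               providers=["CPUExecutionProvider"])

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # fake input
outputs = session.run(None, {"input": frame})  # None = fetch all outputs
print(outputs[0].shape)
```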
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
3. The University of Paderborn
• Originally founded 1614
- re-established in 1972
- Universität Paderborn in 2002
• 18,800 students
• 60 under/post-graduate degrees
• 5 faculties
- Arts & Humanities
- Business administration
- Science
- Mechanical Engineering
- Electrical Engineering & Mathematics
• PC² Center for Parallel Computing ..
4. PC² Center for Parallel Computing
• Founded in 1991
• Integral interdisciplinary institute of the University of Paderborn
• Distributed & parallel high performance computing
• HPC research, development & applications
- Custom computing & many cores
- Middleware & system software
- Scalable storage
- Testbeds & benchmarking
• 5 HPC systems
• OCuLUS part of €4m investment
• “This system is a powerful compute resource for all researchers in the region of East Westphalia and Lippe, and our partners in Germany and Europe,” Prof. Dr. Holger Karl, head of the PC² board. ..
5.
• 614 compute nodes, 10,000 cores
- 552 ASUS E7 rack-mount compute servers
- each with two Intel Xeon E5-2670 2.6 GHz processors (16 cores per node)
- Smaller compute nodes have 64 GByte of main memory per node
- larger nodes are enhanced with 256 GByte of main memory
• 40 GPU nodes, ES4000
- Type 1: Intel Xeon E5-2670 processors with 32 NVIDIA Tesla K20 GPUs
- Type 2: Intel Xeon E5-2670 processors with 8 Intel Xeon Phi co-processors
• 20 Intel servers (256 GB)
• 2 SMP nodes
- Dell PowerEdge R820 servers with four 8-Core Xeon E5-4650 processors
• 6 front-end, management, and administration nodes
- Supermicro SuperServer 825TQ-R740WB and 1027GR-TRF chassis
• Fast QDR InfiniBand (40 Gbit/s) interconnect from Mellanox Technologies
• 14 42U Emerson/Knürr server racks
- 12 of which incorporate Knürr’s rear-door chilled-water cooling technology
• Bright Cluster Manager
• FraunhoferFS (FhGFS) parallel file-system …
ClusterVision Project: 90208
8. Build
• Off-site engineering Nov-2012
• 6-man on-site build
• Handover Feb-2013
• Official inauguration
- ZKI Supercomputing Mar-2013
• Post-delivery services
- support/maintenance for BCM/FhGFS
- multi-year critical hardware warranties …
“It is always exciting for our company to work on Top500-class systems like the new HPC cluster at Paderborn. Large-scale, complex systems like this understandably represent a showcase of possibility to the HPC community, both in academia and commercial enterprise, and enable our team to draw upon and demonstrate all of their experience in system design and their expertise in build and configuration,” Christopher Huggins, Commercial Director at ClusterVision.
9. ISC’13 Exhibitor Forum (02)
Tuesday 18 June, 11:00-11:30 hrs
Dipl. Bjoern Olausson
HPC System Architect, ClusterVision
Dr. Jens Simon
Technical Manager & Senior Research Assistant at PC²
ISC’13 Stand 520 www.clustervision.com