The document discusses plans to establish an institutional high performance computing (HPC) facility at North-West University. It outlines the technical goals of building a Beowulf cluster to link existing departmental clusters and integrate with national and international computational grids. It also discusses management principles for the new HPC facility to ensure sustainability, efficiency, reliability, availability and high performance.
Using a Field Programmable Gate Array to Accelerate Application Performance (Odinot Stanislas)
Intel is taking a close interest in FPGAs, and in particular in the potential they offer when ISVs and developers have very specific needs in genomics, image processing, database processing, and even in the cloud. In this document you will have the opportunity to learn more about our strategy, and about a research program launched by Intel and Altera involving Xeon E5 processors equipped with... FPGAs.
Author(s):
P. K. Gupta, Director of Cloud Platform Technology, Intel Corporation
CFD acceleration with FPGA (byteLAKE's presentation from PPAM 2019)
byteLAKE's presentation from the PPAM 2019 conference.
Abstract:
The goal of this work is to adapt four CFD kernels to the Xilinx ALVEO U250 FPGA: the first-order step of the non-linear iterative upwind advection MPDATA scheme (non-oscillatory forward-in-time), the divergence part of the matrix-free linear operator formulation in the iterative Krylov scheme, the tridiagonal Thomas algorithm for vertical matrix inversion inside the preconditioner for the iterative solver, and the computation of the pseudovelocity for the second pass of the upwind algorithm in MPDATA. All the kernels use a 3-dimensional compute domain consisting of 7 to 11 arrays. Since all the kernels belong to the group of memory-bound algorithms, our main challenge is to achieve the highest possible utilization of global memory bandwidth. Our adaptation allows us to reduce the execution time by up to 4x.
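The tridiagonal Thomas algorithm mentioned in the abstract is simple to sketch. The following Python version is purely illustrative (it is not byteLAKE's FPGA kernel) and shows the forward-elimination and back-substitution passes that the vertical matrix inversion performs:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (lists of length n;
    a[0] and c[n-1] are unused). Returns the solution vector x."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # forward sweep: eliminate the sub-diagonal
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # back substitution
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The sequential dependency visible in both loops is exactly why this kernel is memory-bound and non-trivial to pipeline on an FPGA.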
Find out more at: www.byteLAKE.com/en/CFD
Foot note:
This is the presentation about the non-AI version of byteLAKE's CFD kernels, highly optimized for Alveo FPGA. Based on this research project and many others in the CFD space, we decided to shift the course of the CFD Suite product development and leverage AI to accelerate computations and enable new possibilities. Instead of adapting CFD solvers to accelerators, we use AI and work on a cross-platform solution. More on the latest: www.byteLAKE.com/en/CFDSuite.
-
Update for 2020: byteLAKE is currently developing AI for CFD Suite, a collection of Artificial Intelligence models to accelerate and enable new features for CFD simulations. It is a cross-platform solution (not only for FPGAs). More: www.byteLAKE.com/en/CFDSuite.
In this deck from the UK HPC Conference, Gunter Roeth from NVIDIA presents: Hardware & Software Platforms for HPC, AI and ML.
"Data is driving the transformation of industries around the world and a new generation of AI applications are effectively becoming programs that write software, powered by data, vs by computer programmers. Today, NVIDIA’s tensor core GPU sits at the core of most AI, ML and HPC applications, and NVIDIA software surrounds every level of such a modern application, from CUDA and libraries like cuDNN and NCCL embedded in every deep learning framework and optimized and delivered via the NVIDIA GPU Cloud to reference architectures designed to streamline the deployment of large scale infrastructures."
Watch the video: https://wp.me/p3RLHQ-l2Y
Learn more: http://nvidia.com
and
http://hpcadvisorycouncil.com/events/2019/uk-conference/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
This was presented by Yong LU at the OpenPOWER Summit EU 2019. The original deck is available at:
https://static.sched.com/hosted_files/opeu19/16/OpenCAPI%20Acceleration%20Framework_YongLu_ver2.pdf
OpenPOWER Acceleration of HPCC Systems (HPCC Systems)
JT Kellington (IBM) and Allan Cantle (Nallatech) present at the 2015 HPCC Systems Engineering Summit Community Day on porting HPCC Systems to the POWER8-based ppc64el architecture.
TitanIC presented, "ODSA Use Case - SmartNIC," at the ODSA Workshop. The charter of the ODSA (Open Domain-Specific Architecture) Workgroup is to define an open specification that enables the building of domain-specific accelerator silicon using best-of-breed components from the industry, made available as chiplet dies that can be integrated together like Lego blocks on an organic substrate packaging layer. The resulting multi-chip module (MCM) silicon can be produced at significantly lower development and manufacturing costs, and will deliver much-needed performance-per-watt and performance-per-dollar efficiencies in networking, security, machine learning and other applications. The ODSA Workgroup also intends to deliver implementations of the specification as board-level prototypes, RTL code and libraries.
"OpenHPC is a collaborative, community effort that initiated from a desire to aggregate a number of common ingredients required to deploy and manage High Performance Computing (HPC) Linux clusters including provisioning tools, resource management, I/O clients, development tools, and a variety of scientific libraries. Packages provided by OpenHPC have been pre-built with HPC integration in mind with a goal to provide re-usable building blocks for the HPC community. Over time, the community also plans to identify and develop abstraction interfaces between key components to further enhance modularity and interchangeability. The community includes representation from a variety of sources including software vendors, equipment manufacturers, research institutions, supercomputing sites, and others."
Watch the video: http://wp.me/p3RLHQ-gKz
Learn more: http://openhpc.community/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
A Dataflow Processing Chip for Training Deep Neural Networks (inside-BigData.com)
In this deck from the Hot Chips conference, Chris Nicol from Wave Computing presents: A Dataflow Processing Chip for Training Deep Neural Networks.
Watch the video: https://wp.me/p3RLHQ-k6W
Learn more: https://wavecomp.ai/
and
http://www.hotchips.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The IBM POWER10 processor represents the 10th generation of the POWER family of enterprise computing engines. Its performance is a result of both powerful processing cores and high-bandwidth intra- and inter-chip interconnect. POWER10 systems can be configured with up to 16 processor chips and 1920 simultaneous threads of execution. Cross-system memory sharing, through the new Memory Inception technology, and 2 Petabytes of addressing space support an expansive memory system. The POWER10 processing core has been significantly enhanced over its POWER9 predecessor, including a doubling of vector units and the addition of an all-new matrix math engine. Throughput gains from POWER9 to POWER10 average 30% at the core level and three-fold at the socket level. Those gains can reach ten- or twenty-fold at the socket level for matrix-intensive computations.
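The 1920-thread figure quoted above follows directly from the configuration limits. A quick sanity check (the 15 SMT8 cores per chip is my assumption, based on commonly cited POWER10 configurations, not stated in the text above):

```python
chips = 16            # maximum processor chips per system (from the text)
cores_per_chip = 15   # assumed active cores per POWER10 chip
smt = 8               # assumed simultaneous threads per core (SMT8)

threads = chips * cores_per_chip * smt
print(threads)  # 1920, matching the quoted thread count
```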
NXP presented, "ODSA Workshop: Development Effort Summary," at the ODSA Workshop. The charter of the ODSA (Open Domain-Specific Architecture) Workgroup is to define an open specification that enables the building of domain-specific accelerator silicon using best-of-breed components from the industry, made available as chiplet dies that can be integrated together like Lego blocks on an organic substrate packaging layer. The resulting multi-chip module (MCM) silicon can be produced at significantly lower development and manufacturing costs, and will deliver much-needed performance-per-watt and performance-per-dollar efficiencies in networking, security, machine learning and other applications. The ODSA Workgroup also intends to deliver implementations of the specification as board-level prototypes, RTL code and libraries.
Macromolecular crystallography is an experimental technique for exploring the 3D atomic structure of proteins, used by academics for research in biology and by pharmaceutical companies in rational drug design. While up to now the development of the technique was limited by the performance of scientific instruments, computing performance has recently become a key limitation. In my presentation I will describe the computing challenge of handling the 18 GB/s data stream coming from the new X-ray detector. I will show PSI's experience in applying conventional hardware to the task and why this attempt failed. I will then present how the IC 922 server with OpenCAPI-enabled FPGA boards allowed us to build a sustainable and scalable solution for high-speed data acquisition. Finally, I will give a perspective on how advances in hardware development will enable better science by users of the Swiss Light Source.
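To put the 18 GB/s detector stream in perspective, a back-of-the-envelope calculation of the sustained data volume (my own illustration, not a figure from the talk):

```python
rate_gb_s = 18                          # detector output, GB per second
per_hour_tb = rate_gb_s * 3600 / 1000   # terabytes accumulated per hour
per_day_pb = per_hour_tb * 24 / 1000    # petabytes accumulated per day

print(f"{per_hour_tb:.1f} TB/hour, {per_day_pb:.2f} PB/day")  # 64.8 TB/hour, 1.56 PB/day
```

Sustaining such a stream around the clock is well beyond what a single conventional server's I/O path can absorb, which is the motivation for the FPGA-based acquisition described above.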
In this deck from the HPC User Forum in Tucson, Jeff Stuecheli from IBM presents: POWER9 for AI & HPC.
"Built from the ground-up for data intensive workloads, POWER9 is the only processor with state-of-the-art I/O subsystem technology, including next generation NVIDIA NVLink, PCIe Gen4, and OpenCAPI."
Watch the video: https://wp.me/p3RLHQ-isJ
Learn more: https://www.ibm.com/it-infrastructure/power/power9
and
http://hpcuserforum.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
PCIe Gen 3.0 Presentation @ 4th FPGA Camp (FPGA Central)
PCIe Gen3 presentation by PLDA at 4th FPGA Camp in Santa Clara, CA. For more details visit http://www.fpgacentral.com/fpgacamp or http://www.fpgacentral.com
In this deck, Ronald P. Luijten from IBM Research in Zurich presents: DOME 64-bit μDataCenter.
"I like to call it a datacenter in a shoebox. With the combination of power and energy efficiency, we believe the microserver will be of interest beyond the DOME project, particularly for cloud data centers and Big Data analytics applications."
The microserver team has designed and demonstrated a prototype 64-bit microserver using a PowerPC-based chip from Freescale Semiconductor running Linux Fedora and IBM DB2. At 133 × 55 mm², the microserver contains all of the essential functions of today's servers, which are 4 to 10 times larger in size. Not only is the microserver compact, it is also very energy-efficient.
Watch the video: http://wp.me/p3RLHQ-gJM
Learn more: https://www.zurich.ibm.com/microserver/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Design Considerations, Installation, and Commissioning of the RedRaider Cluster at the Texas Tech University High Performance Computing Center
Outline of this talk:
• HPCC staff and students
• Previous clusters: history, performance, usage patterns, and experience
• Motivation for upgrades: compute capacity goals and related considerations
• Installation and benchmarks
• Conclusions and Q&A
Building and operating an HPC-based AI computing environment at the Gwangju Institute of Science and Technology
If you use any part of these slides, please cite "Narantuya Jargalsaikhan, GIST AI-X Computing Cluster, 2021".
Thank you!
In this deck from the HPC User Forum in Austin, Yutaka Ishikawa from RIKEN AICS presents: Japan's Post-K Computer.
Watch the video presentation: http://wp.me/p3RLHQ-fJ6
Learn more: http://hpcuserforum.com
Summit 16: Deploying Virtualized Mobile Infrastructures on Openstack (OPNFV)
Service providers (SPs) are evolving and competing with leaner over-the-top (OTT) providers such as Google and Amazon to provide mobile services. The future SP network has to be agile, resilient and auto-scalable. SPs are leaning towards using COTS infrastructure, open networking (OPNFV, ONOS) and VNFs to run routers, switches, mobile gateways, firewall, NAT and DPI functions. This session covers the design and deployment of virtualized mobile infrastructure such as the Virtual Evolved Packet Core, GiLAN and VoLTE, as well as the 5G core. We will also cover performance fine-tuning using DPDK, SR-IOV, etc. We will present a case study using Cisco (VNF Manager and NFVO), Red Hat (NFVI), Openstack and block storage using CEPH technology. Participants will be able to understand the complexities of the mobile packet core, the evolution to NFV-based solutions, and an architecture framework for the 5G mobile packet core.
Heterogeneous Computing: The Future of Systems (Anand Haridass)
Charts from NITK-IBM Computer Systems Research Group (NCSRG)
- Dennard Scaling, Moore's Law, OpenPOWER, Storage Class Memory, FPGA, GPU, CAPI, OpenCAPI, NVIDIA NVLink, Google/Microsoft heterogeneous system usage
Presentation given by Jens Hagemeyer (Bielefeld University) at the ‘Low-Energy Heterogeneous Computing Workshop’ on 16 October 2020 within HiPEAC CSW Autumn 2020
We live in an era where the atomic building elements of silicon computers, e.g. transistors and wires, are no longer visible using traditional optical microscopes and their sizes are measured in just tens of Angstroms. In addition, power dissipation per unit volume is bounded by the laws of physics, which has resulted, among other things, in stagnating processor clock frequencies. Adding more and more processor cores that perform simpler and simpler tasks, in an attempt to efficiently fill the available on-chip area, seems to be the current trend taken by the industry.
1. Attie Juyn & Wilhelm van Belkum: High Performance Computing & GRID Computing
4. To establish an Institutional HPC
Level 1 (Entry Level): Personal workstation
Level 2: Departmental compute cluster
Level 3: Institutional HPC
Level 4: National/International HPC
12. The New World Order (Source: 2006 UC Regents): Mainframe, Vector Supercomputer, Mini Computer, PC Clusters & Grids
13. Technical goals: Build an Institutional High Performance Computing facility, based on Beowulf cluster principles, coexisting with and linking the existing departmental clusters and the national and international computational grids.
16. The Evolved Cluster (Source: Cluster Resources, Inc.): compute nodes, admin and user job queues, resource manager, scheduler, license manager, identity manager, allocation manager, Myrinet interconnect, plus a linked departmental cluster with its own resource manager and scheduler.
18. Grid/Cluster Stack or Framework (layers, top to bottom):
Users/Admin access: Portal, CLI, GUI, Application
Grids: EGEE, Chinese, USA, EU; Security: GLOBUS, CROWNGrid, gLite, UNICORE
Grid and Cluster Workload Managers (each a scheduler, policy manager and integration platform): Condor(G), Load Leveler, PBSpro, PBS, SGE, LSF, SLURM, Nimrod, MOAB, MAUI
Applications (Parallel: MPI, PVM, LAM, MPICH; Serial) on a Resource Manager: Rocks, Oscar, Torque
Operating System: CentOS, Solaris, RedHat, UNICOS, AIX, Scientific Linux, Windows, Mac OS X, HP-UX, other
Hardware (Cluster or SMP)
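At its simplest, the cluster workload manager layer in the stack above is a policy-driven queue that maps submitted jobs onto free nodes. The following toy FIFO sketch is purely illustrative (the `FifoScheduler` name and API are invented; it is not any of the listed products):

```python
from collections import deque

class FifoScheduler:
    """Toy cluster workload manager: jobs request a node count and
    run in submission order as soon as enough nodes are free."""
    def __init__(self, total_nodes):
        self.free = total_nodes
        self.queue = deque()    # pending (name, nodes) jobs
        self.running = []       # dispatched jobs

    def submit(self, name, nodes):
        self.queue.append((name, nodes))
        self._dispatch()

    def finish(self, name):
        for job in self.running:
            if job[0] == name:
                self.running.remove(job)
                self.free += job[1]   # return the job's nodes
                break
        self._dispatch()

    def _dispatch(self):
        # strict FIFO: stop at the first job that does not fit
        while self.queue and self.queue[0][1] <= self.free:
            job = self.queue.popleft()
            self.free -= job[1]
            self.running.append(job)

sched = FifoScheduler(total_nodes=8)
sched.submit("cfd", 6)    # runs immediately
sched.submit("chem", 4)   # queued: only 2 nodes free
sched.finish("cfd")       # frees 6 nodes, "chem" dispatches
```

Real workload managers such as MAUI or SLURM replace the strict-FIFO policy with backfill, priorities and fair-share accounting, but the queue-plus-free-node-pool structure is the same.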
21. The #1 and #13 in the world (2007), the #4 and #40 in the world (2008):
BlueGene/L - eServer Blue Gene Solution (IBM, 212,992 Power cores), DOE/NNSA/LLNL, USA: 478.2 trillion floating-point operations per second (teraFLOPS) on LINPACK
MareNostrum - BladeCenter JS21 Cluster, PPC 970, 2.3 GHz, Myrinet (IBM, 10,240 Power cores), Barcelona Supercomputing Centre, Spain: 63.83 teraFLOPS
22. As of November 2008, #1: Roadrunner - BladeCenter QS22/LS21 Cluster, 12,240 × PowerXCell 8i 3.2 GHz and 6,562 dual-core Opterons 1.8 GHz, DOE/NNSA/LANL, United States: 1.105 PetaFlop
25. Introducing Utility Computing: swapping & migration of hardware (first phase); dynamic load shifting at the RM level (second phase). A Grid Workload Manager (Condor, MOAB) spans the Data Center RM and the HPC RM.
26. Grid/Cluster Stack or Framework (the same stack as slide 18, highlighting the Security layer)
27. Hardware building blocks:
- HP BL460c blade: 8 × 3 GHz Xeon cores, 12 MB L2, 1333 MHz FSB, 10 GB memory (96 GFlop)
- HP C7000 enclosure: up to 16 BL460c blades (1.536 TFlop)
- HP BL2x220c blade: 16 × 3 GHz Xeon cores (192 GFlop); a C7000 with up to 16 BL2x220c gives 1,024 CPU cores (3.072 TFlop)
- HP Modular Cooling System G2: up to 4 HP C7000 enclosures, 512 CPU cores, 5.12 TFlop (12.288 TFlop with BL2x220c)
- HP BLc Virtual Connect Ethernet
- D-Link xStack DSN-3200: 10.5 TB RAID5, 80,000 I/O operations per second
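The GFlop figures above follow from simple peak-throughput arithmetic. A minimal sketch, assuming 4 double-precision FLOPs per core per cycle (the figure that reproduces the slide's numbers):

```python
def peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int = 4) -> float:
    """Theoretical peak in GFlop/s: cores x clock (GHz) x FLOPs per cycle."""
    return cores * clock_ghz * flops_per_cycle

# One BL460c blade: 8 cores at 3 GHz
print(peak_gflops(8, 3.0))              # 96.0 GFlop, as on the slide
# One BL2x220c blade: 16 cores at 3 GHz
print(peak_gflops(16, 3.0))             # 192.0 GFlop
# A C7000 enclosure filled with 16 BL460c blades
print(16 * peak_gflops(8, 3.0) / 1000)  # 1.536 TFlop
```

Note this is theoretical peak; sustained LINPACK performance, as used in the Top500 figures on the earlier slides, is always lower.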
28.
29.
30.
31. HP ProLiant BL2x220c G5, internal view: top and bottom PCA, side by side; 2 × 2 CPUs; 2 × 4 DIMM slots (DDR2 533/667 MHz); two mezzanine slots (two x8, both on the bottom board); 2 × optional SATA HDDs; 2 × embedded dual-port 1 Gb Ethernet NICs; server board connectors.
38. SEACOM: TE-North is a new cable currently being laid across the Mediterranean Sea. Cable laying starts Oct. 2008; final splicing April 2009; service launch June 2009.
41. High Performance Computing & GRID Computing @ North-West University: Sustainable, Efficient, Reliable, High Availability & Performance, >3 TFlop, Scientific Linux.
42.
Editor's Notes
In summary, we determined that the following would need to be addressed for any HPC facility to be successful.
In the beginning there was only one big shark: the mainframe. The next era of supercomputing came with the introduction of vector supercomputers, the likes of Cray. The next step was compacting into minicomputers. All the previous approaches were based on SMP, closely coupled in one box. And then came the modern personal computer: not very strong on its own, but connect a lot of them together and they make one big fish. So we suited up, got our best fishing rods and decided to go fishing... for one of these new big fish, becoming part of the new world order.
At the previous HPC conference we came, we saw, and we determined that, as institutional IT, the time was right. The University wanted to become a major player in the new world order. This would not be the first attempt: in 1991 we implemented the SP, but the time was not right (see the previous part of the presentation). In the meantime the University also ventured into clustering with three departmental clusters (FSK, Chemistry, BWI). So what do we want to do technically that would be different? We want to implement the H in HPC: a >1 TFlop configuration, using the Beowulf approach of open source software and commodity off-the-shelf hardware.
So what is a Beowulf cluster?
What did the first Beowulf cluster look like? Note the amount of time it took to assemble the cluster: 8 months. Taking Moore's law into account, this would have noticeably shortened the effective production life of the cluster.
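The Beowulf idea, many commodity processors each working on a slice of one job, can be sketched on a single machine with Python's standard multiprocessing module. This is only a toy analogy for the concept (real Beowulf jobs use MPI across the interconnect, as the framework slides show); the worker function and range here are invented for illustration:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Each 'node' sums only its own slice of the range."""
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n, workers = 1_000_000, 4
    step = n // workers
    # Split the problem into one chunk per worker, like jobs across nodes
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:  # the Pool stands in for the cluster's nodes
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as the serial sum, computed in parallel
```

The scheduling layers on the framework slide (Torque, SLURM, MOAB and friends) play the role of `Pool` at cluster scale: dividing work, dispatching it to nodes, and gathering results.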
The light dotted lines show the originator of the software. The issue for us is the choice of cluster software so as to allow integration into grids. The major issue is at the scheduler level: making the HPC appear as a Compute Element (CE) in the grid.
Concept framework source: Cluster Resources, Inc. This shows what we decided, representing the previous slides in a layered approach similar to the ISO layers. We started with hardware, then the OS, the resource manager, the cluster schedulers, and finally the grid workload manager.
Based on the Barcelona picture we put in a requisition for a new building to house the new NWU HPC... but we are still waiting. OK, the real reason for showing #13: when these slides were set up, Barcelona was #5; it dropped to #13 in less than six months. We need a strategy that is sustainable, with a fast upgrade path.
We started looking around to determine the major issues that HPC facilities face, and found that reliability and availability are major factors.
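Availability is commonly quantified from mean time between failures (MTBF) and mean time to repair (MTTR). A small illustration; the numbers below are hypothetical, purely to show the calculation:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of time the system is up,
    MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical node: fails every 1,000 hours, takes 10 hours to repair
print(round(availability(1000, 10), 4))  # 0.9901, i.e. ~99% availability
```

This is why fast blade swap-out matters for the strategy on the following slides: shrinking MTTR raises availability even when the failure rate stays the same.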
In summary, we determined that the following would need to be addressed for any HPC facility to be successful.
The first strategy we will use to extend the capacity and lifecycle of the HPC technology is to exploit the different load characteristics of the data centre and the HPC: implement new high-performance CPUs in the HPC, and migrate the technology to the data centre. The first phase is manual hardware load management, swapping blades between the HPC and the data centre to match peak demands; in the long run the concept extends to doing this dynamically at the Resource Manager level (also referred to as utility computing). We needed a strategy to make the HPC cost-effective.
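The first-phase policy, shifting blades between the HPC and the data centre to match demand, can be sketched as a toy allocation rule. Everything here (the function name, the 70% utilisation target, the thresholds) is hypothetical, invented only to illustrate the idea of a Resource-Manager-level policy:

```python
def allocate_blades(total_blades: int, hpc_queue_depth: int,
                    dc_load_pct: float, dc_min_blades: int = 4) -> dict:
    """Give the HPC every blade the data centre can spare.

    The data centre keeps enough blades to hold its utilisation
    below ~70%, but never fewer than dc_min_blades; the rest go
    to the HPC while its job queue is non-empty.
    """
    # Blades the data centre needs to stay under the 70% target
    dc_needed = max(dc_min_blades, int(total_blades * dc_load_pct / 70) + 1)
    dc_needed = min(dc_needed, total_blades)
    hpc = total_blades - dc_needed
    if hpc_queue_depth == 0:  # no HPC demand: all blades stay in the DC
        hpc, dc_needed = 0, total_blades
    return {"hpc": hpc, "datacenter": dc_needed}

# Quiet data centre (20% load) with a deep HPC queue: most blades shift to HPC
print(allocate_blades(total_blades=16, hpc_queue_depth=50, dc_load_pct=20.0))
```

In the second phase this decision would be taken automatically by the resource manager (e.g. MOAB driving Torque), rather than by an operator physically or logically moving blades.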
So we looked at the technologies we were already using in the data centre. Why start here? Cost-effectiveness: training people on new technology used only in the HPC would reduce cost-effectiveness. Note: modular, fast extension with less work.
HP Confidential – may only be shown to customers under NDA and may not be left behind with a customer under any circumstance.
This shows what the NWU HPC configuration looks like.
What are the specs? 256 cores.
Addressing the Reliability and Availability
An institutional facility: how do we link this? The limitation is still speed. Bringing on SANReN.
Monday, 31 March 2008: "The four sites are the main campuses of Wits and UJ, and two of UJ's satellite campuses, Bunting and Doornfontein," says Christiaan Kuun, SANReN Project Manager at the Meraka Institute.
How will SANREN be used for the national grid? But what about the international grid? That brings us to SEACOM.
SEACOM PROJECT UPDATE, 14 Aug 2008. Construction on schedule, with major ground- and sea-based activities proceeding over the next eight months. 14 August 2008: The construction of SEACOM's 15,000 km fibre optic undersea cable, linking southern and east Africa, Europe and south Asia, is on schedule and set to go live as planned in June 2009. Some 10,000 km of cable has been manufactured to date at locations in the USA and Japan, and Tyco Telecommunications (US) Inc., the project contractors, will begin shipping terrestrial equipment this month, with the cable expected to be loaded on the first ship in September 2008. Laying of shore-end cables for each landing station will also proceed from September. This process will comprise the cable portions at shallow depths, ranging from 15 to 50 m, where large vessels are not able to operate. From October 2008, the first of three Reliance Class vessels will start laying the actual cable. The final splicing, which involves connecting all cable sections together, will happen in April 2009, allowing enough time for testing of the system before the commercial launch in June 2009. The final steps of the Environmental Social Impact Assessment (ESIA) process are well advanced, and all small archaeological, marine and ecological studies which required scuba-diving analysis have been completed, as well as social consultations with the affected parties. The cable, including the repeaters necessary to amplify the signal, will be stored in large tanks onboard the ships. The branching units necessary to divert the cable to the planned landing stations will be connected into the cable path on the ship just prior to deployment into the sea. The cable will then be buried under the ocean bed with the help of a plough, along the best possible route demarcated through the marine survey. The connectivity from Egypt to Marseille, France, will be provided through Telecom Egypt's TE-North fibre pairs that SEACOM has purchased on the system.
TE-North is a new cable currently being laid across the Mediterranean Sea. Brian Herlihy, SEACOM President, said: "We are very happy with the progress made over the past five months. Our manufacturing and deployment schedule is on target and we are confident that we will meet our delivery promises in what is today an incredibly tight market, underpinned by sky-rocketing demand for new cables resulting in worldwide delivery delays. The recently announced executive appointments, combined with the project management capabilities already existent within SEACOM, position us as a fully fledged telecoms player. We are able to meet the African market's urgent requirements for cheap and readily available bandwidth within less than a year." The cable will go into service long before the 2010 FIFA World Cup kicks off in South Africa, and SEACOM has already been working with key broadcasters to meet their broadband requirements. The team is also trying to expedite the construction in an attempt to assist with the broadcasting requirements of the FIFA Confederations Cup scheduled for June 2009. SEACOM, which is privately funded and over three-quarters African-owned, will assist communication carriers in south and east Africa through the sale of wholesale international capacity to global networks via India and Europe. The undersea fibre optic cable system will provide African retail carriers with equal and open access to inexpensive bandwidth, removing the international infrastructure bottleneck and supporting east and southern African economic growth. SEACOM will be the first cable to provide broadband to countries in east Africa which, at the moment, rely entirely on expensive satellite connections.
The result of SEACOM and SANREN…
The timeline: the vision for a production-quality national and international grid.