This document discusses trends in high performance computing (HPC), grid computing, and cloud computing. It provides an overview of HPC cluster performance and interconnects. Grid computing enabled large-scale scientific collaboration through infrastructures like EGEE. The LHC requires petascale computing capabilities. Cloud computing hype is discussed alongside observations of performance and virtualization challenges. The future of computing may involve more sophisticated tools and dynamic, small computing elements.
HPC, Grid and Cloud Computing - The Past, Present, and Future Challenge
1. HPC, Grid and Cloud Computing
- The Past, Present and Future
Jason Shih
Academia Sinica Grid Computing (ASGC)
FBI 極簡主義 (Minimalism), Nov 3rd, 2010
2. Outline
Trends in HPC
Grid: eScience Research @ PetaScale
Cloud Hype and Observations
Future Exploration Path of Computing
Summary
3. About ASGC
Asia Pacific Regional Operation Center (APROC)
Max CERN/T1-ASGC point-to-point inbound: 9.3 Gbps!
1. Most Reliable T1: 98.83%!
2. Very highly performing and most stable site in CCRC08!
Part of a worldwide grid infrastructure: >280 sites, >45 countries, >80,000 CPUs, >20 PetaBytes, >14,000 users, >200 VOs, >250,000 jobs/day
Large Hadron Collider (LHC): 27 km in circumference, 100 meters underground, located in Geneva
Grid Application Platform: lightweight problem-solving framework; Avian Flu Drug Discovery (Best Demo Award of EGEE'07!)
7. Ugly? Performance of HPC Clusters
272 (52%) of the world's fastest clusters have an efficiency lower than 80% (Rmax/Rpeak)
Only 115 (18%) can drive over 90% of theoretical peak
Sampling from the Top500 HPC clusters
Trend of cluster efficiency, 2005-2009 (illustrative calculation below)
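The efficiency metric here is simply Rmax/Rpeak from the Top500 listing. As a minimal illustration of how the 80%/90% buckets quoted on this slide could be tallied, the sketch below uses a handful of made-up entries; the names, numbers, and interconnect labels are assumptions, not the actual 2009 list.

```python
# Illustrative only: bucket Rmax/Rpeak efficiency for a Top500-style listing.
# The records below are made-up examples, not real Top500 entries.
records = [
    # (name, Rmax in TFlop/s, Rpeak in TFlop/s, interconnect)
    ("cluster-a", 1042.0, 1375.8, "InfiniBand DDR"),
    ("cluster-b", 433.2, 580.0, "InfiniBand QDR"),
    ("cluster-c", 76.6, 153.4, "Gigabit Ethernet"),
    ("cluster-d", 52.8, 104.0, "Gigabit Ethernet"),
]

def efficiency(rmax, rpeak):
    """Fraction of theoretical peak actually delivered on LINPACK."""
    return rmax / rpeak

below_80 = [r for r in records if efficiency(r[1], r[2]) < 0.80]
above_90 = [r for r in records if efficiency(r[1], r[2]) >= 0.90]

print(f"{len(below_80)}/{len(records)} systems below 80% efficiency")
print(f"{len(above_90)}/{len(records)} systems at or above 90% efficiency")
```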
8. Performance and Efficiency
The top-performing 20% of clusters contribute 60% of total computing power (27.98 PF)
5 clusters have efficiency < 30%
9. Impact Factor: Interconnectivity
- Capacity and Cluster Efficiency
Over 52% of clusters are based on GbE, with efficiency of only around 50%
InfiniBand is adopted by ~36% of HPC clusters
10. HPC Cluster - Interconnect Using IB
SDR, DDR and QDR in the Top500
Promising efficiency >= 80%
The majority of IB-ready clusters adopt DDR (87%) (Nov 2009)
Contribute 44% of total computing power (~28 PFlops)
Average efficiency ~78% (see the grouping sketch below)
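To connect efficiency with the interconnect breakdown on slides 9-10 (GbE around 50% efficiency, InfiniBand at 80% or better), the same kind of records can be grouped by fabric. As before, the entries are illustrative stand-ins, not real Top500 data.

```python
# Illustrative only: average Rmax/Rpeak efficiency per interconnect family
# for a handful of made-up Top500-style entries.
from collections import defaultdict

records = [
    # (name, Rmax TFlop/s, Rpeak TFlop/s, interconnect)
    ("cluster-a", 1042.0, 1375.8, "InfiniBand DDR"),
    ("cluster-b", 433.2, 580.0, "InfiniBand QDR"),
    ("cluster-c", 76.6, 153.4, "Gigabit Ethernet"),
    ("cluster-d", 52.8, 104.0, "Gigabit Ethernet"),
]

by_fabric = defaultdict(list)
for _name, rmax, rpeak, fabric in records:
    by_fabric[fabric].append(rmax / rpeak)

for fabric, effs in sorted(by_fabric.items()):
    print(f"{fabric:18s} mean efficiency {sum(effs) / len(effs):.0%}")
```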
12. Common Semantics
Programmer productivity
Ease of deployment
HPC filesystems are more mature, with a wider feature set:
High concurrent read and write
In the comfort zone of programmers (vs. cloudFS)
Wide support, adoption and acceptance possible
pNFS is working to become equivalent
Reuse of standard data management tools: backup, disaster recovery and tiering
15. Some Observations & Looking to the Future (I)
Computing Paradigm
(Almost) Free FLOPS
(Almost) Free Logic Operations
Data Access (Memory) Is A Major Bottleneck
Synchronization Is the Most Expensive
Data Communication Is A Big Factor in Performance (see the overlap sketch after this list)
I/O Is Still A Major Programming Consideration
MPI Coding Is the Motherhood of Large-Scale Computing
Computing in Conjunction with Massive Data Management
Finding Parallelism Is Not the Whole Issue in Programming:
Data Layout
Data Movement
Data Reuse
Frequency of Interconnected Data Communication
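A standard answer to "synchronization is the most expensive" and "data communication is a big factor" is to overlap communication with computation. Below is a minimal, generic halo-exchange sketch using mpi4py (assuming mpi4py and an MPI runtime are installed; run with something like `mpiexec -n 2 python overlap.py`). It illustrates the pattern only and is not code from the talk.

```python
# Illustrative halo-exchange sketch: post non-blocking sends/receives,
# do local work while messages are in flight, then wait.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
neighbor = (rank + 1) % size            # simple ring of ranks

local = np.random.rand(1_000_000)       # interior data owned by this rank
send_halo = local[-1000:].copy()        # boundary slice to share
recv_halo = np.empty(1000)

# Start communication without blocking.
reqs = [comm.Isend(send_halo, dest=neighbor),
        comm.Irecv(recv_halo, source=(rank - 1) % size)]

interior = np.sum(local[1000:-1000])    # compute on data that needs no halo

MPI.Request.Waitall(reqs)               # synchronize only when the halo is needed
boundary = np.sum(recv_halo)
print(f"rank {rank}: interior={interior:.3f} boundary={boundary:.3f}")
```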
16. Some Observations & Looking to the Future (II)
Emerging New Possibilities
Massive "Small" Computing Elements with On-Board Memory
Computing Nodes Can Be Configured Dynamically (Including Failure Recovery)
Network Switches (Within an On-Site Complex) Will Nearly Match Memory Performance
Parallel I/O Support for Massively Parallel Systems
Asynchronous Computing/Communication Operation
Sophisticated Data Pre-fetch Schemes (Hardware/Algorithm) (see the sketch after this list)
Automated Dynamic Load Balancing Methods
Very High Order Difference Schemes (also Implicit Methods)
Full Coupling of Formerly Split Operators
Fine Numerical Computational Grids (grid number > 10,000)
Full Simulation of Proteins
Full Coupling of Computational Models
Grid Computing for All
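One concrete reading of "sophisticated data pre-fetch" and asynchronous operation is double buffering: fetch block i+1 in the background while block i is being processed. The sketch below illustrates this with a worker thread; `load_block` and `process` are hypothetical stand-ins for real I/O and compute kernels.

```python
# Illustrative double-buffering sketch: prefetch the next data block on a
# background thread while the current block is processed.
from concurrent.futures import ThreadPoolExecutor

def load_block(i):
    """Hypothetical I/O: stands in for reading block i from disk/network."""
    return [i] * 100_000  # placeholder payload

def process(block):
    """Hypothetical compute kernel applied to one block."""
    return sum(block)

def run(n_blocks):
    results = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(load_block, 0)              # prefetch first block
        for i in range(n_blocks):
            block = future.result()                      # wait for block i
            if i + 1 < n_blocks:
                future = pool.submit(load_block, i + 1)  # prefetch i+1
            results.append(process(block))               # overlaps with next load
    return results

print(run(4))
```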
17. Some Observations & Looking to the Future (III)
Systems will get more complicated & computing tools will get more sophisticated:
Vendor Support & User Readiness?
19. WLCG Computing Model
- The Tier Structure
Tier-0 (CERN)
Data recording
Initial data reconstruction
Data distribution
Tier-1 (11 countries)
Permanent storage
Re-processing
Analysis
Tier-2 (~130 centres)
Simulation
End-user analysis
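For reference, the tier split above can be written down as a simple lookup table; the structure below only restates the bullets on this slide (with Tier-2 read as ~130 centres).

```python
# The WLCG tier split from this slide, expressed as a simple lookup table.
WLCG_TIERS = {
    "Tier-0": {"where": "CERN",
               "roles": ["data recording", "initial reconstruction",
                         "data distribution"]},
    "Tier-1": {"where": "11 countries",
               "roles": ["permanent storage", "re-processing", "analysis"]},
    "Tier-2": {"where": "~130 centres",
               "roles": ["simulation", "end-user analysis"]},
}

for tier, info in WLCG_TIERS.items():
    print(f"{tier} ({info['where']}): {', '.join(info['roles'])}")
```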
20. Enabling Grids for E-sciencE
Archeology
Astronomy
Astrophysics
Civil Protection
Comp. Chemistry
Earth Sciences
Finance
Fusion
Geophysics
High Energy Physics
Life Sciences
Multimedia
Material Sciences
…
EGEE-II INFSO-RI-031688 - EGEE07, Budapest, 1-5 October 2007
21. Objectives
Building a sustainable research and collaboration infrastructure
Support research through e-Science, for data-intensive sciences and applications that require cross-disciplinary distributed collaboration
22. ASGC Milestones
Operational from the deployment of LCG0 since 2002
ASGC CA established in 2005 (IGTF in the same year)
Tier-1 Center responsibility started in 2005
The federated Taiwan Tier-2 center (Taiwan Analysis Facility, TAF) is also collocated at ASGC
Representative of the EGEE e-Science Asia Federation since joining EGEE in 2004
Providing Asia Pacific Regional Operation Center (APROC) services to the region-wide WLCG/EGEE production infrastructure from 2005
Initiated the Avian Flu Drug Discovery Project and collaborated with EGEE in 2006
Start of the EUAsiaGrid Project from April 2008
23. LHC First Beam - Computing at the Petascale
ATLAS: General Purpose, pp, heavy ions
CMS: General Purpose, pp, heavy ions
ALICE: Heavy ions, pp
LHCb: B-physics, CP Violation
24. Size of LHC Detector
ATLAS detector: 7,000 tons, 25 meters in height, 45 meters in length (Bldg. 40 shown for scale)
CMS detector also pictured
25. Standard Cosmology
Good model from 0.01 sec after the Big Bang (Energy, Density, Temperature vs. Time)
Supported by considerable observational evidence
Elementary Particle Physics: from the Standard Model into the unknown - towards energies of 1 TeV and beyond: the Terascale
Towards Quantum Gravity: from the unknown into the unknown...
http://www.damtp.cam.ac.uk/user/gr/public/bb_history.html
(Source: UNESCO Information Preservation debate, April 2007 - Jamie.Shiers@cern.ch)
27. Petabyte Scale Data Challenges
Why Petabytes?
Experiment Computing Models
Comparing with conventional data management
Challenges
Performance: LAN and WAN activities
Sufficient bandwidth between CPU farms
Eliminate uplink bottlenecks (switch tiers) - a back-of-the-envelope sketch follows
Fast response to critical events
Fabric infrastructure & service level agreements
Scalability and manageability
Robust DB engine (Oracle RAC)
KB and adequate administration (training)
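The "uplink bottleneck" item is essentially an oversubscription calculation: the uplinks of an edge switch must carry the aggregate traffic of the nodes behind it. The back-of-the-envelope sketch below uses assumed node counts and link speeds for illustration, not ASGC's actual fabric.

```python
# Back-of-the-envelope oversubscription check for a two-tier switch fabric.
# All figures below are assumed for illustration.
nodes_per_edge_switch = 40
node_link_gbps = 1.0          # GbE to each worker node
uplink_gbps = 10.0            # edge-to-core uplink speed
uplinks_per_switch = 2

offered_gbps = nodes_per_edge_switch * node_link_gbps
uplink_capacity_gbps = uplinks_per_switch * uplink_gbps
oversubscription = offered_gbps / uplink_capacity_gbps

print(f"offered load : {offered_gbps:.0f} Gbps")
print(f"uplink cap.  : {uplink_capacity_gbps:.0f} Gbps")
print(f"oversubscription ratio: {oversubscription:.1f}:1")
```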
33. Storage Farm
~110 RAID subsystems deployed since 2003
Supporting both Tier-1 and Tier-2 storage fabric
DAS connection to front-end blade servers
Flexible switching of front-end servers upon performance requirements
4-8 Gb Fibre Channel connectivity
35. Throughput of WLCG Experiments
Throughput defined as job efficiency x number of jobs running (worked example below)
Characteristics of the 4 LHC experiments, depicting inefficiency due to poor coding
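Taking the definition on this slide literally (throughput = job efficiency x number of running jobs), a tiny worked example with made-up per-experiment figures looks like this:

```python
# Throughput as defined on this slide: job efficiency x number of running jobs.
# The figures are made-up examples per experiment, not measured values.
running_jobs = {"ALICE": 1200, "ATLAS": 4500, "CMS": 3800, "LHCb": 900}
job_efficiency = {"ALICE": 0.85, "ATLAS": 0.78, "CMS": 0.81, "LHCb": 0.90}

for vo in running_jobs:
    throughput = job_efficiency[vo] * running_jobs[vo]
    print(f"{vo:6s} effective job slots: {throughput:,.0f}")
```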
41. Types of Infrastructure
Proprietary solutions by public providers: turnkey solutions developed internally, as they own the software and hardware solution/technology
Cloud-specific support: developers of specific hardware and/or software solutions that are utilized by service providers or used internally when building private clouds
Traditional providers: leverage or tweak their existing solutions
43. Cloud Computing: "X" as a Service
Types of Cloud
Layered Service Model
Reference Model
44. Virtualization Is Not Cloud Computing
Performance overhead
FV vs. PV (full vs. paravirtualization)
Disk I/O and network throughput (VM scalability) - a simple timing sketch follows
Ref: Linux-based virtualization for HPC clusters
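A simple way to probe the disk-I/O part of the virtualization overhead mentioned here is to time the same sequential write inside the guest and on bare metal, then compare. The sketch below is one rough way to do that; the path and file size are assumptions, and `/tmp` should be replaced by a path on the disk actually under test (on many systems /tmp is RAM-backed).

```python
# Rough sequential-write timing; run the same script inside a VM and on the
# host, then compare. Path and size are illustrative assumptions.
import os
import time

path = "/tmp/io_probe.bin"          # assumed scratch location; use a real disk
block = b"\0" * (4 * 1024 * 1024)   # 4 MiB writes
total_mib = 512

start = time.perf_counter()
with open(path, "wb") as f:
    for _ in range(total_mib // 4):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())            # make sure data actually reaches the disk
elapsed = time.perf_counter() - start

print(f"{total_mib} MiB in {elapsed:.2f} s -> {total_mib / elapsed:.1f} MiB/s")
os.remove(path)
```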
45.-46. Cloud Infrastructure
Best Practice & Real-World Performance
Start-up: 60 VM ~ 44 s
Restart: 30 VM ~ 27 s
Deletion: 60 VM ~ <5 s
Migrate: 30 VM ~ 26.8 s; 60 VM ~ 40 s; 120 VM ~ 89 s
Stop: 30 VM ~ 27.4 s; 60 VM ~ 26 s; 120 VM ~ 57 s
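Bulk figures like "60 VM start-ups in ~44 s" come from timing batches of lifecycle operations. A generic harness for collecting such numbers might look like the sketch below, where `start_vm` is a hypothetical placeholder for the cloud stack's real provisioning call.

```python
# Generic timing harness for bulk VM lifecycle operations. `start_vm` is a
# hypothetical placeholder; swap in the real API call of your cloud stack.
import time
from concurrent.futures import ThreadPoolExecutor

def start_vm(vm_id):
    """Hypothetical stand-in for the provider's 'start instance' call."""
    time.sleep(0.05)                # simulate the request taking some time
    return vm_id

def time_batch(operation, count, parallelism=16):
    """Run `count` operations with limited parallelism and return wall time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        list(pool.map(operation, range(count)))
    return time.perf_counter() - start

for n in (30, 60, 120):
    print(f"start-up of {n:3d} VMs took {time_batch(start_vm, n):.1f} s")
```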
51. Conclusion: My Opinion
Future of Computing: Technology-Push & Demand-Pull
Emergence of a new science paradigm
Virtualization: promising technology, but being overemphasized
Green: cloud service transparency & a common platform
More computing power ~ power consumption challenge
Private clouds will be the predominant way
Commercial (public) clouds are not expected to evolve fast
52. Acknowledgment
Thanks for valuable discussions/inputs from TCloud (Cloud OS: Elaster)
Professional technical support from Silvershine Tech. at the beginning of the collaboration
"The interesting thing about Cloud Computing is that we've defined Cloud Computing to include everything that we already do... I don't understand what we would do differently in the light of Cloud Computing other than change the wording of some of our ads."
- Larry Ellison, quoted in the Wall Street Journal, Sep 26, 2008
53. Issues
Scalability?
Infrastructure operation vs. performance
Assessment
Application-aware cloud services
Cost analysis
Data center power usage - PUE (see the example below)
Cloud Myths
Top 10 Cloud Computing Trends
http://www.focus.com/articles/hosting-bandwidth/top-10-cloud-computing-trends/
Use Cases & Best Practices
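PUE (Power Usage Effectiveness) is the ratio of total facility power to IT equipment power; a one-line example with assumed meter readings:

```python
# PUE = total facility power / IT equipment power. Readings are illustrative.
total_facility_kw = 1450.0   # includes cooling, UPS losses, lighting
it_equipment_kw = 900.0      # servers, storage, network gear

pue = total_facility_kw / it_equipment_kw
print(f"PUE = {pue:.2f}")    # 1.61 with these assumed numbers
```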
54. Issues (II)
Volunteer computing (BOINC)?
Total capacity & performance
Successful stories & research disciplines
What's hindering cloud adoption? Try humans.
http://gigaom.com/cloud/whats-hindering-cloud-adoption-how-about-humans/
Future projections?
Service readiness? Service level? Technical barriers?