In this video from the Disruptive Technologies Session at the 2015 HPC User Forum, Nick New from Optalysys describes the company's optical processing technology.
"Optalysys technology uses light, rather than electricity, to perform processor intensive mathematical functions (such as Fourier Transforms) in parallel at incredibly high-speeds and resolutions. It has the potential to provide multi-exascale levels of processing, powered from a standard mains supply. The mission is to deliver a solution that requires several orders of magnitude less power than traditional High Performance Computing (HPC) architectures."
Watch the video presentation: http://wp.me/p3RLHQ-ewz
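For a concrete sense of the workload, the sketch below shows the kind of Fourier-domain operation (2-D convolution via the convolution theorem) that such optical processors target, written digitally in NumPy; the array sizes are illustrative, not Optalysys specifications.

```python
import numpy as np

# Two random 1024x1024 "images"; optical Fourier engines target exactly
# this kind of transform-heavy workload at much higher speed.
a = np.random.rand(1024, 1024)
b = np.random.rand(1024, 1024)

# Convolution theorem: multiply in Fourier space, transform back.
A = np.fft.fft2(a)
B = np.fft.fft2(b)
conv = np.real(np.fft.ifft2(A * B))  # circular convolution of a and b
print(conv.shape)
```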
The RECAP Project: Large Scale Simulation Framework (RECAP Project)
In this presentation, Sergej Svorobej (DCU) gave a brief overview of RECAP and introduced the large-scale simulation framework used in the project. The event was held in conjunction with the National Conference on Cloud Computing and Commerce (http://2018.nc4.ie/) and took place on April 10, 2018 in Dublin, Ireland.
Learn more about RECAP: https://recap-project.eu/
Sensors - The Sparkplug in the Engine of the Internet of Things (RECAP Project)
This is a presentation on sensors and the Internet of Things (IoT) delivered by Prof. Theo Lynn (DCU) at the Siemens-IRDG Conference in Dublin, Ireland on 13 June 2018.
This is a RECAP project overview slide deck prepared by Thang Le Duc (UMU), P-O Östberg (UMU) and Tomas Brännström (Tieto). It starts with an introduction and continues with a section on challenges for a self-orchestrated, self-remediated cloud system. It then presents the RECAP vision and use cases and finishes with a conclusion.
RECAP’s coordinator, Jörg Domaschka, presented the slides at the 'Added Value of EU-funded Collaborative Research' session at the YERUN Launch Event in Brussels, Belgium on 7 November 2017.
The Young European Research University Network (YERUN) is an organisation that strengthens and facilitates cooperation in scientific research, academic education and services of use to society among a cluster of highly-ranked young universities in Europe.
Learn more: https://www.yerun.eu/events/yerunlaunchevent/
SRDS2019: Abeona: an Architecture for Energy-Aware Task Migrations from the E... (LEGATO project)
This paper presents our preliminary results with ABEONA, an edge-to-cloud architecture that allows migrating tasks from low-energy, resource-constrained devices on the edge up to the cloud. Our preliminary results on artificial and real world datasets show that it is possible to execute workloads in a more efficient manner energy-wise by scaling horizontally at the edge, without negatively affecting the execution runtime.
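The paper's own migration policy is not reproduced here, but the following illustrative Python sketch shows the general shape of an energy-aware offload decision on an edge device; every number and parameter name is hypothetical.

```python
# Illustrative only (not ABEONA's actual algorithm): offload a task to
# the cloud when the estimated radio energy for shipping its payload is
# lower than the energy of executing it locally on the edge device.

def migrate_to_cloud(task_joules_edge, payload_bytes,
                     radio_joules_per_byte=5e-7, cloud_overhead_joules=0.5):
    """Return True if offloading is estimated to cost the device less
    energy than local execution. All constants are hypothetical."""
    offload_cost = payload_bytes * radio_joules_per_byte + cloud_overhead_joules
    return offload_cost < task_joules_edge

# Example: a 2 MB payload whose local execution would cost 3 J.
print(migrate_to_cloud(task_joules_edge=3.0, payload_bytes=2_000_000))
```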
This work is about how both private enterprise and government wish to improve the value of their data and how they deal with this issue. The talk summarizes ways of thinking about Big Data, Open Data and their use by organizations and individuals. Big Data is explained from collection and storage through analysis and value creation. This data is collected from numerous sources including sensor networks, government data holdings, company market databases, and public profiles on social networking sites. Organizations use many data-analytical techniques to study both structured and unstructured data. Due to the volume, velocity and variety of data, specific techniques have been developed: MapReduce, Hadoop and related tools such as RHadoop are trending topics nowadays.
Data which comes from government must be open, and every day more cities and countries are opening their data. Open Data is then presented as a specific case of public data with a special role in the smart city. The main goal of Big and Open Data in the smart city is to develop systems which can be useful for citizens. In this sense RMap (Mapa de Recursos) is shown as an Open Data application: an open system for Madrid City Council, available for smartphones and developed entirely by the research group G-TeC (www.tecnologiaUCM.es).
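As a concrete illustration of the MapReduce model the talk mentions, here is a minimal word count in plain Python; real Hadoop or RHadoop jobs distribute these same two phases across a cluster.

```python
from collections import defaultdict

# Map phase: emit (word, 1) pairs from each document.
def map_phase(document):
    for word in document.split():
        yield word.lower(), 1

# Reduce phase: sum the counts for each key.
def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["open data for smart cities", "big data and open data"]
pairs = [kv for doc in docs for kv in map_phase(doc)]
print(reduce_phase(pairs))  # e.g. {'open': 2, 'data': 3, ...}
```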
HNSciCloud has shared with the IT experts in scientific computing in the HEPiX forum the status of the ongoing Pre-Commercial Procurement of innovative cloud services and what the expected results might be.
Global C4IR-1 MasterClass Cambridge - Bose Intellisense 2017 (Justin Hayward)
Next-generation automation for the mining industry enabled by Industry 4.0 (4IR) technologies.
The global C4IR(TM) event series is available to franchise via www.cir-strategy.com as C4IRx.
A Study of Virtual Machine Placement Optimization in Data Centers (CLOSER'2017) (Stéphanie Challita)
In recent years, cloud computing has emerged as a valuable way of accommodating and providing services over the Internet, and data centers rely increasingly on this platform to host a large number of applications (web hosting, e-commerce, social networking, etc.). The utilization of servers in most data centers can thus be improved by adding virtualization and selecting the most suitable host for each Virtual Machine (VM).
VM placement is an optimization problem with multiple goals, and it can be approached in various ways, each aiming to simultaneously reduce power consumption, maximize resource utilization and avoid traffic congestion. The main goal of this literature survey is to provide a better understanding of existing approaches and algorithms that ensure better VM placement in the context of cloud computing, and to identify future directions.
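As one concrete example of the kind of heuristic surveyed in this space, the sketch below shows first-fit-decreasing bin packing for VM placement on CPU demand alone; production placement engines also weigh power states, network traffic, and memory.

```python
# First-fit decreasing (FFD): sort VMs by demand, place each on the
# first host with room, opening a new host when none fits.

def first_fit_decreasing(vm_cpus, host_capacity):
    """Return a list of hosts, each as [remaining_capacity, [vms...]]."""
    hosts = []
    for demand in sorted(vm_cpus, reverse=True):
        for host in hosts:
            if host[0] >= demand:
                host[0] -= demand
                host[1].append(demand)
                break
        else:  # no existing host had room
            hosts.append([host_capacity - demand, [demand]])
    return hosts

# Six VMs packed onto 16-core hosts.
print(first_fit_decreasing([8, 2, 4, 6, 2, 4], host_capacity=16))
```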
This presentation, given by Bob Jones, CERN & HNSciCloud Coordinator, at the ESA-ESPI Workshop on “Space Data & Cloud Computing Infrastructures: Policies and Regulations”, describes the challenges and needs of cloud users and explains how a hybrid cloud model can support them.
Towards the Intelligent Internet of Everything (RECAP Project)
In this presentation, Prof. Theo Lynn (DCU) presented observations on Multi-disciplinary Challenges in Intelligent Systems Research at the RECAP consortium meeting in Dublin, Ireland on 6 November 2018.
Presentation at the European Grid Infrastructure (EGI) Conference 2015, Business Track "Examples of SMEs as consumers and providers".
EGI is focused on supporting SMEs and the full innovation chain between business and academia, creating opportunities for economic impact through the open data generated and the technical services both offered and required to support research and innovation.
Terradue Cloud Platform delivers Cloud bursting for Earth Science Applications and Services, illustrated in this presentation by many use cases and collaborations fostering the reuse of Earth Observation open data and open services.
BDE SC3.3 Workshop - Options for Wind Farm performance assessment and Power f... (BigData_Europe)
Options for Wind Farm performance assessment and Power forecasting (Mr. A. Kyritsis, ALTSOL/TERNA) at the BigDataEurope Workshop, Amsterdam, November 2017.
Coordinated and adaptive information collecting in target tracking wireless s... (LogicMindtech Nologies)
NS2 and M.Tech project services, including IEEE 2015 NS2 projects and WSN/MANET projects, offered in Vijayanagar and Bangalore.
The SC conference attracts scientists and engineers, software developers, policy makers, corporate managers, CIOs, and IT administrators from universities, industry, and government agencies. Over the past twenty-five years, SC has grown into a truly international conference, attracting 13,000 attendees from around the world who come to see the latest innovations in HPC, networking, storage, and related fields.
Learn more: http://SC17.supercomputing.org
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Application Profiling at the HPCAC High Performance Center (inside-BigData.com)
Pak Lui from the HPC Advisory Council presented this deck at the 2017 Stanford HPC Conference.
"To achieve good scalability performance on the HPC scientific applications typically involves good understanding of the workload though performing profile analysis, and comparing behaviors of using different hardware which pinpoint bottlenecks in different areas of the HPC cluster. In this session, a selection of HPC applications will be shown to demonstrate various methods of profiling and analysis to determine the bottleneck, and the effectiveness of the tuning to improve on the application performance from tests conducted at the HPC Advisory Council High Performance Center."
Watch the video presentation: http://wp.me/p3RLHQ-gpY
Learn more: http://hpcadvisorycouncil.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the 2017 HPC Advisory Council Stanford Conference, Mahdi Esmaily from Stanford presents: Best Practices: Multi-Physics Methods, Modeling, Simulation & Analysis.
"The cycle of modeling high impact applications to finding new solutions is completed by the use of high-performance computing. I this talk, I will discuss two particular applications which have highly benefitted from HPC. The surgical operation performed on single ventricle heart patients has not been modified in last few decades despite a high rate of mortality. Through multiscale simulation of the circulatory system, it is now possible to model this surgery and optimize it using the state of the art optimization techniques. In-silico analysis has allowed us to test new surgical design without posing any risk to patient's life. I will show the outcome of this study, which is a novel surgical option that may revolutionize current clinical practice. The second application that I will discuss in this talk is related to renewable energy. The particle-based solar receivers operate by collecting radiative energy volumetrically through dispersed particles rather than the conventional approach of absorption via a surface. I will discuss our recent work on the investigation of the operating modes of these devices, where we are exploring the interaction of particles with turbulence, solid boundaries and radiation."
Watch the video: http://wp.me/p3RLHQ-gp0
Learn more: http://www.hpcadvisorycouncil.com/events/2017/stanford-workshop/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
"This deck is from the opening session of the "Introduction to Programming Pascal (P100) with CUDA 8" workshop at CSCS in Lugano, Switzerland. The three-day course is intended to offer an introduction to Pascal computing using CUDA 8."
Watch the video: http://wp.me/p3RLHQ-gsQ
Learn more: http://www.cscs.ch/events/event_detail/index.html?tx_seminars_pi1%5BshowUid%5D=155
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck, Jem Davies (VP Engineering and ARM Fellow) gives a brief introduction to Machine Learning and explains how it is used in devices such as smartphones, autos, and drones. "I do think that machine learning altogether is probably going to be one of the biggest shifts in computing that we'll see in quite a few years. I'm reluctant to put a number on it like -- the biggest thing in 25 years or whatever," said Jem Davies in a recent investor call. "But this is going to be big. It is going to affect all of us. It affects quite a lot of ARM, in fact."
Watch the video presentation: http://insidehpc.com/2017/03/slidecast-arm-steps-machine-learning/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Scott Callaghan from the Southern California Earthquake Center presented this deck in a recent Blue Waters Webinar.
"I will present an overview of scientific workflows. I'll discuss what the community means by "workflows" and what elements make up a workflow. We'll talk about common problems that users might be facing, such as automation, job management, data staging, resource provisioning, and provenance tracking, and explain how workflow tools can help address these challenges. I'll present a brief example from my own work with a series of seismic codes showing how using workflow tools can improve scientific applications. I'll finish with an overview of high-level workflow concepts, with an aim to preparing users to get the most out of discussions of specific workflow tools and identify which tools would be best for them."
Watch the video: http://wp.me/p3RLHQ-gtH
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the Open Compute Summit, Walter Hinton, Senior Global Director of Enterprise & Client Compute Solutions Marketing at Western Digital presents: The State of the Solid State Drive SSD.
"In 2013, Western Digital acquired flash storage hardware and software supplier, Virident, for $685 million in cash. They followed that up in May 2016, with the acquisition of SanDisk Corporation. The addition of SanDisk makes Western Digital Corporation a comprehensive storage solutions provider with global reach, and an extensive product and technology platform that includes deep expertise in both rotating magnetic storage and non-volatile memory (NVM)."
Watch the video presentation: http://wp.me/p3RLHQ-guI
Learn more: https://www.wdc.com/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
TundraSystems Global LTD is an SME with the vision and mission to design and deliver new quantum technology solutions. The first phase of development is the Tundra Quantum Photonics Technology library. This forms part of TundraSystems' strategy in its quest to develop a complete Quantum Photonics Microprocessor, the TundraProcessor. The library should also facilitate the development of an ecosystem of Photonic Integrated Circuits to enable the building of complete HPC systems around the TundraProcessor.
Practical IEC 61850 for Substation Automation for Engineers & Technicians (Living Online)
For more information: bit.ly/11AM1oL
Older (‘legacy’) substation automation protocols and hardware/software architectures provided basic functionality for power system automation, and were designed to accommodate the technical limitations of the technologies available at the time. However, in recent years there have been vast improvements in technology, especially on the networking side. This has opened the door for dramatic improvements in the approach to power system automation in substations.
The latest developments in networking such as high-speed, deterministic, redundant Ethernet, as well as other technologies including TCP/IP, high-speed Wide Area Networks and high-performance embedded processors, are providing capabilities that could hardly be imagined when most legacy substation automation protocols were designed.
IEC 61850 is a part of the International Electrotechnical Commission (IEC) Technical Committee 57 (TC57) architecture for electric power systems. It is an important new international standard for substation automation, and it will have a significant impact on how electric power systems are designed and built in the future. The model-driven approach of IEC 61850 is innovative and requires a new way of thinking about substation automation, and it will result in significant improvements in the costs and performance of electric power systems.
This workshop provides comprehensive coverage of IEC 61850 and will provide you with the tools and knowledge to tackle your next substation automation project with confidence.
WHO SHOULD ATTEND?
This workshop is designed for personnel with a need to understand the techniques required to use and apply IEC 61850 to substation automation, hydro power plants, wind turbines and distributed energy resources as productively and economically as possible. This includes engineers and technicians involved with:
Consulting
Control and instrumentation
Control systems
Design
Maintenance
Electrical installations
Process control
Process development
Project management
SCADA and telemetry systems
Dell High-Performance Computing solutions: Enable innovations, outperform exp... (Dell World)
Businesses and organizations depend on high-performance computing (HPC) solutions to help engineers, data analysts, researchers, developers and designers more effectively drive innovation and increase overall performance and competitiveness. Learn how Dell’s latest powerful and comprehensive HPC solutions for healthcare and life sciences, manufacturing and engineering, energy, finance, research and big-data analytics can provide your team with new ways to get more done—faster and better than ever before.
Enabling Insight to Support World-Class Supercomputing (Stefan Ceballos, Oak ...) (confluent)
The Oak Ridge Leadership Computing Facility (OLCF) in the National Center for Computational Sciences (NCCS) division at Oak Ridge National Laboratory (ORNL) houses world-class high-performance computing (HPC) resources and has a history of operating top-ranked supercomputers on the TOP500 list, including the world's current fastest, Summit, an IBM AC922 machine with a peak of 200 petaFLOPS. With the exascale era rapidly approaching, the need for a robust and scalable big data platform for operations data is more important than ever. In the past, when a new HPC resource was added to the facility, pipelines from data sources spanned multiple data sinks, which often resulted in data silos, slow operational data onboarding, and non-scalable data pipelines for batch processing. Using Apache Kafka as the message bus of the division's new big data platform has allowed for easier decoupling of scalable data pipelines, faster data onboarding, and stream processing, with the goal of continuously improving insight into the HPC resources and their supporting systems.

This talk will focus on the NCCS division's transition to Apache Kafka over the past few years to enhance the OLCF's current capabilities and prepare for Frontier, OLCF's future exascale system, including the development and deployment of a full big data platform in a Kubernetes environment from both a technical and a cultural-shift perspective. The talk will also cover the mission of the OLCF, the operational data insights related to high-performance computing that the organization strives for, and several use cases that exist in production today.
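A minimal sketch of the pattern described, using the confluent_kafka Python client: a producer publishes node telemetry onto a topic, decoupling collectors from downstream sinks. The topic name, broker address, and payload fields are hypothetical, not OLCF's actual configuration.

```python
from confluent_kafka import Producer

# Illustrative telemetry producer (not OLCF's actual code).
p = Producer({"bootstrap.servers": "localhost:9092"})

def on_delivery(err, msg):
    # Called once the broker acknowledges (or rejects) the message.
    if err is not None:
        print("delivery failed:", err)

# Hypothetical topic and payload: per-node power and temperature samples.
p.produce("node-telemetry",
          key="node0042",
          value='{"power_w": 412, "cpu_temp_c": 61}',
          callback=on_delivery)
p.flush()  # block until outstanding messages are delivered
```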
CloudLightning - Project and Architecture Overview (CloudLightning)
This is a PowerPoint presentation delivered by Prof John Morrison (UCC) on 9 December 2016 at the IC4 and Host in Ireland Workshop: Data Centres in Ireland.
CloudLightning - A Brief Overview, presented by Prof John Morrison at the Fifth National Conference on Cloud Computing and Commerce (NC4 2016).
The presentation covered the project's funding and consortium, its specific challenge, typical IaaS cloud usage, the project's goals and ambitions, the CloudLightning architecture, beneficiaries, and the challenges ahead.
In this video, Prof. John Morrison from University College Cork describes the CloudLightning project. CloudLightning’s vision is a European economy that thrives and leads the world in the provision and adoption of high performance cloud computing services. Funded by the European Commission’s Horizon 2020 Program, CloudLightning brings together eight project partners from five countries across Europe.
Learn more: http://cloudlightning.eu
Watch the video presentation: http://wp.me/p3RLHQ-fsb
eduPERT is a federated structure of performance-response teams for improving network performance, promoted by the members of the Géant project with the aim of fostering knowledge sharing in the research and education community. Within the eduPERT community, this presentation briefly reviews the current state and the main challenges facing high-performance computing and the management of systems with large data volumes (Big Data), as well as the new computing architectures emerging to meet the challenge of exascale computing. The presentation focuses on the new tools needed to measure the performance of programs on this new family of hybrid supercomputers, which depends not only on the algorithms or code used, but also on the architecture and on the numerical and parallelization libraries. It also describes the tools evaluated within the European NUMEXAS project, in which CSUC participates.
RECAP at ETSI Experiential Network Intelligence (ENI) Meeting (RECAP Project)
This presentation was delivered by Johan Forsman (Tieto), Jörg Domaschka (UULM) and Paolo Casari (IMDEA Networks) at the ETSI Experiential Network Intelligence (ENI) Meeting in Warsaw, Poland, on April 12th, 2019. The ETSI ENI Industry Specification Group (ENI ISG) works on defining a Cognitive Network Management architecture that uses Artificial Intelligence (AI) techniques and context-aware policies to adjust offered services based on changes in user needs, environmental conditions and business goals. The intention is that the use of Artificial Intelligence techniques in the network management system should solve some of the problems of future network deployment and operations. For more information, see https://www.etsi.org/technologies/experiential-networked-intelligence.
Give Your Organization Better, Faster Insights & Answers with High Performanc... (Dell World)
From modeling and simulating new products to analyzing ‘Big Data’ for insights into customer behaviors, achieving better results faster can be crucial for competitive advantages and success. High performance computing (HPC), long used for academic/government research, has gone mainstream, and is now used by companies and organizations in all fields—from finance to pharmaceuticals, from marketing to manufacturing, from e-commerce to engineering, from healthcare to homeland defense. Dell is a leader in HPC and can help you get better, faster insights and answers, no matter what your organization desires to achieve.
40 Powers of 10 - Simulating the Universe with the DiRAC HPC Facility (inside-BigData.com)
In this deck from the Swiss HPC Conference, Mark Wilkinson presents: 40 Powers of 10 - Simulating the Universe with the DiRAC HPC Facility.
"DiRAC is the integrated supercomputing facility for theoretical modeling and HPC-based research in particle physics, and astrophysics, cosmology, and nuclear physics, all areas in which the UK is world-leading. DiRAC provides a variety of compute resources, matching machine architecture to the algorithm design and requirements of the research problems to be solved. As a single federated Facility, DiRAC allows more effective and efficient use of computing resources, supporting the delivery of the science programs across the STFC research communities. It provides a common training and consultation framework and, crucially, provides critical mass and a coordinating structure for both small- and large-scale cross-discipline science projects, the technical support needed to run and develop a distributed HPC service, and a pool of expertise to support knowledge transfer and industrial partnership projects. The on-going development and sharing of best-practice for the delivery of productive, national HPC services with DiRAC enables STFC researchers to produce world-leading science across the entire STFC science theory program."
Watch the video: https://wp.me/p3RLHQ-k94
Learn more: https://dirac.ac.uk/
and
http://hpcadvisorycouncil.com/events/2019/swiss-workshop/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Delivering Carrier Grade OCP for Virtualized Data Centers (Radisys Corporation)
This webinar explores the requirements for carrier grade Open Compute Project (OCP) infrastructure for virtualized telecom data centers delivering SDN and NFV for digital services.
In this deck from the Stanford HPC Conference, Shahin Khan from OrionX describes major market Shifts in IT.
"We will discuss the digital infrastructure of the future enterprise and the state of these trends."
"We work with clients on the impact of Digital Transformation (DX) on them, their customers, and their messages. Generally, they want to track, in one place, trends like IoT, 5G, AI, Blockchain, and Quantum Computing. And they want to know what these trends mean, how they affect each other, and when they demand action, and how to formulate and execute an effective plan. If that describes you, we can help."
Watch the video: https://wp.me/p3RLHQ-lPP
Learn more: http://orionx.net
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Preparing to program Aurora at Exascale - Early experiences and future direct... (inside-BigData.com)
In this deck from IWOCL / SYCLcon 2020, Hal Finkel from Argonne National Laboratory presents: Preparing to program Aurora at Exascale - Early experiences and future directions.
"Argonne National Laboratory’s Leadership Computing Facility will be home to Aurora, our first exascale supercomputer. Aurora promises to take scientific computing to a whole new level, and scientists and engineers from many different fields will take advantage of Aurora’s unprecedented computational capabilities to push the boundaries of human knowledge. In addition, Aurora’s support for advanced machine-learning and big-data computations will enable scientific workflows incorporating these techniques along with traditional HPC algorithms. Programming the state-of-the-art hardware in Aurora will be accomplished using state-of-the-art programming models. Some of these models, such as OpenMP, are long-established in the HPC ecosystem. Other models, such as Intel’s oneAPI, based on SYCL, are relatively-new models constructed with the benefit of significant experience. Many applications will not use these models directly, but rather, will use C++ abstraction libraries such as Kokkos or RAJA. Python will also be a common entry point to high-performance capabilities. As we look toward the future, features in the C++ standard itself will become increasingly relevant for accessing the extreme parallelism of exascale platforms.
This presentation will summarize the experiences of our team as we prepare for Aurora, exploring how to port applications to Aurora’s architecture and programming models, and distilling the challenges and best practices we’ve developed to date. oneAPI/SYCL and OpenMP are both critical models in these efforts, and while the ecosystem for Aurora has yet to mature, we’ve already had a great deal of success. Importantly, we are not passive recipients of programming models developed by others. Our team works not only with vendor-provided compilers and tools, but also develops improved open-source LLVM-based technologies that feed both open-source and vendor-provided capabilities. In addition, we actively participate in the standardization of OpenMP, SYCL, and C++. To conclude, I’ll share our thoughts on how these models can best develop in the future to support exascale-class systems."
Watch the video: https://wp.me/p3RLHQ-lPT
Learn more: https://www.iwocl.org/iwocl-2020/conference-program/
and
https://www.anl.gov/topic/aurora
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck, Greg Wahl from Advantech presents: Transforming Private 5G Networks.
Advantech Networks & Communications Group is driving innovation in next-generation network solutions with their High Performance Servers. We provide business critical hardware to the world's leading telecom and networking equipment manufacturers with both standard and customized products. Our High Performance Servers are highly configurable platforms designed to balance the best in x86 server-class processing performance with maximum I/O and offload density. The systems are cost effective, highly available and optimized to meet next generation networking and media processing needs.
“Advantech’s Networks and Communication Group has been both an innovator and trusted enabling partner in the telecommunications and network security markets for over a decade, designing and manufacturing products for OEMs that accelerate their network platform evolution and time to market,” said Advantech Vice President of Networks & Communications Group, Ween Niu. “In the new IP Infrastructure era, we will be expanding our expertise in Software Defined Networking (SDN) and Network Function Virtualization (NFV), two of the essential conduits to 5G infrastructure agility making networks easier to install, secure, automate and manage in a cloud-based infrastructure.”
In addition to innovation in air interface technologies and architecture extensions, 5G will also need a new generation of network computing platforms to run the emerging software defined infrastructure, one that provides greater topology flexibility, essential to deliver on the promises of high availability, high coverage, low latency and high bandwidth connections. This will open up new parallel industry opportunities through dedicated 5G network slices reserved for specific industries dedicated to video traffic, augmented reality, IoT, connected cars etc. 5G unlocks many new doors and one of the keys to its enablement lies in the elasticity and flexibility of the underlying infrastructure.
Advantech’s corporate vision is to enable an intelligent planet. The company is a global leader in the fields of IoT intelligent systems and embedded platforms. To embrace the trends of IoT, big data, and artificial intelligence, Advantech promotes IoT hardware and software solutions with the Edge Intelligence WISE-PaaS core to assist business partners and clients in connecting their industrial chains. Advantech is also working with business partners to co-create business ecosystems that accelerate the goal of industrial intelligence.
Watch the video: https://wp.me/p3RLHQ-lPQ
* Company website: https://www.advantech.com/
* Solution page: https://www2.advantech.com/nc/newsletter/NCG/SKY/benefits.html
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The Incorporation of Machine Learning into Scientific Simulations at Lawrence... (inside-BigData.com)
In this deck from the Stanford HPC Conference, Katie Lewis from Lawrence Livermore National Laboratory presents: The Incorporation of Machine Learning into Scientific Simulations at Lawrence Livermore National Laboratory.
"Scientific simulations have driven computing at Lawrence Livermore National Laboratory (LLNL) for decades. During that time, we have seen significant changes in hardware, tools, and algorithms. Today, data science, including machine learning, is one of the fastest growing areas of computing, and LLNL is investing in hardware, applications, and algorithms in this space. While the use of simulations to focus and understand experiments is well accepted in our community, machine learning brings new challenges that need to be addressed. I will explore applications for machine learning in scientific simulations that are showing promising results and further investigation that is needed to better understand its usefulness."
Watch the video: https://youtu.be/NVwmvCWpZ6Y
Learn more: https://computing.llnl.gov/research-area/machine-learning
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
How to Achieve High-Performance, Scalable and Distributed DNN Training on Mod... (inside-BigData.com)
In this deck from the Stanford HPC Conference, DK Panda from Ohio State University presents: How to Achieve High-Performance, Scalable and Distributed DNN Training on Modern HPC Systems?
"This talk will start with an overview of challenges being faced by the AI community to achieve high-performance, scalable and distributed DNN training on Modern HPC systems with both scale-up and scale-out strategies. After that, the talk will focus on a range of solutions being carried out in my group to address these challenges. The solutions will include: 1) MPI-driven Deep Learning, 2) Co-designing Deep Learning Stacks with High-Performance MPI, 3) Out-of- core DNN training, and 4) Hybrid (Data and Model) parallelism. Case studies to accelerate DNN training with popular frameworks like TensorFlow, PyTorch, MXNet and Caffe on modern HPC systems will be presented."
Watch the video: https://youtu.be/LeUNoKZVuwQ
Learn more: http://web.cse.ohio-state.edu/~panda.2/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ... (inside-BigData.com)
In this deck from the Stanford HPC Conference, Nick Nystrom and Paola Buitrago provide an update from the Pittsburgh Supercomputing Center.
Nick Nystrom is Chief Scientist at the Pittsburgh Supercomputing Center (PSC). Nick is architect and PI for Bridges, PSC's flagship system that successfully pioneered the convergence of HPC, AI, and Big Data. He is also PI for the NIH Human Biomolecular Atlas Program’s HIVE Infrastructure Component and co-PI for projects that bring emerging AI technologies to research (Open Compass), apply machine learning to biomedical data for breast and lung cancer (Big Data for Better Health), and identify causal relationships in biomedical big data (the Center for Causal Discovery, an NIH Big Data to Knowledge Center of Excellence). His current research interests include hardware and software architecture, applications of machine learning to multimodal data (particularly for the life sciences) and to enhance simulation, and graph analytics.
Watch the video: https://youtu.be/LWEU1L1o7yY
Learn more: https://www.psc.edu/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the Stanford HPC Conference, Ryan Quick from Providentia Worldwide describes how DNNs can be used to improve EDA simulation runs.
"Systems Intelligence relies on a variety of methods for providing insight into the core mechanisms for driving automated behavioral changes in self-healing command and control platforms. This talk reports on initial efforts with leveraging Semiconductor Electronic Design Automation (EDA) telemetry data from cross-domain sources including power, network, storage, nodes, and applications in neural networks as a driving method for insight into SI automation systems."
Watch the video: https://youtu.be/2WbR8tq-XbM
Learn more: http://www.providentiaworldwide.com/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring (inside-BigData.com)
In this deck from the Stanford HPC Conference, Nicole Xu from Stanford University describes how she transformed a common jellyfish into a bionic creature that is part animal and part machine.
"Animal locomotion and bioinspiration have the potential to expand the performance capabilities of robots, but current implementations are limited. Mechanical soft robots leverage engineered materials and are highly controllable, but these biomimetic robots consume more power than corresponding animal counterparts. Biological soft robots from a bottom-up approach offer advantages such as speed and controllability but are limited to survival in cell media. Instead, biohybrid robots that comprise live animals and self- contained microelectronic systems leverage the animals’ own metabolism to reduce power constraints and body as an natural scaffold with damage tolerance. We demonstrate that by integrating onboard microelectronics into live jellyfish, we can enhance propulsion up to threefold, using only 10 mW of external power input to the microelectronics and at only a twofold increase in cost of transport to the animal. This robotic system uses 10 to 1000 times less external power per mass than existing swimming robots in literature and can be used in future applications for ocean monitoring to track environmental changes."
Watch the video: https://youtu.be/HrmJFyvInj8
Learn more: https://sanfrancisco.cbslocal.com/2020/02/05/stanford-research-project-common-jellyfish-bionic-sea-creatures/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the Stanford HPC Conference, Peter Dueben from the European Centre for Medium-Range Weather Forecasts (ECMWF) presents: Machine Learning for Weather Forecasts.
"I will present recent studies that use deep learning to learn the equations of motion of the atmosphere, to emulate model components of weather forecast models and to enhance usability of weather forecasts. I will than talk about the main challenges for the application of deep learning in cutting-edge weather forecasts and suggest approaches to improve usability in the future."
Peter is contributing to the development and optimization of weather and climate models for modern supercomputers. He is focusing on a better understanding of model error and model uncertainty, on the use of reduced numerical precision that is optimised for a given level of model error, on global cloud-resolving simulations with ECMWF's forecast model, and on the use of machine learning, and in particular deep learning, to improve the workflow and predictions. Peter graduated in Physics and wrote his PhD thesis at the Max Planck Institute for Meteorology in Germany. He worked as a Postdoc with Tim Palmer at the University of Oxford and took up a position as University Research Fellow of the Royal Society at the European Centre for Medium-Range Weather Forecasts (ECMWF) in 2017.
Watch the video: https://youtu.be/ks3fkRj8Iqc
Learn more: https://www.ecmwf.int/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck, Gilad Shainer from the HPC AI Advisory Council describes how this organization fosters innovation in the high performance computing community.
"The HPC-AI Advisory Council’s mission is to bridge the gap between high-performance computing (HPC) and Artificial Intelligence (AI) use and its potential, bring the beneficial capabilities of HPC and AI to new users for better research, education, innovation and product manufacturing, bring users the expertise needed to operate HPC and AI systems, provide application designers with the tools needed to enable parallel computing, and to strengthen the qualification and integration of HPC and AI system products."
Watch the video: https://wp.me/p3RLHQ-lNz
Learn more: http://hpcadvisorycouncil.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Today RIKEN in Japan announced that the Fugaku supercomputer will be made available for research projects aimed at combating COVID-19.
"Fugaku is currently being installed and is scheduled to be available to the public in 2021. However, faced with the devastating disaster unfolding before our eyes, RIKEN and MEXT decided to make a portion of the computational resources of Fugaku available for COVID-19-related projects ahead of schedule while continuing the installation process.
Fugaku is being developed not only for the progress of science, but also to help build the society dubbed “Society 5.0” by the Japanese government, where all people will live safe and comfortable lives. The current initiative to fight against the novel coronavirus is driven by the philosophy behind the development of Fugaku."
Initial Projects
Exploring new drug candidates for COVID-19 by "Fugaku"
Yasushi Okuno, RIKEN / Kyoto University
Prediction of conformational dynamics of proteins on the surface of SARS-Cov-2 using Fugaku
Yuji Sugita, RIKEN
Simulation analysis of pandemic phenomena
Nobuyasu Ito, RIKEN
Fragment molecular orbital calculations for COVID-19 proteins
Yuji Mochizuki, Rikkyo University
In this deck from the Performance Optimisation and Productivity group, Lubomir Riha from IT4Innovations presents: Energy Efficient Computing using Dynamic Tuning.
"We now live in a world of power-constrained architectures and systems and power consumption represents a significant cost factor in the overall HPC system economy. For these reasons, in recent years researchers, supercomputing centers and major vendors have developed new tools and methodologies to measure and optimize the energy consumption of large-scale high performance system installations. Due to the link between energy consumption, power consumption and execution time of an application executed by the final user, it is important for these tools and the methodology used to consider all these aspects, empowering the final user and the system administrator with the capability of finding the best configuration given different high level objectives.
This webinar focused on tools designed to improve the energy-efficiency of HPC applications using a methodology of dynamic tuning of HPC applications, developed under the H2020 READEX project. The READEX methodology has been designed for exploiting the dynamic behaviour of software. At design time, different runtime situations (RTS) are detected and optimized system configurations are determined. RTSs with the same configuration are grouped into scenarios, forming the tuning model. At runtime, the tuning model is used to switch system configurations dynamically.
The MERIC tool, which implements the READEX methodology, is presented. It supports manual or binary instrumentation of the analysed applications to simplify the analysis. This instrumentation is used to identify and annotate the significant regions in the HPC application. Automatic binary instrumentation annotates regions with significant runtime. Manual instrumentation, which can be combined with automatic instrumentation, allows the code developer to annotate regions of particular interest."
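A toy Python illustration of the tuning-model idea described above; all region names, configurations, and the frequency-setting hook are invented for the example (MERIC itself is a compiled tool that drives real hardware interfaces).

```python
# Hypothetical sketch of the READEX concept: each significant region
# maps to a best-known system configuration, and the runtime switches
# configuration whenever a region is entered.

tuning_model = {                 # region -> (core GHz, uncore GHz)
    "halo_exchange": (1.8, 2.4),  # communication-bound: lower core freq
    "stencil_core":  (3.0, 1.6),  # compute-bound: higher core freq
}

def set_frequencies(core_ghz, uncore_ghz):
    # Stand-in for a real hardware knob (e.g. DVFS interfaces).
    print(f"switch to core={core_ghz} GHz, uncore={uncore_ghz} GHz")

def enter_region(name):
    core, uncore = tuning_model[name]
    set_frequencies(core, uncore)

enter_region("halo_exchange")
enter_region("stencil_core")
```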
Watch the video: https://wp.me/p3RLHQ-lJP
Learn more: https://pop-coe.eu/blog/14th-pop-webinar-energy-efficient-computing-using-dynamic-tuning
and
https://code.it4i.cz/vys0053/meric
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from GTC Digital, William Beaudin from DDN presents: HPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD.
Enabling high performance computing through the use of GPUs requires an incredible amount of IO to sustain application performance. We'll cover architectures that enable extremely scalable applications through the use of NVIDIA’s SuperPOD and DDN’s A3I systems.
The NVIDIA DGX SuperPOD is a first-of-its-kind artificial intelligence (AI) supercomputing infrastructure. DDN A³I with the EXA5 parallel file system is a turnkey, AI data storage infrastructure for rapid deployment, featuring faster performance, effortless scale, and simplified operations through deeper integration. The combined solution delivers groundbreaking performance, deploys in weeks as a fully integrated system, and is designed to solve the world's most challenging AI problems.
Watch the video: https://wp.me/p3RLHQ-lIV
Learn more: https://www.ddn.com/download/nvidia-superpod-ddn-a3i-ai400-appliance-with-the-exa5-filesystem/
and
https://www.nvidia.com/en-us/gtc/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck, Paul Isaacs from Linaro presents: State of ARM-based HPC. This talk provides an overview of applications and infrastructure services successfully ported to AArch64 and benefiting from scale.
"With its debut on the TOP500, the 125,000-core Astra supercomputer at New Mexico’s Sandia Labs uses Cavium ThunderX2 chips to mark Arm’s entry into the petascale world. In Japan, the Fujitsu A64FX Arm-based CPU in the pending Fugaku supercomputer has been optimized to achieve high-level, real-world application performance, anticipating up to one hundred times the application execution performance of the K computer. K was the first computer to top 10 petaflops in 2011."
Watch the video: https://wp.me/p3RLHQ-lIT
Learn more: https://www.linaro.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Versal Premium ACAP for Network and Cloud Acceleration (inside-BigData.com)
Today Xilinx announced Versal Premium, the third series in the Versal ACAP portfolio. The Versal Premium series features highly integrated, networked and power-optimized cores and the industry’s highest bandwidth and compute density on an adaptable platform. Versal Premium is designed for the highest bandwidth networks operating in thermally and spatially constrained environments, as well as for cloud providers who need scalable, adaptable application acceleration.
Versal is the industry’s first adaptive compute acceleration platform (ACAP), a revolutionary new category of heterogeneous compute devices with capabilities that far exceed those of conventional silicon architectures. Developed on TSMC’s 7-nanometer process technology, Versal Premium combines software programmability with dynamically configurable hardware acceleration and pre-engineered connectivity and security features to enable a faster time-to-market. The Versal Premium series delivers up to 3X higher throughput compared to current generation FPGAs, with built-in Ethernet, Interlaken, and cryptographic engines that enable fast and secure networks. The series doubles the compute density of currently deployed mainstream FPGAs and provides the adaptability to keep pace with increasingly diverse and evolving cloud and networking workloads.
Learn more: https://insidehpc.com/2020/03/xilinx-announces-versal-premium-acap-for-network-and-cloud-acceleration/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Zettar: Moving Massive Amounts of Data across Any Distance Efficiently (inside-BigData.com)
In this video from the Rice Oil & Gas Conference, Chin Fang from Zettar presents: Moving Massive Amounts of Data across Any Distance Efficiently.
The objective of this talk is to present two ongoing projects aimed at improving and ensuring highly efficient bulk transfer or streaming of massive amounts of data over digital connections across any distance. It examines the current state of the art, a few very common misconceptions, the differences among the three major types of data movement solutions, a current initiative attempting to improve data movement efficiency from the ground up, and another multi-stage project that shows how to conduct long-distance, large-scale data movement at speed and scale internationally. Both projects have real-world motivations, e.g. the ambitious data transfer requirements of Linac Coherent Light Source II (LCLS-II) [1], a premier preparation project of the U.S. DOE Exascale Computing Initiative (ECI) [2]. Their immediate goals are described and explained, together with the solution used for each. Findings and early results are reported. Possible future works are outlined.
Watch the video: https://wp.me/p3RLHQ-lBX
Learn more: https://www.zettar.com/
and
https://rice2020oghpc.rice.edu/program-2/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the Rice Oil & Gas Conference, Bradley McCredie from AMD presents: Scaling TCO in a Post Moore's Law Era.
"While foundries bravely drive forward to overcome the technical and economic challenges posed by scaling to 5nm and beyond, Moore’s law alone can provide only a fraction of the performance / watt and performance / dollar gains needed to satisfy the demands of today’s high performance computing and artificial intelligence applications. To close the gap, multiple strategies are required. First, new levels of innovation and design efficiency will supplement technology gains to continue to deliver meaningful improvements in SoC performance. Second, heterogenous compute architectures will create x-factor increases of performance efficiency for the most critical applications. Finally, open software frameworks, APIs, and toolsets will enable broad ecosystems of application level innovation."
Watch the video:
Learn more: http://amd.com
and
https://rice2020oghpc.rice.edu/program-2/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
CUDA-Python and RAPIDS for blazing fast scientific computing (inside-BigData.com)
In this deck from the ECSS Symposium, Abe Stern from NVIDIA presents: CUDA-Python and RAPIDS for blazing fast scientific computing.
"We will introduce Numba and RAPIDS for GPU programming in Python. Numba allows us to write just-in-time compiled CUDA code in Python, giving us easy access to the power of GPUs from a powerful high-level language. RAPIDS is a suite of tools with a Python interface for machine learning and dataframe operations. Together, Numba and RAPIDS represent a potent set of tools for rapid prototyping, development, and analysis for scientific computing. We will cover the basics of each library and go over simple examples to get users started. Finally, we will briefly highlight several other relevant libraries for GPU programming."
Watch the video: https://wp.me/p3RLHQ-lvu
Learn more: https://developer.nvidia.com/rapids
and
https://www.xsede.org/for-users/ecss/ecss-symposium
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from FOSDEM 2020, Colin Sauze from Aberystwyth University describes the development of a RaspberryPi cluster for teaching an introduction to HPC.
"The motivation for this was to overcome four key problems faced by new HPC users:
* The availability of a real HPC system: running training courses can affect the production system, and conversely, a lack of spare resources on the production system can cause problems for the training course.
* A fear of using a large and expensive HPC system for the first time, and worries that doing something wrong might damage it.
* That HPC systems are abstract machines sitting in data centres that users never see, making it difficult for them to understand exactly what they are using.
* That new users fail to understand resource limitations; because of the vast resources in modern HPC systems, many mistakes can be made before anything runs out. A more resource-constrained system makes these limits easier to grasp.
The talk will also discuss some of the technical challenges in deploying an HPC environment to a Raspberry Pi while keeping that environment as close to a "real" HPC system as possible. The issues in trying to automate the installation process will also be covered."
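For context, the kind of first exercise such a teaching cluster typically hosts is an MPI "hello world". The sketch below is illustrative and not taken from the linked repository; it assumes mpi4py and an MPI runtime, launched with something like mpirun -np 4 python hello_mpi.py.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # id of this process within the job
size = comm.Get_size()   # total number of processes in the job

print(f"Hello from rank {rank} of {size}")
```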
Learn more: https://github.com/colinsauze/pi_cluster
and
https://fosdem.org/2020/schedule/events/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from ATPESC 2019, Ken Raffenetti from Argonne presents an overview of HPC interconnects.
"The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides intensive, two-week training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future."
Watch the video: https://wp.me/p3RLHQ-luc
Learn more: https://extremecomputingtraining.anl.gov/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TVs they own, the aspects they look for in a new TV, and their TV buying preferences.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply applying machine learning to just any symbolic structure is not sufficient to really harvest the gains of NeSy. These gains will only be realized when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
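A toy sketch of the idea (mine, not the speaker's example): when relations carry an actual semantics, a rule over knowledge-graph triples yields predictable inferences, i.e. new links whose truth follows from the meaning of the relations involved.

```python
# Toy knowledge graph as a set of (subject, relation, object) triples.
triples = {
    ("alice", "works_at", "dcu"),
    ("dcu", "located_in", "ireland"),
}

def infer_based_in(kg):
    # If X works_at Y and Y located_in Z, predict the link (X, based_in, Z).
    # The conclusion is predictable from the semantics of the two relations.
    return {
        (x, "based_in", z)
        for (x, r1, y) in kg
        for (y2, r2, z) in kg
        if r1 == "works_at" and r2 == "located_in" and y == y2
    }

print(infer_based_in(triples))  # {('alice', 'based_in', 'ireland')}
```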
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We finished with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
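As one example of the kind of step-by-step check such a talk might walk through, the sketch below flags pods running privileged containers using the official Kubernetes Python client. It is a minimal illustration, not the speaker's material, and assumes a reachable cluster and a valid kubeconfig.

```python
from kubernetes import client, config

config.load_kube_config()        # or config.load_incluster_config() in-cluster
v1 = client.CoreV1Api()

# Walk every pod in the cluster and report privileged containers,
# a common first hardening check.
for pod in v1.list_pod_for_all_namespaces().items:
    for container in pod.spec.containers:
        sc = container.security_context
        if sc is not None and sc.privileged:
            print(f"privileged: {pod.metadata.namespace}/"
                  f"{pod.metadata.name}/{container.name}")
```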
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and provide you with a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premises strategy we may need to apply AI to our own infrastructure and make it work from an enterprise perspective. I will give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working in practice.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
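To make the DBOM idea concrete, here is a toy record of the kind a deployment pipeline might capture; all field names are hypothetical illustrations, not the speakers' schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical DBOM record: what was deployed, when, and from which
# components. Field names are invented for illustration only.
dbom_record = {
    "service": "payments-api",
    "version": "1.4.2",
    "deployed_at": datetime.now(timezone.utc).isoformat(),
    "artifacts": [
        {"image": "registry.example.com/payments-api:1.4.2"},
    ],
    "open_source_components": [
        {"name": "log4j-core", "version": "2.17.1"},
    ],
    "policy_check": "passed",
}
print(json.dumps(dbom_record, indent=2))
```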
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Optalysys Optical Processing Technology
Turbo-Charges Existing Desktop and HPC Systems
• A means to calculate Fourier transforms and linear algebra operations optically
• Massively parallel calculations performed at the speed of light
• The optical Fourier transform (OFT) is analogous to a 2D FFT, but resolution may be scaled without affecting process time (see the numpy sketch after this list)
• High-resolution, fast (e.g. 4K x 2K resolution, >2 kHz) liquid crystal microdisplays (SLMs) enter numerical data and focus the light around the system
• Addresses fundamental limitations of high-end electronic processing:
  • Power consumption
  • Speed
  • Resolution
  • Data management
  • Disruptive pricing
• Two main application areas:
  • Big data volume analysis (correlation)
  • Model generation (partial derivatives)
• Complementary, modular, rugged, reconfigurable, scalable
• Based upon well-established principles
• Standalone/integrated co-processor: first product due end of 2017
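As referenced in the list above, the digital analogue of the optical Fourier transform is a 2D FFT. The numpy sketch below shows the equivalent electronic computation at the quoted SLM resolution; it is illustrative only, since the optical system performs the transform in a single pass with light rather than by electronic computation.

```python
import numpy as np

# A 4K x 2K input frame, matching the SLM resolution quoted above.
frame = np.random.rand(2160, 4096)

spectrum = np.fft.fft2(frame)                   # 2D discrete Fourier transform
magnitude = np.abs(np.fft.fftshift(spectrum))   # shift zero frequency to center
print(magnitude.shape)                          # (2160, 4096)
```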