Presented at the Oak Ridge Interconnects Workshop in 1999. A fun historical perspective on where the HPC industry in 1999 thought we would be going as we headed toward the petascale era.
ASCI Terascale Simulation Requirements and Deployments
1. ASCI Terascale Simulation Requirements and Deployments (ASCI-00-003.1)
David A. Nowak, ASCI Program Leader
Mark Seager, ASCI Terascale Systems Principal Investigator
Lawrence Livermore National Laboratory, University of California
Lawrence Livermore National Laboratory, P. O. Box 808, Livermore, CA 94551
* Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.
2. Overview
• ASCI program background
• Applications requirements
• Balanced terascale computing environment
• Red Partnership and CPLANT
• Blue-Mountain partnership Sustained Stewardship TeraFLOP/s (SST)
• Blue-Pacific partnership Sustained Stewardship TeraFLOP/s (SST)
3. A successful Stockpile Stewardship Program requires a successful ASCI
[Figure: stockpile systems (B61, W62, W76, W78, W80, B83, W84, W87, W88) and Directed Stockpile Work, supported by Primary Certification, Materials Dynamics, Advanced Radiography, Secondary Certification, Enhanced Surety, ICF, Enhanced Surveillance, Hostile Environments, and Weapons System Engineering]
4. ASCI's major technical elements meet Stockpile Stewardship requirements
[Figure: ASCI technical elements — Defense Applications & Modeling, Simulation & Computer Science, Integrated Computing Systems, University Partnerships, and Integration, with Applications Development, Physical & Materials Modeling, V&V, DisCom2, PSE, Physical Infrastructure & Platforms, Operation of Facilities, Alliances, Institutes, VIEWS, and PathForward]
5. Example terascale computing environment in CY00 with ASCI White at LLNL
[Figure: multi-axis chart of the computing environment from '96 to 2004 — Computing Speed (TFLOPS), Memory (TeraBytes), Memory BW (TeraBytes/sec), Cache BW (TeraBytes/sec), Interconnect (TeraBytes/sec), Disk (TeraBytes), Parallel I/O (GigaBytes/sec), Archive BW (GigaBytes/sec), and Archival Storage (PetaBytes), plotted against Programs and Application Performance]
ASCI is achieving programmatic objectives, but the computing environment will not be in balance at LLNL for the ASCI White platform.
Computational Resource Scaling for ASCI Physics Applications:
• 1 FLOPS Peak Compute
• 16 Bytes/s / FLOPS Cache BW
• 1 Byte / FLOPS Memory
• 3 Bytes/s / FLOPS Memory BW
• 0.5 Bytes/s / FLOPS Interconnect BW
• 50 Bytes / FLOPS Local Disk
• 0.002 Bytes/s / FLOPS Parallel I/O BW
• 0.0001 Bytes/s / FLOPS Archive BW
• 1000 Bytes / FLOPS Archive
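As a rough worked example (not from the original slides), these scaling ratios can be applied to a hypothetical 10 TFLOPS peak machine to see what a balanced environment would require; the sketch below simply multiplies each ratio by the assumed peak.

# Sketch: apply the ASCI physics-application scaling ratios to an assumed
# 10 TFLOPS peak system (the peak value here is an illustrative assumption).
PEAK_FLOPS = 10e12  # 10 TFLOPS, roughly ASCI White class

ratios = {
    "Cache BW (Bytes/s)":        16,
    "Memory (Bytes)":            1,
    "Memory BW (Bytes/s)":       3,
    "Interconnect BW (Bytes/s)": 0.5,
    "Local disk (Bytes)":        50,
    "Parallel I/O BW (Bytes/s)": 0.002,
    "Archive BW (Bytes/s)":      0.0001,
    "Archive (Bytes)":           1000,
}

for name, per_flop in ratios.items():
    print(f"{name:26s} {per_flop * PEAK_FLOPS:.3e}")
# e.g. Memory -> 1.0e13 Bytes (10 TB), Interconnect BW -> 5.0e12 Bytes/s (5 TB/s)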
6. SNL/Intel ASCI Red
• Good interconnect bandwidth
• Poor memory/NIC bandwidth
[Figure: system diagram — Kestrel compute boards (~2332 in system), each carrying two nodes with 2 CPUs, a NIC, and 256 MB; Eagle I/O boards (~100 in system) with CPUs, NIC, and 128 MB memories; Terascale Operating System (TOS) and Cougar Operating System; cache coherent on node, not across the board; 38 switches by 32 switches; 6.25 TB storage each for classified and unclassified at 2 GB each sustained (total system); the memory/NIC path is marked as the bottleneck, the interconnect links as good]
800 MB bi-directional per link; shortest hop ~13 µs, longest hop ~16 µs
Aggregate link bandwidth = 1.865 TB/s
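A quick consistency check (my arithmetic, not from the slide): the aggregate figure is roughly the per-link bandwidth times the number of Kestrel compute boards.

# Rough sanity check of the quoted aggregate link bandwidth for ASCI Red.
# Assumption: one 800 MB/s bi-directional link per Kestrel board (~2332 boards).
links = 2332                 # approximate number of Kestrel compute boards
per_link_bytes = 800e6       # 800 MB/s bi-directional per link
print(f"{links * per_link_bytes / 1e12:.2f} TB/s")  # ~1.87 TB/s, close to the 1.865 TB/s on the slide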
7. SNL/Compaq ASCI C-Plant
• C-Plant is located at Sandia National Laboratory
• Currently a hypercube, C-Plant will be reconfigured as a mesh in 2000
• C-Plant has 50 scalable units
• Linux operating system
• C-Plant is a "Beowulf" configuration
Scalable unit:
• 16 "boxes"
• 2 16-port Myricom switches
• 160 MB each direction
• 320 MB total
"Box":
• 500 MHz Compaq eV56 processor
• 192 MB SDRAM
• 55 MB NIC
• Serial port
• Ethernet port
• ______ Disk space
This is a research project — a long way from being a production system.
8. LANL/SGI/Cray ASCI Blue Mountain — 3.072 TeraOPS peak
• 48 x 128-CPU SMP = 3 TF
• 48 x 32 GB/SMP = 1.5 TB memory
• 48 x 1.44 TB/SMP = 70 TB disk
• Inter-SMP compute fabric bandwidth = 48 x 12 HIPPI/SMP = 115,200 MB/s bi-directional
• Link-to-LAN bandwidth = 48 x 1 HIPPI/SMP = 9,600 MB/s bi-directional
• Link to legacy MXs bandwidth = 8 HIPPI = 1,600 MB/s bi-directional
• HPSS bandwidth = 4 HIPPI = 800 MB/s bi-directional; 20(14) HPSS movers = 40(27) HPSS tape drives of 10-20 MB/s each = 400-800 MB/s bandwidth; HPSS metadata server
• RAID bandwidth = 48 x 10 FC/SMP = 96,000 MB/s bi-directional
Aggregate link bandwidth = 0.115 TB/s
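The fabric figure above follows from straightforward multiplication; as an illustration (assuming the 200 MB/s per-HIPPI-link rate quoted on the next slide):

# Blue Mountain inter-SMP fabric bandwidth, restating the slide's arithmetic.
# Assumption: each HIPPI link carries 200 MB/s (per the "Link Bandwidth 200 MB/s"
# figure on the following GSN slide).
smps = 48
hippi_per_smp = 12
mb_per_hippi = 200
print(smps * hippi_per_smp * mb_per_hippi, "MB/s")  # 115200 MB/s, i.e. ~0.115 TB/s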
9. Blue Mountain — planned GSN compute fabric
• 9 separate 32x32 X-bar switch networks
• 3 groups of 16 computers each
Expected improvements:
• Throughput: 115,200 MB/s => 460,800 MB/s (4x)
• Link bandwidth: 200 MB/s => 1,600 MB/s (8x)
• Round-trip latency: 110 µs => ~10 µs (11x)
Aggregate link bandwidth = 0.461 TB/s
10. LLNL/IBM Blue-Pacific — 3.889 TeraOP/s peak
• Three SP sectors (S, Y, K), each comprised of 488 Silver nodes and 24 HPGN links
• System parameters: 3.89 TFLOP/s peak, 2.6 TB memory, 62.5 TB global disk
• One sector: 2.5 GB/node memory, 24.5 TB global disk, 8.3 TB local disk
• The other two sectors: 1.5 GB/node memory, 20.5 TB global disk, 4.4 TB local disk each
• External connections via HiPPI and FDDI
SST achieved >1.2 TFLOP/s on sPPM and a problem >70x larger than ever solved before!
Aggregate link bandwidth = 0.439 TB/s
11. I/O Hardware Architecture of SST
[Figure: a 488-node IBM SP sector with 56 GPFS servers and 432 Silver compute nodes on the system data and control networks, with 24 SP links to the second-level switch]
Each SST sector:
• Has local and global I/O file system
• 2.2 GB/s delivered global I/O performance
• 3.66 GB/s delivered local I/O performance
• Separate SP first-level switches
• Independent command and control
• Link bandwidth = 300 MB/s bi-directional
Full system mode:
• Application launch over full 1,464 Silver nodes
• 1,048 MPI/US tasks, 2,048 MPI/IP tasks
• High speed, low latency communication between all nodes
• Single STDIO interface
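As a rough cross-check (my arithmetic, not from the slides), the aggregate link bandwidth quoted for Blue-Pacific on the previous slide is consistent with one 300 MB/s bi-directional SP link per Silver node:

# Cross-check: Blue-Pacific aggregate link bandwidth from per-node SP links.
# Assumption: one 300 MB/s bi-directional link per node, 3 sectors x 488 nodes.
nodes = 3 * 488              # 1,464 Silver nodes
per_node_mb = 300            # MB/s bi-directional per SP link
print(nodes * per_node_mb / 1e6, "TB/s")  # ~0.439 TB/s, matching the slide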
12. Partisn (SN-Method) Scaling
[Figure: two scaling plots versus number of processors (1 to 10,000) at a constant number of cells per processor, comparing RED, Blue Pacific (1p/n and 4p/n), and Blue Mountain (UPS, MPI, MPI.2s2r, UPS.offbox)]
13. The JEEP calculation adds to our understanding of the performance of insensitive high explosives
• This calculation involved 600 atoms (the largest number ever at such high resolution) with 1,920 electrons, using about 3,840 processors
• This simulation provides crucial insight into the detonation properties of IHE at high pressures and temperatures
• Relevant experimental data (e.g., shock wave data) on hydrogen fluoride (HF) are almost nonexistent because of its corrosive nature
• Quantum-level simulations, like this one, of HF-H2O mixtures can substitute for such experiments
14. Silver node delivered memory bandwidth is around 150-200 MB/s/process
[Figure: delivered memory bandwidth (0-350 MB/s) at 1P, 2P, 4P, and 8P for E4000, E6000, Silver, AS4100, AS8400, Octane, and Onyx2]
Silver peak Bytes:FLOP/s ratio is 1.3/2.565 = 0.51
Silver delivered B:F ratio is 200/114 = 1.75
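To make the two ratios explicit, here is a sketch restating the slide's arithmetic; the interpretation of the figures as 1.3 GB/s peak bandwidth, 2.565 GFLOP/s peak, 200 MB/s delivered bandwidth, and 114 MFLOP/s delivered is my assumption.

# Bytes:FLOP ratios for a Silver node, restating the slide's arithmetic.
peak_bw, peak_flops = 1.3e9, 2.565e9   # assumed peak memory BW (B/s) and peak FLOP/s
del_bw, del_flops = 200e6, 114e6       # assumed delivered memory BW (B/s) and FLOP/s
print(round(peak_bw / peak_flops, 2))  # 0.51  peak B:F
print(round(del_bw / del_flops, 2))    # 1.75  delivered B:F
# Delivered B:F is higher because achieved FLOP/s fall off far more than
# achieved memory bandwidth does.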
15. MPI_SEND/US delivers low latency and aggregate high bandwidth, but counterintuitive behavior per MPI task
[Figure: performance (MB/s, 0-120) versus message size (0 to 3.5e6 B) for one, two, and four pairs of MPI tasks, shown per pair and in aggregate]
16. LLNL/IBM White — 10.2 TeraOPS peak
Aggregate link bandwidth = 2.048 TB/s: five times better than the SST, while peak is three times better, so the ratio of Bytes:FLOPS is improving.
MuSST (PERF) System:
• 8 PDEBUG nodes w/16 GB SDRAM
• ~484 PBATCH nodes w/8 GB SDRAM
• 12.8 GB/s delivered global I/O performance
• 5.12 GB/s delivered local I/O performance
• 16 GigaBit Ethernet external network
• Up to 8 HIPPI-800
Programming/Usage Model:
• Application launch over ~492 NH-2 nodes
• 16-way MuSPPA, shared memory, 32b MPI
• 4,096 MPI/US tasks
• Likely usage is 4 MPI tasks/node with 4 threads/MPI task
• Single STDIO interface
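The "five times" and "three times" claims can be checked against the SST numbers quoted on the earlier slides (0.439 TB/s aggregate link bandwidth, 3.889 TFLOP/s peak); a small sketch of that comparison:

# Compare White to the SST (Blue-Pacific) using figures from earlier slides.
sst_link_tb, sst_peak_tf = 0.439, 3.889
white_link_tb, white_peak_tf = 2.048, 10.2
print(round(white_link_tb / sst_link_tb, 1))   # ~4.7x aggregate link bandwidth
print(round(white_peak_tf / sst_peak_tf, 1))   # ~2.6x peak
# Roughly the "five times" and "three times" on the slide; since bandwidth grows
# faster than peak, the interconnect Bytes:FLOPS ratio improves.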
17. Interconnect issues for future machines — Why optical?
• Need to increase the Bytes:FLOPS ratio
• Memory bandwidth (cache line) utilization will be dramatically lower for codes that use arbitrarily connected meshes and adaptive refinement (indirect addressing)
• Interconnect bandwidth must be increased and latency must be reduced to allow a broader range of applications and packages to scale well
• To get very large configurations (30, 70, 100 TeraOPS), larger SMPs will be deployed
• For a fixed B:F interconnect ratio this means more bandwidth coming out of each SMP; multiple pipes/planes will be used, and optical reduces cable count (see the sketch after this list)
• Machine footprint is growing; 24,000 square feet may require optical
• Network interface paradigm: virtual memory direct memory access, low-latency remote get/put
• Reliability, Availability and Serviceability (RAS)
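A rough illustration of the fixed-B:F point above; the 0.5 Bytes/s per FLOPS interconnect ratio comes from the earlier scaling slide, while the per-SMP peak sizes below are assumptions chosen only for illustration.

# Per-SMP interconnect bandwidth required at a fixed B:F interconnect ratio.
BF_RATIO = 0.5                        # Bytes/s per FLOP/s (from the scaling slide)
for smp_gflops in (32, 128, 512):     # assumed per-SMP peak sizes (GFLOP/s)
    smp_bw_gbs = BF_RATIO * smp_gflops
    print(f"{smp_gflops} GFLOP/s SMP -> {smp_bw_gbs:.0f} GB/s of interconnect bandwidth")
# Larger SMPs at the same ratio need proportionally more bandwidth out of each SMP,
# hence multiple pipes/planes and the interest in optical links to cut cable count.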
18. ASCI Terascale Simulation Requirements and Deployments — David A. Nowak, ASCI Program Leader; Mark Seager, ASCI Terascale Systems Principal Investigator; Lawrence Livermore National Laboratory, University of California
* Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.