The document discusses the challenges of exascale computing architectures and how active storage strategies can help address them. Exascale systems will require three orders of magnitude more processing power, storage, bandwidth, and other resources than today's systems, and simply scaling up current architectures may not be feasible or cost-effective. Active storage, where computation is brought to the data rather than moving large amounts of data, can help optimize the use of these resources. The Blue Gene architecture demonstrates advantages, such as high memory and network bandwidth capacity, that are well suited to the active storage approaches needed at the exascale.
Blue Gene (SM)
Introduction
The word "supercomputer" entered the mainstream lexicon in 1996 and 1997 when IBM's Deep Blue supercomputer challenged the world chess champion in two tournaments broadcast around the world.
Since then, IBM has been busy improving its supercomputer technology and tackling much deeper problems.
Its latest project, code-named Blue Gene, is poised to shatter all records for computer and network performance.
What is a Supercomputer
A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation.
Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as Cray, IBM, and Hewlett-Packard, which purchased many of the 1980s supercomputer companies to gain their experience.
Why we need Supercomputers
Supercomputers are very useful in highly calculation-intensive tasks such as:
Problems involving quantum physics,
Weather forecasting,
Climate research,
Molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals),
Physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion).
Why we need Supercomputers
They are also useful for a particular class of problems, known as Grand Challenge problems, whose full solution requires semi-infinite computing resources.
NASA's Linux-based Supercomputer
Why Supercomputers are Fast
Several elements of a supercomputer contribute to its high level of performance (a short illustrative sketch follows the list below):
Numerous high-performance processors (CPUs) for parallel processing
Specially designed high-speed internal networks
Specially designed or tuned operating systems
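To see why massive parallelism (and not just raw CPU speed) matters, Amdahl's law gives the best-case speedup when only a fraction p of a program can run in parallel: speedup = 1 / ((1 - p) + p/n) on n processors. The sketch below is purely illustrative (the formula and the numbers are not from the slides) and shows why fast interconnects and tuned system software are listed alongside the processors: any remaining serial work quickly caps the benefit of adding CPUs.

```python
# Illustrative only: Amdahl's-law speedup for a workload whose
# parallel fraction is p, run on n processors.
def amdahl_speedup(p: float, n: int) -> float:
    """Best-case speedup when a fraction p of the work is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    for p in (0.90, 0.99, 0.999):
        for n in (100, 10_000, 1_000_000):
            print(f"parallel fraction {p:.3f}, {n:>9,} CPUs -> "
                  f"speedup {amdahl_speedup(p, n):10.1f}x")
    # Even with 99% parallel code, a million CPUs give at most ~100x speedup,
    # so reducing serial overheads (networks, OS, I/O) matters as much as CPU count.
```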
What is Blue Gene
Blue Gene is a computer architecture project aimed at producing several supercomputers designed to reach operating speeds in the PFLOPS range (1 petaFLOPS = 10^15 floating-point operations per second), currently reaching sustained speeds of nearly 500 TFLOPS (1 teraFLOPS = 10^12).
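For a quick sense of scale of these units, here is a small arithmetic sketch assuming only the definitions and figures quoted above:

```python
# Illustrative FLOPS unit arithmetic based on the definitions above.
TERA = 10**12   # 1 TFLOPS = 10^12 floating-point operations per second
PETA = 10**15   # 1 PFLOPS = 10^15 floating-point operations per second

sustained = 500 * TERA   # ~500 TFLOPS sustained, as quoted above
target = 1 * PETA        # 1 PFLOPS operating-speed target

print(f"sustained: {sustained:.1e} FLOPS = {sustained / PETA:.2f} PFLOPS")
print(f"target:    {target:.1e} FLOPS")
print(f"sustained is {sustained / target:.0%} of the petaflop target")  # -> 50%
```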
It is a cooperative project among IBM (particularly IBM Rochester and the Thomas J. Watson Research Center), the Lawrence Livermore National Laboratory, the United States Department of Energy (which is partially funding the project), and academia.
Why Blue Gene
Blue Gene is an IBM Research project dedicated to exploring the frontiers in supercomputing:
in computer architecture,
in the software required to program and control massively parallel systems, and
in the use of computation to advance the understanding of important biological processes such as protein folding.
Learning more about biomolecular mechanisms is expected to give medical researchers a better understanding of diseases, as well as potential cures.
Why the name Blue Gene
Blue - The corporate color of IBM
Gene - The Blue Gene clusters were intended for computational biology.
Blue Gene Projects
There are several Blue Gene projects, including Blue Gene/L, Blue Gene/C (Cyclops64), Blue Gene/P, and Blue Gene/Q.
Blue Gene is a massively parallel computer being developed at the IBM Thomas J. Watson Research Center. Blue Gene represents a hundred-fold improvement in performance over the fastest supercomputers of today. It will achieve 1 petaFLOPS through unprecedented levels of parallelism, with more than 4,000,000 threads of execution.
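As a rough back-of-the-envelope illustration (the 1 petaFLOPS target and the roughly 4,000,000 threads are from the text above; the division is ours), the per-thread performance budget implied by these figures is modest, which is why the design can rely on many relatively simple, power-efficient processing elements:

```python
# Rough per-thread budget implied by the figures above (illustrative only).
target_flops = 1e15      # 1 petaFLOPS design target
threads = 4_000_000      # threads of execution quoted above

per_thread = target_flops / threads
print(f"{per_thread:.2e} FLOPS per thread (~{per_thread / 1e6:.0f} MFLOPS)")
# -> 2.50e+08 FLOPS per thread, i.e. about 250 MFLOPS:
# each thread can be modest as long as millions of them run in parallel.
```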